The US Supreme Court is weighing where the limits of Google's and Twitter's responsibility lie – the decision could revolutionize the internet

Last week, the US Supreme Court began hearings in two cases whose outcome could significantly change how the largest internet companies operate in their home market.

At the center of both cases is the question of at what point the companies become responsible for the content published on their platforms.

On Tuesday, the Supreme Court began hearing Gonzalez v. Google, in which the father of a woman who died in the 2015 Paris terrorist attacks accuses Google-owned YouTube of recommending videos by the extremist Islamist terrorist organization ISIS. According to the complaint, YouTube’s recommendation algorithm steered users toward ISIS recruitment videos, which contributed to the Paris attack and to Nohemi Gonzalez’s death.

On Wednesday, the Supreme Court began hearing Twitter v. Taamneh. This case stems from a terrorist attack in Istanbul in 2017. The family of a man who died in a nightclub shooting carried out in the name of ISIS accuses Twitter of aiding and abetting a terrorist act, because content encouraging terrorism was published on the company’s service.

In both cases, the companies have denied liability and invoked Section 230 of the Communications Decency Act, part of the 1996 US telecommunications law, which protects providers of internet services from liability for content that users publish on those services.

Now the Supreme Court is expected to provide guidance on how far the protection of Section 230 extends. If the court sides with the families, the platform companies’ operating environment will become considerably more constrained and risky.

What is Section 230?

Section 230 has its roots in the 1990s, the early days of the modern internet. As discussion forums proliferated, the number of internet defamation cases grew, and the service provider often ended up in the dock.

At first, internet platforms were seen, like bookstores, as distributors of content who are not responsible for the content itself. In 1995, however, the New York State Supreme Court ruled that the service provider Prodigy was liable for a defamatory comment posted on a forum the company maintained.

The reason for the verdict was that Prodigy moderated its forums. The court held that content moderation made Prodigy more a publisher than a distributor, and thus responsible for all content shared on the service.

The decision created a perverse situation: to avoid liability for content, service providers were better off not intervening in discussions at all.

Congress corrected the situation the following year by adding a section to telecommunications legislation which, among other things, guaranteed that moderating content would not make platform companies liable for the content they left unmoderated.

That section became the foundation and safeguard of every internet site that publishes user content.

Lawmakers want to update the section, but there is no consensus on how

Section 230 was added to US federal law in 1996 with near-unanimous support.

In recent years, a rare consensus has begun to form in the US Congress that the section is outdated. During the previous presidential election, both the Republican president Donald Trump and the Democratic candidate Joe Biden demanded that the law be updated.

Despite this shared will, Congress has not managed to produce a single amendment for the president to sign, even though there have been dozens of attempts.

The reason for the deadlock is that the two parties want to modify the section from completely opposite directions. In short, Democrats want to limit the protection provided by Section 230 so that platform companies like YouTube and Twitter are forced to intervene more actively against false and dangerous content spreading on their services. Republicans, on the other hand, want to dismantle the section because they believe it allows platform companies to censor conservative views.

Congress’s inaction has now brought the matter before the country’s Supreme Court. In Tuesday’s hearing, the justices noted several times that fixing the issue would most naturally fall to Congress.

– We are a court. We really don’t know about these things, said Justice Elena Kagan.

– These are not exactly the nine greatest experts on the internet, she continued, referring to the nine justices of the Supreme Court.

Despite all this, the Supreme Court itself chose to take up these two cases. That decision alone put Silicon Valley’s technology companies on alert.

Does recommending content create liability?

On the surface, both cases seem clear: YouTube and Twitter are not responsible for the content that spreads on their platforms, and therefore they cannot be sued over that content.

The matter is not quite that simple, however, as is evident from the fact that the Supreme Court is now taking up not one but two cases on the subject, with a few more waiting on its docket.

For example, Gonzalez v. Google is not only about the content itself but also about its recommendation. According to the complaint, YouTube makes editorial decisions when it offers a list of suggested videos next to the one the user is watching. On this view, recommending content shifts responsibility from the uploader to the platform.

Based on the first day of hearings, the Supreme Court does not appear ready to adopt this view. The justices tried several times to pin down where the line runs between a neutral recommendation made by an algorithm and a recommendation comparable to an editorial one.

Eric Schnapper, the lawyer representing the Gonzalez family, could not provide an answer.

During the hearing, several justices considered how drawing such a line would affect, for example, search engine results. Lisa Blatt, a lawyer representing Google, pointed out that without content recommendation, using sites like YouTube would be impossible because of the sheer volume of content: more than 500 hours of video are uploaded to YouTube every minute.

This does not mean, however, that the Supreme Court could not, for example, adopt an interpretation in which “neutral” algorithms receive Section 230 protection while algorithms deemed discriminatory fall outside it, notes Eric Goldman, professor of law at Santa Clara University, on his blog. And algorithms are never neutral, he continues.

So Google may win the battle, but lose the war.

Even a subtle adjustment can lead to unpredictable results

It is also possible that Gonzalez v. Google will be resolved without a ruling on the merits. If the Supreme Court concludes in the Twitter case that the company is not responsible for user-shared content encouraging terrorism, it may at the same time dismiss the claim against Google.

In the Twitter case, the Supreme Court must determine whether a platform company can be sued for aiding terrorism if its service contains content that incites it. In practice, the court is weighing whether internet services can be compared to banks, which have strict obligations to prevent the flow of money to terrorists. How aware should platforms be of everything that users upload to their services?

Based on Wednesday’s hearing, the court does not appear inclined to find that Twitter aided and abetted the Istanbul terrorist attack, even though its service contained content from the ISIS organization.

Even so, the US Supreme Court seems to agree with Congress that the rules governing platform companies need to be updated in some way. At the same time, lawyers representing technology companies have warned that even subtle tweaks could lead to surprising results.

Those surprising results could also be felt outside the United States. Although the platform companies already adapt to different countries’ content laws, their operating philosophy is built on top of Section 230. Any change to the law would affect the development of new applications and features.

Defenders of the section have also pointed out that the law protects not only platforms but also users who share other users’ content. This perspective has so far been largely absent from the court proceedings. And the reach of US law sometimes extends far beyond the country’s borders.

The Supreme Court is expected to make its decision by the summer.
