Social media as editors of the public sphere: YouTube vs. the Ombudsman

04-10-2016

When a popular Norwegian writer was suspended from Facebook in early September, on account of “child nudity”, the matter escalated into an international incident. Norway’s largest printed newspaper, Aftenposten, published the disputed photo on its cover, along with an open letter from its editor-in-chief to the global social media CEO, and the Norwegian prime minister quickly followed suit, demanding more responsible behaviour from the “world’s most powerful editor”. The algorithmic empire struck back by deleting the prime minister’s post, but in the end the old forms of social organization claimed victory: the Facebook post was restored, the suspension lifted, and the company promised to improve the settings of the machines policing cyberspace.

The photo was taken in 1972, during the Vietnam War, and its author won the Pulitzer Prize. It shows children fleeing after a napalm attack; one girl in the picture is naked, having torn off her burning clothes.

One month before the ‘Norwegian’ incident, a similar misunderstanding between the social networks’ automated services and humans occurred in Serbia. The Ombudsman Saša Janković reported on Twitter that his YouTube channel had been suspended. The notification suggested that the suspension was the result of other users’ reports. Janković clarified that, after his associates sent an appeal, an automated reply arrived on August 12 stating that the account would remain suspended. Again, no explanation was given as to the grounds on which the content had been flagged; the reply pointed only in general terms to the Community Guidelines and Terms of Service.

Meanwhile, the email account used for uploading videos to YouTube was blocked as well, and the provider responded to the appeal in a similar way, pointing to general rules banning hate speech, threats and the like.

The local Twitter community noticed that, right after Janković posted the information about the YouTube suspension, trolls already ‘knew’ what it was about.

Saša Janković used the YouTube channel for reposting TV shows and news reports on cases the Ombudsman’s office was involved with. The channel was not updated regularly and the uploaded videos had fairly low view counts, meaning that the wider public was largely unaware of this way of communicating the Ombudsman’s activities. Furthermore, since its content consisted of TV broadcast segments produced under strict legal rules, it was hardly possible for the channel to contain offensive or violent material.

Since SHARE Foundation’s legal and policy director is an observer member of EDRi, we notified the organization’s coordinators of this case. In response to the joint inquiry, we received assurances from the Google policy team in Brussels that videos and channels are not removed from YouTube automatically, no matter how much they are flagged. According to the response, reports of content are reviewed by a human team that deals with each report individually. Users should receive notifications upon both suspension and reinstatement, and the lack of the latter in the Ombudsman’s case was deemed a mistake, not standard practice.

The YouTube team offered the following clarification of its procedures:

“Users whose accounts have been terminated are prohibited from accessing, possessing or creating any other YouTube accounts. When an account is terminated, the account owner will receive an email detailing the reason for the suspension. If you feel an account has been suspended in error, you may appeal the suspension by visiting the following form. We work quickly to remove channels in instances of abuse, or reinstate videos or channels that have been suspended in error, and users should receive notifications during both suspension and reinstatement. Flagged videos and channels are not automatically taken down by the flagging system.”

The YouTube Help Center pages on account terminations offer links to the full Community Guidelines and Terms of Service, while appeals can be submitted according to the explained procedure. The flagging system is also explained in detail. According to the latest data published by Google, the owner of YouTube, over 90 million people have flagged videos on YouTube in the last 10 years, while in 2015 alone some 92 million videos were removed.

The moral of the story can be found in several conclusions. First, the system on which suspensions and removals are based is not transparent, and it is not clear how the respective decisions are made or why an appeal is rejected. The limits of freedom of speech in the online sphere are set by powerful global platforms, leading to an absurd situation in which the Ombudsman, responsible for protecting the right to freedom of expression of citizens in Serbia, cannot protect his own right to freely receive and impart information on the internet when confronted with YouTube’s human and algorithmic censors.

Meanwhile, Facebook removed yet another historical image, predictable proof that the intervention by traditional media and politicians in the case of the ‘Napalm girl’ did not actually solve anything.

Copyright, hate speech and child pornography provisions, translated into the language of algorithms or into general guidelines for untrained human moderators, have become tools for the abuse of rights and the censorship of the internet. Securing freedom of speech and the free exchange of information requires much more than ‘terms of service’.

And although the internet is in fact a private space, the fact that the rights and freedoms of netizens depend on unclear rules and mechanisms set by private companies is particularly worrisome. Those rules all too often prevail over national and international regulations. Despite efforts made by IT companies to make their practices more transparent and accountable to users, such as transparency reports, there are still many questions as to what can be posted on our Facebook or YouTube accounts. That is why it is important to support initiatives by activists and advocates for digital rights in Europe and in the US, so that policies for the removal of user-generated content are pressured into line with legal norms and human rights standards in the digital environment.