JUSTICE - No. 65

Fall 2020

With COVID-19, increased isolation associated with virus prevention means an increased opportunity for online grooming and susceptibility to grooming for hatred. The volume of material on the internet is such that internet providers rely on algorithms to flag content. Yet algorithms do not fare well with hate speech, since it is typically necessary to understand the content and context to appreciate whether a particular statement or discourse constitutes incitement to hatred. Old tropes framed in new contexts require human understanding to see them for what they are. This is complicated by the shutdown of workplaces because of the coronavirus. YouTube and Facebook, for instance, report that because of COVID-19, they are relying more on automated systems.20

The upshot is that rather than relying on governments, internet providers, or civil society in isolation to prevent hate speech, we need to rely on a combination of all three. Here are a few examples of how the European Commission and some social media platforms have attempted to address hate speech issues. The European Commission adopted, in agreement with Facebook, Twitter, YouTube and Microsoft, a code of conduct on countering illegal hate speech online. The companies agreed to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and to remove or disable access to such content, if necessary.21 Illegal hate speech encompasses antisemitism as indicated by the International Holocaust Remembrance Alliance working definition of antisemitism noted above. Organizations located in 27 European Union member states were accepted as trusted flaggers or reporters to notify the companies of alleged illegal hate speech content and report to the Commission on reactions. According to a January 2020 fact sheet from the European Commission, there are now 39 trusted flaggers.22 The fact sheet concluded that the situation is better, but that there is room for improvement. It also indicated that antisemitism constituted 7.1% of the grounds of reported hate speech.

The internal complaints procedures of internet providers do not entail jurisprudence. The providers publish only transparency reports that are supposed to show how they enforce their policies, although it is difficult to understand from the reports just what is happening. The author focuses on three providers: Google, which also owns YouTube, Facebook, and Twitter. The transparency reports for Facebook and Google note government requests for removal. The reports give only glimpses of how the providers address complaints against content.

For instance, Facebook reported that: "We received a request from the Federal Service for Supervision of Communications, Information Technology and Mass Media (Roskomnadzor) to review a post that imposed a swastika and the Nazi 'SS' symbol over the Russian coat of arms." Its conclusion was that "The reported posts did not violate Facebook's Community Standards. Based on the request from Roskomnadzor, we restricted access to the content within Russia." Facebook did not explain why the posts did not violate its own Community Standards.23

Facebook transparency reports are issued every six months. The report released on May 12, 2020 relates to the last half of 2019.24 Unsurprisingly, it does not include anything about COVID-19. While Google provides summary examples of what is blocked, Facebook, aside from the occasional reference, does not indicate what is blocked and why. The report has a community standards section, a hate speech subsection, and a hard questions blog. The only mentions of antisemitism are these: "Do not post:" Content targeting a person or group of people ... with: ... Designated dehumanizing comparisons,

Notes

20. Kang-Xing Jin, "Keeping People Safe and Informed About the Coronavirus," Facebook (July 2, 2020), available at https://about.fb.com/news/2020/07/coronavirus/; "YouTube Community Guidelines enforcement January to March 2020," Google Transparency Report, available at https://transparencyreport.google.com/youtube-policy/removals?hl=en
21. European Union, "European Commission and IT Companies announce Code of Conduct on illegal online hate speech" (May 31, 2016), available at http://europa.eu/rapid/press-release_IP-16-1937_en.htm
22. Didier Reynders, "Fifth evaluation on the Code of Conduct on Countering Illegal Hate Speech Online," European Commission (June 2020), available at https://ec.europa.eu/info/sites/info/files/codeofconduct_2020_factsheet_12.pdf
23. Facebook, "Transparency - Content Restrictions Based on Local Law," Facebook (2020), available at https://transparency.facebook.com/content-restrictions
24. Chris Sonderby, "Our Continuing Commitment to Transparency," Facebook (May 12, 2020), available at https://about.fb.com/news/2020/05/transparency-report/
