JUSTICE - No. 66, Spring 2021

the application of procedural aspects of international human rights law. Finally, the policies governing the work of the thousands of Facebook content moderators, and shaping the artificial intelligence (AI) that assists them, are only indirectly affected by the Board's decisions: the Board reviews the application of the company's values and policies but has no authority or power to change the policies themselves (although it may recommend changes to certain aspects of existing policies). Indeed, the prevalence of hate speech and antisemitic speech on online platforms appears to be facilitated by under-inclusive or otherwise flawed anti-hate speech policies that fall short of international standards. For example, even after the change of course regarding Holocaust denial, Facebook's community standards still lack certain important features that could have rendered its efforts to combat hate speech more effective. First, the standards are not formulated on the basis of the language of the ICCPR or any other international legal instrument, but rather on a somewhat idiosyncratic understanding of hate speech as a "direct attack against people on the basis of what we call protected characteristics."24 This omits serious harm caused by indirect attacks, as well as harm to individuals not belonging to what Facebook considers a protected group, such as members of a political movement. Second, the existence of different policy silos for different forms of problematic speech — disinformation, violence, hate — complicates efforts to grasp the full negative social cost of offensive expressions.
The over-inclusiveness of some aspects of existing Facebook hate speech policies — as shown by the Oversight Board's early decisions revoking takedown decisions — and the under-inclusiveness of other aspects, discussed above, underscore the need to review content moderation policies in accordance with international human rights standards. Until this is done, content moderation on platforms entails a false choice between freedom of expression and censorship; instead, it should offer a choice between compliance and non-compliance with the relevant international human rights law norms.

Protection Gaps Facilitating Antisemitic Speech

Examination of prevalent antisemitic expressions found on social media illustrates the protection gaps in the existing hate speech policies of the main online platforms, which result in offensive speech slipping through the cracks. One example of the under-inclusiveness of the policies is the common use on social media of the slur "ZioNazi." Since the term Zionists is often used on social media as a substitute for the term Jews, calling large numbers of descendants of the principal victims of National Socialism "Nazis" should have been flagged as a form of revictimization and subjected to some form of content moderation. It is not clear, however, whether existing platform policies are sufficiently sensitive to cover imperfect proxies (such as Jews/Zionists) which may be used to circumvent existing community standards, and whether the notion of revictimization is applied by the platforms in a multi-generational context. Other prevalent antisemitic social media posts involve conspiracy theories regarding prominent Jews such as the Rothschilds or George Soros, despite their clear relation to traditional antisemitic tropes associating Jews with money, and some of the platforms still allow the posting of false information about the Holocaust amounting to Holocaust denial or distortion.25

The prevalence of such online content appears to be facilitated by the non-inclusion of indirect attacks within the scope of the hate speech definitions used by the platforms, and by their failure to link their hate speech policies to their disinformation policies. The problem of inadequate platform response to online antisemitic speech also appears to be related to the algorithmic nature of their hate speech policies, which tend to outlaw specific types of expressions involving particular types of metaphors (e.g., comparing members of certain groups to animals) or images. While such an approach appears to reflect the constraints on the number and attention of human "cleaners" and the current stage of development of AI and natural language processing, it might result in missing the bigger picture of the harm intended and caused by certain types of offensive expressions. Moreover, the specific elements covered by existing community standards allow technically sophisticated antisemites to formulate expressions that carefully bypass existing hate speech rules.

24. "Facebook Community Standards: 12. Hate Speech," FACEBOOK, available at https://www.facebook.com/communitystandards/hate_speech
25. James Spiro, "Battling online Holocaust denial and anti-Semitism is a year-round job," CALCALIST, April 8, 2021, available at https://www.calcalistech.com/ctech/articles/0,7340,L-3903522,00.html