JUSTICE - No. 65

Since the deadline for implementation of the guideline was September 19, 2020, it is too early to expect such procedures to be engaged in this area. While this initiative began in 2018, before the coronavirus pandemic, COVID-19-related incitement to hatred in general, and the spike in antisemitism associated with COVID-19 in particular, will hopefully influence how member states respond to the directive and guidelines.

Many major internet providers are U.S.-based. The First Amendment of the U.S. Constitution, which guarantees freedom of expression, does not apply to them, because it applies only to state actors and not to the private sector.37 Nonetheless, many of these providers are imbued with an American cultural free speech absolutist ethic. They do not approach human rights holistically, balancing the right to freedom of expression against the right to freedom from incitement to hatred. Rather, they give priority to freedom of expression. The First Amendment, though not legally binding on them, expresses a value which is first in their hearts.

One important factor that allowed the development of the internet was its freedom from all control in its early stages. However, freedom from control at the beginning is different from freedom from control today. The internet is mature enough to withstand legislative oversight. It has generated problems which it is naive to think its originators can resolve on their own. Moreover, legislation on content would jolt providers out of their U.S.-based free speech absolutist perspective. Modified immunity or safe harbor legislation would also help clear the minds of internet providers.
In the U.S., there is a blanket safe harbor provision stating that: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”38 The safe harbor provision (currently under debate) has a particular component which provides immunity from civil liability for action taken in good faith by internet providers to restrict access to material that the provider considers objectionable.39

The blanket safe harbor provision goes too far. It is a general immunity. Where dissemination ceases to be innocent, there should be internet provider liability for noxious content.40 To be able to rely on a defense of innocent dissemination, internet providers should:
1. Provide a complaints system that generates a response within a reasonable period of time; and
2. On notice, remove, or take reasonable steps to remove, hate speech in general, and antisemitic material in particular, using the widely accepted working definition and examples of antisemitism.

If an internet provider either had no complaints system, or failed to answer complaints within a reasonable time, or ignored complaints of unequivocal violations of its own terms of service on hate speech, the defense of innocent dissemination would not be available.

The debate about safe harbor legislation has flared up in the coronavirus era because of Twitter's activities. Twitter has terms of service similar to those of Facebook and YouTube. They ban hate speech in generalities, but do not provide much in the way of specifics. Antisemitism is not mentioned.41 Twitter also has transparency reports similar to those of Facebook and YouTube. They are not at all transparent about how individual complaints, about antisemitism or any other matter, are decided.42

Twitter has a notice or labelling policy, allowing for adding notices to an account or tweet.43 A notice may indicate that the tweet violates the rules but that there is a legitimate public interest in its availability. In May 2020, Twitter updated its notice or labelling policy, largely to address the problem of tweets containing disputed or misleading information about COVID-19. The policy introduced new labels for tweets of concern: misleading information, disputed claims, and unverified claims. The new policy states: “We will continue to introduce new labels to provide context around different types of unverified claims and rumors as needed. ... We'll learn a lot as we use these new labels, and are open to adjusting as we explore labeling different

37. For a recent reaffirmation of this principle, see the United States Supreme Court decision Manhattan Community Access Corp. v. Halleck, 587 U.S. ___ (2019).
38. Sec. 230, Communications Decency Act 1996, 47 U.S.C.
39. Ibid., sec. 230(c)(2)(A).
40. Peter Leonard, “Safe Harbors in Choppy Waters — Building a Sensible Approach to Liability of Internet Intermediaries in Australia,” 3 JOURNAL OF INTERNATIONAL MEDIA & ENTERTAINMENT LAW 221 (2010).
41. Twitter Help Center, “Hateful Conduct Policy,” TWITTER, available at https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy
42. “Twitter Transparency Report January to June 2019,” TWITTER, available at https://transparency.twitter.com/en.html
43. Twitter Help Center, “Notices on Twitter and what they mean,” TWITTER, available at https://help.twitter.com/en/rules-and-policies/notices-on-twitter
