on the victims. It is not only a feeling-based impact (i.e., making Jews feel diminished or offended) but also an action-based one: the interviewees have taken measures to conceal their identities or to enhance their online privacy and security.

In Chapter 12, Günther Jikeli, Damir Cavar, Weejeong Jeong, Daniel Miehling, Pauravi Wagh and Denizhan Pak propose an annotation method for building an algorithmic definition of antisemitism that AI systems can use to identify antisemitic content. The authors used a sample of Twitter posts to create what they refer to as a “Gold Standard” of annotation. The attempt to create an antisemitism-detection algorithm is interesting, although the authors acknowledge problems related to human annotation and the fact that many antisemitic posts that do not mention Jews explicitly can fly under the radar. I also note that social media platforms rarely address antisemitic discourse, since many posts are not textually antisemitic. The authors note that their project can be of value to other parties – an important mention that may nudge others to develop the standard further and contribute to the creation of an algorithm for detecting antisemitism.

In Chapter 13, Yfat Barak-Cheney and Leon Saltiel map the intervention methods used by Civil Society Organizations (CSOs) to deal with antisemitism on social media. They review several related projects that research and report online antisemitism as well as develop AI software for combating it. One example is the “Decoding Antisemitism” project led by Dr. Matthias J. Becker and Prof. Helena Mihaljević. CSOs’ efforts are very important, since governments and social media companies are often slow to respond to such issues or show little interest in dealing with online hate speech. This raises the question of accountability: who should be responsible for dealing with online antisemitism – governments, companies, CSOs or individuals?
The role of CSOs and individuals in reporting antisemitism does not mean that governments and companies should be relieved of this responsibility. Is the expectation that local governments or social media companies will provide protection too much to ask in liberal, progressive democracies in the 21st century?

Finally, in Chapter 14, Michael Bossetta presents an overall perspective. First, as noted, although antisemitic content is disturbing, it is only a small fraction of the online content available and should be regarded as an issue to be dealt with, but not taken out of proportion. Second, counter-narratives against antisemitism are rarely considered a research topic: most efforts attempt to define and flag antisemitic content but disregard the counter-posts and comments that respond to it. Third, Bossetta notes that the sheer amount of online antisemitism is less important than its potential outcome – radicalizing people to the extent that they translate ideas into action. Although I partially agree with this argument, I note that the more antisemitic content is available, the greater the opportunity for widespread and rapid radicalization.

Dr. Lev Topor is a visiting ISGAP scholar at the Woolf Institute, Cambridge, a senior research fellow at the Center for Cyber Law and Policy, University of Haifa, and a former research fellow at the International Institute for Holocaust Research, Yad Vashem. He is the author of “Phishing for Nazis: Conspiracies, Anonymous Communications and White Supremacy Networks on the Dark Web” (Routledge, 2023) and the co-author, with Prof. Jonathan Fox, of “Why Do People Discriminate Against Jews?” (Oxford University Press, 2021).