
Explicit antisemitic language used online may be relatively easy to spot, but implied or suggested meanings are harder to identify and can therefore circumvent censorship and social stigma. A new study by researchers from seven universities – including University of California, Irvine political science professor Jeffrey Kopstein – dove deeply into two deplatformed subreddits popular among QAnon members, finding that while overtly antisemitic posts came from only a small fraction of community members, implicit content could be traced to more than a third of the groups’ users. The findings, published online in PLOS One, have implications for tracking fast-moving changes in community-encoded language related to antisemitism as well as other group-based forms of hate, says Kopstein.

“Explicit antisemitic utterances come at a cost ranging from social ostracism to deplatforming, so they’re frequently expressed in veiled ways online. Implicit antisemitic content and conspiracy narratives about Jews have been on the rise, especially on moderated platforms,” says Kopstein. “This is a dangerous language game that can lead to escalation, dehumanization, and desensitization, turning rhetoric into open intergroup contempt and into discriminatory views and norms.”

The language game he references works as follows: Overt language is used to establish the meanings of implicit antisemitic terms and narratives – also known as “dog whistles.” An ingroup – in this case, the two subreddit communities – recognizes and then uses these coded meanings while keeping others, including platform moderators, in the dark.

“At the post and even at the sentence level, these co-occurrences operate to provide the ingroup with a roadmap or dictionary for interpreting the meaning of implicit terms and generalized conspiracy narratives when they occur without direct reference to Jews,” says lead author Dana Weinberg, sociology professor at Queens College, City University of New York.
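To make that concrete, here is a minimal sketch in Python of post-level co-occurrence counting, assuming simple token matching; the term lists, function name, and example posts are invented placeholders, not material from the study’s word bank or the authors’ actual pipeline.

```python
from collections import Counter
from itertools import product

# Illustrative placeholder terms only -- not items from the study's word bank.
EXPLICIT_TERMS = {"explicit_term_a", "explicit_term_b"}
IMPLICIT_TERMS = {"implicit_term_x", "implicit_term_y"}

def tally_cooccurrences(posts):
    """Count how often each implicit term appears in the same post as each
    explicit term -- the co-occurrences that anchor coded terms to overt meanings."""
    pairs = Counter()
    for post in posts:
        tokens = set(post.lower().split())
        # Every (implicit, explicit) pair found together in one post is one edge.
        for imp, exp in product(IMPLICIT_TERMS & tokens, EXPLICIT_TERMS & tokens):
            pairs[(imp, exp)] += 1
    return pairs

posts = [
    "post pairing implicit_term_x with explicit_term_a",
    "implicit_term_x appearing alone, with no overt anchor",
]
print(tally_cooccurrences(posts))
# Counter({('implicit_term_x', 'explicit_term_a'): 1})
```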

To understand the pervasiveness of this tactic, Kopstein and coauthor David Frey, history professor and founding director of the Center for Holocaust and Genocide Studies at the United States Military Academy at West Point – both recognized experts on antisemitism and the Holocaust – assembled a list of implicit expressions strongly associated with antisemitic tropes and conspiracy narratives. The 10-person research team combined this list with terms from a hate speech dictionary and the Anti-Defamation League’s database of slogans, terms and symbols used by white-nationalist groups, labeling each entry as explicit or implicit. The resulting word bank includes 892 explicit and 278 implicit terms.
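As a rough illustration of how such a word bank might be assembled and applied – the sources, entries, and labels below are hypothetical stand-ins, not the study’s lexicon – the merged lists can be represented as a single term-to-label map:

```python
# Hypothetical sketch of merging labeled term lists into one word bank;
# none of these entries come from the study's actual lexicon.
expert_list = {"coded_phrase_1": "implicit", "coded_phrase_2": "implicit"}
hate_speech_dictionary = {"slur_1": "explicit"}
adl_database = {"symbol_code_1": "explicit", "coded_slogan_1": "implicit"}

word_bank = {}
for source in (expert_list, hate_speech_dictionary, adl_database):
    word_bank.update(source)

def labels_in(text, bank=word_bank):
    """Return the set of labels ('explicit', 'implicit') whose terms appear
    in the text, using a simple token-level match."""
    tokens = set(text.lower().split())
    return {label for term, label in bank.items() if term in tokens}

print(labels_in("a post containing coded_slogan_1"))  # {'implicit'}
```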

Using content network analysis and qualitative coding, they mapped connections between overt antisemitic keywords and indirect references on the QAnon subreddits r/CBTS_Stream and r/greatawakening, both of which were deplatformed in 2018 over threats of violence. Of the two communities’ 34,500 users, researchers found that fewer than 7 percent authored submissions or comments containing explicit antisemitic content – and nearly all of those users also included implicit antisemitic language. Roughly four times as many users – 27.95 percent – posted content flagged for implicit, but not explicit, terms. Taken together, more than one-third of users in the two subreddits – 34.79 percent – expressed antisemitic content using implicit terminology.
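A simplified sketch of how users might then be bucketed from those labels – an assumption about the general approach, not the authors’ code; `labels_in`, the terms, and the example users are all invented:

```python
# Hypothetical per-user aggregation; `labels_in` is a matcher like the one
# sketched above, and all terms and posts here are invented examples.
EXPLICIT = {"explicit_term_a"}
IMPLICIT = {"implicit_term_x"}

def labels_in(text):
    tokens = set(text.lower().split())
    found = set()
    if EXPLICIT & tokens:
        found.add("explicit")
    if IMPLICIT & tokens:
        found.add("implicit")
    return found

def summarize_users(user_posts):
    """Bucket each user by the labels found across all of their posts and
    report shares, mirroring the study's explicit / implicit-only breakdown."""
    explicit = implicit_only = 0
    for posts in user_posts.values():
        labels = set().union(*(labels_in(p) for p in posts))
        if "explicit" in labels:
            explicit += 1
        elif "implicit" in labels:
            implicit_only += 1
    n = len(user_posts)
    return {
        "explicit": explicit / n,
        "implicit_only": implicit_only / n,
        "either": (explicit + implicit_only) / n,
    }

users = {
    "u1": ["explicit_term_a next to implicit_term_x"],
    "u2": ["implicit_term_x on its own"],
    "u3": ["nothing flagged here"],
}
print(summarize_users(users))
# {'explicit': 0.33..., 'implicit_only': 0.33..., 'either': 0.66...}
```

Note that, as in the study’s own figures, the “either” share is simply the sum of the explicit and implicit-only shares: 6.84 percent plus 27.95 percent yields the reported 34.79 percent.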

“With this study, we’ve provided a generalized method for examining how hate is subtly expressed in online communities,” says Kopstein. “We’ve also shown how implicit references and generalized conspiracy narratives provide a vehicle for spreading and engaging with antisemitic content with seeming impunity, readily reinscribing intended antisemitic meanings for receptive new audiences.”

“If we want to combat online hate, we need to know how it actually works, how it draws people in, and how it spreads,” he adds. “This research takes us part of the way there.”

Additional coauthors include Meyer D. Levy, Ph.D., and graduate students Nikola Baci and Yunis Ni, Queens College-CUNY; April Edwards, cyber science professor, United States Naval Academy; Peter Antonaros, applied math graduate student, Columbia University; Noah D. Cohen, criminal justice graduate student, John Jay College, City University of New York; and Javier A. Fernandez, sociology graduate student, Princeton University.

Funding for this work was provided by the Air Force Research Laboratory under award FA 8650-22-2-6469 and the David Berg Foundation.

For the full study, please visit https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0318988.

-Heather Ashbach, UC Irvine Social Sciences