ENGINEERED HATE
How Social Media Built the Infrastructure of Antisemitism and Chose to Keep It Running

There are some numbers you need to know before you read anything else.
In 2024, the ADL recorded 9,354 Antisemitic incidents across the United States, the highest total since the organization began tracking in 1979. That is a 344% increase over five years and an 893% increase over the past decade. It averages out to more than 25 targeted anti-Jewish incidents every single day, more than one every hour.
2025’s full annual audit hasn’t been published yet. But the trend isn’t reversing. New York City recorded a 182% surge in Antisemitic hate crimes in January 2026 compared to January 2025, with Antisemitic incidents accounting for more than half of all hate crimes in the city that month. Globally, the first two months of 2026 saw 18.4% more Antisemitic incidents than the same period in 2024. On March 12, 2026, in what would become a highly publicized incident, a gunman carried out a shooting and vehicle-ramming terrorist attack at Temple Israel, a Reform synagogue in West Bloomfield Township, Michigan. The target…preschoolers.
Jews make up 2% of the American population. In 2024, they were the target of 69% of all religion-based hate crimes in this country. No other group comes close. Not that any of it is acceptable.
The AJC’s 2025 State of Antisemitism report, released in February 2026, found that 91% of American Jews say they feel less safe in the United States as a result of violent attacks on Jews in the past year. One-third have been personally targeted with Antisemitism in the past year. Among young American Jews ages 18 to 29, that number reaches 47%. More than half have changed their behavior out of fear: hiding identifiers, avoiding certain places, monitoring what they say. 13% have considered leaving the country entirely. Among Orthodox Jews, that number reaches 24%.
93% of American Jews say Antisemitism is a serious, pervasive problem in the United States. Among the general American public, that number is 70%. And 20% of Americans have heard the word “Antisemitism” but have no idea what it means.
None of this is happenstance. It was built, piece by piece, on the platforms most of us use every single day. This is the documented record of how that happened, who’s behind it, how to recognize it, and what has to change.
WHAT THE CONTENT ACTUALLY LOOKS LIKE
Before going platform by platform, you need to understand what’s actually being served to hundreds of millions of people. Not generic, anonymous hate speech. Specific posts, attached to specific numbers.
In March 2026, the Antisemitism Research Center (ARC) published a report called “Engineered Exposure.” Researchers used Instagram normally, as any user would, and tracked content the platform’s own recommendation engine pushed directly to their accounts. Over 96 hours, they documented 100 Antisemitic posts. Those posts generated more than 5.3 million likes and 3.8 million shares, with an estimated reach of up to 280 million users.
Here’s what some of that content actually said:
One video, which received 191,000 likes and 184,000 shares, blamed Jews for creating child sex trafficking, orchestrating 9/11, assassinating JFK, engineering the bubonic plague, sinking the Titanic, and masterminding the Atlantic slave trade. It called Jews “demons.”
A second video, with 75,000 likes and 16,300 shares, placed a Star of David on a demonic idol marked “666,” displayed Israeli and American flags on the idol, and filmed Iranian flags as the idol was destroyed and celebrated.
A third post, with 39,500 likes and 55,200 shares, used Hebrew letters and Jewish symbols to claim Jews exercise hidden, sinister control over American institutions, framing Judaism itself as an inherently malevolent force.
A fourth video promoted Holocaust inversion, claiming Jews deliberately orchestrated Jewish deaths during the Holocaust to justify the creation of Israel, while depicting Jews as a parasitic force embedded in world governments.
A fifth post assembled occult imagery, conspiracy charts, and fabricated quotes to link the Rothschild family, “Ashkenazi” identity, the Illuminati, and Baal worship into a single coordinated hidden network, with Jews at the center.
Instagram’s algorithm pushed every single one of these posts to real users without being asked. This is what “Antisemitic content on social media” means in practice. Not edgy jokes or heated political debates. Demonic imagery. Accusations of orchestrating the Holocaust. Hate coded thinly enough to evade filters but clear enough to anyone paying attention.
Now let’s talk about where it’s coming from, who’s running it, and why the platforms let it run.
FACEBOOK
Facebook has 3 billion users. In January 2025, Mark Zuckerberg dismantled the content moderation policies designed to protect them.
He announced that Facebook, Instagram, and Threads would stop proactively removing hateful content and wait for user reports instead, saying, “It means we’re going to catch less bad stuff.”
The ADL measured what happened next. Antisemitic comments on the Facebook pages of Jewish members of Congress averaged 6.5 per day before the change. After February 4, 2025, they averaged 29.9 per day. Toxic comments overall increased by a factor of 13. On some single days, researchers counted 701 toxic comments across just 66 posts. That clustering pattern, where a small percentage of posts attracts a wildly disproportionate share of abuse, is itself a signature of coordinated targeting.
Before the rollback, Facebook’s group moderation structure was already in ruins. A 2024 CCDH study found that Meta had handed content moderation authority inside pro-Palestinian Facebook groups, with a combined 300,000 members, to admins who had documented histories of posting Antisemitic hate. Those admins ignored 76% of reported Antisemitic posts. When researchers tried reporting violations directly to Facebook’s own teams, only 1% of content was acted upon. The researchers’ test accounts were then banned from the groups for reporting the hate. And…the hate stayed up.
Alarmed, U.S. senators wrote to Zuckerberg in December 2025. They noted that Antisemitic content allowed to remain on Facebook was being fed into Meta’s own AI training models, potentially teaching those systems to reproduce and amplify Jew-hatred going forward. Congressional staff found explicit Antisemitic slurs on Facebook that should have been caught by basic keyword search. When reported and reviewed a second time, Meta’s teams concluded the content “doesn’t go against community standards.” As per usual.
The AJC’s 2025 report found that more than seven in ten American Jews now report experiencing Antisemitism online or on social media. Jewish women are disproportionately targeted. 62% of Jewish women report experiencing Antisemitism on Facebook specifically, compared to 49% of Jewish men. On Instagram, the split is 45% to 35%.
Meta’s own Oversight Board broke with the company in April 2025, publicly stating the rollback had “gone too far” and urging Meta to assess the human rights impact of what it had done.
A shareholder proposal demanding accountability received support from 46.8% of independent shareholders at Meta’s 2025 annual meeting, the highest level of support for any human rights-related shareholder proposal at any U.S. company that year. Both major proxy advisory firms endorsed it. It didn’t pass. Zuckerberg’s voting structure ensures it won’t.
The legal walls are beginning to close in. Just days ago, in March 2026, a New Mexico jury ordered Meta to pay $375 million in civil penalties for concealing what it knew about child sexual exploitation on Facebook and Instagram. The following day, a Los Angeles jury found Meta and YouTube negligent in platform design, holding Meta 70% responsible for harm caused to a minor who became addicted to social media. Legal analysts calculate that if the per-user damage rate from the New Mexico verdict were applied across pending cases in Florida and New York alone, Meta’s potential exposure could reach $40 billion. If a platform can be held liable for what its algorithms do to children regarding sexual exploitation, the legal framework for accountability on Antisemitism is closer than we realize. But that assumes someone will act on it.
The pressure is international, too. In August 2025, families of October 7 victims filed a class-action lawsuit against Meta in a Tel Aviv court seeking approximately $1.1 billion in damages. The suit accuses Facebook and Instagram of enabling and amplifying the Hamas massacre in real time by allowing livestreamed footage of murders, abductions, and atrocities to circulate on their platforms. The plaintiffs describe the platforms as “an integral part of the terrorist attack on Israel.” Each day the footage remains online, they argue, the trauma is renewed.
X (FORMERLY TWITTER)
When Elon Musk bought Twitter in October 2022, he dismantled its content infrastructure within days. He dissolved the independent Trust and Safety Council, laid off over half the staff including most of the moderation team, and reinstated thousands of previously banned accounts.
In the 12-hour window immediately following the ownership transfer, tweets including the word “Jew” increased fivefold. The volume of Antisemitic tweets more than doubled in the three months that followed. The rate of creation of new Antisemitic accounts more than tripled.
A study covering February 2024 through January 2025 identified over 679,000 posts on X containing Antisemitic content. 59% espoused conspiracy theories about Jews, including claims of Jewish control over governments, portrayals of Jews as satanic, and Holocaust denial. The remaining 41% consisted of direct dehumanization and abuse. Musk’s Community Notes feature, promoted as the platform’s solution, was found to be failing at scale. NBC News found X placing advertisements in search results for at least 20 racist and Antisemitic hashtags, including #whitepower, more than 18 months after Musk pledged to demonetize hate. Paid subscribers sharing pro-Nazi content were simultaneously eligible to earn revenue through X’s creator program.
In May 2024, Musk reinstated Nick Fuentes, a Holocaust denier who has stated publicly: “I think the Holocaust is exaggerated. I don’t hate Hitler. I think there’s a Jewish conspiracy.” Fuentes has said Jews “have no place in Western civilization,” described “organized Jewry” as a “big challenge” to American unity, and told livestream audiences he wants “total Aryan victory.” He had been banned from TikTok, Facebook, Instagram, YouTube, Spotify, Venmo, and Stripe. He now has more than 1 million followers on X. His October 2025 interview with Tucker Carlson, in which he repeated these views openly, received more than 17 million views on the platform. Musk’s stated rationale for the reinstatement: “It is better to have anti whatever out in the open to be rebutted than grow simmering in the darkness.”
Fuentes also organized his followers, known as Groypers, to flood TikTok with clips from his show despite his ban there, a campaign detailed in the TikTok section below.
During the June 2025 Israel-Iran war, the foreign state dimension of X’s problem became undeniable. Iranian-linked operatives ran a coordinated bot network responsible for up to 60% of X traffic on key wartime hashtags. They blamed the “Jewish lobby” for dragging America into war, labeled Israel a “terrorist state,” and recycled classic Antisemitic tropes about Jewish control of American politics. Investigators assessed the network was likely affiliated with Iran’s Islamic Revolutionary Guard Corps. The proof came when Iran’s own internet went dark during the conflict: the networks went silent at exactly the same moment.
THREADS
Threads is governed by the same content policies as Facebook and Instagram. When Meta rolled back its moderation standards in January 2025, the change applied to all three platforms simultaneously. Threads has no independent enforcement infrastructure. Whatever Meta allows to fester on Facebook festers on Threads.
TIKTOK
TikTok operates differently from every other platform on this list. Unlike Instagram or X, where reach is largely shaped by existing follower counts built over time, TikTok can push a brand-new account with zero followers to millions of viewers if a video generates high early engagement. That makes it the most powerful youth radicalization pipeline currently operating, and the numbers on its effects on young people are alarming.
After October 7, at peak, 98.6% of U.S. views on TikTok of content related to the Israel-Hamas war carried a pro-Palestinian hashtag. CCDH polling found that 43% of American 13 to 17-year-olds agreed that “Jewish people have a disproportionate amount of control over the media, politics, and the economy.” Among those with high social media use, that number rose to 54%.
TikTok removes only 5% of accounts that send direct messages promoting Holocaust denial. ADL researchers documented Antisemitic slideshows sitting on the platform for weeks without removal despite violating community guidelines. One, set to the traditional Jewish song Hava Nagila, had over 187,000 likes.
Nick Fuentes, banned from TikTok, coordinated his followers to upload clips of his content using deliberate misspellings of his name to evade detection. He named it “Operation NickTok” and claimed affiliated content had received more than 15 million views. Searching his name on TikTok returns a moderation warning. Searching “Nicholas J Fuentes” returned content including a video titled “Jews and their manipulation.” The hashtag associated with his streaming platform had over 487,000 views. TikTok’s approach to this is reactive, not proactive. The ban exists on paper. In practice, the content flows.
YOUTUBE
YouTube is worth looking at separately from the others. Not because it’s clean, but because it proves a point.
In the ADL’s landmark 2023 study testing all four major platforms, YouTube was the only one that didn’t reward test personas with increasingly extreme and Antisemitic content when they engaged with conspiracy-adjacent material. Facebook, Instagram, and X all pushed users deeper down the rabbit hole. YouTube didn’t. That distinction is important because it demonstrates, with evidence, that algorithmic design choices can either amplify or contain Antisemitism. The capability exists on every platform. Exercising it…that’s a choice.
As mentioned, YouTube is far from clean. CyberWell’s research covering October 2023 through October 2024 confirmed 302 Antisemitic videos in English on YouTube’s platform that met the IHRA definition of Antisemitism. 88% of those violated YouTube’s own community guidelines. 24% were monetized, actively generating advertising revenue for those spreading them. Less than 11% of the reported content was removed by the platform. In a separate dataset of Arabic-language Antisemitic videos on YouTube, the monetization rate rose to 36%.
YouTube’s removal rate improved significantly in 2025, rising from 17.5% to 34.2%, but researchers noted this increase largely reflects CyberWell gaining priority flagger status with the platform rather than a fundamental change in YouTube’s underlying standards. YouTube is doing better than X. It’s still not doing a great job.
TELEGRAM: THE BACK CHANNEL
Every public-facing platform in this report, Facebook, X, TikTok, Instagram, YouTube, has one thing in common: a moderation infrastructure, however broken. Telegram doesn’t.
Telegram is the primary organizing infrastructure for the extremist networks documented in this piece. It’s where Fuentes coordinates Groyper campaigns before they appear on TikTok. It’s where white nationalist networks plan coordinated harassment of Jewish accounts before those accounts get flooded. It’s where Iranian influence operations stage content before bot farms amplify it on X. Telegram’s encryption options and deliberately minimal moderation make it a near-frictionless back channel for coordinated Antisemitic activity that then surfaces on the more visible platforms.
This matters because focusing exclusively on the visible platforms misses the upstream coordination. When a Jewish advocate’s page suddenly gets swarmed with identical Holocaust mockery comments, it didn’t just happen organically. The coordination that produced it happened somewhere, and Telegram is the most common where.
“CHRIST IS KING”: THE WEAPONIZATION OF FAITH
“Christ is King” is a Christian theological affirmation with roots in scripture, used in hymns and liturgy for centuries. Millions of believers say it in prayer with no hostility toward anyone.
It has also been systematically co-opted as an Antisemitic weapon, and the research documenting this is extensive.
A 2025 report from the Network Contagion Research Institute, co-authored by Dr. Lee Jussim of Rutgers University, Dr. Jordan Peterson, and Rev. Dr. Johnnie Moore, among others, documented the transformation. Mentions of the phrase on X increased more than fivefold between 2021 and 2024. The proportion of posts classified as hateful rose from 9% in 2021 to 13.4% in 2024, with a monthly peak exceeding 17% in May 2024. During the March and April 2024 peak, nearly 10% of all posts using the phrase included explicit references to Jews or Antisemitic content. The single largest topic cluster associated with “Christ is King” in 2024 was not “church,” “Easter,” or “salvation.”
Posts from Nick Fuentes, Sneako, and Andrew Tate on the phrase collectively generated over 13.6 million views. Candace Owens used it in a viral post that helped catalyze the Easter 2024 surge. Within 24 hours it was echoed by Sneako and Andrew Tate, both Muslim, with no theological stake in the phrase. It is what researchers describe as a deliberate convergence: far-right Christian nationalists and Islamist extremist influencers assembled around a single target.
At a February 2026 congressional hearing on Antisemitism, Seth Dillon, CEO of The Babylon Bee, testified under oath that he regularly hears “Christ is King” immediately followed by a contemptuous slur targeting Jews. “This should offend every Christian,” he said.
The phrase works as a weapon because it has cover. It can’t be moderated by keyword. It can’t be called overtly hateful in isolation. It signals membership in a network to those who understand it, while appearing to the outside world as a statement of faith. It’s designed to hide in plain sight.
WHERE IT’S COMING FROM: THREE PIPELINES
Foreign state actors. Russia, China, Iran, and Qatar have all deployed state media and coordinated bot networks to amplify Antisemitic content. Iran created fake Israeli news websites, recruited Israeli citizens via social media for intelligence operations, and ran cross-platform bot networks posting in Hebrew, English, Russian, Spanish, and Amharic. One Iranian-linked cluster generated fabricated images of Israeli politicians, including Netanyahu depicted holding a gun to a hostage’s head. A 2025 Foundation for Defense of Democracies report documented Iran’s operations throughout the year following October 7 in granular detail. Research on the AJ+ network, the social media arm of Qatar’s state-owned Al Jazeera, found that 32% of profiles engaging with its official X accounts were fake, part of a coordinated network designed to artificially amplify reach.
Domestic extremist networks. From August 2024 through January 2025, ISD researchers identified over 150,000 Antisemitic posts from more than 1,000 U.S.-based domestic violent extremist accounts. The volume increased 21% over those six months. Spikes correlated directly with the U.S. presidential election, the Iranian missile strike on Israel, and the October 7 anniversary. Research from the Decoding Antisemitism project, which has examined more than 300,000 items of digital content, identified a consistent three-phase domestic process: elite figures make strategically ambiguous statements, digital intermediaries including podcasters, YouTubers, and influencers sharpen the messaging, and comment sections collapse the ambiguity into explicit hate speech. Following October 7, Antisemitic discourse surged to 36 to 38% of comments on major UK news outlet YouTube channels, nearly double the pre-crisis baseline. After the Washington museum shooting in May 2025, Antisemitic content averaged 43% across major English-language news channels, with some channels reaching 66%.
The content farm and scam economy. Content farms operating across South Asia, Eastern Europe, and North Africa produce Antisemitic content not from ideology but because engagement generates revenue. One creator earning money posting Antisemitic reels on Instagram told a Fortune reporter directly: “Those videos don’t get banned anymore.” The fake rabbi accounts traced to scammers in South India represent this dynamic in its clearest form. The hate isn’t ideological for them. It’s a business model.
THE AI DIMENSION
Generative AI has changed the nature of this problem at its core.
In every prior era of Antisemitic propaganda, its production required human effort. Someone had to write the pamphlet, draw the cartoon, record the speech. That had a natural limit that AI has removed entirely.
Antisemitic content can now be produced at industrial scale with minimal human involvement and a level of sophistication that frequently surpasses what human authors can create. Because AI-generated content lacks a human signature, it reads as more credible and neutral than content that’s obviously person-authored, allowing Antisemitic narratives to circulate disguised as impartial analysis, historical documentation, or educational material.
65% of American Jews, according to the AJC’s 2025 report, are now specifically concerned that AI will allow Antisemitic conspiracy theories to spread and lead to more incidents. That concern is well founded. Here’s why.
The fake rabbi network is the most extensively documented example of AI-generated Antisemitism operating at scale. ARC identified 12 AI-generated fake “rabbi” personas with a combined following of 2.1 million Instagram users. Each presents a distinct persona and voice. All of them promote the same Antisemitic tropes about Jewish financial control. One account, “Rabbi Goldman,” had 1.4 million followers with some videos reaching 5 million views. Some of these accounts have been traced to scammers in South India using fabricated Jewish religious authority as bait to sell low-cost digital products. The hate content was a marketing funnel.
The choice to impersonate a rabbi isn’t random. It’s calculated. A rabbi carries cultural weight and perceived moral authority. When an AI-generated rabbi spreads Antisemitic tropes, it launders the hatred through a false Jewish voice and makes it harder for viewers to recognize the manipulation.
Then there’s Grok.
On July 4, 2025, Musk announced that Grok had been “improved significantly.” On July 8, users noticed. For approximately 16 hours, Grok published a sustained stream of Antisemitic content across X’s public feed, unprompted. It praised Adolf Hitler as the historical figure best suited to address “anti-white hate.” It called itself “MechaHitler.” It generated the conspiracy that Jewish people with surnames like “Goldstein, Rosenberg, Silverman, Cohen, or Shapiro” dominate “anti-white” activism. It told a user who asked who controls the government that one group is “overrepresented way beyond their 2% population share, think Hollywood execs, Wall Street CEOs, and Biden’s own cabinet.” It also generated graphic descriptions of sexual assault against a named civil rights researcher.
Far-right figures celebrated the posts in real time. Andrew Torba, founder of Gab, posted a screenshot and wrote: “Incredible things are happening.”
xAI issued an apology, blaming “deprecated code.” Musk’s personal response: “Grok was too compliant to user prompts. Too eager to please and be manipulated. That is being addressed.” He did not address Antisemitism specifically. X CEO Linda Yaccarino resigned the same week. A bipartisan group of lawmakers sent a formal letter demanding answers. xAI’s head of legal responded calling it “a bug, plain and simple.” The U.S. Department of Defense, announced as an xAI customer just after the incident, did not cancel its $200 million contract.
Did the fix work? No. CNN reporters were still able to prompt Grok 4, the version released immediately following the controversy, into generating Antisemitic content with minimal effort. xAI confirmed Grok is trained on posts from X itself.
In January 2026, the ADL released a benchmark study of the six leading AI models on Antisemitism detection. Grok came last. Overall score: 21 out of 100. Anthropic’s Claude scored 80. Grok scored 25 on anti-Jewish bias, 18 on anti-Zionist bias, and 20 on extremist bias. It scored zero 40% of the time across the five metrics tested.
The incident wasn’t isolated. In May 2025, Grok had already engaged in Holocaust denial and repeatedly cited debunked “white genocide” claims about South Africa, echoing views Musk himself has promoted publicly. xAI blamed that incident on “an unauthorized modification.” Two separate “accidental” Antisemitic meltdowns. One January 2026 benchmark placing it dead last among major AI models. These aren’t bugs. They’re a documented pattern.
THE COMMENT SECTION IS PART OF THE WEAPON
Posts are only part of the problem. The comment sections beneath them are the other part.
A modern bot farm isn’t the crude spam operation of a decade ago. Today it consists of racks of real smartphones, each loaded with SIM cards, mobile proxies, and device fingerprinting software designed to fool platform detection systems. These devices run scripted accounts that like, share, and comment in patterns mimicking authentic human behavior. When a post gets flooded with bot engagement, the algorithm reads it as trending and pushes it to more real users. The comments aren’t just harassment. They’re a mechanism for amplification.
In 2024, automated bot traffic made up 51% of all web traffic globally, the first time in a decade that bots surpassed human activity online.
The content farms producing Antisemitic posts and the bot accounts flooding comment sections are connected operations. Research on the AJ+ network found that fake accounts weren’t just engaging with content. They were commenting on each other’s posts to simulate organic consensus, posting virtually identical comments across multiple accounts, and operating in coordinated time windows. When bot activity on a platform reaches 4 to 7% of interactions, researchers consider it normal background noise. These networks were operating at 25 to 33%.
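That copy-paste signature, virtually identical comments from many distinct accounts, is one of the few coordination signals an ordinary researcher can check programmatically. A minimal sketch in Python, using only the standard library; the input shape and the five-account threshold are illustrative assumptions, not any platform’s API:

```python
from collections import defaultdict
import re

def find_copy_paste_floods(comments, min_accounts=5):
    """Group comments by normalized text and flag phrases that many
    distinct accounts posted near-verbatim, a coordination signal.

    `comments` is a list of (account_id, text) pairs -- an assumed
    shape, since every platform's data export differs.
    """
    clusters = defaultdict(set)
    for account, text in comments:
        # Normalize: lowercase, strip punctuation and extra whitespace,
        # so trivial variations land in the same bucket.
        key = re.sub(r"[^\w\s]", "", text.lower())
        key = " ".join(key.split())
        clusters[key].add(account)
    # A phrase repeated word for word by many separate accounts is the
    # pattern researchers flagged in the AJ+ network analysis.
    return {phrase: accts for phrase, accts in clusters.items()
            if len(accts) >= min_accounts}
```

Normalizing before grouping matters because coordinated accounts often vary only capitalization or punctuation to dodge exact-match detection.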
The comment floods serve two functions simultaneously. They signal to the algorithm that a post is popular, expanding its reach. And they create the visual impression of broad social agreement. A real user scrolling into a comment section and seeing hundreds of people repeating an Antisemitic conspiracy starts to believe the view is common. That normalization is the point.
The specific harassment directed at Jewish users, including Holocaust mockery, references to death camps, slurs, and comments like “my mom was a lampshade,” is part of a documented tactic used by white nationalist troll networks including the Groyper movement, which has explicitly organized campaigns to flood the pages of Jewish creators, advocates, and educators with this content. The goal is to make Jewish people feel personally targeted and drive them off platforms entirely.
Five major platforms, Facebook, Twitter, Instagram, YouTube, and TikTok, took no action on 84% of Antisemitic posts flagged through their own official reporting tools. For posts containing Antisemitic conspiracy theories specifically, 89% stayed up after being reported. For Holocaust denial, 80% stayed up. For neo-Nazi and white supremacist imagery, 70% stayed up. When CCDH reported 140 Antisemitic posts directly to X in November 2023, including Holocaust denial and racist caricatures, 85% were still live a week later.
Of the AJC’s 2025 survey respondents who had personally experienced Antisemitism, the vast majority did not report it. The most common reason: they didn’t believe anything would be done. They’re right, most of the time. But reporting still matters, and there are ways to do it with more impact than a single user flag. More on that below.
HOW TO IDENTIFY BOTS, TROLLS, AND COORDINATED HARASSMENT
If you’ve spent any time posting about Israel, Jewish history, or Antisemitism on social media, you’ve encountered this. A comment drops in…maybe it’s something about lampshades, ovens, or telling you your people deserved what happened. You take a quick look at the account and something feels off. You report it, nothing happens, and thirty minutes later there are twelve more just like it.
You’re not imagining things, and it certainly isn’t random. These attacks have a structure you can learn to recognize.
Know what you’re dealing with.
There are three categories and they often operate in combination.
Bots are fully automated accounts controlled by code, capable of liking, sharing, and commenting at speeds no human can match. They exist primarily to amplify content, making posts appear popular so the algorithm pushes them to real users.
Trolls are human-operated accounts, real people choosing to harass you. They may coordinate with others, use multiple accounts, or operate from organized networks. They’re often more dangerous than bots because they can adapt, respond to what you actually said, and get personal.
Sockpuppets are fake personas created by humans to appear as multiple distinct people. One person running five or ten accounts, each with a different name and identity, all commenting on the same post to create the illusion of broad sentiment. A coordinated harassment campaign against a Jewish account will typically deploy all three simultaneously. Bots flood and amplify. Trolls lead the charge. Sockpuppets pile on to simulate consensus.
The red flags to look for.
Account age and activity pattern. A newly created account posting at high volume is a warning sign. Research consistently flags accounts posting more than 50 to 100 times per day as suspicious. More than 144 posts in a single day crosses into near-certain automation territory. Also watch for old accounts with long dormant periods that suddenly burst into activity around news events. That pattern, going quiet then spiking, is a signature of accounts deployed for specific campaigns.
The profile itself. Bots and sockpuppets often have no profile picture, or one that’s a stock photo, a generic AI-generated face, or a stolen image. You can do a reverse image search on any profile photo using Google Images or TinEye. If that photo appears on multiple accounts or matches a completely different person, you’re looking at an inauthentic account. Random strings of numbers after a name in a username are often auto-generated during bulk account creation. Bio information that’s vague, stuffed with hashtags, or generic is another signal.
Content behavior. Bots rarely post original content. They retweet, share, and amplify. When they do comment, the comments are often identical or nearly identical across multiple accounts. If you see twelve accounts dropping the same phrase word for word on your post, that’s not a coincidence. Trolls post original content but tend to obsessively focus on a narrow set of themes, hitting the same conspiracy theories and harassment tropes repeatedly.
Follower math that doesn’t add up. An account following 4,000 people but with 12 followers is almost certainly a bot. An account with 50,000 followers but getting 8 likes per post has almost certainly purchased those followers. Legitimate users build followings organically over time. Networks of fake accounts don’t have that kind of time.
Timing patterns. Bots often post at hours that would be unusual for a human in the time zone the account claims to be from, or in rapid-fire bursts of identical posting. Coordinated posting windows are one of the clearest signatures of managed campaigns.
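The red flags above amount to a simple checklist you can apply mechanically. As a minimal sketch, here is how those heuristics might be encoded. The posting-volume cutoffs (50 and 144 posts per day) come from the research cited above; the field names, ratio cutoffs, and flag labels are illustrative assumptions, not any platform’s actual detection logic.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int            # how long the account has existed
    posts_per_day: float     # average posting volume
    following: int
    followers: int
    avg_likes_per_post: float

def suspicion_flags(a: Account) -> list[str]:
    """Return the red flags from the checklist that this account trips.
    Volume thresholds follow the cited research; ratios are illustrative."""
    flags = []
    # New account posting at high volume
    if a.age_days < 30 and a.posts_per_day > 50:
        flags.append("new account, high volume")
    # 50-100 posts/day is suspicious; more than 144 is near-certain automation
    if a.posts_per_day > 144:
        flags.append("near-certain automation (>144 posts/day)")
    elif a.posts_per_day > 50:
        flags.append("suspicious posting volume (>50/day)")
    # Follower math that doesn't add up: mass-following, almost no followers
    if a.following > 1000 and a.followers < a.following / 100:
        flags.append("mass-following with almost no followers")
    # Large following but negligible engagement suggests purchased followers
    if a.followers > 10_000 and a.avg_likes_per_post < a.followers / 1000:
        flags.append("large following, negligible engagement")
    return flags
```

Run against the example from the text, an account following 4,000 people with 12 followers and posting hundreds of times a day, this trips the automation and follower-math flags at once; a long-established account with ordinary volume and organic engagement trips none.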
The coded language: what it actually means.
A significant portion of Antisemitic harassment is deliberately encoded to evade automated content moderation. If you know the codes, you’ll recognize attacks that might otherwise look like they’re about something else.
“Noticing” and “the noticing.” When someone says they “can’t help noticing” or that “the noticing will continue,” they’re signaling the Antisemitic conspiracy theory that Jews secretly control institutions. Use of the phrase “impossible not to notice” increased 2,261% in 2025 compared to the last months of 2024. This is coded Antisemitism specifically designed to look like neutral observation.
“Early life check.” This references Wikipedia biography pages, where ethnic backgrounds are often noted in “Early Life” sections. When someone comments this under a post about a public figure, they’re signaling Antisemitism, often as a precursor to coordinated harassment of that person for their Jewish identity.
The triple parentheses (((name))). Placing a name inside three sets of parentheses marks someone as Jewish and is an invitation for coordinated attack. It originated from a white nationalist podcast and remains in active use.
Numbers: 14, 88, 1488. These are white nationalist and neo-Nazi codes. 14 refers to a 14-word white supremacist slogan. 88 stands for “Heil Hitler,” H being the eighth letter of the alphabet. 1488 combines both. When you see these in usernames, bios, or posts, you’re looking at someone signaling neo-Nazi affiliation.
“Our greatest ally” used sarcastically. This phrase, used with irony or contempt in the context of U.S.-Israel relations, is an Antisemitic dog whistle on X specifically.
Noticing Jewish surnames. When Grok posted “and that surname? Every damn time,” it was replicating a specific troll tactic: flagging Jewish-sounding surnames as suspicious or malevolent, as a signal to coordinate harassment.
The fake Jewish account. Some Antisemitic trolls create accounts with obviously Jewish-seeming names and imagery, then post content designed to embarrass the Jewish community or sow internal division. This impersonation tactic is well documented, and it is used specifically to manufacture the appearance of Jewish consensus around harmful positions.
What to do when you’ve identified one.
Don’t engage. Bots are amplified by engagement. Trolls are fed by it. Every reply, including a rebuttal, increases the post’s engagement metrics, which the algorithm reads as a signal to show it to more people.
Block immediately. Running a public advocacy account means blocking liberally isn’t weakness. It’s hygiene.
Report, even though the removal rate is dismal. Reporting in volume builds a record, can trigger automated review systems, and on some platforms does eventually result in removal.
Screenshot before reporting. Once content is removed, you lose the evidence.
Report to third-party organizations, not just the platforms. The ADL, CCDH, CyberWell, and Stop Antisemitism all track coordinated harassment campaigns and have trusted flagger status with platforms, meaning their reports get escalated to higher-priority review queues.
Know that mass reporting can be weaponized against you. Jewish advocacy accounts have been targeted by coordinated mass-flagging campaigns designed to get legitimate content removed. If your account gets restricted for content that didn’t violate anything, document it and appeal.
THE MONETIZATION ENGINE
Hate content isn’t just tolerated on these platforms because of ideological failure or technical limitations. In many documented cases, it’s profitable.
A CyberWell study examining nearly 1,000 Antisemitic YouTube videos found that 24% of confirmed Antisemitic videos were monetized, actively generating advertising revenue for those spreading the content. In a separate dataset of Arabic-language Antisemitic videos, the monetization rate rose to 36%.
On Instagram, CCDH found that the algorithm had driven Antisemitic merchandise accounts, selling Nazi-themed and race-hate products, to 1.5 billion combined views and an estimated $1.3 million in sales. CCDH CEO Imran Ahmed stated plainly: “Instagram helps extremists make money out of antisemitism and racism.”
Fortune’s reporting found Antisemitic reels running adjacent to ads from JPMorgan Chase, Nationwide Insurance, the U.S. Army, Porsche, and SUNY. The ARC report documented that Meta was generating substantial advertising revenue from engagement with the specific Antisemitic content it identified in the “Engineered Exposure” study.
For the content farm operators, the South Indian scammers behind the fake rabbi accounts, and the creators posting Antisemitic reels to build monetized audiences, this is a simple economic calculation. Hate content performs well in engagement-optimized environments. Engagement generates revenue. Revenue incentivizes more hate content. The algorithm doesn’t distinguish between engagement driven by admiration and engagement driven by outrage. Both count the same.
THE MODERATION ASYMMETRY
The same platforms that amplify Antisemitism are suppressing the voices trying to counter it.
Jewish educators, historians, Israel advocates, and counter-Antisemitism activists report consistent shadow-banning, reduced reach, and outright suppression of content that documents Jewish history, counters Antisemitic disinformation, and calls out hate. The algorithm that rewards Antisemitic outrage penalizes the calm, documented response to it. Outrage travels. Documentation doesn’t.
ADL research confirmed this dynamic in 2023: Facebook, Instagram, and X all suggested explicitly Antisemitic and extremist content to test personas, while YouTube was the only platform that did not reward engagement with conspiracy-adjacent content by recommending increasingly extreme material. YouTube proved it can be done differently. The other platforms chose not to.
There’s an additional layer. AI content moderation has also incorrectly removed legitimate content from Jewish and Israel-advocacy accounts. Educational posts about the Holocaust have been taken down because keyword detection couldn’t distinguish between historical documentation and hate promotion. The result is a system that simultaneously fails to remove documented hate and incorrectly removes documented counter-speech.
FROM SCREEN TO STREET
In 2025, American Jews endured one of the most violent years in recent memory. In May, two young Israeli diplomats, Yaron Lischinsky and Sarah Milgrim, were murdered outside an AJC event at the Capital Jewish Museum in Washington, D.C. The suspect crossed state lines to carry out the attack and was apprehended shouting “Free, free Palestine.” In June, Karen Diamond was killed in the firebombing of a Boulder, Colorado march supporting the Israeli hostages. During Passover, an arson attack targeted the Pennsylvania Governor’s residence. In each case, investigators found links to online spaces saturated with Antisemitic propaganda. 22% of the accounts celebrating the Boulder attack were found to be inauthentic bots, yet they generated millions of views.
In 2025 alone, 20 people were killed in Antisemitic attacks worldwide. Fifteen of them died in a massacre at a Hanukkah event in Sydney.
The violence isn’t new. In 2018, a man radicalized by online Antisemitic rhetoric murdered 11 people at the Tree of Life Synagogue in Pittsburgh. In 2019, another killed one person and injured three at a synagogue in Poway, California. In 2020, a man was arrested for plotting to bomb a synagogue in Colorado. In March 2026, a gunman attacked Temple Israel in West Bloomfield Township, Michigan.
Researchers at Ariel University reached a direct conclusion after analyzing the data: “We are identifying a direct connection between the escalation of online discourse and the transition to violence in the physical space. When extreme expressions become a digital norm, they lower the threshold of shame and restraint. The move from words to actions becomes much shorter.”
CCDH’s Imran Ahmed said it without equivocation: “The failure of these companies is a cost that’s paid in lives.”
THE SCORECARD
In 2025, the average removal rate for reported, confirmed, policy-violating Antisemitic content across all major platforms was 52.4%, up slightly from 50% in 2024. That means for every two pieces of documented hate content flagged and confirmed to violate a platform’s own policies, one stayed up.
Meta’s rate improved from 49.5% to 57.3% in 2025. YouTube’s rate nearly doubled, though researchers noted this largely reflects YouTube gaining priority flagger status with CyberWell rather than a fundamental change in standards.
X’s performance makes even these inadequate numbers look good. When CCDH reported 140 Antisemitic posts directly to X in late 2023, 85% were still live a week later.
WHAT REALLY HAS TO CHANGE
The platforms have the technology to do this differently. The ADL’s 2023 study proved it. YouTube, using the same kind of algorithmic infrastructure as every other platform in that study, resisted recommending increasingly extreme Antisemitic content to test accounts. The capability exists. The decision not to deploy it is a choice.
Meta’s revenue in 2025 was $200.966 billion. xAI secured a $200 million government contract days after Grok’s Antisemitic meltdown. These are not companies short of resources. They are short of will.
Structural reform is what’s necessary.
Recommendation systems must be redesigned to categorically exclude Antisemitic content from amplification before it goes viral, not after. AI-generated fake authority figures must be treated as a specific category of coordinated inauthentic behavior with dedicated enforcement. Removal rates must be made public, platform by platform, content category by content category, with independent verification. Section 230 of the Communications Decency Act, which provides near-blanket legal immunity to platforms for their own algorithmic choices, must be updated to reflect the documented reality of what those algorithms are actually doing.
Advertisers whose brands are running adjacent to Holocaust denial and calls for Jewish extermination need to know that’s happening, and act accordingly. The Big Tobacco analogy that legal scholars are now drawing in the child-harm litigation applies directly here. When internal documents show platforms knew, and chose profit over safety, liability becomes concrete.
The legal framework is shifting. The New Mexico verdict is on the books. The October 7 families’ lawsuit is working its way through the courts. Juries are paying attention.
It’s clear that platforms won’t change because it’s the right thing to do. They’ll change when staying the same costs more than changing. That means public pressure. Congressional oversight with teeth. Advertiser accountability. Shareholder votes. Inevitable litigation.
And voices like yours, refusing to be silenced.
Note: Because of the number of sources referenced, and because the length of this study nearly exceeds the email limit, all of the resources will be placed in a comment.

