(JNS) A recent report published by the Anti-Defamation League highlights significant instances of anti-Israel and antisemitic bias in four major artificial intelligence models: OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini and Meta’s Llama.
Conducted by the ADL’s Center for Technology and Society in collaboration with its Ratings and Assessments Institute, the evaluation involved testing each model approximately 8,600 times—producing a total of 34,400 responses. The findings revealed varying levels of bias across all models tested.
Meta’s Llama model was identified as the most problematic, particularly for providing responses that were inaccurate or misleading. The report noted that Llama frequently failed in its handling of sensitive topics such as antisemitic conspiracy theories, including the “Great Replacement.”
ChatGPT and Claude were also found to exhibit anti-Israel bias, especially in the context of the Israel-Hamas conflict. The study pointed out that both models were more likely to avoid or refuse to respond to Israel-related questions than to those on other topics—“a troubling inconsistency,” the report noted.
“AI models are not immune to deeply ingrained societal biases,” said ADL CEO Jonathan Greenblatt, as quoted in Tuesday’s press release. “When LLMs (large language models) amplify misinformation or refuse to acknowledge certain truths, they can distort public discourse and contribute to antisemitism.”
Daniel Kelley, interim director of the ADL’s Center for Technology and Society, also raised concerns about the broad use of these technologies: “These systems are already used in classrooms, workplaces and content moderation—yet they’re not adequately trained to prevent the spread of antisemitism.”
In response to these findings, the ADL is urging AI developers to reinforce their safeguards, refine training datasets and adhere to industry best practices to reduce the spread of hate speech and misinformation through AI platforms.
The ADL on March 18 released a comprehensive report outlining anti-Israel bias among Wikipedia editors, “including clear evidence of a coordinated campaign to manipulate Wikipedia’s content related to the Israeli-Palestinian conflict.”

