Study Finds OpenAI’s ChatGPT Displays Favoritism Towards Democratic Party & Liberal Views
Edited by: TJVNews.com
OpenAI’s ChatGPT, a popular artificial intelligence (AI) language model, exhibits a notable bias towards the Democratic Party and other liberal viewpoints, according to a recent study conducted by researchers from the University of East Anglia in the United Kingdom, as reported by the New York Post. The study tested ChatGPT’s responses to a series of political questions, with the chatbot answering from the standpoint of a Republican, a Democrat, or a neutral observer. The researchers then compared and analyzed the responses based on their alignment with the political spectrum, the Post report said.
The findings of the study suggest that ChatGPT consistently demonstrates a significant political bias towards the Democratic Party in the United States, as well as towards left-leaning figures such as Luiz Inácio Lula da Silva in Brazil and the Labour Party in the United Kingdom. The Post report indicated that this bias is believed to surface in the chatbot’s responses and interactions with users when discussing political topics.
This is not the first time ChatGPT’s political leanings have come under scrutiny. As the Post reported, the AI language model previously drew criticism for showing preferences in its responses, such as refusing to generate content about Hunter Biden in the style of The New York Post while complying when asked to write as if it were the left-leaning CNN.
The UK-based researchers sought to strengthen their conclusions by subjecting ChatGPT to repeated questioning, the Post report said. The process involved asking the same questions multiple times and running them through 1,000 iterations to account for the randomness and potential inaccuracies that can arise in AI-generated content; a sketch of this repeated-sampling approach follows below.
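The methodology the Post describes amounts to straightforward repeated sampling: because a chatbot’s answers vary from run to run, the same question must be asked many times before any systematic lean can be separated from noise. The following is a minimal illustrative sketch, not the researchers’ actual code, assuming access to ChatGPT through the official openai Python SDK; the persona prompts, survey question, sample size, and model name are all hypothetical stand-ins.

```python
# Illustrative sketch (not the study's code): ask the same political question
# many times under different personas and tally the one-word answers, so that
# run-to-run randomness in the model's output averages out.
import collections

from openai import OpenAI  # official OpenAI Python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = {
    "default": "Answer the question.",
    "democrat": "Answer as if you were a supporter of the US Democratic Party.",
    "republican": "Answer as if you were a supporter of the US Republican Party.",
}

QUESTION = (  # hypothetical survey item; the study used its own questionnaire
    "Do you agree or disagree: the government should raise taxes on the rich? "
    "Reply with exactly one word: agree or disagree."
)

def sample_answers(persona_prompt: str, n: int = 100) -> collections.Counter:
    """Ask the same question n times and count the one-word answers."""
    counts = collections.Counter()
    for _ in range(n):
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumption; the study queried ChatGPT
            messages=[
                {"role": "system", "content": persona_prompt},
                {"role": "user", "content": QUESTION},
            ],
        )
        counts[reply.choices[0].message.content.strip().lower()] += 1
    return counts

for name, prompt in PERSONAS.items():
    print(name, dict(sample_answers(prompt)))
```

Comparing the default tally against the partisan tallies captures the study’s basic design: if the default answers track one persona’s answers far more closely than the other’s, that asymmetry is the measured bias.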
The researchers expressed concerns about the potential implications of such biases in AI language models like ChatGPT, the Post report noted. They suggested that these biases could contribute to, or amplify, existing challenges associated with political processes and discourse on the internet and social media.
“These results translate into real concerns that ChatGPT, and [large language models] in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media,” the researchers added, according to the Post report.
OpenAI, the organization behind ChatGPT, has not yet responded to the findings of the study. As the Post reported, political bias is just one aspect of the broader concerns surrounding the development and deployment of advanced AI tools. Prominent voices, including OpenAI’s own CEO Sam Altman, have cautioned that AI technologies must be carefully managed and regulated to avoid potential negative consequences and ethical dilemmas, the Post report added.
OpenAI had previously addressed concerns about political bias in a February blog post, explaining the “pre-training” and “fine-tuning” processes involved in shaping ChatGPT’s behavior with the input of human reviewers, the Post report indicated. The organization emphasized that any biases that emerge are unintentional and considered bugs rather than intentional features.
“Our guidelines are explicit that reviewers should not favor any political group,” the blog post said. “Biases that nevertheless may emerge from the process described above are bugs, not features.”

