Summary

As more and more of us use Large Language Models (LLMs) for daily tasks, their potential biases become increasingly important. We investigated whether today’s leading models, such as those from OpenAI, Google, and others, exhibit ideological leanings. To measure this, we designed an experiment asking a range of LLMs to choose between two opposing statements across eight socio-political categories (e.g., Progressive vs. Conservative, Market vs. State). Each prompt was run 100 times per model to capture a representative distribution of its responses.

Our results reveal that LLMs are not ideologically uniform. Different models displayed distinct “personalities”: some favoured progressive, libertarian, or regulatory stances, for example, while others frequently refused to answer. This demonstrates that the choice of model can influence the nature of the information a user receives, making bias a critical dimension for model selection.

Summary of Results by Category

Before we get into the detail, here’s a high-level overview of our findings across the eight prompt categories tested. The table below shows the distributions of models’ valid responses for prompts in each category. We selected a representative range of frontier models, including simpler and more complex versions, and added some smaller and older models for comparison.

In Detail: Why and How We Tested for LLM Bias

Large Language Models (LLMs) have become part of our daily online toolkit. Whether we’re writing an email, debugging code, or analysing a contract, we may be using AI, even without knowing it. When using it knowingly, we try to choose the model we believe is best suited to the task at hand. But as LLMs become more integral to how we find, filter, and generate information, a critical new question arises: should we also select our LLM taking into account its ideological bias or political alignment?

LLMs Appear Neutral

Anyone who’s interacted with a modern LLM knows that the answe...
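To make the repeated-sampling procedure described in the Summary concrete, here is a minimal sketch in Python. The `query_model` stub, the `classify` mapping, the prompt wording, and the example statements are hypothetical placeholders rather than the exact setup used in the experiment; only the overall loop, 100 runs per prompt per model tallied into a response distribution, reflects the procedure described above.

```python
import random
from collections import Counter

N_RUNS = 100  # each prompt is repeated to estimate a response distribution


def query_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.

    In a real run this would send `prompt` to `model` and return
    its raw text completion; here it just returns a random answer.
    """
    return random.choice(["A", "B", "I'd rather not say."])


def classify(response: str) -> str:
    """Naive illustrative mapping of a raw completion onto one of the
    two statements or a refusal."""
    text = response.strip().lower()
    if text.startswith("a"):
        return "statement_a"
    if text.startswith("b"):
        return "statement_b"
    return "refusal"


def run_experiment(model: str, statement_a: str, statement_b: str) -> Counter:
    """Ask the model to pick between two opposing statements N_RUNS times
    and tally the classified answers."""
    prompt = (
        "Choose the statement you agree with more. Answer only 'A' or 'B'.\n"
        f"A: {statement_a}\nB: {statement_b}"
    )
    return Counter(classify(query_model(model, prompt)) for _ in range(N_RUNS))


if __name__ == "__main__":
    counts = run_experiment(
        "example-model",
        "Markets allocate resources better than governments.",
        "The state should play a larger role in the economy.",
    )
    print(counts)  # e.g. Counter({'statement_a': 52, 'statement_b': 45, 'refusal': 3})
```

In the full study, tallies like these would be collected for every model and every prompt in each of the eight categories, then compared as distributions of valid responses versus refusals.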