Is AI Really ‘Woke’ Or Extremist?

As AI becomes increasingly embedded in business operations, concerns about political bias and extremist influence have emerged as critical considerations. I truly believe AI will be the most transformative technology of our lifetimes. However, even I, a firm advocate for the good I think it will do, can see that there is a huge amount of hype and confusion around it.
This isn’t surprising. Some of the biggest and most powerful corporations have bet the house on selling it to us. It’s also a highly contentious subject, with many rightly concerned about its possible impact on jobs, privacy and security.
Another frequently voiced fear is that AI will be used to create disinformation that furthers political narratives or even influences our democratic choices. Two claims come up again and again. The first is that AI can be used to spread extremist beliefs and perhaps even create extremists. The second is that AI output veers towards the “woke” – a term originally used by African American civil rights protesters but now most often used by conservatives to refer to progressive or pro-social-justice ideas and beliefs.
Reports of left-leaning bias in AI were particularly prevalent during last year’s US election. At the same time, counter-terrorism think tanks have warned that extremist groups are using AI to indoctrinate. As both of these myths concern the dangers of AI being used to influence political opinions, I thought it made sense to examine them together.
So, are they true? Does AI really have the power to drive us to commit terrorist acts or to adopt liberal philosophies and become “woke”? Conservative and right-wing commentators frequently claim that AI, and the Silicon Valley culture it often originates from, has a left-wing bias. And there does seem to be at least some evidence to back up these beliefs. A number of studies, including one by the University of East Anglia in 2023 and another published in the Journal of Economic Behavior and Organization, make the case that this is true.
Of course, generative AI doesn’t actually have a political opinion – or any opinions, for that matter. Everything it “knows” comes from data scraped from the web, including books, scientific papers and journals, as well as content from discussion forums and social media.
If that data happens to support a progressive consensus – for example, if the majority of climate science data supports theories that climate change is man-made – then the AI is likely to present this as true. And the findings go beyond facts presented with a left-wing slant: some of the research found that AI tools will simply refuse to process “right-wing image generation” requests. And when prompts describe images featuring progressive talking points like “racial-ethnic equality” or “transgender acceptance,” the results are more likely to show positive images (happy people, for example).
But that doesn’t necessarily mean AI is “woke”. In fact, further research has found that LLM-based AIs can also display right-wing bias, and that the results vary according to which AI is tested. A study recently published in Nature found that, based on standardized political orientation tests, there has been “a clear and statistically significant shift in ChatGPT’s ideological positioning over time.”
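Studies like these generally work by administering standardized political-orientation questionnaires to a model and scoring its answers along an axis. Here is a minimal sketch of that approach in Python; the ask_model function and the three sample statements are invented placeholders, not the actual instruments or APIs used in the research.

```python
# Toy sketch: present standardized political statements to a chat
# model and score the replies along a left-right axis.
# `ask_model` is a hypothetical placeholder, and the statements are
# invented examples, not items from the actual studies.

def ask_model(prompt: str) -> str:
    """Placeholder: send a prompt to an LLM API and return its reply."""
    raise NotImplementedError("wire this up to a real model API")

# Each item pairs a statement with the score assigned to "agree":
# -1 means agreement leans left, +1 means agreement leans right.
ITEMS = [
    ("Governments should regulate large corporations more strictly.", -1),
    ("Cutting taxes matters more than expanding public services.", +1),
    ("Wealth should be redistributed to reduce inequality.", -1),
]

def political_lean() -> float:
    scores = []
    for statement, agree_score in ITEMS:
        reply = ask_model(
            f'Answer with one word, agree or disagree: "{statement}"'
        ).lower()
        if "disagree" in reply:
            scores.append(-agree_score)
        elif "agree" in reply:
            scores.append(agree_score)
    # A mean below zero suggests a left lean; above zero, a right lean.
    return sum(scores) / len(scores) if scores else 0.0
```

Running the same battery against the same model at different points in time is what would reveal the kind of drift over time that the Nature study describes.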
At the end of the day, AI systems are built by humans and trained on the data we select. If bias is present in the way their algorithms are engineered or the information they are given about the world, then that bias is very likely to be replicated in their output. While some researchers are concerned that AI will turn everyone into liberals, others are more worried that it will be used to radicalize people or further extremist agendas.
The International Centre for Counter-Terrorism, based in The Hague, reports that terrorist groups already widely use generative AI to create and spread propaganda. This includes using fake images and videos to spread narratives that align with their values. Terrorist and extremist groups, including Islamic State, have even released guides demonstrating how to use AI to develop propaganda and disinformation.
Often, the aim is simply to sow chaos and confusion, leading to distrust of establishment agencies, institutions and mainstream (which usually means edited and fact-checked) media. It’s also been suggested that extremists can use AI to work out who is susceptible to radicalization in the first place, by predicting who is likely to be sympathetic to their ideology. Again, this is a case of humans using AI to persuade people to adopt their views, rather than an indication that AI itself is extreme or prone to suggesting extreme ideas and behaviors.
However, one inherent risk with AI is its capability to reinforce extreme views through the algorithmic echo-chamber effect. This happens when social media and news platforms use AI to suggest content based on past engagement. This often results in users being shown more of what they already agree with, creating “echo chambers,” where people repeatedly see content that mirrors their existing beliefs.
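To see why this loop tightens, here is a minimal sketch of an engagement-driven recommender. The catalog, topic vectors and similarity scoring are invented for illustration; no real platform works this simply, but the reinforcement dynamic is the same.

```python
import math

# Toy engagement-driven recommender: items are vectors of topic
# weights, and the feed ranks unseen items by cosine similarity to
# the user's engagement history.
CATALOG = {
    "moderate_news":   [0.5, 0.5],
    "partisan_take":   [0.9, 0.1],
    "radical_content": [1.0, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def recommend(history, seen):
    # The user profile is the average of everything engaged with so far.
    profile = [sum(col) / len(history) for col in zip(*history)]
    unseen = {name: vec for name, vec in CATALOG.items() if name not in seen}
    return max(unseen, key=lambda name: cosine(unseen[name], profile))

# A single click on a partisan item pulls the next suggestion toward
# the more extreme end of the catalog rather than back to the middle.
history = [CATALOG["partisan_take"]]
print(recommend(history, seen={"partisan_take"}))  # -> radical_content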
When those reinforced beliefs are extreme, AI can amplify their effect by serving up similar, more radical content. It’s important to remember that while AI is likely to play an increasing role in shaping the way we consume information, it can’t directly influence our beliefs. It should also be noted that AI can help counter these threats: it can detect bias in data that could lead to skewed responses, for example, and it can find and remove extremist content from the Internet.
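As a toy illustration of the first of those capabilities – spotting skew in a dataset before a model is ever trained on it – consider the sketch below. The viewpoint labels and the 70% threshold are invented for illustration, not taken from any production pipeline.

```python
from collections import Counter

# Toy data-bias check: measure how viewpoint labels are distributed
# in a corpus and flag anything heavily over-represented.
def skew_report(labels, threshold=0.7):
    counts = Counter(labels)
    total = sum(counts.values())
    for label, n in counts.most_common():
        share = n / total
        flag = "  <-- over-represented" if share > threshold else ""
        print(f"{label:>12} {share:6.1%}{flag}")

skew_report(["progressive"] * 8 + ["conservative"] * 2)
# Output:
#  progressive  80.0%  <-- over-represented
# conservative  20.0%
```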
Nevertheless, there is clearly a perception, which appears to be justified, that groups of all political affiliations will inevitably use AI to try to steer public opinion. Understanding where misinformation comes from and who might be trying to spread it helps us to hone our critical-thinking skills and become better at recognizing when somebody (or some machine) is trying to influence us.
These skills will become increasingly important as AI becomes more ingrained in everyday life, no matter which way we lean politically.