AI is a Dangerous Weapon in the Age of Misinformation


AI is being misused to spread misinformation, create harmful deepfakes, and commit financial fraud. From celebrity hoaxes to bank runs and eCommerce scams, the consequences are alarming. As lawmakers and tech firms scramble to regulate, the debate over open vs. controlled AI intensifies.

AI’s capacity to spread misinformation is reaching disturbing new heights—perhaps the most alarming form being its weaponization through deepfakes. Recently, Scarlett Johansson called for a ban on deepfake technology after a video surfaced online featuring an AI-generated version of her. The video also included other Jewish actors like Jerry Seinfeld and Mila Kunis, all wearing t-shirts with the word “Kanye” and an image of a middle finger bearing a Star of David at its center.

“...I also firmly believe that the potential for hate speech multiplied by A.I. is a far greater threat than any one person who takes accountability for it. We must call out the misuse of A.I., no matter its messaging, or we risk losing a hold on reality,” the actress told People Magazine.

People frequently turn to AI to speed up their work or to create sensational content, especially in the media industry. Last year, a Wyoming reporter was caught using AI to fabricate quotes and stories. Creating sensational stories with the help of AI has already proven dangerous.

The anti-migrant violence in the UK was born from online misinformation. After three girls were tragically stabbed in the UK, rioters circulated AI-generated images that incited hatred and reinforced harmful stereotypes. According to The Guardian, far-right groups also used AI music generators to create songs with xenophobic content.

This content spread across apps like TikTok via efficient recommendation algorithms. Last October, according to Wired, AI-powered search tools from Google, Microsoft, and Perplexity were found promoting scientific racism in search results.

Remember when Elon Musk’s xAI released Grok-2, an image generator built on Flux with almost no safeguards? The feature let users create uncensored deepfakes, such as Vice President Kamala Harris and Donald Trump posing as a couple. Is this unprecedented creative freedom, or a dangerous threat to democracy and the integrity of public discourse?

Deepfakes of Taylor Swift, female politicians, and children that went viral last year are forcing tech companies to sit up and take notice.

Henry Ajder, a generative AI expert who has studied deepfakes for nearly a decade, told The Algorithm, “We are at an inflection point where the pressure from lawmakers and awareness among consumers is so great that tech companies can’t ignore the problem anymore.”

For example, Google said it is taking steps to keep explicit deepfakes from appearing in search results. Watermarks and protective shields haven’t actually worked so far, but regulation is ramping up.

The UK, for example, has banned both the creation and distribution of nonconsensual explicit deepfakes; the EU has its AI Act, and the US has been pushing the DEFIANCE Act. Meanwhile, startups like Synthesia promise hyperrealistic deepfakes with full bodies that move and hands that wave.

Deepfakes are just getting a whole lot more realistic. How will we stop the evil side of this?

AI is Aiding in Financial Fraud

AI-generated fake news spread on social media is heightening the risk of bank runs, according to a new British study, which says lenders must improve monitoring to detect when disinformation affects customer behavior. Other kinds of fraud are also rampant.

Juniper Research predicts that the value of eCommerce fraud will rise from US$44.3 billion in 2024 to US$107 billion in 2029, a growth of 141%. This is thanks largely to AI, which is fueling the sophistication of attacks across the eCommerce ecosystem; deepfakes created with AI to defeat verification systems are a key threat.

This threat, combined with rising levels of ‘friendly fraud’, where fraud is committed by the customer themselves, such as refund fraud, increasingly threatens merchant profitability. AI is helping fraudsters stay ahead of security measures and commit sophisticated attacks at a larger scale, producing credible messages and synthetic identities at unprecedented frequency.

Should AI be as open as the internet?

Meta’s AI chief, Yann LeCun, has urged that AI should be as open as the internet, since eventually all our interactions with the digital world will be mediated by AI assistants. LeCun explained that platforms like ChatGPT and Llama will constitute a repository of all human knowledge and culture, creating a shared infrastructure like the internet today.

https://youtu.be/Z6X6OZODzMU

He said that we cannot have a small number of AI assistants (OpenAI’s ChatGPT and the like) controlling the digital diet of every citizen across the world. “This will be extremely dangerous for diversity of thought, for democracy, for just about everything,” he added.

As AI becomes more and more human-like, we must remember that it is still not human.

As Microsoft’s Satya Nadella told Bloomberg Technology, AI is software, and it doesn’t display human intelligence. “It has got intelligence, if you want to give it that moniker, but it’s not the same intelligence that I have,” he says.

Navanwita Bora Sachdev, Editor, The Tech Panda