ANI Sues OpenAI For Copyright Infringement

In a first for India, ANI has sued OpenAI for allegedly using its copyrighted news content to train ChatGPT. ANI also claims the chatbot produces fake statements attributed to it, highlighting a growing conflict between AI development and journalistic integrity.


News agency Asian News International (ANI) has sued OpenAI for copyright infringement, Hindustan Times reported on November 19. ANI alleges that the AI startup used its “original news content” in an unauthorised manner to train its large language models (LLMs). The suit also alleges that OpenAI’s chatbot, ChatGPT, can reproduce ANI’s content verbatim in response to user queries.

ANI further claimed that OpenAI attributed events and statements that never happened to the news agency, which it said posed a threat to its reputation and could lead to the spread of fake news. This is the first instance of an Indian news publisher suing an AI company for copyright infringement. Background: ANI’s allegations echo those made by other news organisations against OpenAI and other generative AI startups.



OpenAI is the subject of at least nine lawsuits filed by various writers and news organisations, including The New York Times and the Center for Investigative Reporting. AI developers train large language models (LLMs) like OpenAI’s GPT-4 on vast quantities of data scraped from the internet, often without the knowledge or consent of the creators, and the composition of these training datasets is at the heart of such concerns.

The New York Times argued that ChatGPT relied on Times journalism when answering questions about current affairs, and provided examples of ChatGPT responding to queries with near-verbatim excerpts from paywalled articles. A WIRED investigation in June also cast a shadow over fellow AI chatbot and news aggregator Perplexity: rather than simply scraping and summarising news articles, Perplexity relied on incomplete data such as URLs, extracts, and metadata to hallucinate the content of news reports.
