Tech giants Google, Meta, and X have submitted their official statements on deepfake regulation to a central committee formed by the Indian government. Their inputs were part of a consultation process convened by the Ministry of Electronics and Information Technology (MeitY), following a Delhi High Court order. According to a report by The Indian Express, the companies outlined their policies for handling AI-generated and manipulated media, emphasizing their existing frameworks and ongoing efforts.
Background

Deepfakes have been a growing concern in India, prompting legal and regulatory discussions. The Delhi High Court’s intervention came after multiple petitions raised alarms about the unchecked spread of deepfake technology. One such case, filed in November 2024 by professional model Kanchan Nagar, called for stricter rules to prevent the use of deepfakes in advertising and other commercial activities without the rightful consent of, or compensation for, original artists.
Represented by advocate Mumtaz Bhalla, the petition argued that unregulated deepfake usage exploits artists’ identities for profit. Additionally, there are at least three other pending petitions before the Delhi High Court pressing for deepfake regulation. Among them is a plea by India TV’s Chairman and Editor-in-Chief, Rajat Sharma, who highlighted the potential dangers of deepfakes, such as spreading misinformation, misleading the public, and threatening democratic processes.
His petition urged the government to restrict access to deepfake-generating software and enforce stricter controls on AI-manipulated media.

MeitY’s Status Report on Deepfake Regulation

In response to the growing concerns about the misuse of deepfake technology, MeitY submitted a comprehensive status report to the Delhi High Court in March 2025. The report acknowledged the significant risks associated with deepfake technology, particularly its potential use in misinformation, defamation, cybercrime, and violation of personal rights.
The report emphasized the urgent need for regulatory measures, enhanced detection technologies, and public awareness programs to tackle this emerging threat. As part of the status report, MeitY outlined the two key meetings that have taken place so far:

First Meeting (December 20, 2024)

The Committee on deepfake-related issues held its inaugural meeting under the chairmanship of Ms. Kavita Bhatia, MeitY.
The key outcomes were:

- Committee Mandate: The committee will review international regulations and consult stakeholders to create recommendations on legislative, technical, and awareness frameworks.
- International Best Practices: The committee discussed the European Union’s regulations and the need for a global approach to deepfake regulation.
- Forensic Tools: The committee emphasized developing accurate AI detection tools and collaborating with technology firms to trace the origins of malicious content.
- Public Awareness: The committee highlighted the need for large-scale public education to help people identify deepfakes.

Second Meeting (January 21, 2025)

The second meeting focused on gathering views on deepfake regulation. Stakeholders, including Google, Meta, X, and civil society representatives, discussed the following:

- Challenges in Detection: Participants highlighted the limitations of current detection tools, especially in recognizing diverse accents and audio-only deepfakes.
- Legal Frameworks: Many agreed that existing laws, such as the IT Act and IT Rules, are sufficient but require stronger enforcement mechanisms.
- Industry Policies: Google, Meta, and X outlined their policies for countering deepfakes. Google introduced a policy in November 2023 requiring creators to disclose AI-generated content. Meta’s policy, introduced in April 2024, mandates the disclosure of AI-generated media. X emphasized that not all AI content is harmful and argued that regulation should focus on malicious intent.
- Regional Outreach: The committee proposed capacity-building programs, especially in rural areas, to address the lack of awareness about deepfakes.
- Collaboration: A call was made for public-private partnerships to develop standardized detection frameworks and regulatory norms.

Google’s Response

Google informed the committee that it has had a policy in place since November 2023 to counter deepfake content. The company stated that it actively uses AI to detect and remove harmful, manipulated media.
It also mandates that content creators disclose AI-generated elements in their work. Additionally, Google provides a mechanism for individuals to report unauthorized deepfake usage of their likeness, allowing affected users to request content removal.

Meta’s Stand

Meta, which introduced its AI labeling policy in April 2024, said that it requires users to disclose AI-generated content when uploading media, including advertisements.
The company clarified that its policies apply broadly to all forms of digital alterations, not just deepfakes. A Meta representative told the committee, “Many of our policies are technology-neutral, meaning they apply regardless of whether the alteration is a deepfake or another form of digital modification.” Meta also revealed that it is working on safeguards to prevent celebrities’ identities from being misused through deepfake technology.
X’s Perspective

X, formerly known as Twitter, took a slightly different stance, cautioning against treating all AI-generated content as harmful. The company stated that it has a “synthetic and manipulated media policy” under which deceptive and harmful deepfakes are removed. However, it argued that not all AI content is misleading.
A company spokesperson told the committee, “It is important to draw that distinction going forward.” X maintained that deepfake regulations should focus on malicious intent rather than restricting creative applications of AI technology.

Next Steps

In a recent update to the Delhi High Court, MeitY stated that its deepfake sub-committee requires an additional three months to complete consultations and finalize its report.
The Union government informed the court that the committee, originally given three months in November 2024, now needs more time to ensure thorough deliberations. Furthermore, the Delhi High Court directed MeitY to consider suggestions from creative professionals, artists, and the Advertising Standards Council of India (ASCI) while formulating rules and regulations related to deepfakes.

The MeitY panel is expected to continue consultations over the next three months.
It will hear testimonies from individuals affected by deepfake misuse and finalize its recommendations, which will shape India’s regulatory approach to deepfake technology and its impact on digital media, public trust, and individual rights.

Why It Matters

Deepfake technology poses a significant global challenge, with many countries already implementing strict regulations. Spain, implementing the EU’s AI Act, imposes heavy fines for unlabeled AI-generated content, including deepfakes, requiring clear disclosure from the moment of exposure.
The UK has taken a step further, making the creation and sharing of explicit deepfakes a criminal offense, especially in cases where they harm individuals, particularly women and girls.

In India, deepfakes have already been used in political campaigns, spreading misinformation and manipulating public opinion during elections. This not only undermines democratic processes but also risks the privacy and dignity of individuals.
As deepfake technology becomes more accessible, its potential for misuse grows, making it crucial for India to establish regulatory measures. MeitY’s ongoing consultation on deepfake regulations is an important step, but it is essential that India takes swift action to protect its citizens and ensure that deepfakes do not compromise elections, media integrity, or personal rights.

Also Read:

- YouTube Introduces New ‘Captured with a Camera’ Label to Combat Deepfakes
- YouTube To Launch AI Tools To Help Creators Detect Their Deepfakes
- South Korea Investigates Telegram Over Alleged Role in Deepfake Porn Distribution

The post MeitY’s Deepfake Panel Hears From Google, Meta, X and Civil Society Amid Rising Misuse Concerns appeared first on MEDIANAMA.