Google urges less stringent AI copyright rules as part of Trump's 'AI Action Plan'


GOOGLE has published a policy proposal in response to Trump's call for an AI action plan. Following in OpenAI's footsteps, Google is asking the Trump administration to weaken copyright restrictions on AI training. Many observers see Google's stance as another sign that tech firms are joining forces to shape federal rules around emerging artificial intelligence technologies.

Central to Google's recommendations is a call to lower intellectual property barriers. The company stresses that "fair use and text-and-data mining exceptions" should be recognized as "critical" for continued AI research. Google's policy proposal suggests that training models on copyrighted but publicly available data should be allowed without heavy constraints, insisting this practice does not cause meaningful damage to rights holders.



Critics of this approach have already filed lawsuits against Google for drawing on copyrighted data without compensation. As of this writing, U.S. courts have not definitively settled whether fair use shields AI firms in these cases.

Along with loosening copyright limits, Google's plan focuses on shaping export policies for AI chips and related hardware. The company warns that rules recently introduced under the Biden administration could undermine U.S. competitiveness in global AI markets. Google points to "disproportionate burdens" on cloud computing providers, contrasting its position with Microsoft's outlook.

In January, Microsoft said it felt “confident” it could comply with the same rules that Google now claims might slow innovation and business activities. Current export rules, designed to restrict the flow of advanced AI hardware to disfavored nations, do include exceptions for select companies that require major clusters of high-end chips. Yet Google remains concerned that these limitations may harm the long-term economic interests of the United States by hindering AI progress, especially in an era of growing international competition.

The policy paper encourages a balanced approach that secures national safety while allowing U.S. businesses to thrive overseas.

The proposal also underscores the importance of "long-term, sustained" investment in foundational research and development. Citing the risks of cost-cutting, Google advises the government to preserve vital grant awards and boost support for "early-market R&D," so that scientists and organizations can access sophisticated computing resources and advanced models. This stance follows the new administration's focus on recalibrating national priorities, although recent moves to trim parts of the federal budget have worried research institutions.

Google's argument for more consistent regulation at the federal level reflects growing tension among lawmakers. Just over two months into 2025, the number of proposed AI-related bills in the United States stands at 781, highlighting the fragmented policy environment. In its document, Google points to the array of state-level AI laws as proof that the present approach is too scattered.

The company favors a unified privacy and security framework to prevent developers and users from getting tangled in conflicting requirements across multiple states. On the issue of liability, Google maintains that AI developers cannot always oversee how models are deployed. Some policymakers have floated imposing usage liability on developers, but the company counters that, in many instances, misuse arises from activities outside the developer’s direct control.

California’s defeated SB 1047 was one such measure that spelled out responsibilities for AI creators, including mandated safety tests and clarity on when a developer might be held accountable for harm. The tech firm opposed that legislation, arguing that model users—rather than the creators—are often better placed to assess downstream risks and maintain system safety. Notably, the proposal criticizes emerging disclosure mandates such as those in the European Union’s upcoming AI Act.

This legislation would require developers to provide detailed information on how their models are built and trained, as well as guidance on possible dangers or performance gaps. Google warns that those requirements may inadvertently expose trade secrets or help adversaries "jailbreak" AI systems. Similar measures have already cropped up in parts of the U.S., such as California's AB 2013, which compels AI developers to release a public summary of their training data sources.

***

This article was originally published on Cryptopolitan.com