2024 Showed It Really Is Possible to Rein in AI

From crackdowns on discriminatory facial recognition to protections against deepfakes, 2024 was quietly an important year for AI regulation.


Nearly all the big AI news this year was about how fast the technology is progressing, the harms it’s causing, and how soon it might grow past the point where humans can control it. But 2024 also saw governments make significant inroads into regulating algorithmic systems. Here is a breakdown of the most important AI legislation and regulatory efforts from the past year at the state, federal, and international levels.

State

U.S. state lawmakers took the lead on AI regulation in 2024, introducing hundreds of bills. Some had modest goals, such as creating study committees, while others would have imposed serious civil liability on AI developers in the event their creations caused catastrophic harm to society.

The vast majority of the bills failed to pass, but several states enacted meaningful legislation that could serve as models for other states or Congress (assuming Congress ever starts functioning again). As AI slop flooded social media ahead of the election, politicians in both parties got behind anti-deepfake laws. More than 20 states now have prohibitions against deceptive AI-generated political advertisements in the weeks immediately before an election.

Bills aimed at curbing AI-generated pornography, particularly images of minors, also received strong bipartisan support in states including Alabama, California, Indiana, North Carolina, and South Dakota. Unsurprisingly, given that it’s the backyard of the tech industry, some of the most ambitious AI proposals came out of California. One high-profile bill would have forced AI developers to take safety precautions and held companies liable for catastrophic damages caused by their systems.

That bill passed both bodies of the legislature amid a fierce lobbying effort but was ultimately vetoed by Governor Gavin Newsom. Newsom did, however, sign more than a dozen other bills aimed at less apocalyptic but more immediate AI harms. One new California law requires health insurers to ensure that the AI systems they use to make coverage determinations are fair and equitable.

Another requires generative AI developers to create tools that label content as AI-generated. And a pair of bills prohibits the distribution of a dead person’s AI-generated likeness without prior consent and mandates that agreements for living people’s AI-generated likenesses must clearly specify how the content will be used.

Colorado passed a first-of-its-kind law in the U.S. requiring companies that develop and use AI systems to take reasonable steps to ensure the tools aren’t discriminatory. Consumer advocates called the legislation an important baseline.

It’s likely that similar bills will be hotly debated in other states in 2025. And, in a middle finger to both our future robot overlords and the planet, Utah enacted a law that prohibits any governmental entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants, and other non-human things.

Federal

Congress talked a lot about AI in 2024, and the House ended the year by releasing a 273-page bipartisan report outlining guiding principles and recommendations for future regulation.

But when it came to actually passing legislation, federal lawmakers did very little. Federal agencies, on the other hand, were busy all year trying to meet the goals set out in President Joe Biden’s 2023 executive order on AI. And several regulators, particularly the Federal Trade Commission and Department of Justice, cracked down on misleading and harmful AI systems.

The work agencies did to comply with the AI executive order wasn’t particularly sexy or headline-grabbing, but it laid important foundations for the governance of public and private AI systems in the future. For example, federal agencies embarked on an AI-talent hiring spree and created standards for responsible model development and harm mitigation. And, in a big step toward increasing the public’s understanding of how the government uses AI, the Office of Management and Budget wrangled (most of) its fellow agencies into disclosing critical information about the AI systems they use that may impact people’s rights and safety.

On the enforcement side, the FTC’s Operation AI Comply targeted companies using AI in deceptive ways, such as to write fake reviews or provide legal advice, and it sanctioned the AI gun-detection company Evolv for making misleading claims about what its product could do. The agency also settled an investigation with the facial recognition company IntelliVision, which it accused of falsely saying its technology was free of racial and gender bias, and banned the pharmacy chain Rite Aid from using facial recognition for five years after an investigation determined the company was using the tools to discriminate against shoppers. The DOJ, meanwhile, joined state attorneys general in a lawsuit accusing the real estate software company RealPage of a massive algorithmic price-fixing scheme that raised rents across the nation.

It also won several antitrust lawsuits against Google, including one involving the company’s monopoly over internet searches that could significantly shift the balance of power in the burgeoning AI search industry.

Global

In August, the European Union’s AI Act went into effect. The law, which is already serving as a model for other jurisdictions, requires AI systems that perform high-risk functions, such as assisting with hiring or medical decisions, to undergo risk mitigation and meet certain standards around training data quality and human oversight.

It also bans the use of other AI systems, such as algorithms that could be used to assign a country’s residents social scores that are then used to deny rights and privileges.

In September, China issued a major AI safety governance framework. Like similar frameworks published by the U.S. National Institute of Standards and Technology, it’s non-binding but creates a common set of standards for AI developers to follow when identifying and mitigating risks in their systems.

One of the most interesting pieces of AI policy legislation comes from Brazil. In late 2024, the country’s senate passed a comprehensive AI safety bill. It faces a challenging road forward, but if ultimately enacted, it would create an unprecedented set of protections for the kinds of copyrighted material commonly used to train generative AI systems. Developers would have to disclose which copyrighted material was included in their training data, and creators would have the power to prohibit the use of their work for training AI systems or to negotiate compensation agreements based, in part, on the size of the AI developer and how the material would be used.

Like the EU’s AI Act, the proposed Brazilian law would also require high-risk AI systems to follow certain safety protocols.