Google removes language on weapons from public AI principles

Removing the ‘harm’ clause may have implications for the type of work Google will pursue, said a former Google AI researcher.


Alphabet’s Google has removed a key passage about applications it will not pursue from its publicly listed artificial intelligence principles, which guide the tech giant’s work in the industry. The company’s AI Principles previously included a section titled “applications we will not pursue,” which listed “technologies that cause or are likely to cause overall harm,” including weapons, according to screenshots viewed by Bloomberg. That language is no longer visible on the page.

“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” Google said in a blog post Tuesday. “And we believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.” Removing the “harm” clause may have implications for the type of work Google will pursue, said Margaret Mitchell, a former Google AI researcher.

“Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically it means Google will probably now work on deploying technology directly that can kill people,” she said. Google did not immediately respond to a request for comment on its specific plans.