Google on Tuesday updated its ethical guidelines around artificial intelligence, removing commitments not to apply the technology to weapons or surveillance.
The company’s AI principles previously included a section listing four “Applications we will not pursue.” As recently as Thursday, that included weapons, surveillance, technologies that “cause or are likely to cause overall harm,” and use cases contravening principles of international law and human rights, according to a copy hosted by the Internet Archive.
A spokesperson for Google declined to answer specific questions about its policies on weapons and surveillance, but referred to a blog post published Tuesday by the company’s head of AI, Demis Hassabis, and its senior vice president for technology and society, James Manyika.
The executives wrote that Google was updating its AI principles because the technology had become much more widespread and there was a need for companies based in democratic countries to serve government and national security clients.
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” Hassabis and Manyika wrote. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
Google’s updated AI principles page includes provisions that say the company will use human oversight and take feedback to ensure that its technology is used in line with “widely accepted principles of international law and human rights.” The principles also say the company will test its technology to “mitigate unintended or harmful outcomes.”
Hassabis joined Google in 2014 after it acquired DeepMind, the AI start-up he co-founded. He said in a 2015 interview that the terms of the acquisition stipulated that DeepMind technology would never be used for military or surveillance purposes.
Google’s principles restricting national security use cases for AI had made it an outlier among leading AI developers.
Late last year, ChatGPT-maker OpenAI said it would work with defense technology company Anduril to develop technology for the Pentagon. Anthropic, which offers the chatbot Claude, announced a partnership with defense contractor Palantir to help U.S. intelligence and defense agencies access versions of Claude through Amazon Web Services.
Google’s rival tech giants Microsoft and Amazon have long partnered with the Pentagon. Amazon founder Jeff Bezos owns The Washington Post.
“Google’s announcement is more evidence that the relationship between the U.S. technology sector and (Defense Department) continues to get closer, including leading AI companies,” said Michael Horowitz, a political science professor at the University of Pennsylvania who previously worked on emerging technologies for the Pentagon.
Artificial intelligence, robotics and related technologies are increasingly important to the U.S. military, he said, and the U.S. government has worked with tech companies to understand how such technology can be used responsibly. “It makes sense that Google has updated its policy to reflect the new reality,” Horowitz said.
Lilly Irani, a professor at the University of California at San Diego and a former Google employee, said the statements by executives Tuesday were not new.
Former Google CEO Eric Schmidt has for years stoked fears that the United States was at risk of being overtaken in AI by China, Irani said, and the company’s original AI principles also claimed it would respect international law and human rights. “But we’ve seen those limits routinely broken by the U.S. state,” she said. “This is an opportunity to remind ourselves of the emptiness of Google’s promises.”
Google’s policy change aligns with a view in the tech industry, embodied by President Donald Trump’s top adviser Elon Musk, that companies should work in service of U.S. national interests. It also follows moves by tech giants to publicly disavow their previous commitments to race and gender equality and workforce diversity, policies opposed by the Trump administration.
The search giant was also among the tech companies and executives that made million-dollar donations to Trump’s inauguration committee. Google CEO Sundar Pichai was among the tech leaders granted prominent seats at the ceremonies on Inauguration Day.
Google and AI technology have recently been at the heart of geopolitical tensions between the United States and China. Shortly after Trump’s tariffs on Chinese imports took effect Tuesday, Beijing announced an antitrust probe into Google, along with retaliatory tariffs on some American products.
Last week, Chinese start-up DeepSeek stoked fears that the United States was losing its lead in AI after it released a free version of an AI assistant app similar to OpenAI’s ChatGPT.
Google first published its AI principles in 2018 after employees protested a contract with the Pentagon applying Google’s computer vision algorithms to analyze drone footage. Thousands of workers signed an open letter to Google’s CEO that stated, “We believe that Google should not be in the business of war.”
In addition to introducing the AI principles, Google also opted not to renew the Pentagon contract, known as Project Maven.
Investors and executives behind Silicon Valley’s rapidly expanding defense sector frequently invoke Google employee pushback against Project Maven as a turning point within the industry.
Some Google workers have also protested an ongoing cloud computing contract the company has with the government of Israel, arguing that Google’s AI technology could contribute to policies that harm Palestinians. The Post recently reported that internal documents show Google provided Israel’s Defense Ministry and the Israel Defense Forces greater access to its AI tools after Hamas attacked Israel on Oct. 7, 2023.