Google's parent company Alphabet has dropped its pledge not to use artificial intelligence (AI) in weapons systems or surveillance tools.
In 2018, Alphabet CEO Sundar Pichai wrote a blog post outlining the company's "AI principles," in which he listed AI applications that the company would not pursue.
Those included "technologies that cause or are likely to cause overall harm," as well as weapons or other technologies "whose principal purpose or implementation is to cause or directly facilitate injury to people," and technologies that are used for surveillance "violating internationally acceptable norms."
However, the company's recently updated AI principles have dropped these commitments, a decision defended in a February 4 blog post by James Manyika, Alphabet's SVP of research, labs, technology, and society, and Demis Hassabis, CEO and co-founder of Google DeepMind.
Hassabis and Manyika wrote: "Since we first published our AI Principles in 2018, the technology has evolved rapidly." The post later adds: "There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."
The company's new AI Principles, set out "with that backdrop," focus on using AI for innovation, developing and deploying it responsibly, and working on it collaboratively.
The two wrote that the company will "continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights." The company states that it will continue to have "specific product policies" and clear terms of use that "contain prohibitions like illegal use of our services."
While Google maintains that it will remain compliant with international law and human rights principles, the watering down of its AI principles opens the door to overt military applications and to the use of AI for surveillance.
Following the news, Human Rights Watch wrote: "Google’s pivot from refusing to build AI for weapons to stating an intent to create AI that supports national security ventures is stark. Militaries are increasingly using AI in war, where their reliance on incomplete or faulty data and flawed calculations increases the risk of civilian harm. Such digital tools complicate accountability for battlefield decisions that may have life-or-death consequences."
Hassabis joined Google in 2014 after it acquired his AI startup DeepMind. In a 2015 interview with Wired, he said that the terms of the acquisition included that DeepMind technology would never be used for military or surveillance purposes.
Shortly before the updated AI Principles were published, reports resurfaced that Google had provided its cloud computing and AI technology to the Israel Defense Forces (IDF) and Israel's Defense Ministry following the October 7, 2023, attack on Israel by Hamas. Al Jazeera estimates that 62,614 Palestinians and 1,139 people in Israel have been killed since October 7. Israel and Gaza reached a fragile truce on January 15, 2025.
The January reports suggested that Google "directly" assisted the Defense Ministry, including by giving it access to Google's Vertex AI technology, though exactly how Google's AI was used has not been disclosed. Google was also one of the winners, alongside Amazon, of the 2021 Project Nimbus contract to provide cloud services to the Israeli government.
Gaby Portnoy, director general of the Israeli government’s National Cyber Directorate, said at a conference in 2024: “Thanks to the Nimbus public cloud, phenomenal things are happening during the fighting, these things play a significant part in the victory - I will not elaborate.”
The US, UK, and Israel all notably oppose measures to restrict AI-powered weapons systems, despite a majority of countries in the United Nations First Committee voting in favor of such measures in November 2023.
In the US, Microsoft and Amazon Web Services (AWS) have long held contracts with the Pentagon, and both have also been reported as working with Israel's Defense Ministry and the IDF. Throughout 2024, companies including OpenAI, Anthropic, and Meta rolled back limitations in their AI usage policies to allow US defense and intelligence agencies to use their technology.