![](https://www.thegatewaypundit.com/wp-content/uploads/2025/02/military-ai-robot-by-grok-1200x630.jpg)
We live in a world where AI is already widely used in a variety of weapons systems by a number of countries.
Drones and UAVs are a prime example, with AI selecting and engaging targets without human intervention; loitering munitions (‘kamikaze drones’) likewise identify and engage targets on their own. Also in development are ‘swarming technologies’, in which multiple AI-controlled drones operate in coordination.
But there’s much more: missile defense systems use AI to automatically detect and engage incoming missiles or aircraft; AI-enabled targeting systems identify targets in conflict zones; autonomous naval systems (unmanned ships) are under development; and in DARPA’s Air Combat Evolution (ACE) program, AI has piloted an actual F-16 in flight.
On top of it all, there are AI-enhanced logistics and decision-support systems optimizing resource allocation and tactical decisions.
So it would make no sense, really, for a top-tier player in the AI landscape like Google to opt out of this ongoing revolution in weapons and surveillance systems.
![](https://www.thegatewaypundit.com/wp-content/uploads/2025/02/military-ai-robot-by-grok-22.jpg)
Gizmodo reported:
“Google dropped a pledge not to use artificial intelligence for weapons and surveillance systems on Tuesday. And it’s just the latest sign that Big Tech is no longer concerned with the potential blowback that can come when consumer-facing tech companies get big, lucrative contracts to develop police surveillance tools and weapons of war.”
In 2018, Google was revealed to have a contract with the US Department of Defense for Project Maven, which used AI to analyze drone imagery.
“Shortly after that, Google released a statement laying out ‘our principles’, which included a pledge to not allow its AI to be used for technologies that ‘cause or are likely to cause overall harm’, weapons, surveillance, and anything that ‘contravenes widely accepted principles of international law and human rights’.”
![](https://www.thegatewaypundit.com/wp-content/uploads/2024/04/sundar-pinchi-1200x630.jpeg)
But Google has announced ‘updates’ to its AI Principles: all of the previous vows not to use AI for weapons and surveillance are now gone.
The page now lists three principles, starting with ‘Bold Innovation’.
“We develop AI that assists, empowers, and inspires people in almost every field of human endeavor; drives economic progress; and improves lives, enables scientific breakthroughs, and helps address humanity’s biggest challenges,” the website reads in the kind of Big Tech corporate speak we’ve all come to expect.
Google now promises to develop AI ‘where the likely overall benefits substantially outweigh the foreseeable risks’.
On the ethics of AI, Google claims to employ ‘rigorous design, testing, monitoring, and safeguards to mitigate unintended or harmful outcomes and avoid unfair bias’.
Read more:
Google Scraps Diversity Hiring Targets — Will Also ‘Review’ All Its DEI Programs