
Google Lifts a Ban on Using Its AI for Weapons and Surveillance




Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue “technologies that cause or are likely to cause overall harm,” “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “technologies that gather or use information for surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”

The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. “We’ve made updates to our AI Principles. Visit AI.Google for the latest,” the note reads.

In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the “backdrop” for why Google’s principles needed to be overhauled.

Google first published the principles in 2018 as it moved to quell internal protests over the company’s decision to work on a US military drone program. In response, it declined to renew the government contract and also announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.

But in an announcement on Tuesday, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google’s AI initiatives. Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states Google will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” Google also now says it will work to “mitigate unintended or harmful outcomes.”

“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” wrote James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company’s esteemed AI research lab. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

They added that Google will continue to focus on AI projects “that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights.”


Got a Tip?

Are you a current or former employee at Google? We’d like to hear from you. Using a nonwork phone or computer, contact Paresh Dave on Signal/WhatsApp/Telegram at +1-415-565-1302 or [email protected], or Caroline Haskins on Signal at +1 785-813-1084 or at [email protected]


US President Donald Trump’s return to office last month has galvanized many companies to revise policies promoting equity and other liberal ideals. Google spokesperson Alex Krasov says the changes have been in the works much longer.

Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives. Gone are phrases such as “be socially beneficial” and maintain “scientific excellence.” Added is a mention of “respecting intellectual property rights.”


