Google scraps AI guidelines on surveillance and weapons

Tech corporation revises AI ethical stance amid evolving industry landscape
Photo credit: shutterstock.com/Tada Images

Google has removed from its AI guidelines the pledge not to use the technology for surveillance or weapons. The significant policy shift comes amid growing discussion of AI regulation and corporate responsibility in the tech sector.

The tech giant’s parent company, Alphabet, had previously vowed that it would not pursue technologies that “cause or are likely to cause overall harm”. This commitment had been a cornerstone of Google’s ethical AI framework since its inception.

However, the guidelines have now been amended and the pledge removed. The change represents a major shift in Google’s approach to AI development and implementation, reflecting broader industry trends toward more flexible AI governance frameworks.

The removed section had stated that Google would not pursue applications “that gather or use information for surveillance violating internationally accepted norms”. This specific guideline had been widely cited as an example of responsible AI development practices in the tech industry.

The company now says it will apply “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” This new approach emphasizes procedural safeguards rather than absolute prohibitions.

The AI principles were first published in 2018. Google senior vice president James Manyika and Sir Demis Hassabis, who leads the firm’s AI lab, Google DeepMind, explained that, given the rise in the use of artificial intelligence, the principles were long overdue for an update. The original principles had guided Google’s AI development for more than six years.

“Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organizations and individuals use to build applications,” they said in a blog post. Industry statistics show that global AI adoption has increased by over 270% in the past four years.

“It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers.” Market research indicates that AI technologies now influence over 60% of global digital interactions.

Google – which previously faced criticism over its controversial $1.2 billion cloud computing and AI agreement with the Israeli government – emphasized the need to work with governments and organizations to “create AI that protects people, promotes global growth, and supports national security.” This stance reflects a broader industry trend toward public-private partnerships in AI development.

“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” the pair added. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.” This statement aligns with increasing calls for democratic oversight of AI development.

The policy revision comes at a time when global investment in AI has reached unprecedented levels, with projections indicating a market value of $190 billion by 2025. Industry experts suggest that this guideline change could influence similar policy decisions across the tech sector.

The modification of these guidelines has sparked discussions among AI ethics experts and industry observers. Some argue that the change reflects a more pragmatic approach to AI development, while others express concern about potential implications for privacy and human rights.

This policy shift occurs against the backdrop of increasing global competition in AI development, with various nations and corporations racing to establish dominance in the field. According to recent surveys, over 80% of major tech companies are actively developing AI capabilities for various applications.

The decision also raises questions about the balance between technological advancement and ethical considerations in AI development. Recent studies indicate that public concern about AI applications in surveillance and security has grown by 45% in the past two years.
