Google drops its pledge not to pursue AI for weapons and surveillance

Google has updated its AI Ethics Policy to permit weapons and surveillance applications, a shift from its prior pledge not to pursue harmful technology.

Google has dropped its pledge to avoid Artificial Intelligence (AI) applications for weapons and surveillance in the latest revision of its AI Ethics Policy, marking a significant departure from the company’s previous stance, which included a commitment not to pursue AI technologies that could cause overall harm or violate internationally accepted norms.

The updated policy states that Google will use AI responsibly and in alignment with “widely accepted principles of international law and human rights.” However, explicit language regarding weapons and surveillance has been omitted.

The revision of Google’s AI ethics policy follows the inauguration of U.S. President Donald Trump, an event attended by Alphabet Inc. CEO Sundar Pichai alongside other tech industry leaders, including Amazon Executive Chairman Jeff Bezos and Meta Chairman and CEO Mark Zuckerberg. Shortly after taking office, Trump rescinded an executive order issued by former President Joe Biden that required AI developers to share safety test results with the government before launching new technologies.

Google DeepMind CEO Demis Hassabis and James Manyika, Senior Vice President of Research, Labs, Technology & Society, said in a blog post, “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”

They further wrote, “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

Google first introduced its AI Principles in 2018 following internal protests over its participation in the United States (U.S.) Department of Defense’s Project Maven. The project, which involved using AI to enhance drone strike targeting capabilities, sparked employee backlash, leading to resignations and a petition urging the company to distance itself from military applications. As a result, Google opted not to renew its contract with the Pentagon and later withdrew from bidding on a $10 billion U.S. cloud computing contract, citing concerns that the work might conflict with its ethical guidelines.

With AI regulation and national security concerns at the forefront of global discussions, Google’s updated policy signals a shift that could have broad implications for how technology companies balance ethical commitments with business and governmental partnerships.

Last year, OpenAI said it would work with defense manufacturer Anduril to develop technology for the Pentagon. Anthropic, which offers the Claude chatbot, announced a partnership with defense contractor Palantir to help U.S. intelligence and defense agencies access versions of Claude through Amazon Web Services. Google’s rival tech giants Microsoft and Amazon have long partnered with the Pentagon.
