Google’s AI Evolution: Gemini Takes Center Stage Amid Ethical Concerns

Google's AI platform, Gemini, now features prominently at the top of Google search results, delivering AI-written summaries to users. The technology has also arrived on Google Pixel phones, marking a significant step in the company's AI strategy. The development comes as Alphabet Inc., Google's parent company, revises its stance on the ethical use of artificial intelligence, stirring debate about the future of AI in sensitive applications.

In 2018, Google faced internal dissent when it chose not to renew a contract with the U.S. Pentagon for AI work. The decision followed resignations and a petition signed by thousands of employees who feared the "Project Maven" initiative could lead to AI being used for lethal purposes. Google's founders, Sergey Brin and Larry Page, had once adopted the motto "Don't be evil," but the company's restructuring in 2015 introduced a new guiding principle: "Do the right thing."

In a recent blog post, James Manyika and Demis Hassabis from Google defended their decision to update the company's AI principles. They argued that as AI technology has evolved, so too must the guiding principles that frame its use.

"Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications." – Demis Hassabis and James Manyika

Published just before Alphabet's end-of-year financial report, the blog post coincided with results that fell short of market expectations and weighed on the company's share price. Despite this, Google remains committed to its AI projects, planning to invest $75 billion this year, 29% more than Wall Street analysts anticipated. The investment focuses on building AI infrastructure, supporting AI research, and developing applications such as AI-powered search tools.

Alphabet no longer guarantees that it will avoid using AI for developing weapons or surveillance tools. This shift reflects an acknowledgement that the original AI principles from 2018 required updates to match the rapid advancement of technology.

"We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights," – Demis Hassabis and James Manyika

As billions incorporate AI into their daily routines, Google faces ongoing internal challenges from staff who occasionally oppose executive decisions regarding AI's trajectory. The conversation surrounding ethical AI use continues to evolve within the company as it balances technological progress with moral considerations.
