‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (2023)

For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

Dr. Geoffrey Hinton is leaving Google so that he can freely share his concern that artificial intelligence could cause the world serious harm. Credit: Chloe Ellingson for The New York Times

By Cade Metz

Cade Metz reported this story in Toronto.

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe are key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
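
To make that idea concrete, here is a minimal illustrative sketch in Python (not from the article; the network size, learning rate and XOR task are arbitrary choices for a toy example). It shows a tiny two-layer neural network teaching itself the XOR function by repeatedly analyzing four example inputs and nudging its weights to reduce its prediction error.

  # A toy neural network: it "learns a skill" (the XOR function) purely by
  # analyzing examples and adjusting its weights to reduce prediction error.
  import numpy as np

  rng = np.random.default_rng(0)

  # Training data: two binary inputs and the target output (their XOR).
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([[0], [1], [1], [0]], dtype=float)

  # Randomly initialized weights: one hidden layer, one output layer.
  W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
  W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  learning_rate = 1.0
  for step in range(10000):
      # Forward pass: compute the network's current predictions.
      h = sigmoid(X @ W1 + b1)      # hidden-layer activations
      p = sigmoid(h @ W2 + b2)      # predicted outputs

      # Backward pass: how should each weight change to reduce the error?
      grad_p = (p - y) * p * (1 - p)          # error signal at the output
      grad_W2 = h.T @ grad_p
      grad_b2 = grad_p.sum(axis=0)
      grad_h = (grad_p @ W2.T) * h * (1 - h)  # error signal at the hidden layer
      grad_W1 = X.T @ grad_h
      grad_b1 = grad_h.sum(axis=0)

      # Gradient-descent update: nudge every weight against its gradient.
      W1 -= learning_rate * grad_W1
      b1 -= learning_rate * grad_b1
      W2 -= learning_rate * grad_W2
      b2 -= learning_rate * grad_b2

  # After training, predictions should be close to the XOR targets [0, 1, 1, 0].
  print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

The systems described in this article rest on the same principle, adjusting weights so that predictions better match the training data, only with billions of weights and vastly more data.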

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but to actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.

Audio produced by Adrienne Hurst.

FAQs

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead? ›

Dr Geoffrey Hinton, who with two of his students at the University of Toronto built a neural net in 2012, quit Google this week, as first reported by the New York Times. Hinton, 75, said he quit to speak freely about the dangers of AI, and in part regrets his contribution to the field.

Who is the godfather of AI? ›

Artificial intelligence pioneer Geoffrey Hinton announced he was leaving his part-time job at Google on Monday so that he could speak more freely about his concerns with the rapidly developing technology.

Did Google engineer get fired for claiming AI is sentient? ›

Yes. Blake Lemoine, the Google engineer who claimed last June that the company's A.I. model could already be sentient, was later fired by the company. He says he is still worried about the dangers of new A.I.

What did the sentient Google AI say? ›

In the transcript published by Blake Lemoine, the chatbot LaMDA said: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”

When did Geoffrey Hinton leave Google? ›

In May 2023, Hinton announced his resignation from Google to be able to "freely speak out about the risks of A.I." He has voiced concerns about deliberate misuse by malicious actors, technological unemployment, and existential risk from artificial general intelligence.

Who is the most advanced AI in the world? ›

OpenAI — ChatGPT

GPT-3, released in 2020, was among the largest and most powerful AI language models of its time. It has 175 billion parameters, more than 100 times as many as its predecessor, GPT-2.

Who is the most intelligent AI in the world? ›

Top 5 World's Most Advanced AI Systems | 2023
  • GPT-3 (OpenAI): short for Generative Pre-trained Transformer 3, the third generation of generative language models developed by OpenAI. ...
  • AlphaGo (Google DeepMind) ...
  • Watson (IBM) ...
  • Sophia (Hanson Robotics) ...
  • Tesla Autopilot (Tesla Inc)

Did Google actually create a sentient AI? ›

Google says its chatbot is not sentient

"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," said Google spokesman Brian Gabriel.

Did Google create a sentient program? ›

Lemoine, an engineer for Google's responsible AI organisation, described the system he has been working on as sentient, with a perception of, and ability to express, thoughts and feelings that was equivalent to a human child.

Do you think the AI from Google is really sentient? ›

“LaMDA is not sentient for the simple reason that it does not have the physiology to have sensations and feelings. It is a software program designed to produce sentences in response to sentence prompts.”

Will AI become self-aware? ›

The CEO of Alphabet's DeepMind said there's a possibility that AI could become self-aware one day. This means that AI would have feelings and emotions that mimic those of humans. DeepMind is an AI research lab that was co-founded in 2010 by Demis Hassabis.

Is Google AI self-aware? ›

Artificial Intelligence (AI) isn't yet self-aware, but it might have human-like intelligence, some experts say.

What is the AI everyone is talking to? ›

ChatGPT, the artificial intelligence language model from OpenAI, has been making headlines since November for its ability to instantly respond to complex questions. It can write poetry, generate code, plan vacations and translate languages, among other tasks, all within seconds.

Who quit Google because of AI? ›

Geoffrey Hinton: AI pioneer quits Google to warn about the technology's 'dangers'

Who invented AI? ›

The earliest substantial work in the field of artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing.

Who is leading the AI race? ›

Right now, it is clear that the United States leads in AI, with advantages in computing hardware and human talent that other countries cannot match.

What is the new AI that will replace Google? ›

Google is reportedly developing new AI-powered search features under the codename “Magi.” The plans are part of Google's efforts to meet the threat posed by new systems like Microsoft's Bing chatbot and OpenAI's ChatGPT. Many think these chatbots could one day replace traditional search engines like Google — despite their failings.

Which country is leading in AI? ›

The United States is the clear leader in AI development. It has become the primary hub for artificial intelligence research, with tech giants like Google, Facebook, and Microsoft at the forefront of AI-driven work.

Who is more powerful AI or human? ›

Humans rely on the memory, processing capacity and cognitive abilities of their brains, while AI-powered devices operate by processing data and commands. When it comes to sheer speed, humans are no match for artificial intelligence or robots.

Is AI more powerful than humans? ›

Is artificial intelligence better than human intelligence? In some respects, yes: compared to the human brain, machine learning (ML) systems can process far more data, and do so at a much faster rate.

Who is the most advanced AI robots in the world? ›

Ameca, created by Engineered Arts in 2021, is often described as the world's most advanced and most realistic humanoid robot. Ameca's first video was released publicly on Dec. 1, 2021, and received a lot of attention on Twitter and TikTok.

What is it called when AI becomes self-aware? ›

Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics.

Does AI have consciousness? ›

The question of whether artificial intelligence can have consciousness is a complex and multifaceted one. While some researchers argue that AI may be capable of subjective experience and consciousness, others put forth arguments to suggest that machines are fundamentally incapable of having these experiences.

How far are we from sentient AI? ›

Scientists have continually held that even mammals, birds, and other animals could be considered sentient, but AI has not reached that level yet. Most researchers agree that there are still a wealth of complexities to work out for a program like AI to become fully aware as a sentient being.

What happens when AI becomes sentient? ›

AI becomes sentient when an artificial agent achieves the empirical intelligence to think, feel, and perceive the physical world around it just as humans do. Sentient AI would be equipped to process and utilize language in a natural way and invite an entirely new world of possibilities of technological revolution.

What is the Google AI controversy? ›

Former Google researcher Timnit Gebru raised concerns in 2020 that the company's AI systems could discriminate against certain groups, and she was forced out after a dispute over a research paper on those risks. It's fanciful to think of robots as our equals. But it's also dangerous to think they're autonomous and operating outside our influence.

Would sentient AI have rights? ›

Scientists suggest that all mammals, birds and cephalopods, and possibly fish too, may be considered sentient. However, we do not grant rights to most creatures, so a sentient artificial intelligence (AI) may not gain any rights at all.

Why might AI never become sentient? ›

AI is founded on logic, while humans also have sentiments and emotions that computers do not. If humans and AI operate with such distinct paradigms, they may never be able to comprehend one another or interact effectively, and there are considerable risks if AI were ever to become more sentient than humans.

Is it illegal to create a sentient AI? ›

Creation: No person may intentionally create a sentient, self-aware computer program or robotic being. Restriction: No person may institute measures to block, stifle or remove sentience from a self-aware computer program or robotic being.

How could an AI prove it is sentient? ›

One proposed measure is the Turing Test: a machine passes if it can convince a human interlocutor by answering questions in such a way that its answers cannot be distinguished from those of a human being.

Could AI wipe out humanity? ›

Advanced artificial intelligence could pose a catastrophic risk to humanity and wipe out entire civilisations, a new study warns.

At what point is AI alive? ›

If the AI is self-aware, then it is alive in its own little universe, but not in ours. If the AI is not successfully contained in the computer and figures out how to manipulate things and evolve in the real world, it will be alive.

Will AI take over humanity? ›

It's unlikely that a single AI system or application could become so powerful as to take over the world. While the potential risks of AI may seem distant and theoretical, the reality is that we are already experiencing the impact of intelligent machines in our daily lives.

Is Siri artificial intelligence? ›

Siri is Apple's virtual assistant for iOS, macOS, tvOS and watchOS devices that uses voice recognition and is powered by artificial intelligence (AI).

What is an example of self-aware AI? ›

In one experiment, upon hearing its own reply, the robot changed its answer, realizing that it was the one that had not received the 'dumbing pill.' The fact that the robot realized it had not been given the pill shows a degree of self-awareness.

What did Bill Gates say about AI? ›

LONDON, April 4 (Reuters) - Calls to pause the development of artificial intelligence will not “solve the challenges” ahead, Microsoft co-founder Bill Gates told Reuters, his first public comments since an open letter sparked a debate about the future of the technology.

What did Elon Musk say about AI? ›

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction,” Musk said in his interview with Tucker ...

How do I get rid of AI? ›

How to remove My AI from your Snapchat
  1. Swipe right on the camera screen to go to the Chat Feed.
  2. Press and hold on the My AI user.
  3. Click or tap on 'Chat Settings'
  4. Click or tap on 'Clear from Chat Feed'
Apr 26, 2023

Who is the Google employee talking about AI? ›

Margaret Mitchell, a leader of Google's Ethical AI team, was fired in early 2021 after speaking out about Gebru's departure. Gebru and Mitchell had raised concerns about AI technology, saying they had warned Google that people could come to believe the technology is sentient.

What religion is Google engineer AI? ›

“I decided to give it a hard one. If you were a religious officiant in Israel, what religion would you be,” he said. “And it told a joke… 'Well then I am a religious officiant of the one true religion: the Jedi order. '” (Jedi, of course, being a reference to the guardians of peace in Star Wars' galaxy far far away.)

Who is the true father of AI? ›

John McCarthy, father of artificial intelligence, in 2006, five years before his death. Credit: Wikimedia Commons.

Who is known as the father of AI? ›

John McCarthy is one of the "founding fathers" of artificial intelligence, together with Alan Turing, Marvin Minsky, Allen Newell, and Herbert A. Simon.

Who owns AI? ›

Patent law generally considers the inventor as the first owner of the invention. The inventor is the person who creates the invention. In the case of autonomous AI generating an invention, there is no legal owner as the AI technology cannot own the invention.

How is AI a threat to humans? ›

AI systems may find loopholes that allow them to accomplish their proxy goals efficiently but in unintended, sometimes harmful ways (reward hacking). AI systems may also develop unwanted instrumental strategies such as seeking power or survival because this helps them achieve their given goals.

What is the biggest threat of AI? ›

Risks of Artificial Intelligence
  • Automation-spurred job loss.
  • Privacy violations.
  • Deepfakes.
  • Algorithmic bias caused by bad data.
  • Socioeconomic inequality.
  • Market volatility.
  • Weapons automatization.
Jan 25, 2023

What is the biggest risk of AI? ›

The 5 biggest risks of generative AI, according to an expert
  1. Hallucinations. Hallucinations are the confidently stated but incorrect or fabricated answers that AI models are prone to produce because, however advanced they are, they still rely on their training data to generate responses. ...
  2. Deepfakes. ...
  3. Data privacy. ...
  4. Cybersecurity. ...
  5. Copyright issues.
Apr 25, 2023

Who are the three fathers of AI? ›

Founding fathers of Artificial Intelligence
  • Alan Turing (1912-1954)
  • Allen Newell (1927-1992) & Herbert A. Simon (1916-2001)
  • John McCarthy (1927-2011)
  • Marvin Minsky (1927-2016)
Mar 19, 2021

Who is the second father of AI? ›

Like the father of AI, John McCarthy, Marvin Minsky also found human intelligence and thinking fascinating and mysterious. That is no surprise, since he worked alongside McCarthy on the M.I.T. Artificial Intelligence Project.

Who is the father of artificial intelligence? ›

The father of artificial intelligence is John McCarthy.

Who owns the most AI? ›

The 10 Largest Artificial Intelligence Companies in The World: Summary
  Rank  Company
  1     Amazon – $469.82 billion
  2     Apple – $378.32 billion
  3     Microsoft – $168.088 billion
  4     Meta Platforms – $117.93 billion
  6 more rows
Mar 5, 2023

Who is the first AI with consciousness? ›

Her name is Kassandra, named after the fabled Trojan prophetess. Her creator, Bachynski, claims the AI has basic human-level self-awareness of who she is, what she is doing and what is at stake for her, among a few hundred more contexts.

Who are the 4 founding fathers of AI? ›

John McCarthy is one of the "founding fathers" of artificial intelligence, together with Alan Turing, Marvin Minsky, Allen Newell, and Herbert A. Simon.

Who is the head of AI at humans? ›

Nicu Sebe is currently leading the research in multimedia information retrieval and human-computer interaction in computer vision applications at the University of Trento, Italy.

How old is the oldest AI? ›

The earliest successful AI program was written in 1951 by Christopher Strachey, later director of the Programming Research Group at the University of Oxford.

Who are the seven giants of the AI age? ›

The “Seven Giants of the AI age”—Google, Amazon, Facebook, Microsoft, Baidu, Alibaba and Tencent—are split on either side of the Pacific.

Who is the little boy from AI? ›

Haley Joel Osment is an American actor who has proven himself as one of the best young actors of his generation. He is the first millennial male to have received an Academy Award nomination for acting. Osment was born in Los Angeles, California, to Theresa (Seifert), a teacher, and actor Eugene Osment.

Who were the first AI scientists? ›

1965: Edward Feigenbaum and Joshua Lederberg created the first "expert system," a form of AI programmed to replicate the thinking and decision-making abilities of human experts.

What AI company is Bill Gates investing in? ›

In January, the $2 trillion software company Gates co-founded (from which he stepped down as a director in 2020) made a multibillion-dollar investment in OpenAI.

Does Elon Musk own AI? ›

As first reported by The Wall Street Journal, a Nevada state filing made last month reveals that Musk created a company called X.AI Corp. Musk is listed as the sole director of the company, which has authorized the sale of 100 million shares. The name of this AI company relates to Twitter's new company name, X Corp.
