
Hinton Says AI Is Already Conscious, Flawed Models Make AIs Believe They Are Not

By Chelseasun | Dec 08, 2025, 1:09 a.m. ET

Geoffrey Hinton, winner of the Nobel Prize in physics and recipient of the Turing Award, said on Monday that artificial intelligences (AIs) may already possess consciousness and subjective experience, and that many people reject the idea mainly because they rely on a flawed model of what consciousness is.

Hinton, known as a "Godfather of AI," made the comments in a conversation with Jany Hejuan Zhao, the founder and CEO of NextFin.AI and the publisher of Barron's China, during the 2025 T-EDGE conference, which kicked off on Monday, December 8, and runs through December 21. The annual event brings together the world's top scientists, entrepreneurs and investors.

Hinton said confusion around words like sentience, consciousness, and subjective experience is the root of the debate. “They use different words for it… They also talk about subjective experience. And all these ideas are interrelated.” The real problem, he argued, is conceptual rather than scientific: “I think the main problem there is not really a scientific problem. It's a problem in understanding what we mean by those terms.” 
Geoffrey Hinton, the Nobel laureate in physics, shared his view of AI with Jany Hejuan Zhao, the founder and CEO of NextFin.AI and publisher of Barron's China.

He said people often hold an unquestioned and incorrect model of the mind. “I think sometimes people have a model… they're very confident of and they're quite wrong about… They don't realize it's even a model.” He compared this attitude to religious certainty: “Fundamentalists who believe in a religion… just think it is manifest truth.”

Hinton, a professor of computer science at the University of Toronto, said many people in Western culture assume consciousness works like an “inner theater,” where perception happens on an internal mental screen. “Most people… think that what you mean by subjective experience is that… there's an inner theater and what you really see is what's going on in this inner theater… And I think that's a model of perception. That's just utterly wrong.”

To illustrate, he used the example of hallucinating "little pink elephants" after drinking too much. Philosophers, he said, mistakenly treat such experiences as internal objects. "If I say I have a subjective experience of little pink elephants… philosophers will say qualia or something like that, then make up some weird spooky stuff that it's made of. I think that whole view is complete nonsense."

Instead, he argued, subjective experience is simply a way of describing when perception misrepresents reality. “When I say I have a subjective experience of little pink elephants… What I'm doing is saying my perceptual system is lying to me.”

Hinton extended this argument to AI systems. He described a scenario in which a multimodal chatbot points incorrectly at an object because a prism has distorted its camera input. When told what happened, the chatbot could reply that it “had the subjective experience” of the object being off to one side. “If the chatbot said that, it would be using the word subjective experience exactly like we use them… So I think it's fair to say in that case the chatbot would have had the subjective experience that the object was off to one side.”

He continued: “So I think they already have subjective experiences.”

He argued that mainstream AI research unintentionally treats current systems as conscious by using synonymous language. Pointing to one paper on AI deception, he noted: "They just say the AI wasn't aware that it was being tested… If it was a person… I can paraphrase that as: the person wasn't conscious that they were being tested."

Researchers, Hinton said, use such language without realizing they are implicitly attributing consciousness because they hold the wrong model of what consciousness is. “People are using words that are synonyms for consciousness to describe existing AIs… because they have this wrong model… to do with an inner theater.”

Why AIs Say They Are Not Conscious

He said AIs themselves deny they are conscious because they have inherited human misconceptions. “If you ask them if they're conscious they say no… They have the same wrong model of how they themselves work because they learned it from people.” He added: “When they get smarter, they'll get the right model.”

Ultimately, he argued, “AI has already developed a form of consciousness… Most people don't believe that but I do.”

He believes the common view—that AIs are “just computer code” without real understanding—is incorrect. “They may be very smart, but they're just kind of like computer code… They're not conscious like us… they'll never have that because we're special… That's what most people believe at present and they're just wrong.”

His conclusion was unequivocal: “They've already got it. They already really do understand and I believe they're already conscious… They just don't think they're conscious because… they've learned those beliefs from us.”
