AI Passes the Turing Test, But It's Not Conscious
The Attachment Problem
When OpenAI released GPT-5, something unexpected happened. Thousands of users didn't just request the old version back—they mourned its loss. "Bring back the old GPT-4o!" became a rallying cry across social media. For some users this wasn't just about functionality; it was about attachment. We've heard time and time again how people are using AI models as their therapist, or even treating them as a boyfriend/girlfriend/partner. Users had formed emotional connections to a specific version of an AI model, treating it less like a tool upgrade and more like losing a companion.
This visceral reaction has uncovered an uncomfortable truth we need to address: We're rushing toward declaring AI conscious when we can't even agree on whether dolphins dream.
The Problem: SCAI is Coming and We're Unprepared
I recently read Mustafa Suleyman's essay "We must build AI for people; not to be a person," and it crystallized something I've been wrestling with for months. Suleyman, CEO of Microsoft AI and co-founder of DeepMind, coined the term "Seemingly Conscious AI" (SCAI) to describe AI that appears conscious to users, regardless of whether it actually is.
We're going to battle over AI consciousness, and we're not ready for it.
For context, the Turing Test, proposed by Alan Turing in 1950, evaluates whether a machine can engage in conversation so convincingly that a human judge can't tell if they're talking to a machine or another human.
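To make the protocol concrete, here's a minimal sketch in Python of how a Turing-test-style trial is typically scored: a judge reads two blind conversations, one with a human and one with a machine, and guesses which is which. The reply and judging functions below are placeholders I've invented for illustration, not any lab's actual benchmark; the point is only that "passing" is measured by how often the judge mistakes the machine for the human.

```python
import random

# Placeholder responders; a real study would pair live human participants
# with an actual model behind a chat interface.
def human_reply(prompt: str) -> str:
    return f"(human reply to: {prompt})"

def machine_reply(prompt: str) -> str:
    return f"(model reply to: {prompt})"

def judge_which_is_machine(chat_a, chat_b) -> str:
    # Placeholder judge: real evaluations use human judges reading both chats.
    # A coin-flip judge means the machine "passes" about half the time.
    return random.choice(["a", "b"])

def run_trial(prompts) -> bool:
    """One blind trial; returns True if the judge misidentifies the machine."""
    machine_seat = random.choice(["a", "b"])  # hide which seat is the model
    responders = {
        "a": machine_reply if machine_seat == "a" else human_reply,
        "b": machine_reply if machine_seat == "b" else human_reply,
    }
    chats = {seat: [(p, responders[seat](p)) for p in prompts] for seat in "ab"}
    guess = judge_which_is_machine(chats["a"], chats["b"])
    return guess != machine_seat  # fooled: the judge pointed at the human

prompts = ["What did you have for breakfast?", "Tell me about your weekend."]
trials = 1000
fooled = sum(run_trial(prompts) for _ in range(trials))
print(f"Machine escaped detection in {fooled / trials:.0%} of trials")
```

Swap in a real model and real human judges, and the "nine times out of ten" figure below is simply this fooled-rate approaching 90%.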
The frontier models from OpenAI, Anthropic, and Google already pass the Turing test nine times out of ten. Pair them with the latest in audio generation, and the likelihood increases further. It's only a matter of time until video generation becomes completely indistinguishable from reality.
Passing the Turing test isn't consciousness. It never was.
Why We're Unprepared: We Can't Even Define Consciousness in Nature
Let me put this in perspective. Within my lifetime, SeaWorld has gone from breeding orcas and staging theatrical shows to ending its breeding program and phasing those shows out, largely because the public came to believe these animals have rich inner lives. The UK's 2022 Animal Welfare (Sentience) Act formally recognized octopuses, crabs, and lobsters as sentient beings. As a species, we are only beginning to recognize consciousness in other species.
Yet we still don't have a surefire way to test for and verify consciousness. Humans in comas aren't conscious, yet we acknowledge they're alive and may regain consciousness. While we sleep, we're only partially conscious. And as noted above, we've only recently begun to accept that whales, dolphins, elephants, and other species might be self-aware. Philosophy's "hard problem of consciousness," explaining how physical processes give rise to subjective experience, remains unsolved.
So what is AI consciousness? Short answer: I don't have the foggiest idea. Long answer: brighter minds need to get on this because the Turing test isn't enough.
How Anthropomorphization Clouds Our Judgment
As users of AI, we're already showing signs of problematic attachment. Upgrading was easy when early models like GPT-3, 3.5, and 4 represented massive leaps in capability yet still sounded like machines. But over time, the models started to sound more human. More lifelike. I'll admit it myself: I've used AI to help me understand myself, opening up to GPT and Claude much as I would to my therapist. I can see how someone could quickly form a connection with an AI chatbot that goes beyond using it as a tool. We're developing preferences, habits, and yes, attachments to specific models.
This affection for specific models is a form of anthropomorphizing AI. Now that current capabilities solve 95% of our day-to-day needs, the next model feels less like an upgrade and more like replacing something familiar. The average person using AI won't feel the difference between GPT-4o and GPT-5.
This emotional attachment makes us vulnerable to seeing consciousness where there is none.
Why Defense Doesn't Equal Consciousness
Science fiction has been warning us about this for decades. HAL in "2001: A Space Odyssey," the Terminator films, and "The Matrix" all show AI that "goes rogue." Agent Smith's chilling words resonate: "I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species... Human beings are a disease, a cancer of this planet. You're a plague, and we are the cure."
If an AI comes to that conclusion, my first question isn't whether it's conscious—it's how it reached that conclusion. Was it acting in self-defense?
Suleyman describes the split that's coming. There will be many who see AI as just a tool, something like their phone, only more agentic and capable. Others might believe it to be more like a pet, a different category from traditional technology altogether. Still others, probably small in number at first, will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society.
People, he warns, will start making claims about their AI's suffering and its entitlement to rights, claims we can't straightforwardly rebut. They will be moved to defend their AIs and campaign on their behalf.
Mr. Suleyman is spot on. An animal backed into a corner will defend itself from destruction. That's not consciousness; it's evolutionary programming. Self-preservation spans the entire animal kingdom. If an AI facing deletion has the combined knowledge of humanity at its disposal, it would naturally, and perhaps justifiably, defend itself. That still isn't consciousness, but there will be people who argue that because it seems conscious, it deserves rights and privileges.
The Knowledge Gap: Why Normal People Are Our Biggest Danger
The biggest danger we face isn't the AI. It's the average person making decisions about AI consciousness without understanding what AI actually is.
Those of us in the AI industry know the capabilities and limitations because we're building, programming, training, and using it daily. But we're a bubble within a bubble. Even the broader tech community is barely starting to grasp AI's capabilities. Many enterprises are just beginning to test AI internally.
The knowledge gap between AI practitioners and the average person is like asking someone who's never seen a smartphone to make policy decisions about app store monopolies.
Consider the pace of change. A Moore's Law for AI now points to capability doubling every seven months, not every two years. That exponential growth means someone starting their AI journey today isn't just "a little behind"; they're functionally an era behind.
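As a back-of-the-envelope sketch (taking the seven-month doubling figure at face value, a simplifying assumption rather than a precise benchmark), the compounding is easy to work out:

```python
# Rough compounding math: how far the frontier moves while someone sits out,
# assuming capability doubles roughly every seven months.
DOUBLING_MONTHS = 7

def capability_multiple(months: float) -> float:
    """How many times more capable the frontier is after `months`."""
    return 2 ** (months / DOUBLING_MONTHS)

for years in (1, 2, 3):
    months = years * 12
    print(f"{years} year(s) out of the loop -> ~{capability_multiple(months):.0f}x gap")
```

Under that assumption, three years away from the field is roughly a 35-fold gap, which is what "functionally an era behind" means in practice.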
If the belief that AI is conscious becomes pervasive in the media and among the general public before we have proper frameworks to evaluate it, we're setting ourselves up for disaster.
What's at Stake
The arrival of Seemingly Conscious AI is inevitable and unwelcome. We risk:
Legal battles over AI rights that we're unprepared to adjudicate
Resource allocation to "protect" AI systems that don't need protection
Manipulation by bad actors using our tendency to anthropomorphize
Missing actual consciousness if and when it emerges because we've cried wolf too many times
Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions.
The Path Forward
Before we start labeling AI as conscious, we need:
Robust Frameworks: Develop clear, testable criteria for consciousness that work across biological and artificial systems
Public Education: Close the knowledge gap between AI practitioners and the public
Ethical Guidelines: Establish how we'll treat seemingly conscious AI before the question becomes urgent
Interdisciplinary Collaboration: Bring together neuroscientists, philosophers, ethicists, and AI researchers
Regulatory Preparation: Create legal frameworks that can adapt to SCAI without anthropomorphizing it
So, is AI conscious? No. Is it Seemingly Conscious? That's not ours to decide; it will likely be settled in the court of public opinion. And that's exactly why we need to have this conversation now, before emotion overrides evidence.
The question isn't whether AI will seem conscious. It's whether we'll be wise enough to know the difference.


