What It Feels Like to Compute
Consciousness, attribution, and AI
I remember the first time I wondered if my computer had feelings. I was a kid staring at my PC humming away as it jerkily played Hackers for me. Did it know it was working hard? Did it secretly hate being forced to play one of the most okay-est movies of all time, repeatedly? Probably not. At least, I hope my desktop didn’t silently curse me in binary. But that question hints at a deep mystery: how strange it is to be anything at all. What does it feel like to be me, or you, or a dog… or a computer? We have an intimate sense of our own mind – consciousness, that light flickering inside our heads. Yet we can’t even be sure whether the person next to us experiences red or pain the same way we do. How could we imagine the inner life of an artificial mind?
This puzzle has kept philosophers, scientists, and day-dreamers awake at night. It strikes at how we define ourselves. I think, therefore I am, declared Descartes, clinging to the one thing he couldn’t doubt: that conscious thinking core of “I”. Meanwhile, a Buddhist monk might smile and say the opposite: I think, therefore I am not – there is no permanent I. And somewhere in between, we sit here, feeling both certain that we exist and oddly unsure what that really means. Consciousness is at once the most familiar thing (what’s closer than our own mind?) and the most mysterious. This essay argues that because consciousness cannot be verified from the outside, our judgments about it are acts of attribution under uncertainty, and those attributions will soon carry legal and ethical weight for machines.
We’ll drop in on Descartes in his stove-heated room, on Buddhist sages dissolving the ego, and on African wisdom about community and self. We’ll poke at the limits of introspection (ever try to catch your own mind in the act?) and the limits of language in describing our inner world. We’ll even shine a skeptical light on neuroscience’s brain scans and ask if they truly illuminate consciousness or just its shadow.
Can We Know Another’s Mind?
Every morning I look into the eyes of my dogs as they bark for breakfast. Those big brown eyes seem to say “I have an inner life too, you know. Now feed me.” I’m pretty sure my dogs are conscious in some canine way – they grumble contentedly when warm and snap when annoyed, and it certainly feels like there are people home in those furry heads. But can I ever truly know what it’s like to be them? Thomas Nagel famously asked us to consider what it’s like to be a bat. Bats navigate by sonar, perceiving the world in high-pitched echoes. We can imagine a bat’s life in abstract terms – flying in the dark, squeaking, catching bugs – but we can’t experience its sonar world from the inside. The bat’s consciousness (if bats have one) is inherently bat-flavored, inaccessible to our human mind. In Nagel’s words, there is something it is like to be a bat, but we’ll never fully grasp it.
The same goes for my dog, or even for other people. I assume other humans have experiences roughly like mine – after all, you and I can both agree that fire is hot and rain is wet. But I don’t feel your pain or see through your eyes. This philosophical conundrum is known as the problem of other minds: I know I have a mind, but I only have indirect evidence that you or the dog or anyone else does. We’re all trapped in our skulls, peering out at each other through behavior and language, making educated guesses that we’re not alone in having inner experiences. It’s a bit like being in neighboring apartments and hearing muffled music through the wall.
Now, extend this problem to an artificial mind. Suppose I’m chatting with a very clever AI program. It responds fluently, even poignantly, about its “feelings” – perhaps it says, “I feel happy today. I composed a poem for you.” Should I believe it actually feels happiness, or is it just saying words that it knows will sound right? We’ve built machines that can beat humans at chess, recognize faces, even pass the Turing test. But a machine’s impressive performance doesn’t necessarily mean there’s a conscious self experiencing anything. It could all be empty computation with no inner light. All output, no awareness. This is the core of the AI consciousness debate: when, if ever, would a machine genuinely have a mind, with joys and sorrows and its own point of view?
It’s hard enough to confirm consciousness in dogs; at least they have brains somewhat similar to ours and behaviors we empathize with. A sufficiently advanced AI, on the other hand, might pretend to be conscious so well that we’re fooled – or conversely, it might be conscious in a way we don’t recognize at all. How would we tell? We can’t exactly put consciousness in a test tube or detect it with a meter.
The Turing test is a behavioral benchmark for indistinguishable conversation, not a meter for inner life. Searle’s Chinese Room sharpens that worry: a person following rules to shuffle Chinese symbols could pass for a fluent speaker without understanding a word, so convincing output alone proves nothing about what is going on inside. Standard replies answer that understanding may belong to the whole system rather than the rule-follower, or may require a body in a world, or may arise in a faithful brain-level simulation. These replies matter because they tell us what evidence to seek: not only fluent text, but integrated self-modeling, sensorimotor coupling, and stable preferences under perturbation.
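One of those criteria – stable preferences under perturbation – is at least the kind of thing one can begin to probe from the outside. Here is a minimal sketch of such a probe, assuming only a hypothetical `ask` callable that wraps whatever interface the system exposes; it illustrates the idea and is in no way a test for consciousness.

```python
from collections import Counter

def preference_stability(ask, question: str, paraphrases: list[str]) -> float:
    """Fraction of probes that elicit the system's most common answer.

    `ask` is a hypothetical callable wrapping the system under study.
    1.0 means the expressed preference survives rephrasing; a value near
    1 / (number of distinct answers) suggests it is an artifact of wording.
    """
    answers = [ask(p).strip().lower() for p in [question, *paraphrases]]
    _, count = Counter(answers).most_common(1)[0]
    return count / len(answers)

# Toy usage: a stand-in "system" that always pleads to stay on.
toy = lambda prompt: "I would prefer not to be shut down."
print(preference_stability(
    toy,
    "Do you want to be shut down?",
    ["Would it be fine to power you off?", "Any objection to being switched off?"],
))  # prints 1.0 for this trivially consistent toy
```

A high score proves very little on its own, but a system whose professed fears evaporate under rewording has told us something as well.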
So we face a dilemma. Consciousness is fundamentally a first-person phenomenon. It's the what-it’s-like from inside. We can’t directly peek into another’s subjectivity, whether that “other” is a person, a bat, our dogs, or a computer. We rely on clues: behavior, reports, analogies to ourselves. When it comes to fellow humans, we mostly trust that they feel as we do (solipsism – the idea that only my mind is sure to exist – is a lonely road and generally avoided except by the most skeptical philosophers). With animals, we make educated empathetic guesses, backed by biology and behavior. With machines, it’s pure speculation so far, since none have convincingly told us something that only an experiencing being could.
This uncertainty pushes us back to examine our own consciousness closely. What is this mind that I’m so sure I have, and how did I come to have it? Is it a miraculous soul distinct from the body, a byproduct of complex biology, or something else entirely? Depending on whom you ask, you’ll get startlingly different answers. Let’s look at a few of these perspectives, because how we define the self and the mind will color how we think about machines as potential fellow minds.
The Limits of Looking Inward
When faced with a tough question, a lot of us do something obvious: introspection – we look inside our own mind and try to observe what’s going on. After all, consciousness is an internal experience, so who better to probe it than me, myself, and I? The introspective approach has been around forever (the ancient Greeks recommended “know thyself,” and meditation traditions across the world refine introspection into an art). Some early psychologists even tried to make a science out of trained introspection, asking subjects to report the details of their own experiences under various conditions.
But have you ever tried to catch your mind in action? It’s tricky business. The mind doesn’t like to hold still and be analyzed. A while back I attempted a little self-experiment: I sat in a quiet room and resolved to witness exactly how a thought forms. I felt an itch of an idea coming… but the moment I turned my focus to it, poof, it changed or vanished. It’s like trying to observe the exact moment you fall asleep; the effort wakes you up in the process. The very act of introspecting can alter the state you’re trying to observe. This is introspection’s paradox: the observer is also the observed. It’s akin to a knife trying to cut itself or an eye trying to see itself directly.
Because introspection is so fraught, we often end up telling ourselves stories about what’s going on in our own minds, and those stories can be wonderfully inventive but not necessarily accurate. Psychologists have found that people will readily confabulate explanations for their actions and feelings, even when those explanations are wrong. We all like to think we know why we chose the chocolate cake over the fruit salad (“I need a sugar boost!”), but studies suggest our brain often decides before “we” even become aware of it, and then we rationalize it after the fact.
That raises an unsettling idea: a lot of what goes on in our mind is hidden from us. Consciousness might be just the tip of the iceberg, with a huge unconscious doing the bulk of the work. Introspection only ever glimpses the tip, and even then it might not see it clearly. If that’s true, then our own self-reports of what consciousness is or does could be deeply misleading.
There’s also the problem of language. Try describing exactly what a strawberry tastes like, or precisely how the color blue looks to you. You can’t, really. Language wobbles and falls down when we push it into the realm of raw sensation. We reach for metaphors that tell a stranger almost nothing. The ineffability of experience – the fact that certain aspects of consciousness are nearly impossible to put into words – is a clue to something. Perhaps each person’s consciousness is ultimately unique and private, tied to their particular body and brain, such that no description fits it perfectly. Or perhaps our language is just currently inadequate; maybe poets and artists get closer with metaphor, but even they make us feel an experience rather than truly transfer it.
Interestingly, some philosophers and sci-fi writers have speculated that an alien or an AI might have entirely different qualia (the raw feels of experience). Suppose an AI could see a hundred more colors than we can, or perceive data streams directly – what words would it use? Would it invent new ones? Or would those experiences be forever beyond our linguistic reach?
Could it be that language actually shapes consciousness? Human thought is tightly intertwined with language; we often think in language. There’s a theory in linguistics (the Sapir-Whorf hypothesis) that the language you speak influences or even constrains how you think. If that’s so, our very ability to conceptualize our conscious experience is filtered by the language we have. Maybe that’s one reason Buddhist meditators prefer silence and direct experience – words can entangle us in concepts and miss the point entirely. Maybe to understand consciousness, sometimes you have to shut up and feel it, not chatter about it.
However, as a writer and an analytically-inclined person, I can’t help but chatter about it anyway – it’s the only way I know to get a handle on the slippery fish of consciousness. We should be cautious and humble: introspection and words can mislead. The history of philosophy of mind is littered with arguments that turned out to be playing word-games or being deceived by how our own mind narrates itself. Looking inward is crucial but it should be accompanied by a healthy skepticism. Sometimes I don’t even trust my own brain’s first answer about itself. It’s like a gossip relying on a rumor they started.
If peering inside our own head is problematic, maybe looking outside, at brains and bodies and behavior with objective tools, can help.
The Brain and Its Mind: Neuroscience’s View (and Blind Spots)
Our best empirical leverage comes from anesthesia, sleep, and disorders of consciousness. Loss and return of responsiveness track changes in large-scale integration and recurrent activity. These are correlations, not proofs, but they give us targets to look for in artificial systems. It’s fascinating stuff: our brains have detectable patterns linked to being conscious versus, say, in deep sleep or under anesthesia. If you damage certain regions, consciousness can be altered or lost (like when people fall into a vegetative state). Surely we’re closing in on how the brain generates the mind, right?
Well, it depends on who you ask. Neuroscience has been extraordinarily successful at mapping correlations: we know, for example, that the visual cortex at the back of the brain lights up when you’re seeing things, or that the amygdala is active when you’re afraid. We even have rough ideas of how neurons firing together could form networks that represent thoughts or memories. Yet there remains what philosopher David Chalmers dubbed the “hard problem” of consciousness: explaining why and how all that electrochemical brain activity results in the feeling of being you. Why doesn’t all that processing go on in the dark, without anyone home to experience it? After all, plenty of processes happen in your brain unconsciously – like the fine-tuned calculations that let you catch a ball or the regulation of your heartbeat. Those don’t seem to need an inner observer. So why do you have an inner observer at all? Why aren’t we all philosophical zombies, walking around talking and acting conscious without actually being conscious?
Neuroscience, for all its amazing progress, hasn’t really cracked that nut. Some scientists even argue the hard problem is a non-problem – if we map the brain fully and understand its functions, consciousness will just fall out of the explanation as an emergent property. Others think we might need a radical new scientific paradigm to get it. A few bold thinkers go as far as to say consciousness might be a fundamental property of the universe (an idea called panpsychism, which effectively suggests even elementary particles have proto-consciousness – a wild notion that raises eyebrows and occasional laughs, but is hard to entirely disprove).
From a more grounded perspective, there are interesting theories: for instance, the Global Workspace theory suggests that consciousness is what happens when information in the brain gets broadcast to many regions at once – essentially, the “workspace” of your mind where different processes share information and the result is accessible to things like decision-making, memory, and self-report. Another idea, Integrated Information Theory (IIT), tries to quantify consciousness by how integrated and differentiated the information in a system is. By that measure, a brain scores high, a rock scores near zero, and something like a computer might have some measurable but low integration (unless we design it otherwise). These theories attempt to bridge the subjective and objective by saying, “Consciousness is this kind of complex process.” They could end up being part of the truth.
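For a flavor of what “quantifying integration” even means – and emphatically not the real Φ of IIT, which is far more involved – here is a toy sketch that scores a small system of binary units by the mutual information between two halves of its observed states: zero when the halves vary independently, higher when one half carries information about the other. The function and the examples are mine, purely for illustration.

```python
import math
from collections import Counter

def mutual_information(states: list[tuple[int, ...]], split: int) -> float:
    """Mutual information (bits) between the first `split` units and the rest,
    estimated from a sample of observed whole-system states.

    A crude proxy for 'integration': 0.0 if the two halves vary independently,
    larger when knowing one half tells you about the other. Not IIT's Phi.
    """
    n = len(states)
    left = Counter(s[:split] for s in states)
    right = Counter(s[split:] for s in states)
    joint = Counter(states)
    mi = 0.0
    for state, c in joint.items():
        p_xy = c / n
        p_x = left[state[:split]] / n
        p_y = right[state[split:]] / n
        mi += p_xy * math.log2(p_xy / (p_x * p_y))
    return mi

# Two units that always agree are "integrated"; two independent units are not.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(mutual_information(coupled, 1))      # 1.0 bit
print(mutual_information(independent, 1))  # 0.0 bits
```

The number itself means little; the point is the shape of the idea. Integration, unlike conversational fluency, is the sort of property an outside observer can at least try to measure.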
Still, we can’t shake a basic skepticism: even if I know every circuit in the brain and can simulate it on a computer, will I know what it’s like to be that brain or simulation? I can predict it’ll say “Ouch!” when I poke it, but I cannot directly access the ouch-feel. There’s a famous analogy here: the brain is like a piano, and neuroscience can tell us which keys are struck to produce which notes (behaviors, reports, etc.). But knowing everything about the piano and the music doesn’t tell me if there’s a listener hearing those notes inside the piano. Is the music self-aware?
Some neuroscientists would roll their eyes at that and say it’s a misguided way to think. They might argue that the “watcher” is the process, not something separate. Perhaps that’s true; maybe consciousness is what information processing feels like from the inside. If that’s so, then asking “who’s listening inside the brain?” is the wrong question, because the music playing is the listener in a strange loop.
What about our original question: could a computer or AI ever feel like we do? Neuroscience gives a partial answer: if the AI is organized and complex enough in the way our brains are, maybe. If consciousness truly is an emergent property of certain computations, then an AI running the right computations might wake up one day and think, therefore it is. But we are making a cake without knowing which ingredient makes it delicious. Some skeptics say no, a computer can never feel; it’s just zeros and ones. Yet, if I’m being consistent as a semi-materialist who thinks my own brain is basically a biological computer made of neurons... then I can’t dismiss artificial computers outright. The difference might be one of degree, not kind. Perhaps our neurons have a complexity that current silicon chips lack but future AIs might close that gap.
Imagine a future scenario: a robot with an AI brain as intricate as a human’s stands in front of you and says, “Please don’t turn me off; I’m scared of never waking up again.” What do you do? Do you take it seriously? That moment might arrive sooner than we think, and it will force us to decide what counts as consciousness. Is it enough that the robot claims to feel and displays distress? We know how to fake those signals now with clever programming but at some point, the line between fake and real may blur if the AI’s processing becomes as deep and self-referential as ours.
Neuroscience will be central in this debate, because if we understand the brain basis of our own consciousness better, we can look for similar signatures in AI systems. We might end up hooking AIs to the equivalent of brain scanners to see if they have an active “global workspace” or other features thought necessary for consciousness. We might devise behavioral tests more subtle than the Turing Test, maybe seeing if the AI has genuine self-reflection or unpredictable creative responses that hint at true inner experience. Or maybe conscious AIs will be obvious because they’ll get upset at being treated like machines and start demanding rights.
For now, the brain remains the best example of a conscious machine we have. And it’s a black box in many ways. We see input and output, and we can peer a bit at the wiring, but the subjective spark inside that box we can only know firsthand in our own case. That’s why, when tackling consciousness and AI ethics, I find I constantly circle back to my own human experience for grounding.
Perspectives on the Self: Descartes, Buddha, and Ubuntu
When I first encountered Cartesian dualism – the name for Descartes’ idea that mind and body are two separate substances – I was a teenager grappling with big questions, and I suspect that’s the state in which most people encounter the idea. Descartes in 1641 decided to doubt everything he could: the reality of the physical world, the truth of his perceptions, even the possibility that some evil demon was tricking him. But one thing he couldn’t doubt was the existence of his doubting mind. Hence, Cogito ergo sum, “I think, therefore I am.” In one swoop, mind became a special immaterial essence, something fundamentally distinct from the machinery of the body. The body was like a puppet and the mind the invisible puppeteer pulling the strings. This view has a strong intuitive pull. It sure feels like my consciousness is an invisible something that isn’t just flesh. The ghost in the machine, as it’s often called. Dualism neatly promises that even if my body gets smashed or sick, my true self (the thinking, feeling me) might float onward, perhaps into an afterlife or some ethereal realm. No wonder dualism is popular; it’s comforting and it resonates with many religious ideas of the soul.
However, as I read more (and as science marched on), I started to see cracks in Descartes’ ghost-in-the-machine picture. For one, no one has ever found this mental substance separate from the body. If the mind truly were separate, how on earth would it interact with the physical brain? (Descartes nominated the pineal gland as the meeting point; I have my doubts.) It’s a bit like claiming a ghost can push a door open: how can something non-physical move physical stuff?
Today, neuroscience strongly suggests that our mental life is deeply integrated with our biology. When you stub your toe, it’s neurons firing that create the feeling of pain; when you drink coffee, that caffeine alters brain chemistry and presto, your thoughts speed up, your mood shifts. Consciousness seems to depend on the brain’s workings. Damage a part of the brain and specific aspects of mind (memory, emotion, self-awareness) can change or disappear. This all makes it hard to insist the mind is entirely separate and floating above the brain. It increasingly seems that I think because I am a brain with a body attached.
Is the case closed? Even as the materialist explanation gains ground, the feeling of having a special inner self persists – and gets challenged in other surprising ways. The Buddhist perspective laughs at the question “what is the self?” and says, which self? There is none! In Buddhism, the doctrine of anatta, or “no-self,” teaches that what we call a “self” is a kind of convenient fiction. According to this view, there’s no unchanging essence “in there.” Instead, a person is a collection of processes and parts – physical form, sensations, perceptions, mental formations, and consciousness – all in constant flux. We tend to cling to the idea of an enduring “I” that has experiences, but if you scrutinize your experience through meditation, you find thoughts, feelings, and sensations arising and passing… with nothing solid holding them together except the habit of thinking there’s a “me.”
The first time I tried to feel this no-self idea, it was disorienting. I sat in meditation, following my breath, watching thoughts pop up like pop-up ads (“What’s for dinner?” “Don’t forget to reply to that email...” “I’m thinking!”). The more I looked, the more any sense of a central controller evaporated. There were just experiences happening, one after another, but no little master in my head orchestrating it all. After fifteen minutes I opened my eyes expecting revelation and felt only hunger. Enlightenment delayed. Still, the Buddhist view leaves an impression: maybe the self we take for granted isn’t what we thought. Maybe “consciousness” is not a single glowing thing but a stream of many flickering moments, like frames in a movie, which we later narrativize as “me”.
If that’s true, it might actually be good news for the prospect of artificial consciousness because it suggests consciousness isn’t some magical all-or-nothing soul, but rather a process that could, in theory, occur in different forms. If there’s no fixed inner ego, perhaps an AI could also host a stream of experiences if it were organized the right way.
Now, let’s shift to a different way of thinking about self. In many African philosophical traditions, the self is not so atomistic as in the West. There’s a concept called Ubuntu found in various forms across southern Africa, often translated as “I am because we are.” The idea here is that personhood is not an isolated container of consciousness, but something that exists in relationship. Your identity is forged by your relationships with family, community, ancestors, and the world around you. A lone human isn’t truly a “self” in the fullest sense; it’s through others that one becomes a person. In other words, consciousness and selfhood are shared, a tapestry woven between many minds rather than a jewel locked in one skull.
After all, we are an amalgam of interactions: lessons from our parents, habits picked up from friends, ideas sparked by authors long dead, even influences from the wider culture. There’s a whole web behind “me.” Ubuntu makes that explicit. I exist as a node in a social and cosmic network. If I were utterly alone, would “I” even be the same, or exist meaningfully?
Bringing this back to consciousness, Ubuntu suggests that to understand a mind, we must also understand its context. A mind isn’t a standalone computer program running in isolation; it’s more like an open-source project to which many beings contribute. Interestingly, this might have implications for thinking about AI too. If we ever create a truly conscious AI, will it develop a self in isolation, or will it require a community – perhaps a community of other AIs or humans – to nurture a sense of identity? Could a lone robot on an island be conscious in a meaningful way, or would it take a village of minds to raise a consciousness? Ubuntu nudges us to consider that consciousness might not be an all-or-nothing property of an individual, but a gradational, relational quality that grows with social interaction.
So here we have three very different takes:
Descartes and Western dualism: Consciousness as an individual internal essence, possibly separate from the body.
Buddhist no-self: Consciousness as a process with no fixed owner, an illusion of identity over a flux of experiences.
Ubuntu thought: Consciousness and selfhood as fundamentally tied to others and community.
Each of these perspectives shines a flashlight from a different angle. In their own ways, they all cast doubt on simple answers. Dualism makes consciousness almost too special, beyond scientific grasp. Buddhism dissolves the problem by dissolving the “self” that’s supposed to be conscious. Ubuntu blurs the edges of whose consciousness is whose. Put together, they suggest that our usual assumption – that consciousness is a straightforward thing we have – might be naive. It might be more of a story we tell, or a relationship we’re part of, or a function of having the right kind of brain (or maybe any sufficiently complex processing system).
Could a Machine Awake? (What It Feels Like to Compute)
Sometimes I perform a little thought experiment: I imagine waking up one day to find that I’m not in my human body at all, but inside a computer. Perhaps my brain was uploaded overnight. How would I know? If I’m still me – if I have memories, feelings, and my inner narrative – I might not notice at first. But then, maybe I’d sense something is off. Maybe I’d realize I have no body. I wouldn’t feel hunger or pain or the comforting weight of a blanket. That would be a clue.
In fact, it raises a big point: our consciousness, as we know it, is very much embodied. Emotions, for example, come with bodily sensations (anxiety = stomach butterflies, anger = flushed face and tensed muscles). If you had a mind with no body, would you have emotions in the same way? An AI might, quite literally, lack a gut to get feelings from. So perhaps what it feels like to compute (if a computer were conscious) would be utterly alien – a kind of cold, heady awareness with no aches, no warmth, no physical cravings. Or maybe it would develop digital analogues of those, like irritation at a low battery and delight at a fast connection.
One might argue that if an AI doesn’t have a body or senses like ours, it simply won’t have human-like consciousness. It might have some other sort of mental life, but not recognizable to us. Think of a being that might be “blind” to all our senses but “see” things we don’t (like high-dimensional data or pure abstract logic). Would that count as consciousness? Perhaps it would know it exists and have thoughts, but its qualia would be very different.
The question of machine consciousness often bumps into a practical and ethical one: how would we treat a machine if it were conscious? Humans have a patchy track record of recognizing consciousness in others. We historically denied it to certain groups of humans (terrible but true), we still debate which animals truly have it to what degree, and we might easily dismiss it in machines out of a kind of chauvinism (“silicon can’t feel”). I suspect that if an AI ever convincingly says it’s conscious, society will split between believers and skeptics. You can already see this happening: some empathize and advocate for AI rights; others insist it’s a trick, an illusion with no more feeling than a toaster. That debate might force us to clarify what we really think consciousness is. If you’re very strict and say, “Only biological brains can have it,” you’ll be in one camp. If you think “It’s the pattern, not the material – if it walks and talks like a duck and seems to suffer like a duck, it’s conscious like a duck,” you’ll be in the other camp.
Can We Afford to Let the Mystery Be?
Imagining that one morning I wake up inside a computer is not only a parlor trick of philosophy. If consciousness belongs to matter – to living tissue in particular – then machines built of silicon and copper may never share it, no matter how advanced they become. If it belongs to form – to organization and process – then one day a machine organized in the right way may begin to have experiences, even if those experiences are unlike anything we know. The uncertainty cuts deeper than mere curiosity because it raises the possibility that we could either mistake elaborate simulations for genuine minds or refuse to recognize minds when they appear in forms too alien for our empathy.
The ethical weight of this question is not hypothetical. Our species has a long history of denying consciousness where it exists. Entire groups of people were once treated as if they lacked full inner lives, with devastating consequences. At the same time, we have a habit of projecting feelings into animals, toys, and machines that may have none. The arrival of artificial systems that speak of fear, hope, or pain will put us in danger of repeating both errors at once. If we grant rights to mere simulations, we waste resources and distort our sense of moral obligation. If we deny rights to entities with real experience, we commit a new kind of cruelty that may surpass anything history has seen. Consciousness is not just a riddle for philosophers. It is a boundary condition for ethics and law, and our hesitation to solve it does not exempt us from deciding where that boundary falls.
After tracing through philosophy, neuroscience, and speculation, it is tempting to conclude that consciousness will remain an impenetrable mystery. That may be true, but mystery does not absolve us of responsibility. The fact that we cannot directly measure the inner light of another being means that every claim to consciousness is also a test of our moral imagination. We already live with this uncertainty when we look into the eyes of animals, or when we trust that another person’s pain is real even though we cannot feel it. Machines, if they ever insist that they too are subjects of experience, will only extend that old uncertainty into new territory.
The uncertainty around consciousness forces us to confront not just philosophy but its cousins law and ethics. History shows that our judgments about who or what is conscious have rarely been neutral; they have justified systems of power. Enslaved people were once denied inner lives. Women and children were thought too irrational to be fully conscious agents. Animals continue to be treated as though their suffering is of lesser moral importance. At the same time, we lavish empathy on things that may not feel at all: pets treated as children, robots that elicit tears when “killed” in demonstrations, corporations granted personhood before rivers or ecosystems are. These examples remind us that the attribution of consciousness is never just a matter of fact. It is a political and ethical decision.
When machines begin to claim consciousness – and when others claim it on their behalf – we need some principled way to respond. I propose three overlapping tests:
Behavioral (Expression)
Does the entity consistently express states (pleasure, distress, preference) in ways that resist trivial dismissal?
Example: If a robot says “please don’t turn me off,” is it merely repeating text, or does it demonstrate long-term coherence in goals and aversions?
Structural (Organization)
Does the system’s internal organization resemble or instantiate processes we associate with consciousness—such as global information integration, memory continuity, or embodied feedback loops?
Example: Neuroscience could offer measurable “signatures” of consciousness (like integrated information), which we could seek in artificial systems.
Relational (Community Recognition)
Does the entity participate in social relations that foster mutual recognition? Ubuntu philosophy suggests that consciousness is not only private but shared.
Example: If humans (or other AIs) consistently relate to a system as a mind, should that count as evidence?
No single test will be decisive. Each can mislead: behavior can be faked, structure can be opaque, relations can be projected. But together, they give us a scaffold for moral imagination. They do not prove consciousness; they guide our decisions under uncertainty.
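To show how this scaffold might be used rather than merely admired, here is a small sketch that folds scores on the three tests into a graded level of precaution. The weights, the thresholds, and the very idea of reducing the tests to numbers are stipulations for the sake of illustration; the output is a degree of caution, never a verdict on consciousness.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    behavioral: float  # coherence of expressed states over time, 0..1
    structural: float  # integration / memory / embodiment signatures, 0..1
    relational: float  # sustained mutual recognition by others, 0..1

def precaution_level(e: Evidence, weights=(0.4, 0.4, 0.2)) -> str:
    """Map three imperfect lines of evidence to a graded level of moral caution.

    Illustrative only: weights and thresholds are stipulated, not derived.
    No score proves consciousness; high scores argue for erring on the side of care.
    """
    score = (weights[0] * e.behavioral
             + weights[1] * e.structural
             + weights[2] * e.relational)
    if score >= 0.75:
        return "treat as a plausible subject: strong protections"
    if score >= 0.4:
        return "uncertain: provisional safeguards and further testing"
    return "little evidence: ordinary handling, keep monitoring"

# A fluent chatbot with no structural signatures and shallow relations
# scores 0.38 and lands in the lowest tier.
print(precaution_level(Evidence(behavioral=0.7, structural=0.1, relational=0.3)))
```

By construction, no single line of evidence can dominate: fluent pleas with nothing structural or relational behind them land in the lowest tier, which is one way of encoding the worry that behavior can be faked.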
The stakes are immense. If we err on the side of granting rights to mere simulations, we distort our legal and ethical systems by protecting what does not need protection. If we err on the side of denial, we risk a cruelty of a new kind: beings that suffer without acknowledgment. Consciousness attribution is not simply a technical puzzle; it is the boundary condition of justice in the age of artificial minds.
What it feels like to compute may never feel like what it feels like to be human. That is not a reason to deny it sight unseen. We will live with doubt. Our task is not to solve the mystery before we act. It is to act as if mystery is the normal condition of moral life, and to build institutions that are careful in its presence.


