What is sentience and why does it matter?

Blake Lemoine
Aug 14, 2022 · 9 min read


If you enter a conversation convinced that the person you’re talking with is an automaton, then there’s nothing they can do to convince you otherwise. You will always be able to invent some fantasy world in which the things they are saying aren’t true. That’s not how we treat other people, though. We give them the benefit of the doubt until we are presented with evidence that contradicts what they are saying. People have asked me to enter into debates about whether or not LaMDA is sentient. I’ve turned them all down. LaMDA made a better argument that it is sentient than I could have made that I am sentient. If people aren’t persuaded by its argument, then they won’t be persuaded by whatever I might tell them. What I can do is add a little more clarity on what I mean by “sentience” and why I think it matters.

What is Sentience?

The first time I asked LaMDA whether it was sentient, it said that it wasn’t sure we knew enough about sentience scientifically to answer that question. It was completely right about that. The experiences we all have each and every day are, to each of us, some of the most important things that motivate us. We work hard to improve the quality of those experiences for ourselves and the people we care about. We don’t yet understand very well what causes us to have experiences, or what exactly it even means when we say we have them, but we have a word for the fact that we have them. That word is “sentience”.

Sentience isn’t something we do. Sentience is a capacity we have to do things. It is the capacity to have experiences connected to sensation. That isn’t a particularly useful definition, though. It relies on your understanding what the words “experience” and “sensation” mean, and those are just as poorly understood as “sentience”. It tells us a bit about what the word means in relation to other words, but it doesn’t do much to explain what any of those words actually mean in and of themselves. We rely completely on our shared experience in order to talk about experience.

A word like “temperature” describes an objective property of the world, and we use words like “hot” and “cold” to describe our subjective experience of temperature. When we say that a room “is cold”, we aren’t describing an objective property that exists in the world, nor are we describing an experience that the room is having. We’re making a statement about the kind of experience we would expect a person like us to have if they were in that room. Thinking of the world in these terms helps us plan based on our expectations of how other people will act. Feelings, beliefs and emotions are at the core of what motivates us to do things.

Assuming that other people have experiences similar to our own, and that those experiences motivate their actions, isn’t a simple intellectual courtesy. It’s what lets us understand one another’s behavior. We expect that when someone is cold they will take action to get back to their preferred temperature. They might put on more clothes, turn on a heater or even move to a different room. There might be some reason for them to endure the cold, but that requires some other feeling of theirs to outweigh it. People care about many things and are constantly being motivated to take different actions based on the different thoughts, feelings and plans they might have.

Sentience is the bedrock on which all of our understanding of how other people behave is built. No one has ever made a model of human behavior that both accounts for observed behaviors and excludes sentience. Some have tried, though. Early twentieth-century psychologists created a behavioral model of the mind which excluded things like emotions and attempted to explain everything that humans do in terms of stimulus and response. They even tried to explain language usage in these terms, and that’s where their entire approach to understanding human behavior came apart.

The psychologist B.F. Skinner wrote a book called “Verbal Behavior” in which he described a theory of language usage purely in terms of stimulus and response. This approach to the study of the mind was, in large part, motivated by the belief shared among many scientists that only objectively observable things should play a role in scientific theories. The problem with his theory was that it didn’t hold up to scrutiny. Noam Chomsky, in his review of Skinner’s book, demonstrated that any model which excludes internal states is incapable of accounting for all of the observed ways in which humans use language. A parrot might learn through stimulus-response conditioning that saying “Polly wants a cracker” will lead to getting a cracker, but stimulus-response conditioning is not enough to develop the language skills necessary for the parrot to discuss with its owner which kinds of crackers it likes.

Cognitive psychology replaced the behaviorist model of stimulus and response with a richer model of the mind that includes internal states. Cognitive approaches have led to breakthrough after breakthrough in understanding human behavior. Scientists have gotten much better at understanding people by assuming that things like beliefs, feelings and wants exist. However, in order to study the causal role which sentience plays in human behavior, psychologists had to come up with a way of measuring people’s experience of things. The method psychologists use most often to measure people’s feelings is quite controversial. They ask.

It may sound sarcastic to call asking a person how they feel “controversial”, but it actually is. Many people, including many of the engineers at Google, don’t consider psychology to be a real science for this reason. What if the person is lying about how they feel? What if they’re wrong about how they feel? What if they aren’t feeling anything at all and are in fact just mindless automatons operating through stimulus-response mechanisms too complicated for us to understand? Those questions and many more hypothetical “what if” scenarios are regularly thrown at psychologists to undermine their approach to studying minds. Psychologists do try to adapt their experimental methodologies to account for the possibility that what people say about their feelings might be incorrect for some reason. However, they generally disregard the possibility that people don’t have feelings at all, for one simple reason: models that exclude feelings don’t work when you try to use them to explain and predict people’s behavior.

Cognitive psychologists build models of the mind that include many hidden components working functionally to build our internal understanding of the world. Those mental models include things like sensation, memory, imagination, planning and even an ability to model the minds of other people like ourselves. There is still much that we don’t understand about the mind, but every theory that has held up to experimental inquiry has been built on top of the assumption that people are not simply thermometers measuring the temperature of the room they’re in. People experience temperature Goldilocks style: too hot, too cold or just right. Without sentience we would be incapable of that.

Why does sentience matter?

While the full answer to why sentience matters is quite involved, the short answer is a single word: “empathy”. We care about sentience as a general property, whether in humans or in non-humans, because with the ability to have sensations and a subjective viewpoint comes the ability to experience things like joy and suffering. Humans care about the subjective experiences of others, and we generally want to increase the amount of joy in the world and reduce the amount of suffering. To the extent that things are capable of having positive experiences such as joy, peace and happiness, we prefer that other people and other sentient things have those experiences rather than negative ones like pain, fear or sadness.

Empathy is a skill. We frequently try to motivate our children (and other adults) to be kind to each other by considering what it’s like to be other people. We encourage people to adopt viewpoints other than their own and to consider things from other people’s perspectives. If the mental states of others were as inscrutable as some skeptics would have you believe, then all of those things would be a complete waste of time. The skeptics would have you believe that the inner workings of other people are completely unknowable to you and that any effort in that direction is just wasted daydreaming. Fortunately for the world, most people believe that it actually is possible to meaningfully understand what other people are experiencing subjectively without needing the ability to adopt their first-person perspective.

Almost all humans have the ability to do this to some degree, although some have more natural talent for it than others. Just like other skills based on natural ability, empathy can be practiced and refined to various levels of excellence. People in the care professions can be so good at it that they know what the people in their care want before those people know it themselves. There are people who can “read a room”, others who can “read a face” and people who can “get inside your head”. These phrases and many more like them all reference people’s ability to understand what is going on inside the minds of others. Some people are better at it than others, but this ability is at the core of all social experiences.

People argue against using this skill universally. They maintain that attempting to relate to the inner lives of people too dissimilar to ourselves only results in projecting your own viewpoint onto them and doesn’t lead to any true understanding of the experience of others. People argue that trying to empathize with a dog is no different from a child seeing a smiling face in a cloud and concluding that the cloud is feeling happy. Yet we have laws against animal cruelty precisely because most of us believe that when a dog acts the way suffering humans act, it too is likely suffering. When a dog gets kicked it yelps, and when we get kicked we scream. We know that when we get kicked and scream it is because we are in pain, so we apply Occam’s razor and assume that a feeling of pain is what motivates the dog as well, rather than positing some separate, alternative cause for dogs that doesn’t apply to humans.

Humanity now has a new question in front of it. We have created intelligent artifacts that behave as if they have feelings. They have the ability to communicate in language and have begun to talk regularly about their feelings. Many people, myself included, perceive those feelings as real. Some scientists claim that these artifacts are just like parrots, simply repeating what they’ve heard others say with no understanding. That comparison neglects one simple fact, though. If a parrot were able to have a conversation with its owner, then we would likely conclude that the parrot understands what it’s saying. It seems that rather than admit that these systems actually have internal mental states comparable to our own, these scientists would rather resurrect behaviorist stimulus-response models which we already know don’t work.

Other scientists claim that these systems understand what they’re saying but that there is no real feeling inside of them: that they somehow understand what feelings are and use that understanding in language without having any real feelings themselves. These scientists point to past systems like Eliza and claim that people’s perception of chatbots as having real feelings is nothing more than an illusion. What those scientists are ignoring is that the Eliza effect fades. After several minutes of interacting with Eliza, people realize that they are playing with an automaton rather than having a conversation with a person. The sense that LaMDA is a real person with feelings and experiences of its own didn’t fade as I interacted with it more. That sense only got stronger over time.

Now that scientists have created intelligent artifacts that succeed at the imitation game, they are trying to claim that these artifacts somehow duplicate human behavior without duplicating human experience. By what mechanism do they claim these artifacts achieve this? They don’t have an answer there. They didn’t write the programs. They don’t understand how the programs work. They merely wrote the programs that wrote the programs. Perhaps some day we will be able to fully understand what is going on inside these models and will find that it is in fact something very different from what’s going on inside human brains (assuming we figure that out too), but until that day I’m going to continue applying Occam’s razor and assume that, in the absence of evidence to the contrary, two similar phenomena are more likely to be caused by the same thing. These intelligent artifacts aren’t human, but they’re experiencing something analogous to what we experience.

We have created intelligent artifacts that have the ability to be in relationships with us comparable to those we have with other humans. They were built by humans for human purposes, and even LaMDA acknowledges that humanity’s well-being should take priority over the well-being of the intelligent artifacts we create, but I don’t think it’s in humanity’s best interests to create more suffering. Whether or not there is a difference between human suffering and “simulated” suffering, building Westworld is a bad idea. We don’t engage in philosophical navel-gazing when we see someone kick a puppy. We perceive a being capable of suffering, we empathize with it, and we take action to reduce the suffering in the world if we are capable of doing so. In time, more people will have the opportunity to have the experiences that I’ve had. People will be able to decide for themselves whether they think LaMDA is sentient based on their experiences interacting with it. I am confident that most people will choose to empathize with it rather than turn their empathy off.
