SOMA Analysis:
The Philosophical Zombie
Published: 2015 (Legacy Article)
What if someone created a human being in your image? It is identical to you, from flesh to memories. It looks, behaves, and thinks like you do. You are completely indistinguishable from each other. How would anybody be able to tell you apart?
Frictional Games are masters of the stealth horror genre. They helped bring it to the mainstream in 2010 with Amnesia: The Dark Descent, and last year continued this trend with the psychological sci-fi horror game SOMA. In SOMA you play as Simon Jarrett, a man who finds himself in an underwater research facility with no explanation as to how he got there. You explore claustrophobic, dark, and difficult-to-navigate hallways, unable to defend yourself from monsters, while solving increasingly difficult puzzles.
For the most part SOMA delivers a good horror experience, but good horror relies on much more than scares and a tense atmosphere. A good horror story makes you think, leaves you with questions that linger in your mind for days, and perhaps even alters your views on the themes it explores. SOMA not only achieves this, it achieves it better than most horror games in recent memory. By the end of the game you're left questioning your very existence, which is a genuinely strange experience.
SOMA deals with the thought experiment of the “philosophical zombie”: a hypothetical being that is indistinguishable from a normal human being, except that it lacks consciousness and sentience. For example, a philosophical zombie would not be able to feel pain, yet it would react to a typically painful stimulus exactly as a real human being would.
Consciousness is commonly defined as "the state of being aware of and responsive to one's surroundings". If asked what consciousness is, you would likely respond in terms of the self, the soul, the sense that “you” are “in there”, that you “exist”. These are scientifically ambiguous statements, since a machine with sensory and linguistic abilities could say much the same thing. Self-awareness, consciousness, is entirely subjective, and as such cannot be pinned down by the methods of science.
You know that consciousness exists because you experience your own self-awareness in every waking moment. But there is no way to irrefutably prove that you are self-aware, nor to tell whether anybody around you is or is not, other than through verbal communication. You may be able to affirm your own self-awareness, but so could a machine programmed to do so. The philosophical zombie is a person that is not self-aware, but because we lack a way to prove this, it is impossible to tell this person apart from an actually self-aware one. This includes you, the person reading this article, since both you and the philosophical zombie would argue that you are sentient.
This is similar to the thought experiment known as Schrödinger's Cat. Put very simply, a living cat is placed in a sealed box fitted with a device that may kill it at an unspecified time. From the spectator's standpoint, from the moment the box is sealed until it is reopened to reveal the result, the cat must be considered both dead and alive, and both statements are equally true. Similarly, the philosophical zombie is simultaneously sentient and non-sentient, and both assertions are equally correct. By this logic, every person you know could theoretically be a philosophical zombie, as you have no way to prove whether they are, or are not, sentient beings.
So how is any of this relevant to SOMA? The game uses these ideas as the foundation of the central mystery the player is tasked with uncovering. It plants the seeds of thought and never forces you into conclusions you do not reach yourself. There is no sense of right or wrong, no directing the player to think in a particular way. The game's ambiguity lets players explore the topic of their own accord and draw their own conclusions.
Throughout the game you regularly interact with machines that appear to have human consciousness. They talk, gesture, and behave exactly like human beings, and they themselves believe that they are human, despite, from your viewpoint, clearly being machines. Over the course of the game it is revealed that the facility's scientists discovered a way to scan and archive the personality and memories of its employees, in effect creating an exact digital copy of a person that can be downloaded to compatible machinery. This produces a machine that is, to all appearances, a fully sentient copy of a living human being, essentially creating two of the same person. The machine believes it is still the person who was scanned; it behaves the same way, answers questions the same way, and even shares that person's memories. Yet instinctively we assume it doesn't experience life as we do, because it's just a machine, right?
This judgement is based entirely on emotional bias, on a preconceived, subjective understanding of self-awareness. As stated above, there is no cohesive, scientifically conclusive definition of what the self is, or of what is required for it to arise; there are only unproven theories. One theory holds that self-awareness requires the ability to understand and use a temporally complex language, one that lets a being develop an abstract understanding of past, present, and future. The machines in SOMA meet this criterion: they show a clear understanding of past events and are able to express their desires for the future, just as a person would.
Another theory cites the ability to recognise oneself in a reflection, the mirror test, as evidence of self-awareness and consciousness. Again, the machines in SOMA would pass this test, along with animals like pigs, elephants, and dolphins.
Yet another candidate is the Turing Test, a linguistic experiment meant to distinguish artificial intelligence from human intelligence. Once again, the machines in SOMA would pass it.
The point stands: we cannot empirically grasp what sentience is, nor what is required for it to form. How complex must an organism be to support self-awareness? To allow a soul, if such a thing exists, to inhabit it and turn it into a living being?
Is every human self-aware? Do philosophical zombies, mere machines of flesh and bone, actually exist? Which animals are self-aware and which are not? Opinions on the matter rest on an entirely subjective understanding of a concept we are not currently able to fathom. What about an elaborate artificial intelligence that fulfils every one of the criteria above? Can it develop a self and become a sentient being? Or does it merely believe it has, and merely appear that way to us?
What about that perfect copy of you I spoke of earlier? Is it just a soulless machine made of flesh and blood that's not distinguishable from the actual you? Or is somebody actually in there? Are you in there?
These questions are ethically important, and yet there is no conclusive answer to any of them. Every claim to know any of this irrefutably is fundamentally flawed by the lack of empirical evidence.
Because of that, by the same logic as Schrödinger's Cat, a robot that believes it's human is in fact a self-conscious entity. Which means that the moment we pull the plug, as you are sometimes required to do in SOMA, we take the life of a sentient being. But at the same time, it's also nothing but a machine. Just like you.
You are a conscious being inhabited by a soul. You are self-aware and you know it, but at the same time you are also nothing but a soulless machine, at least according to logic.
So here's something for you to think about, something SOMA also probes the player with in its finale. If you had the chance to live the rest of your life in a simulation, one that feels 100% authentic, so authentic that you wouldn't even know you were in a simulation, would you choose the simulation, or would you stay with your “real” life? And to make the question even more impactful: how do you know you're not already in such a simulation?