The Elusiveness of Consciousness

Ron Henson
Humanities 520—The Self: Body & Mind
Final Paper
Dr. Weimin Sun

The Elusiveness of Consciousness

Personally, I don’t feel well qualified to write a top-notch philosophy paper. It has to be one of the greatest academic challenges I’ve faced, not so much because of the writing part, which comes relatively easily to me, but because of the sheer volume of reference material one has to consult to make a paper appear even halfway competent. Dennett leads to Hume and Descartes, who lead to Chalmers and Churchland, who lead to Metzinger, and Metzinger leads to Russell, who leads back to Dennett and Dawkins. By the end of the day your head is spinning. Taking on a subject as difficult as consciousness means setting myself up for yet another marathon of plowing through ever more philosophical material, only to end up with a rough draft that resembles a giant megalopolis hopelessly searching for a downtown.

My first objective is to establish a thread through the Humanities program. My first paper, “The Digital Native,” focused on the changes brought about by the digital revolution and included some research on virtual reality and on how nearly constant use of technology affects the brain. The next paper, on Caprica, dealt with religion, of course, since it was a religion class, but also addressed some interesting questions such as: “Will we as a species be able to achieve a sort of immortality in the embodiment of a digital form?” This paper, while not taking on the topic of artificial intelligence per se, talks about consciousness and finds a common thread in the question: “Will artificial intelligence eventually achieve consciousness?” To answer that question, one first needs to define consciousness from a philosophical perspective, and that, I’m finding, is not a simple task.

On the matter of consciousness, science has become ever more integrated into our daily lives, especially now that our understanding of the mind keeps growing so rapidly. Dennett has achieved celebrity status for his work on consciousness, among other things. Bertrand Russell wrote a great deal of his work roughly 80 years BYT (Before YouTube), and we’re still talking about him today, which is a significant accomplishment for any academic. In fact, “Bertrand Russell on God” (1959) had 106,694 views on YouTube at the time of this writing.

You see my thesis in the title. Consciousness, at least when one tries to define it from a philosopher’s point of view, is elusive. What I intend to do here is look at two well-known philosophers, Metzinger and Chalmers, analyze their approaches to defining consciousness, and offer a synopsis of why the thing is elusive. With these two, you can’t go from the simple to the more complex; they’re both complex, so I’ll start with Chalmers, who lays some of the groundwork for Metzinger.

Chalmers’ principles

David Chalmers is a well-known philosopher at the University of Arizona in Tucson who has developed some interesting arguments building on Nagel, who observed that it is experience that makes the mind-body problem intractable; Chalmers famously labeled this “the hard problem” of consciousness.

The principle of structural coherence

Awareness is a purely functional notion: you see something with your eyes, it is processed in the visual cortex, and it becomes cognitively accessible information. There is a direct correspondence between consciousness and awareness, and it is this isomorphism between the structures of consciousness and awareness that constitutes the principle of structural coherence. Given this coherence, it follows that a mechanism of awareness will itself be a correlate of conscious experience.

The principle of organizational invariance

This principle states that any two systems with the same fine-grained functional organization will have qualitatively identical experiences. From this, Chalmers draws the conclusion that consciousness (experience) must be a fundamental ingredient of reality, and he links experience directly to physical processes. Let’s examine his neural replacement scenario:

“We can imagine, for instance, replacing a certain number of my neurons by silicon chips. In the first such case, only a single neuron is replaced. Its replacement is a silicon chip that performs precisely the same local function as the neuron. We can imagine that it is equipped with tiny transducers that take in electrical signals and chemical ions and transforms these into a digital signal upon which the chip computes, with the result converted into the appropriate electrical and chemical outputs. As long as the chip has the right input/output function, the replacement will make no difference to the functional organization of the system. In the second case, we replace two neighboring neurons with silicon chips. This is just as in the previous case, but once both neurons are replaced we can eliminate the intermediary, dispensing with the awkward transducers and effectors that mediate the connection between the chips and replacing it with a standard digital connection. Later cases proceed in a similar fashion, with larger and larger groups of neighboring neurons replaced by silicon chips. Within these groups, biochemical mechanisms have been dispensed with entirely, except at the periphery. In the final case, a chip has replaced every neuron in the system, and there are no biochemical mechanisms playing an essential role. We can imagine that throughout, the internal system is connected to a body, is sensitive to bodily inputs, and produces motor movements in an appropriate way, via transducers and effectors. Each system in the sequence will be functionally isomorphic to me at a fine enough grain to share my behavioral dispositions. But while the system at one end of the spectrum is me, the system at the other end is essentially a copy of a silicon robot.” (The footnote was lost in WordPress, but the quotation is from Chalmers.)
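
To make the logic of that scenario concrete, here is a minimal Python sketch of my own (Chalmers offers no code, and every name below is an illustrative assumption, not a real neural model): each unit is reduced to its local input/output function, units are swapped out one at a time, and the system’s overall behavior is checked at every step.

# Toy sketch of the neural replacement scenario; all names are
# illustrative assumptions, not a real model of a brain.
from typing import Callable, List

def biological_neuron(weight: float, bias: float) -> Callable[[float], float]:
    """A neuron abstracted to its local input/output function."""
    return lambda x: max(0.0, weight * x + bias)

def silicon_chip(weight: float, bias: float) -> Callable[[float], float]:
    """A chip that computes precisely the same local function digitally."""
    return lambda x: max(0.0, weight * x + bias)

def run(units: List[Callable[[float], float]], signal: float) -> float:
    """Feed a signal through the chain of units (a crude stand-in for a brain)."""
    for unit in units:
        signal = unit(signal)
    return signal

params = [(0.5, 1.0), (1.2, -0.3), (0.8, 0.1)]
brain = [biological_neuron(w, b) for w, b in params]
hybrid = list(brain)

# Replace units one at a time, as in the thought experiment; the overall
# input/output profile (the "behavioral dispositions") never changes.
for i, (w, b) in enumerate(params):
    hybrid[i] = silicon_chip(w, b)
    assert run(hybrid, 2.0) == run(brain, 2.0)

The assertion never fails because, at the level of fine-grained functional organization, nothing has changed at any step; the question Chalmers presses is whether the experience could change anyway.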

This goes back to the question, “Will artificial intelligence achieve consciousness?” Chalmers’ conclusion that any functional isomorph of a conscious system must have qualitatively identical experiences, and thus that the silicon robot must be conscious too, affirms the imagination of the writers of “Caprica,” who envision a world in which Zoe, one of the main characters, achieves consciousness in a virtual-reality universe; later, when her consciousness is downloaded onto a flash drive and embedded in a cybernetic life form, she takes on a physical, albeit mechanical, body. Flash back to Star Trek: The Original Series. In “The City on the Edge of Forever,” while investigating some rather strong temporal disturbances, the crew encounters the Guardian of Forever, and Kirk asks, “Are you machine or being?” to which the Guardian replies, “I am both and neither.”

Double Aspect Theory of Information

According to Spinoza, the mental and the physical are two aspects of the same substance. Chalmers elaborates on Spinoza’s idea with his double-aspect theory of information, which can be summed up as follows:
1. Information is physically realized
2. Information is phenomenally realized
3. Whenever we find an information space realized phenomenally, we find the same information space realized physically
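
As a toy illustration of my own (nothing like this appears in Chalmers), one can think of a single abstract information space with two parallel realizations:

# One abstract information space: three distinguishable states.
information_space = {"red", "green", "blue"}

# Physical realization: each state as a (made-up) voltage level on a wire.
physical = {"red": 0.0, "green": 2.5, "blue": 5.0}

# Phenomenal realization: each state as an experienced quality.
phenomenal = {"red": "reddish quale", "green": "greenish quale", "blue": "bluish quale"}

# Claim 3, in code: wherever a state is realized phenomenally,
# the very same state is realized physically.
for state in information_space:
    assert state in phenomenal and state in physical

The point of the toy is only that the two dictionaries index the same underlying state space; Chalmers’ claim is about which realizations always travel together.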

J. J. C. Smart said, “…suppose we identify the Morning Star with the Evening Star. Then there must be some properties which logically imply that of being the Morning Star, and quite distinct properties which entail that of being the Evening Star.”

He goes on to explain that for the sensation “there must be some properties (for example, that of being a yellow flash) which are logically distinct from those in the physicalist story.” He characterizes the objection to physicalism as “the objection that a sensation can be identified with a brain process only if it has some phenomenal property … whereby one-half of the identification may be, so to speak, pinned down,” alluding to the idea that the problem for physicalism will arise for that phenomenal property even if the original mind-body identity holds. This difficulty spurred the “dual-aspect” theory.

Over the past four decades, both neuroscientists and cognitive scientists have been researching something that was traditionally the realm of philosophers. Part of this renewed interest in the study of consciousness can be attributed to an increase in the number of researchers in the fields of neuroscience and cognition; part of it can be credited to substantial leaps forward in the technologies used to map and study the brain. Metzinger’s enthusiasm for the subject is quite clear. He writes, “Consciousness is the most fascinating research target conceivable, the greatest remaining challenge to the scientific worldview as well as the centerpiece of any philosophical theory of mind.”

Consciousness

Thomas Metzinger is a prominent German philosopher, best known for his work in consciousness studies and neurobiology and for his philosophical writings on the self, which, according to him, doesn’t exist. Metzinger’s Phenomenal Self-Model (PSM), the Self-Model Theory of Subjectivity (SMT), and the Phenomenal Model of the Intentionality Relation (PMIR) are three of the most important concepts he deals with in his book, “Being No One: The Self-Model Theory of Subjectivity.” His thesis is that “no such things as selves exist in the world: Nobody ever was or had a self. All that ever existed were conscious self-models that could not be recognized as models. The phenomenal self is not a thing, but a process—and the subjective experience of being someone emerges if a conscious information-processing system operates under a transparent self-model.” This was really quite exciting to read in light of some of the things that have been brought up in “Caprica.” (Again, the footnotes were lost, but Caprica is a prequel to the popular sci-fi series “Battlestar Galactica.”)

If Metzinger is correct, the conscious self is a model created inside our brain, an internal image. If that’s true, everything we experience is a virtual self in a virtual reality. This suggests that in the not-too-distant future we might be able to create artificial intelligence and implant a sort of cyber “consciousness” into cybernetic “life forms.” Well, maybe not, but the prospect is fascinatingly in line with Caprica.
The self-model theory of subjectivity (SMT)

Table 20.1 (not visible in the online version) outlines a neural representation of a phenomenal representation, and it’s a good idea to have a look at some of Metzinger’s earlier work before plunging into the SMT. One thing crucial to keep in mind is that Metzinger is not only a philosopher; he has also done a great deal of research on artificial intelligence, cognition, and neuropsychology. When he approaches philosophy, it’s not necessarily from an abstract point of view; his perspective also draws on his work on the physical properties of the brain. He describes the phenomenal self as the first-person perspective. The central questions motivating the SMT are: “How, in principle, could a consciously experienced self and a genuine first-person perspective emerge in a given information-processing system? At what point in the actual natural evolution of nervous systems on our planet did explicit self-models first appear? What exactly made the transition from unconscious to conscious self-models possible? Which types of self-models can be implemented or evolved in artificial systems? What are the ethical implications of machine models of subjectivity and self-consciousness? What is the minimally sufficient neural correlate of phenomenal self-consciousness in the human brain? Which layers of the human self-model possess necessary social correlates for their development, and which ones don’t? The fundamental question on the conceptual level is: What are the necessary and sufficient conditions for the appearance of a phenomenal self?”

All of these multisyllabic utterances may seem difficult to grasp, but that’s really not the case. Part of it is quite simple, as can be seen in the illustration below (again, WordPress has its limitations, and the image did not survive). Metzinger seems to be drawing a parallel between man and machine, and in a sense that’s what we are: biological machines. To break down his model into its component parts, it is best to illustrate it.

Our brains are not that different from machines when it comes to processing information. There is a notion of a first-person perspective, and there is a notion of the self from which this first-person perspective originates. Metzinger gives us an overview of the phenomenal first-person perspective by describing three target properties.

Mineness: A higher-order property of particular forms of phenomenal content. For example, I experience my leg as belonging to me, and I experience my thoughts and emotions as belonging to me; there is a sense of ownership. Rather than construct an example, I’ll point to the rubber-hand experiment, in which “a subject sees a rubber hand plausibly positioned to extend from her arm while her real hand is hidden. If the fake and real hands are stroked simultaneously, she may feel the stroking in the location of the rubber hand, not the real one.” The moment you have the feeling that this is my hand, although cognitively you know it is not, you have what Metzinger calls “the phenomenal self.”
Selfhood: A way of being infinitely close to yourself before starting any thought or cognitive activity. For example, we say, “I am someone.”

Personal perspective: An inward perspective in which you switch from a third-person perspective, talking about a property of conscious space, to a first-person perspective on yourself.

This is where you get into the Phenomenal Self-Model (PSM). From a logical point of view, you can distinguish three classes of information-processing systems. Some can do simulations; then you have emulations. The difference is that a simulator reproduces a given phenomenon as closely as possible; Doppler radar is a good example of a simulation. An emulator duplicates the function of one system using a different system, so that the second system behaves like, and appears to be, the first: my computer has a feature called Dashboard, and when it’s activated, a calculator (or rather an emulation of a calculator) comes up. The self-model is a third class of information-processing system, one that both simulates and emulates.
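
Here is a rough Python sketch of my own of the three classes (the class names and toy behaviors are assumptions for illustration, not Metzinger’s formalism):

# Illustrative sketch of the three classes of information-processing
# systems described above; all names and behaviors are toy assumptions.

class Simulator:
    """Reproduces a target phenomenon as closely as possible."""
    def simulate(self, humidity: float) -> bool:
        # A toy rule standing in for a real atmospheric model.
        return humidity > 0.8  # "will it rain?"

class Emulator:
    """Duplicates the function of one system using a different system."""
    def __init__(self, target):
        self.target = target  # e.g. a Calculator object being emulated

    def __getattr__(self, name):
        # Behave like, and appear to be, the target system.
        return getattr(self.target, name)

class SelfModeler:
    """The third class: a system that simulates and emulates itself."""
    def __init__(self):
        # The system's running hypothesis about its own current state.
        self.self_model = {"upright": True, "left_hand_is_mine": True}

    def update(self, sensory_input: dict) -> None:
        # Revise the hypothesis as new bodily signals arrive.
        self.self_model.update(sensory_input)

The SelfModeler’s dictionary is deliberately just a hypothesis about the system’s own state; as the astronaut and rubber-hand cases show, the right inputs can update it, or fool it.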

The phenomenal self-model is a plastic, multimodal structure, possibly evolving from a partially innate and “hardwired” model of the spatial properties of the system.
An active self-model is not a little man in the head (although I love the Men in Black cinematic metaphor); it’s a subpersonal state. When you wake up in the morning, the organism that you are has to perform a number of complex computational operations involving sensorimotor integration. Conant and Ashby describe this as a transient computational module, episodically activated by the system in order to regulate its interaction with the environment.

Astronauts face several difficulties in a zero-gravity environment. They get disoriented because they can’t feel where up and down are in their bodies; it’s a type of motion sickness. They can overcome this disorientation by hitting their heel against something solid, and instantly the body image locks in again and their conscious experience recognizes up and down. What this demonstrates is that the human body model is a virtual model: it’s just a hypothesis the system has about its current state. If it’s underconstrained, as in a spaceship, it becomes very context-sensitive. So the self-model is a virtual model, and this is where the idea of phantom limbs comes in.

One of the main problems schizophrenic patients have is that if they cannot integrate their own thoughts into their cognitive self-model, they cannot experience those thoughts as their own. Unilateral hemineglect is a very well-documented and well-studied phenomenon in which a patient has the misimpression of being dissociated from one of their limbs. In alien hand syndrome, the patient will pick up the phone with one hand while the other hand tries to hang it up. Woody Allen, in his film Hollywood Ending, plays a character who has psychosomatic blindness and, in spite of this limitation, has to direct a movie. All of these cases demonstrate how a system uses different and alternating self-models in order to deal with traumatic situations.

There is something in your conscious experience that is so invariant it is almost unconscious, and it has to do with the background sensation of your own body. A part of your body is autonomically active, and it tells you: this is I. There is an empirical hypothesis from Ronald Melzack, who postulates that you have a hardwired portion of a neuromatrix underlying the spatial model of your body. There is also an anchor for your emotions: the renowned neuroscientist Antonio Damasio argues, in his somatic marker hypothesis, that the feeling of intuition a person sometimes has, one that cannot be rationally accounted for, is grounded in bodily signals registered by the brain. Metzinger calls this “emotional embodiment.”

As a science fiction fan, I love the work of Thomas Metzinger. I suppose that’s because, thanks to a voracious appetite for sci-fi, some of the things he expresses philosophically have already been expressed thematically in works like the film A.I. About four minutes into the clip I had embedded, we see an android’s answer to the question, “What is love?” The clip also brings up dreams, and that is precisely what is elusive about consciousness. It raises another question: if we eventually reach the point, technologically, where we can create an artificial intelligence so advanced that it achieves consciousness, would dreams be a necessary component of that consciousness? That’s actually a good place to start another paper. For now, though, according to Metzinger, the V-Club is possible, and it wouldn’t be a stretch to say that immortality in digitized form is theoretically not that far-fetched. It flies in the face of Cartesian dualism and makes some of the classic philosophies seem a bit quaint, but in the words of Aldous Huxley, it’s a brave new world.
