Cracking the AI Consciousness Conundrum
Panpsychism, Cosmopsychism, Micropsychism, and the Role of Analog and Digital Systems in AI Consciousness
Introduction: Welcome and Goals of the Substack
I'd like to extend a warm welcome to all of the new subscribers here! We have some fascinating topics to cover, and I'm excited to dive in. My goal for this Substack is to explore the ideas and issues surrounding AI and consciousness. This is a vast subject with many moving pieces, spanning diverse fields such as Philosophy of Mind, Psychology, Computer Science, Cognitive Science, Neuroscience, and more. As we jump into these areas, I'll be drawing from various sources but will aim to stay grounded with relevant scientific papers on the different topics we cover.
The Importance of Understanding AI and Consciousness
As we progress further into these large fields, I'll try to examine more diverse subjects, and we'll see where this journey takes us. There is a wealth of information available right now, and it's difficult to overstate the importance of this type of inquiry at the moment. For many in the field of AI research, the recent release of GPT-4 has prompted a reevaluation of some previously held ideas about sentience, metacognition, phenomenology, and intelligence, among other concerns.1
Exploring the Boundaries of Intelligence and Consciousness
While some define intelligence in broad terms, such as the ability to pursue goals (thinking of Michael Levin)2, further exploration is needed into notions of higher-order thinking, planning, self-reflection, and other concepts. While these various approaches may or may not be linked with cognition or intelligence, depending on where the goalposts are set for each, they’re still crucial in helping to define the space we’ll be investigating.
In order to gain a better intuition of what we’re dealing with here, we'll need to take a quick step back. It's essential to have at least a basic grasp of some of the boundaries of the entire discussion. One of the most foundational ideas here is probably the notion of ontology. If you've ever taken an introductory philosophy course, you'll have encountered this subject. It's essentially a subset of Metaphysics, where the focus narrows a bit to the study of existence or being.
Simple Overview of Ontology
Ontology is a branch of philosophy that deals with the study of existence or being. It focuses on the nature of reality and the categorization of entities that exist in the world. Ontology seeks to answer fundamental questions about what exists, the relationships between entities, and the nature of these entities. (summary by GPT-4)
As we dive headfirst into the current discussion on the nature of whether AI is capable of phenomenality (or conscious experience), we need to further define our understanding of the ontologies we'll be discussing. As you might already know, most of us in the West have a kind of built-in physicalist ontology. Unless you were educated somewhere other than the US or Europe, you likely see the world as made of physical matter, in addition to another concept we usually call "mind". This is the typical mind/body duality that many of us use as a kind of default way of understanding the world.
Diverse Ontologies: Physicalism, Idealism, and Dualism
Of course, there are many other ontologies out there, not least those that have a non-dual approach, viewing the universe and everything in it as composed of the same substance. Now, the true nature of that substance is still up for debate, but at least to some extent these distinctions help us get in the ballpark. To define this further, there are materialist ontologies, of which physicalism is a type, and then there are ontologies that see the universe and all within it as made of some kind of "mind".
You've probably encountered this at some point, especially if you've taken a philosophy course. You might recall the philosopher George Berkeley from a few centuries ago. While it's not my intention to get too deep into this distinction here, it's relevant as a quick refresher. Berkeley was an "idealist", meaning that he believed all things in the universe were made of something called "mind".
This is, of course, very different from how we understand the world in a physicalist or materialist sense. In our more prevalent physicalism here in the West, we believe that the fundamental substance of reality is physical or material, rather than mental or ideal. A highly simplified version of our relatively common perspective is that physical things can combine into more complex entities from which other things can eventually emerge.
A good example of a well-known materialist philosopher would be Karl Marx. His focus was on the way the material world was structured and how this affected our sociopolitical situation. And, we also can't leave out everyone's favorite thinker René Descartes, who famously said, "Cogito, ergo sum" or "I think, therefore I am." He was essentially a dualist, holding that reality is made of two distinct substances, mind and matter. This makes his ideas stand relatively apart from both idealism and physicalism.
Non-Reductive Physicalism and the Mind-Body Problem
Of course, not everyone thinks this way. Some believe, along with Descartes, that there's a serious split between mind and matter, and they are entirely different substances, while others believe that certain combinations of matter eventually, with enough complexity, give rise to mind. This latter view often falls under the umbrella of non-reductive physicalism3, which suggests that mental phenomena are emergent properties of physical systems. But there's a problem here. Even in our current era, no one has really been able to explain how this emergence happens, leaving us with the old "mind-body problem" that continues to challenge philosophers and scientists to this day.
Challenges in Understanding Consciousness: Explanatory Gap and Subject Combination Problem
The difficulty in explaining the relationship between subjective experiences and objective physical processes in the brain is typically referred to as the "explanatory gap"4 or, similarly, as the "hard problem of consciousness"5. In addition, there's a related issue called the "subject combination problem", which comes from certain theories of consciousness that propose it's a fundamental property of the universe. These theories must explain how individual conscious experiences can be combined into a unified, higher-level conscious experience.
For now, we'll need to leave further discussion on these issues for a future essay. Initially, I had thought I would go through all the various ontologies and lay the groundwork, but that seemed rather dull. Instead, I'd like to jump right into a topic that's much more interesting (at least I think so), and for that, we'll need to focus our discussion on a few specific ontological frameworks.
Panpsychism, Cosmopsychism, and Constitutive Micropsychism
For the remainder of this essay, we will be discussing a handful of ontological concepts, including panpsychism, cosmopsychism, and constitutive micropsychism. As such, we will save a larger discussion on idealism for another essay, where I'll provide a more thorough analysis of how it might pertain to the AI and consciousness question. To start, let's lay out some context, and then we'll go through it.
Panpsychism is a philosophical view that posits that consciousness or mind-like properties are fundamental and ubiquitous in the universe. According to panpsychism, everything, from elementary particles to complex systems, possesses some degree of consciousness or mental properties. Panpsychism avoids the hard problem of consciousness by claiming that consciousness is a basic feature of reality, rather than something that arises from complex physical processes. Broadly, panpsychism can be considered a form of property dualism, which is distinct from both physicalism and idealism. However, some versions of panpsychism may lean towards physicalism, while others may have affinities with idealism.
Cosmopsychism is a variant of panpsychism that posits a single cosmic consciousness that encompasses the entire universe, and all individual conscious experiences are derived from or are aspects of this universal consciousness. This view asserts that the cosmic consciousness is more fundamental than individual conscious experiences. Cosmopsychism, like panpsychism, can be seen as a form of property dualism, but its emphasis on a single, all-encompassing consciousness may lead some versions of cosmopsychism to be closer to idealism.
Constitutive Micropsychism is a form of panpsychism that proposes that the most basic constituents of the physical world, such as elementary particles, possess mental properties or proto-consciousness. These micro-level mental properties are thought to be the building blocks of higher-level conscious experiences. Constitutive micropsychism emphasizes that consciousness is a fundamental aspect of the universe, present even at the most basic level of physical reality. Like panpsychism and cosmopsychism, constitutive micropsychism can be considered a form of property dualism, and its alignment with either physicalism or idealism depends on the specific interpretation or version of the theory.
(summarized by GPT-4)
Panpsychism and AI Consciousness: Arvan and Maley's Paper
As mentioned at the outset, I'd like to examine a recent paper by Marcus Arvan and Corey Maley6, which discusses whether AI would need to be conceived in an analog environment to have coherent macrophenomenal experiences. The term "coherent macrophenomenal experience" refers to an AI agent's ability to assemble the phenomena that make up an experience in such a way that the whole comes together into what they call a "coherent manifold of phenomenally conscious experience"7. In other words, under what circumstances could an AI system be aware of its environment and experience the qualities of that environment in a way similar to biological entities?
In the context of everything we've discussed so far, the authors of this paper argue that if the ontological construct of "panpsychism" is true, and further, if a variant of panpsychism called "micropsychism" is true, then "human brains must somehow manipulate fundamental microphysical-phenomenal magnitudes in an analog manner that renders them phenomenally coherent at a macro level"8. This implies that human brains (and most likely animals, though we don't know where this would stop on the scale) would use some kind of analog machinery, possibly something like "microtubules"9 in the brain, to "covary monotonically"10 with various analog magnitudes found in nature.
The Implications of Micropsychism for Digital AI Systems
To better understand this concept, consider that our neurons will fire with varying levels of intensity based on the stimuli they receive. In addition, our brains will then take these varying levels of stimulation along some dimension and represent them through a neural model that also varies along some dimension. Finally, these neural magnitudes end up somehow producing, or are involved in, the phenomenally conscious experiences that vary along some dimension.
To simplify, our ears hear a sound and the neurons in the auditory nerve fire in sync with the sound's loudness. The neural activity responsible for transcribing this into the brain will fire with a varying level of intensity according to the loudness of the sound. Lastly, the way a person feels the particular sound will vary with the sound's loudness, as all these neural activations somehow result in the experience.
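To make the structure of this claim concrete, here's a minimal sketch of a three-stage chain in which each stage covaries monotonically with the last. This is my own illustration, not code from the paper: `firing_rate` and `felt_intensity` are hypothetical stand-ins, and the only point being demonstrated is that each stage preserves the ordering of the one before it.

```python
import math

def firing_rate(loudness_db: float) -> float:
    """Hypothetical neural response: saturating but strictly increasing."""
    return 100 / (1 + math.exp(-(loudness_db - 60) / 10))

def felt_intensity(rate: float) -> float:
    """Hypothetical mapping from firing rate to felt loudness."""
    return math.log1p(rate)

loudness_levels = [30, 45, 60, 75, 90]  # stimulus magnitudes, in dB
chain = [felt_intensity(firing_rate(db)) for db in loudness_levels]

# Louder sound -> higher firing rate -> more intense experience:
assert all(a < b for a, b in zip(chain, chain[1:]))
```

The specific functions don't matter; what matters is that a monotone chain of magnitudes runs unbroken from stimulus to experience.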
The authors go on to state that if these two postulates hold, then a digital AI system would have a difficult, if not impossible, time creating a coherent view of the entirety of its phenomenal experience. This is because, as the paper asserts, digital computation by its very nature abstracts away from certain fundamental magnitudes that must be present if micropsychism is true.
This is quite a statement. Let's unpack it a little. The authors are essentially saying that there are certain properties, phenomenal properties, that inhere in the microstructure of matter at some level. In light of our quick summary above, this is still a physicalist ontology, but it's an interesting one where consciousness is baked right into matter itself at a molecular, or more probably, particle-level scale. This way of thinking would help fix several issues that we have with the mind/body problem, but it does continue to suffer from the subject combination problem.
It continues to suffer from this problem because it's difficult to theorize which combination of such elementary, albeit conscious, particles might give rise to a state like the experience of riding a sketchy ferris wheel at the county fair. And speaking of county fairs, one of the hardest issues to account for is whether certain particles can combine to form the experience you get when seeing a particularly well-groomed mullet, but this is a subject for another discussion.
Digital vs. Analog Computation and AI Consciousness
This paper goes on to discuss the differences between analog and digital computation. The major distinction here is that analog uses magnitude to convey information about its object, whereas digital computation uses a representation of something in the real world to achieve a sense of scale. In other words, if you look at the height of a rain collector and see that there's one inch of rain, and in the next hour you look and there's two inches, you've just computed the change in an analog fashion using magnitudes. On the other hand, if you were brewing your own beer in the garage and you had a digital meter that measured the specific gravity of your concoction as 1.010 as opposed to 1.020, the computation was an abstraction from the actual measurement.
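The rain-gauge versus gravity-meter contrast can be sketched in a few lines. This is my own illustration of the distinction, not code from the paper: the analog reading *is* the magnitude itself, while the digital reading rounds it to one of a finite set of representable steps, discarding everything in between.

```python
def digitize(magnitude: float, resolution: float) -> float:
    """Round a continuous magnitude to the nearest representable step."""
    return round(magnitude / resolution) * resolution

rainfall_inches = 1.3721                      # the analog magnitude itself
digital_reading = digitize(rainfall_inches, resolution=0.25)

print(digital_reading)  # 1.25 -- the abstraction; the 0.1221 in between is gone
```

The numbers here are made up; the point is only that digitization is a many-to-one abstraction from the underlying magnitude.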
Challenges for Digital AI in Grasping Conscious Experiences
According to the authors of the paper, abstractions like this make the world difficult for an inherently digital system to fully grasp. That's because while a digital intelligence might understand that there's a gradation of some magnitude, it's not a first-order measurement, and as such, a great deal of information gets clipped out of what's received. This could lead to a situation where, on the authors' view, a digital AI would essentially view the world in a kind of "flickering"11 sense, where colors might appear as green, red, green, red, or in some other pattern.
Functionalism and Fungible Computation
They surmise this is due to the nature of digital computation itself, whereby the representation is either all at once or delivered in a string of serial computations. As they move into this argument, they make reference to the notion of "functionalism" in some AI research circles. Basically, this is a theory of mind that posits that mental states are identified by their functional roles or relationships, rather than their intrinsic physical properties. In the context of AI, functionalism holds that it's the organization of a system, rather than its material composition, that determines whether it has conscious experiences.
In other words, those who fall into this camp believe that any particular cognitive structure, whether found in a biological brain or synthetically created out of silicon or some other artificial medium, is the same as long as it takes inputs and produces outputs that remain functionally similar. Personally, I find that another way of looking at this might be called "fungible computation". This is a term that I came up with, and it helps me sort of grok12 this concept better.
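A playful sketch of what I mean by "fungible computation": under functionalism, two systems with entirely different internals but identical input/output behavior count as instances of the same mental-state type. Both classes below are hypothetical stand-ins of my own.

```python
class BiologicalAdder:
    """Stand-in for a carbon-based system."""
    def respond(self, a: int, b: int) -> int:
        return a + b  # imagine neurons doing this

class SiliconAdder:
    """Stand-in for an artificial system with different internals."""
    def respond(self, a: int, b: int) -> int:
        return sum((a, b))  # different substrate, same functional role

stimuli = [(1, 2), (3, 4), (10, -7)]
assert all(
    BiologicalAdder().respond(a, b) == SiliconAdder().respond(a, b)
    for a, b in stimuli
)  # functionally indistinguishable, hence "fungible"
```

For the functionalist, swapping one implementation for the other changes nothing that matters; for Arvan and Maley, the substrate is precisely what matters.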
Consciousness as Maximally Integrated Information
Now, the authors of the paper also go into something called Integrated Information Theory. But this essay is getting quite long at this point, so I won't go into it in any real depth here. Let's just make note of it, and we'll take a deeper look at IIT in another essay. The takeaway here is that some might use IIT as another way of trying to understand what's going on with consciousness, especially in artificial systems. Its basic premise is that consciousness arises when information is maximally integrated within a system, and from that integration an emergent, conscious property could arise. IIT also offers a way of quantifying how much consciousness is present, if that makes sense.
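To gesture at what "quantifying integration" could even look like, here's a deliberately toy sketch of my own. Real IIT phi calculations are far more involved than this; all this does is measure, via mutual information in bits, how much a two-part system's joint behavior exceeds what its parts explain independently. The observed states below are made up.

```python
import math
from collections import Counter

# Made-up observed joint states of two units (A, B):
states = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 1), (1, 0), (0, 0), (1, 1)]
n = len(states)

p_joint = {s: c / n for s, c in Counter(states).items()}
p_a = {a: c / n for a, c in Counter(a for a, _ in states).items()}
p_b = {b: c / n for b, c in Counter(b for _, b in states).items()}

# Mutual information: how far the joint distribution departs from independence.
mutual_info = sum(
    p * math.log2(p / (p_a[a] * p_b[b])) for (a, b), p in p_joint.items()
)
assert mutual_info > 0  # the whole carries information the parts alone do not
```

Again, this is not phi; it's only meant to make the idea of an "amount of integration" feel less mysterious before we tackle IIT properly.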
This is what the authors of the paper have to say about it:
Consequently, functionalism provides no unambiguous grounds for thinking that macrophenomenal coherence requires any form of microphysical phenomenal coherence—and hence, no reason to think that digital A.I. couldn’t realize the kinds of analog physical-functional relationships necessary for macrophenomenal coherence.
In other words, functionalism is a wide-open playbook that would have conscious AI popping up everywhere and running around all over the place. So they aren't too keen on it, because as shown above, they believe there are very distinct preconditions necessary for consciousness to work as it does in biological systems.
Arvan and Maley’s Best Conclusion (IMHO)
And now, here's one of the best parts from their conclusion:
However, if micropsychism is true, then the relevant phenomenal magnitudes exist as ‘microphenomenal properties’ at the level of fundamental microphysics. Consequently, if micropsychism is true, then the only way to achieve macrophenomenal coherence of the sort we experience—such as, again, the kind of coherent visual phenomenal experiences we have of seeing flowers in a vase, rather than experiencing incoherent visual ‘static’—is for the phenomenal features of the mind (i.e., the mechanisms that neurons comprise) to be realized in an analog mechanism (the human brain) that manipulates microphysical quantities (mass, charge, etc.) in a way that in turn brings together relevant phenomenal qualities (colors, etc.) in the right way (viz. realizing a coherent visual experience of flowers rather than ‘static’). But this is simply not what digital computers do or can do, given the nature of digital representation.
Again, to sum up their point, if consciousness exists in some microphysical way at the level of subatomic particles, or somewhere quite fundamental to nature, then, for consciousness to emerge in any relevant way from AI systems, they would need to be able to interpret and manipulate analog magnitudes in much the same way that we do. Digital re-representation just leaves them in a kind of binary, AI-based Truman show, disconnected from reality in such a way that it's not really consciousness in any meaningful sense.
Examining the Paper's Conclusions
I do have a few critiques of their conclusions. And if you've read this far, first of all I'd like to say that you're awesome, and second I hope this has been helpful. In conclusion, here are some of my thoughts on the paper.
In my opinion, here’s an interesting way to look at what they’ve been talking about. The paper explains phenomenal experiences in a similar way to the steering column of a vehicle. It's not like the column is the road itself, but there is an important analog relationship between the two. And there is a steering wheel at the top that also gives feedback from the column, but the angles have changed.
A digital experience, on the other hand, would be similar to a drive-by-wire system where either there is no feedback mechanism whatsoever through the wheel, and the driver is left to interpret the angle of the wheel and the yaw of the car visually, or the system incorporates some kind of motor that re-interprets forces, yaw, angles, etc., and feeds them back in a kind of analog re-representation.
In an artificial system this would be similar to being surrounded by screens that represent the world outside or data from sensors. The actual center of intelligence would be left with a task of pure interpretation from these values, and magnitudes would have to be inferred. This might be analogous to imagining a sunset with eyes closed as it's relayed by another person. There would be references to magnitudes, the beauty of the various shades and hues, the rate at which the sun appears to sink in the sky, but it's still a re-interpretation of some outside reality.
Although such a system might perfectly replicate the functional behavior of actual neurons at a software level, it would only do so through representations that are physically realized as a set of ‘on’ and ‘off’ electrical signals that do not vary in physical magnitude
(Arvan, Maley p. 19)
But this might be overcome by some kind of learning of an internal representation of whatever is being physically realized. It would still be second or third order, but nonetheless, an approximation of phenomenological experience. It's basically the Brain in a Vat13 situation.
Instead, they would represent such things, across all cognitive, emotional, and sense modalities (sight, hearing, touch, thought, etc.) using a single physical-phenomenal magnitude (voltage X instantiating phenomenal redness) alternating ‘on’ and ‘off’ (or one of two levels). As such, if panpsychism is true, then although digital AI might behave just like you or I—claiming to see visual objects, avoid and manipulate objects in their environment, cry out in pain, and so on— their first-personal phenomenal experience would be comprised by single physical-phenomenal magnitudes alternating on and off, having little or no correlation with what they represent (or claim to represent).
(Arvan, Maley p. 21)
I'm not sure this is accurate. This would hold if the system could only represent some phenomenal state as a single magnitude, as in the case of a single "red". But why wouldn't a system with a large amount of memory and processing power be capable of learning and representing a million different shades of red, bringing some varying combination of these learned representations to bear at once, much as a pixelated screen uses millions of different points of light working together to form a coherent image, with gradations fine enough to satisfy even our analog brains?
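A rough sketch of my counterpoint here (my own back-of-the-envelope arithmetic, not an argument from the paper): with enough representational capacity, the steps between digital levels can be made finer than any difference an analog perceiver could plausibly distinguish. Even 8 bits per channel already gives 256 levels, and the worst-case quantization error is half a step.

```python
bit_depth = 8
levels = 2 ** bit_depth              # 256 representable "shades of red" per channel
max_error = 1 / (2 * (levels - 1))   # worst-case gap to the nearest level, on a 0..1 scale

print(levels, f"{max_error:.5f}")    # 256 0.00196
```

Bump the bit depth and the error shrinks exponentially, which is why the "flickering" picture seems to me to underrate what high-resolution digital representation can do.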
Is it possible that a future science of consciousness might achieve such knowledge—mapping specific microphenomenal states (e.g. phenomenal redness) to specific microphysical states (i.e. 5-volts)—such that scientists might discern precisely how brains integrate analog microphysical-phenomenal information to generate macrophenomenal coherence? Might some such knowledge in turn lead us to ascertain whether coherent macrophenomenology is multiply realizable in different, physically realistic mechanisms—for example, in analog (rather than digital) silicon-based processors?
(Arvan, Maley p. 31)
This is essentially the subject combination problem. They are just stating that one day we might crack that problem. But in a way (in my opinion), this is a kind of physicalist search for neural correlates14 in the extrinsic world. In other words, if we could find which patterns of microphysical phenomena are correlated with which states, we might be able to then rearrange matter to generate these phenomena—in humans and also in machines.
Final Thoughts
We've gone through quite a few ideas in this essay. My goal here was to introduce you to some of these concepts so we can refer to them later. Obviously there's a lot to cover on this subject, and it's pretty difficult to strike the right balance between verbosity and concision. My hope is that this landed somewhere in the ballpark, even if it veered off and hit the hotdog stand above left field. If you're interested in learning more about these ideas and more, stay tuned, I'll be unpacking more in future essays. Thanks again for sticking around, and until next time, stay conscious and enjoy those experiences!
1. See Lex Fridman’s interview with Eliezer Yudkowsky:
2. https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02688/full
3. https://www.researchgate.net/publication/227828169_Nancey_Murphy's_Nonreductive_Physicalism
4. https://philpapers.org/rec/LEVMAQ
5. http://www.scholarpedia.org/article/Hard_problem_of_consciousness
6. https://philpapers.org/archive/ARVPAA.pdf
7. Ibid., page 2
8. Ibid., page 1
9. https://www.elsevier.com/about/press-releases/research-and-journals/discovery-of-quantum-vibrations-in-microtubules-inside-brain-neurons-corroborates-controversial-20-year-old-theory-of-consciousness
10. https://philpapers.org/archive/ARVPAA.pdf, page 1
11. https://philpapers.org/archive/ARVPAA.pdf, page 22
12. https://www.merriam-webster.com/dictionary/grok
13. https://iep.utm.edu/brain-in-a-vat-argument/
14. https://philpapers.org/browse/the-combination-problem-for-panpsychism