
The Hard Problem of Consciousness as a Blocker for AI/ML


One of the most infamous challenges for the mainstream physicalist paradigm of today is the hard problem of consciousness, also called the mind-body problem or the explanatory gap.

Despite a plethora of advances in neuroscience and neurobiology over the past century, many feel that the hard problem is insoluble. At the very least, no neuroscientist worth their salt would argue that it has been solved, though some would argue that it was never a problem to begin with.

What is the hard problem of consciousness?

Here is what the hard problem entails:

It is not possible, even in principle, to reduce qualitative experience to the quantitative parameters of observed physical matter, regardless of the arrangement of that matter (Chalmers, 2003).

In other words, the mainstream paradigm of today claims that the physical brain, which is an incredibly complex arrangement of matter, generates consciousness. However, we do not understand how, even in principle, that happens.

For example, how can mathematical abstractions, such as mass, charge, and spin, give rise to the experience of what it is like to taste chocolate? The current paradigm has identified hundreds of neural correlates of consciousness (NCCs), correlations between brain activity and conscious states, but no causal mechanism by which we can reduce any conscious state to specific brain activity.

The easy problems of consciousness

Further, we know that the brain performs computational, behavioral, predictive-modeling, and cognitive functions, such as the integration of information. These are called the “easy problems of consciousness,” not because they are easy in any absolute sense, but because we have an idea of how to explain them.

We can find neural and computational mechanisms that account for how the brain performs these functions. But why don’t those demonstrably useful functions happen in the dark, without subjective experience, as they do in today’s computers?

From an evolutionary standpoint, phenomenal consciousness seems unnecessary at best and harmful to our survival fitness at worst: for the brain to generate it, we must extract more energy from our environment than we would need without it. If it is so costly and not even necessary, why did the evolutionary process select for it? Here we find the hard problem of consciousness (Chalmers, 2022).

It’s not just about explaining the brain

It is called a “hard” problem because the dilemma goes deeper than the lack of a scientific causal link between brain activity and consciousness. There is no way, in principle, for qualitative subjective experiences to reduce to quantitative arrangements of matter that, by definition, have no qualities at all.

On top of that, everything that we know of the world, including the brain itself, we know through and in consciousness. In philosophy of mind and in neuroscience, we are studying our own first-person perspective, not something outside ourselves that we can observe from a distance (Kastrup, 2019).

Implications for artificial intelligence and machine learning (AI/ML)

Just as it has blocked our progress toward understanding how the human brain could produce consciousness, the hard problem stands in the way of artificial consciousness in silicon systems.

Indeed, the hard problem may well be insoluble (Chalmers, 2003; Levine, 1983).

However, should AI/ML research produce such a system, that accomplishment could help dissolve the hard problem of consciousness in humans, answering one of the most profound questions we ask about ourselves, a mystery our greatest thinkers have contemplated for thousands of years.

AI/ML seems to have an advantage over the other relevant fields of study here: by trying to replicate consciousness in a system that humans create, we can take this study out of the first person and back into the purview of third-person observation.

That may help us resolve the hard problem of consciousness. Alternatively, it could help us negate the problem entirely: if we could explain why we feel consciousness is so difficult to explain, perhaps we could show that there is no actual problem at all.

The meta-problem of consciousness

This is the meta-problem of consciousness: the problem of explaining why we feel that consciousness is something special that defies physical explanation. If, for instance, we could identify an evolutionary reason for such a belief, we might be able to explain away the hard problem (Chalmers, 2022).

Then again, perhaps we think consciousness is special because it is. Other metaphysical theories, such as dualism, panpsychism, and idealism, certainly claim so.

For now, the hard problem remains one of physicalism’s most infamous open problems, though it is certainly not the only one. AI/ML has a great opportunity to shed light on this debate; at the same time, the hard problem stands as a blocker for any future in which a computer might achieve human-level consciousness.

In future posts, we’ll discuss the major metaphysical theories on the table, how they handle the relationship between consciousness and matter, and what they mean for AI/ML.

Bibliography

  1. Chalmers, D. (2003). Consciousness and its Place in Nature. In S. Stich & T. Warfield (Eds.), The Blackwell Guide to the Philosophy of Mind. Malden, MA: Blackwell.
  2. Chalmers, D. (2022). The Meta-Problem of Consciousness. Shulman Lectures. Yale University.
  3. Kastrup, B. (2019). The Idea of the World: A multi-disciplinary argument for the mental nature of reality. iff Books.
  4. Levine, J. (1983). Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly, 64: 354-361.

Michael Santos is a thriller author, amateur philosopher, member of the American Philosophical Association (APA), and technology industry writer. Explore his thriller novels at: https://michaelsantosauthor.com/
