
WHAT IS PERCEPTUAL CONTENT?

Why do our brains produce perceptual content as our neurons encode information?

Our brains process information, but there is more to it than that! Somehow, the biophysical processes occurring inside our brains give rise to a subjective experience: a qualitative feeling of existence that includes sensations, thoughts, and emotions. But why?


This is not an easy question to answer. In fact, it is formally known as 'the hard problem', because some philosophers believe it cannot be answered within the framework of science.


David Chalmers, a notable philosopher of mind at NYU, first articulated the problem in 1995 [1]: 


“The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) [2] has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them.”


Chalmers coined the term "hard problem" to distinguish this fundamental question about the nature of conscious experience from the more tractable "easy problems" of brain function. The key element here is the "feeling" of experiencing something, which sets the "hard problem" apart from simply understanding the mechanics of brain activity. This question has significant implications for the philosophy of mind, because it challenges the idea that all mental phenomena can be explained solely by physical processes in the brain.

The hard problem also poses a challenge to neuroscientists: Can you explain this phenomenon? Until they can, it remains a mystery that may lie outside the realm of science, and many people will continue to believe that something non-physical, inexplicable by science, is afoot.


One core issue at play is that we can only access our own subjective experience, making it difficult to know whether another person or a machine has a similar "feeling", even if their brain activity appears identical to our own. While scientists can explain how the brain processes information and performs complex tasks, our current frameworks do not explain why these neurocomputational processes feel like something to the individual experiencing them.


The fields of neuroscience and philosophy of mind are still in their infancy. We do not yet know whether consciousness is an emergent property of complex systems or a fundamental aspect of reality. But we do know that perceptual content is part of it. We do not have raw access to reality – we gather information through our sensory apparatus, and we encode that information in the activity of neurons in our brains. And somehow, that process reconstructs an experience of being in the world, surrounded by sights and sounds and smells and objects and other people.


And the question is important! As artificial intelligence becomes more sophisticated, questions arise about whether it could ever achieve consciousness – and, if so, how we would know. Even if an AI system could perfectly mimic human behavior, it is not clear how we could determine whether it has any subjective experience, whether it "feels" anything. And are animals conscious? Are plants? Is the whole universe conscious, or is consciousness unique to complex neural networks?


Clearly, neuroscience has some work to do. At the very least, we have to find out exactly how much explanatory power the field has. Is there any form of computation that gives rise to qualitative perceptual content, in nature or in man-made systems? Where do we even start to explain such a thing?


It may help to think about a neural circuit that has the properties of consciousness, like perceptual content, and one that doesn’t. Luckily, we have one of each inside our own bodies! So we can make a comparison between cortical circuits, where neural activity is associated with qualitative perceptual experience, and spinal reflex circuits, where neural activity is not associated with qualitative perceptual experience. 


When we touch a hot stove, temperature receptors and pain receptors in the skin are activated. The sensory neuron receiving these signals fires and sends a signal to an interneuron in the spinal cord. That interneuron in turn signals an alpha motor neuron, which fires, causing the ipsilateral flexor muscle to contract – the limb withdrawal response is achieved by a simple three-neuron spinal reflex circuit! But there is no perception of the pain until a second or two later, when the information reaches thalamocortical circuits in the brain. And of course, there is no decision to withdraw the limb from the hot stove – this is a spinal reflex, after all. So there is something about perceptual experience and the ability to decide our actions that is deeply tied to cortical neural network activity, and is not achieved by just any neural circuit.
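For concreteness, the three-neuron reflex arc can be caricatured in a few lines of code. This is a sketch only – the thresholds and signal values are invented – treating each spinal neuron as a simple on/off unit:

```python
def fires(input_signal, threshold=1.0):
    """A spinal neuron caricatured as a deterministic on/off unit."""
    return input_signal >= threshold

def withdrawal_reflex(stimulus_intensity):
    # skin receptors -> sensory neuron -> interneuron -> alpha motor neuron
    sensory = fires(stimulus_intensity)
    interneuron = fires(1.0 if sensory else 0.0)
    motor = fires(1.0 if interneuron else 0.0)
    return motor  # True -> ipsilateral flexor contracts, limb withdraws

print(withdrawal_reflex(2.0))  # hot stove: True
print(withdrawal_reflex(0.3))  # mild warmth: False
```

Notice that the same input always produces the same output – the chain is fully deterministic, which is exactly the property that distinguishes it from cortical circuits.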


So what is it about cortical neural circuits that is special, compared with spinal reflex circuits? There is a critical difference, one we have known in neuroscience for about thirty years. Spinal neurons have essentially deterministic outcomes – they are always in an off-state or an on-state, firing or not firing, at any given time. By contrast, cortical neurons have probabilistic outcomes – they are essentially calculating the probability of transitioning from an off-state to an on-state. 
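That difference can be illustrated with a toy simulation (illustrative only – the threshold and noise amplitude are invented). A deterministic unit held exactly at threshold always fires, while a unit with membrane noise fires probabilistically:

```python
import random

def spinal_neuron(v, threshold=1.0):
    # Deterministic outcome: fully determined by the input voltage
    return v >= threshold

def cortical_neuron(v, threshold=1.0, noise=0.2):
    # Probabilistic outcome: random membrane noise is added before thresholding
    return v + random.gauss(0.0, noise) >= threshold

random.seed(0)
# Held exactly at threshold, the deterministic unit always fires...
print(all(spinal_neuron(1.0) for _ in range(1000)))  # True
# ...while the noisy unit fires only about half the time.
rate = sum(cortical_neuron(1.0) for _ in range(1000)) / 1000
print(rate)
```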

 

Cortical neurons tend to sit right at the threshold for firing an action potential, and they allow random electrical noise (that is, stochastic events) to affect signaling outcomes. And at the network level, a statistically random ensemble of cortical neurons fires synchronously, and then a few milliseconds later, another statistically random ensemble of cortical neurons fires synchronously. These cortical oscillations are thought to be critical for conscious awareness. 
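At the network level, the idea can be sketched like this (a toy model – the network size and recruitment probability are invented for illustration):

```python
import random

def oscillation_cycle(n_neurons=100, p_join=0.3):
    """One oscillation cycle: a statistically random ensemble fires together."""
    return {i for i in range(n_neurons) if random.random() < p_join}

random.seed(1)
cycles = [oscillation_cycle() for _ in range(3)]
print([len(c) for c in cycles])  # a different random ensemble each cycle
print(cycles[0] == cycles[1])    # False: membership is re-drawn every cycle
```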


At the Western Institute for Advanced Study, we took on the challenge. But rather than starting with consciousness and trying to find neural correlates of this intangible process, we did the opposite: we formally modeled the activity of cortical neurons and asked what natural properties emerge from that computation. And where did we start? With the tiny probabilistic events happening at the neuronal membrane, of course! That is the key difference between cortical neurons and spinal neurons – so there must be something interesting about this difference in computation!


So how do we formally model ions interacting with the neuronal membrane and affecting the voltage state of each of these computational units? We decided to model each ion as a qubit, with some distribution of possible positions and momenta in relation to the neuronal membrane. That quantum uncertainty creates quantum information – a mixed sum of possible system states. And the neuronal membrane encodes this quantum information – the voltage of the cell is a function of the mixed sum of all these component pure states. So each ion is modeled as a probability distribution: some set of positions and momenta plotted along the x, y, z, and time axes (it is also sensible to include the atomic orbital, which is likewise uncertain). Instead of having a distribution of possible system states (information) along a single axis – the spin axis, as in ultra-cold quantum computers – we have a whole manifold of probability distributions. That gives the system far more computational power.
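As a purely numerical caricature of that idea – a sketch, not our actual model: the Gaussian position distributions and the voltage-contribution rule below are invented, and real ions would of course require a full quantum treatment – the membrane voltage can be computed as a weighted mix over every ion's possible positions:

```python
import numpy as np

rng = np.random.default_rng(42)

n_ions, n_positions = 50, 200
positions = np.linspace(-1.0, 1.0, n_positions)  # distance from the membrane

# Each ion: a normalized probability distribution over possible positions,
# standing in for its uncertain state.
centers = rng.uniform(-0.5, 0.5, size=n_ions)
widths = rng.uniform(0.05, 0.3, size=n_ions)
dists = np.exp(-0.5 * ((positions - centers[:, None]) / widths[:, None]) ** 2)
dists /= dists.sum(axis=1, keepdims=True)

# Invented toy rule: an ion's contribution to membrane voltage falls off
# with distance from the membrane.
contribution = np.exp(-np.abs(positions))

# The membrane voltage as a mixed sum over every ion's possible positions
voltage = float((dists * contribution).sum())
print(voltage)
```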


Imagine each ion becoming information, or a distribution of possible system states, and the neuronal membrane encoding all these possible system states within its electrochemical potential! This high-dimensional probability distribution is quantum information. Then, the high-dimensional wavefunction collapses, as each qubit interacts with its surrounding environment. You can think of it as constructive and destructive interference of all the probability amplitudes, across all these orthogonal axes. As a result, the system completes a computation, and all the neuronal states are defined by how all the ions actually moved around. 
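The interference language has a simple, precise core: probability amplitudes are complex numbers, and paths combine by amplitude, not by probability. A two-path toy example:

```python
import numpy as np

a1 = np.exp(1j * 0.0)    # amplitude for one path
a2 = np.exp(1j * np.pi)  # equal magnitude, opposite phase

constructive = abs(a1 + a1) ** 2  # phases aligned: probability reinforced
destructive = abs(a1 + a2) ** 2   # phases opposed: probability cancelled
print(round(constructive, 6))  # 4.0
print(round(destructive, 6))   # 0.0
```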


Something really interesting happens in this model – since these wavefunctions are constructively and destructively interfering on the polymer surface of each computational unit (and the network as a whole), the process encodes information on these holographic recording surfaces. The interaction of ions with the neuronal membrane is a physical process of encoding information, and this encoding process is the constructive and destructive interference of probability amplitudes across a high-dimensional plane. The ions don’t really exist as matter until their states are defined. But the information is encoded in that very process of computation. 


The information is encoded as wavefunctions constructively and destructively interfere on the polymer surface of the neural membrane. Then the information is reconstructed, as the previous wavefunction interferes with all possible paths. If the polymer structure of the neural network meets the criteria for an ideal holographic recording surface, this computational process naturally generates a holographic projection of all the information encoded by the system. That holographic projection changes over time, and encodes the location and intensity of stimuli from all available sensory apparatus. What’s more, the holographic projection is exclusively accessed by the encoding system – the brain itself. Pretty neat – you get qualitative perceptual content for free, by formally modeling out the high-dimensional quantum computation in a biological circuit!
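The recording-and-reconstruction step rests on standard wave optics: recording the interference of an object wave with a reference wave stores the object's amplitude and phase, and re-illuminating the recording with the reference reconstructs the object wave. A minimal 1-D sketch (the frequencies and amplitudes are invented; this illustrates holography in general, not our biological model specifically):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1024, endpoint=False)
reference = np.exp(2j * np.pi * 50 * x)         # plane reference wave
obj = 0.5 * np.exp(2j * np.pi * 53 * x + 0.7j)  # "stimulus" object wave

hologram = np.abs(reference + obj) ** 2            # recorded intensity pattern
replay = (hologram - hologram.mean()) * reference  # re-illuminate the recording

# The replay field contains a term proportional to the original object wave:
overlap = np.vdot(obj, replay) / (np.linalg.norm(obj) * np.linalg.norm(replay))
print(abs(overlap) > 0.5)  # True: the encoded object wave is recoverable
```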


References 


1. Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.

2. Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83(4), 435–450.
