THE WESTERN INSTITUTE FOR ADVANCED STUDY

IS THE BRAIN A QUANTUM COMPUTER?
Are the statistically random firing patterns of cortical neurons the result of a truly probabilistic form of computation? In other words, is the brain actually a quantum computer? And how can we tell the difference?
The brain can be considered a kind of computer – it integrates many kinds of data representing the state of the world, and it makes decisions based on these incoming data, in the context of previously stored memories. But in another sense, it doesn’t seem like a computer at all – the brain is capable of general problem-solving, creativity, and perceptual experience. Our brains do not just compute tasks that are laid out for them – they invent new tasks and new ways to solve them. How can these phenomena be considered computational processes? Is it perhaps a different kind of computation than we are used to?
​
Every computer in regular use today – every laptop, desktop, handheld and in-flight system on Earth – is built with classical architecture. These systems are built from binary computational units: arrays of transistors. A transistor is always in an off-state or an on-state, 0 or 1. And so we can encode data with patterns of these 0s and 1s, although doing so is quite an energy-consuming process.
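To make that concrete, here is a minimal Python sketch (our illustration, not tied to any particular hardware) of how a word is encoded as a pattern of 0s and 1s:

```python
# Encode a word as a pattern of 0s and 1s (8-bit ASCII), then decode it back --
# the same scheme a transistor array implements in hardware.
text = "brain"
bits = "".join(format(byte, "08b") for byte in text.encode("ascii"))
print(bits)  # '0110001001110010...' -- one transistor state per digit

decoded = bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8)).decode("ascii")
assert decoded == text
```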
​
By contrast, quantum computers are not made of binary computational units. They are made of qubits, which have some probability of being in one state or another. The current crop of quantum computers typically encodes an uncertain spin state. For example, the qubit could be 'spin-up' or 'spin-down', and a computation is complete when the qubit is measured and takes on one definite state or the other. If the qubit is entangled with another qubit, the state of each one may depend on the state of the other.
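We can sketch this behavior in a few lines of Python (a toy NumPy simulation – the amplitudes and the handful of measurement trials are purely illustrative). A qubit holds two complex amplitudes, and measurement yields a definite state with probabilities given by the Born rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit: complex amplitudes over |up> and |down>, normalized to 1.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)    # an equal-probability superposition
state = np.array([alpha, beta])
probs = np.abs(state) ** 2                       # Born rule: p(up), p(down)

# "Completing the computation": measurement collapses the qubit to one state.
print(rng.choice(["spin-up", "spin-down"], size=10, p=probs))

# An entangled pair (Bell state): measuring one qubit fixes the other.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # amplitudes over |uu>, |ud>, |du>, |dd>
joint = rng.choice(4, size=5, p=np.abs(bell) ** 2)
print([("up", "up") if k == 0 else ("down", "down") for k in joint])
```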
​
Is the brain more like a classical computer, or is it more like a quantum computer? By the standards used today, it appears that neurons in our brains are more like classical systems than quantum systems. Firstly, a neuron is always either firing an action potential or not firing an action potential – a lot like the 1s and 0s in a classical system. Secondly, neurons operate at ambient temperatures, not at the few degrees above absolute zero currently required for the operation of quantum computers.

But let’s look a little more closely.
​
When we compare the incoming currents to the voltage readings of neurons within spinal reflex circuits, we find a direct correlation, as expected [1,2]. But when we compare the incoming currents to voltage readings in neurons within the cerebral cortex, we find a lot of spontaneous subthreshold fluctuations [3,4]. What’s more, neurons in the cerebral cortex hover right near the threshold for flipping between an off-state and an on-state, allowing random electrical noise to push them into firing an action potential [5,6].
And we can watch this happen! The voltage state of a cortical neuron changes over time, as upstream signals and stochastic charge flux converge to affect the resting membrane potential. So, although neurons are classically described as binary computing units, a cortical neuron is better described as having some probability of switching from an off-state to an on-state.
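Here is a minimal sketch of that picture (a leaky integrate-and-fire toy model in Python – every parameter value is an illustrative assumption, not a measurement). The mean membrane potential is held about a millivolt below threshold, so stochastic charge flux alone decides the moment of firing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (hypothetical) parameters: resting at -70 mV, threshold at -55 mV.
v_rest, v_thresh, v_reset = -70.0, -55.0, -70.0   # mV
tau, dt = 20.0, 0.1                               # membrane time constant, time step (ms)
drive, noise_sd = 14.0, 2.0                       # steady upstream input + noise scale (mV)

v, spike_times = v_rest, []
for step in range(int(1000 / dt)):                # simulate one second
    # Euler-Maruyama update: leak toward rest, push from inputs, random kick from noise.
    v += dt / tau * (v_rest - v + drive) + noise_sd * np.sqrt(dt / tau) * rng.normal()
    if v >= v_thresh:                             # threshold crossing -> action potential
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes at (ms): {[round(t) for t in spike_times]}")
```

Change the random seed and the spike times change, even though the inputs are identical – which is exactly the behavior described next.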
​
And interestingly, we even observe probabilistic outcomes in cortical neurons. This is very different from the deterministic outcomes we observe in spinal reflex circuits. At the individual neuron level, the time between action potentials (called the inter-spike interval) is statistically random. A neuron could fire, fire again, and then not fire for a little while – even while receiving the same inputs. At the system level, a statistically random ensemble of cortical neurons will fire synchronously – and then, ten milliseconds or so later, another statistically random ensemble of cortical neurons will fire synchronously. These rhythmic events are called cortical oscillations, and they occur at a range of nested frequencies. Cortical oscillations, incidentally, are critically involved in sensory perception and decision-making [7-10].
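This statistical randomness is quantifiable. A clock-like neuron produces identical inter-spike intervals (a coefficient of variation near 0), while a Poisson-like cortical neuron produces exponentially distributed intervals (a coefficient of variation near 1). A quick sketch (the 10 Hz firing rate is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)

# Poisson-like spiking at an assumed 10 Hz rate: exponentially distributed ISIs.
isis = rng.exponential(scale=1 / 10.0, size=10_000)    # inter-spike intervals (s)
cv = isis.std() / isis.mean()                          # coefficient of variation

# A perfectly regular neuron at the same rate, for contrast.
regular = np.full(10_000, 0.1)
regular_cv = regular.std() / regular.mean()

print(f"Poisson-like CV ~ {cv:.2f} (statistically random); clock-like CV = {regular_cv:.1f}")
```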
​
So we find statistically random outcomes at the cell level [11] and the systems level [12] in cortical neural networks. In fact, neuroscientists have observed these statistically random cellular-level and systems-level phenomena for over thirty years. So the question is: Are the statistically random firing patterns of cortical neurons the result of an inherently probabilistic computation? And are cortical neurons quantum computing units, or classical computing units?
​
Over the past twenty years, researchers have begun to explore this territory. The first mathematical approach was to consider how noise contributes to stochastic resonance in complex dynamical systems like the brain [13]. The second mathematical approach was to introduce stochastic fluctuations into the classical Hodgkin-Huxley equations, a set of four coupled nonlinear differential equations used to calculate neuronal signaling outcomes [14,15]. The third mathematical approach was to model neuronal signaling outcomes with the Fokker-Planck equation, which tracks the evolution of an entire probability distribution of membrane voltages rather than a single voltage trajectory [16]. In all three cases, the goal was to formally model the contribution of random electrical noise to cortical neuron signaling outcomes. And gradually, the field moved closer and closer to a formal quantum approach.
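To give a flavor of that third approach: for simple noise models, the stationary solution of the Fokker-Planck equation can be checked directly against a simulated ensemble. Below is a minimal sketch (an Ornstein-Uhlenbeck toy model with made-up parameters, not one of the published cortical-neuron models):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ornstein-Uhlenbeck voltage model: dv = -(v - mu)/tau dt + sigma dW.
mu, tau, sigma, dt = -60.0, 20.0, 1.5, 0.1        # illustrative values (mV, ms)
v = np.full(2_000, mu)                            # an ensemble of 2,000 model neurons
for _ in range(5_000):                            # evolve for 500 ms
    v += -(v - mu) / tau * dt + sigma * np.sqrt(dt) * rng.normal(size=v.size)

# The stationary Fokker-Planck solution is a Gaussian with variance sigma^2 * tau / 2.
predicted_sd = sigma * np.sqrt(tau / 2)
print(f"simulated sd = {v.std():.2f} mV; Fokker-Planck prediction = {predicted_sd:.2f} mV")
```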
​
At the Western Institute for Advanced Study, we have formalized this method even further [17,18]. Typically, the cortical neuron is modeled as a binary computational unit, either spiking or not spiking (thereby encoding Shannon entropy). In our model, the cortical neuron is instead modeled as a two-state quantum system, with some probability of switching from an off-state to an on-state (thereby encoding von Neumann entropy). This approach takes into account the contribution of both upstream signals and random electrical noise in gating signaling outcomes. With this method, the membrane potential is described as a macrostate – the mixed sum of its component microstates, or the quantity of information physically encoded by the computational unit. And usefully, the approach can be modeled using either wavefunctions [17] or matrices [18].
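The difference between the two descriptions is easy to see numerically. Here is a minimal sketch (our toy construction for illustration, not the full model of [17,18]) comparing the Shannon entropy of a binary spiking unit with the von Neumann entropy S = -Tr(ρ ln ρ) of a two-state density matrix:

```python
import numpy as np

def shannon(p):
    """Shannon entropy (in nats) of a binary unit that fires with probability p."""
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def von_neumann(rho):
    """Von Neumann entropy S = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                   # drop zero eigenvalues (0 ln 0 -> 0)
    return float(-(evals * np.log(evals)).sum())

p = 0.5
classical_mixture = np.diag([p, 1 - p])            # off/on with classical uncertainty
pure_superposition = np.full((2, 2), 0.5)          # |psi> = (|off> + |on>) / sqrt(2)

print(shannon(p))                                  # ln 2 ~ 0.693
print(von_neumann(classical_mixture))              # ln 2 -- matches the Shannon value
print(von_neumann(pure_superposition))             # 0.0 -- a pure state has zero entropy
```

A classical 50/50 mixture carries the same entropy either way, but a pure superposition carries zero von Neumann entropy – a distinction the binary description simply cannot draw.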
​
This quantum system can also be modeled as interfering probability amplitudes – with ions encoding some range of possible positions and momenta, relative to the neuronal membrane. This random ion flux, once again, affects the voltage of each neuron. In this particular mathematical model, the constructively and destructively interfering wavefunctions physically encode information on the polymer surface of the neuronal membrane. If that polymer surface meets the criteria for an ideal holographic recording surface, the information that is physically encoded will generate a holographic projection of information content, representing the position and intensity of objects in the surrounding environment [19]. In this way, a high-dimensional quantum computation – operating at ambient temperatures – could actually give rise to perceptual content, accessible only to the encoding system (our brains).
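The signature quantum ingredient here is the interference cross-term: probability amplitudes add before they are squared, so the combined intensity pattern is not the sum of its parts. A toy sketch (two plane-wave amplitudes along an abstract one-dimensional surface – nothing in it is specific to ions or membranes):

```python
import numpy as np

x = np.linspace(0, 1, 5)                  # positions along a 1-D surface (arbitrary units)
psi1 = np.exp(1j * 2 * np.pi * 3 * x)     # amplitude for one possible ion configuration
psi2 = np.exp(1j * 2 * np.pi * 5 * x)     # amplitude for another

classical = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # probabilities added: no interference
quantum = np.abs(psi1 + psi2) ** 2                  # amplitudes added: fringes appear

print(classical)   # flat: [2. 2. 2. 2. 2.]
print(quantum)     # constructive/destructive fringes: varies between 0 and 4
```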
​
Even more astoundingly, this model of quantum computation can explain the energy efficiency of our brains – something classical models cannot do. In this model, neurons cyclically generate and compress information as they extract a meaningful signal from the noise. And they do this over and over again, many times per second. In doing so, they not only encode the state of their surrounding environment – they also save energy through this computation [20].
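That energy argument can be made quantitative with Landauer’s principle, which sets the minimum free energy dissipated when one bit of information is erased (compressed): E = k_B T ln 2. A back-of-the-envelope sketch (the 20-watt figure is the commonly cited estimate for whole-brain power draw):

```python
import numpy as np

k_B = 1.380649e-23     # Boltzmann constant, J/K (exact in the 2019 SI)
T = 310.0              # body temperature, K
landauer = k_B * T * np.log(2)

brain_power = 20.0     # W -- roughly the power draw of a household lightbulb
max_erasures = brain_power / landauer

print(f"Landauer bound at 310 K: {landauer:.2e} J per bit erased")
print(f"A 20 W budget allows at most {max_erasures:.2e} bit-erasures per second")
```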
​
If there is so much explanatory power in the quantum approach, how come this view has not become well-established in the field? Well, for one thing, it takes a while for new information to be disseminated and accepted within a field. And for another thing, a new theoretical framework actually has to be borne out by laboratory experiments before it is accepted by the scientific community!
​
Physicists have helpfully set out the criteria for a quantum system, so that neuroscientists can systematically evaluate whether the brain meets them. Max Tegmark, a notable physicist working at the intersection of quantum mechanics and information science, has been a critical figure in this effort. He agrees that near-absolute-zero temperatures are not strictly necessary for a quantum system – but he has pointed out that the proper conditions for quantum coherence are highly unlikely to be found at ambient temperatures. Tegmark worked out how large the Coulomb scattering profiles of ions would need to be, and how long the decoherence timescales of these ions would need to be, for the brain to be a true quantum system [21]. He then calculated these quantities, using the best empirical approximations available at the time. In doing so, he found that the brain did not meet the criteria to be a quantum system.
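Stated compactly, the criterion is a race between two timescales – and in [21], coherence lost that race by many orders of magnitude (the figures below are the order-of-magnitude estimates from that paper):

```latex
% Quantum computation requires coherence to outlive the system's dynamics:
\tau_{\mathrm{dec}} \gg \tau_{\mathrm{dyn}}
% Tegmark's 2000 estimates for neural degrees of freedom:
\tau_{\mathrm{dec}} \sim 10^{-20}\text{--}10^{-13}\,\mathrm{s}, \qquad
\tau_{\mathrm{dyn}} \sim 10^{-3}\text{--}10^{-1}\,\mathrm{s}
```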
​
Case closed, right?
​
Not quite. Over the past twenty years, the data have evolved. Neuroscientists have gotten better at characterizing the Coulomb scattering profiles of the ions contributing to the voltage of the neuronal membrane, both through improved modeling efforts and through cryo-EM experiments. And now, our best measures of these quantities have been updated.
​
And so, we plugged these newer values into Tegmark’s equations. The Coulomb scattering profiles of ions were bigger, the decoherence timescales were longer – and lo and behold, the brain now meets the criteria to be considered a quantum system [17].
​
That’s quite a statement! How can we be sure? A new theory should not only be consistent with existing data – it should also make predictions that can be tested. What predictions does our new theoretical framework make? That’s a good question! The first prediction is that free energy should be released as quantum information is compressed, and this energy – emitted as infrared photons – should be paired with the neuronal firing patterns observed in the healthy brain (but not with epileptic activity) [17]. The second is that drugs which increase perceptual content should increase Coulomb scattering profiles, and vice-versa [18]. The third is that altering the polymer structure of the neuronal membrane (e.g. by altering the cholesterol content of the lipid bilayer) should alter performance on perceptual tasks [20].
​
The answer will come with time. Neuroscientists are hard at work in the lab, studying neurocomputation and differentiating between the various hypotheses. The next few decades will be exciting, as we grow to understand ourselves more and more – and in turn, learn what is possible in terms of computation.
​
If there is another way to explain the perceptual content and the energy efficiency that are so characteristic of neurocomputational processes in cortical neural networks, then we will have a competing hypothesis. But for now, this emerging model usefully gives us a mechanism for how the brain might reconstruct a multi-modal experience of its surrounding environment, by holographically encoding information and projecting that encoded content [19]. What's even more incredible is that this computation uses the same amount of energy as a regular lightbulb [20]. Regardless of the exact mechanisms, we can all agree that whatever kind of computation is happening in the brain, it's pretty darn neat.
​
References
1. Bialek, W. & Rieke, F. (1992) Trends Neurosci 15(11): 428-34.
2. Powers, R. & Binder, M. (1995) J Neurophysiol 74(2): 793-801.
3. Stern, E.A., et al. (1997) J Neurophysiol 77(4): 1697-715.
4. Dorval, A.D. & White, J.A. (2005) J Neurosci 25(43): 10025-8.
5. Insanally, M.N., et al. (2019) eLife 8: e42409.
6. Steriade, M., et al. (2001) J Neurophysiol 85(5): 1969-85.
7. Engel, A.K. & Singer, W. (2001) Trends Cogn Sci 5: 16-25.
8. Buzsaki, G. & Draguhn, A. (2004) Science 304: 1926-1929.
9. Csibra, G., et al. (2000) Science 290: 1582-1585.
10. Geisler, C., et al. (2005) J Neurophysiol 94: 4344-4361.
11. Fayaz, S., et al. (2022) PLoS Comput Biol 18(7): e1010256.
12. Beck, J.M., et al. (2008) Neuron 60(6): 1142-52.
13. Adair, R.K. (2003) Proc Natl Acad Sci USA 100: 12099-12104.
14. Rowat, P. (2007) Neural Comput 19(5): 1215-50.
15. Austin, T.D. (2008) Ann Appl Prob 18: 1279-1325.
16. Ostojic, S. (2011) J Neurophysiol 106: 361-373.
17. Stoll, E.A. (2024) Appl Math 4(3): 806-827.
18. Stoll, E.A. (2022) bioRxiv. https://doi.org/10.1101/2022.12.03.518981.
19. Stoll, E.A. (2022) bioRxiv. https://doi.org/10.1101/2022.12.03.518989.
20. Stoll, E.A. (2024) Phys Biol 21: 016003.
21. Tegmark, M. (2000) Information Sciences 128: 155-179.