Why Is It Like To Be?

For half a century, the philosophy of mind has been organized around a single question. Thomas Nagel asked it in 1974: what is it like to be a bat? David Chalmers sharpened it in 1995 into the hard problem: why does any physical process give rise to subjective experience at all? We can describe everything a brain does — every neuron firing, every chemical cascade, every behavioral output — and still seem to leave something out. The redness of red. The felt quality of pain. The fact that there is, as philosophers say, something it is like to be you.

I want to propose a different question. Not what is it like to be — but why is it like to be? Why does the appearance of a unified subjective experience exist? And I want to answer it by looking at a distinction that becomes visible only when you compare the vertebrate way of being intelligent with two radically different architectures that evolution has produced for managing equally complex adaptive systems.

The distinction is this. Some biological systems solve coordination problems by summation — aggregating many local states and local actions without ever compressing them into a single representation. Others solve them by summarization — integrating distributed information into a single, globally reusable control state available to the system as a whole. Vertebrates summarize. Honeybee colonies sum. Octopuses do something in between. And the degree and form of unified experience tracks this architectural variable, not computational sophistication.

Once that distinction is in view, the hard problem changes shape. What needs explaining is not why matter acquires an inner light, but why a system built around global summarization would encounter its own control state as unified, immediate, and seemingly irreducible.

The architecture that builds a world

Look around. Your brain is, at any given moment, doing an enormous number of things simultaneously. Color is being computed in one region, edges and shapes in another, spatial relationships in a third, emotional valence in a fourth. Sound is being parsed for speech, for threat, for music. Your body’s internal state — hunger, temperature, fatigue — is being monitored continuously. All of this happens in parallel, in specialized processing streams that operate independently and at extraordinary speed.

And yet your experience of all this is not a cacophony. You don’t perceive redness and roundness and on-the-table-ness as three separate signals. You perceive an apple on a table — a single, unified object in a single, coherent scene, embedded in a moment that feels like now.

This unity is not free. It requires an architecture that can compress many parallel streams into a single reusable control state — a capacity-limited integrative channel that selects one coherent representation at a time. Decades of research in cognitive neuroscience have established that while the brain processes information in massively parallel streams, conscious awareness operates through a serial bottleneck. Only one integrated representation at a time gets selected for what Bernard Baars called the “global workspace” — the neural broadcast system that makes information available to memory, language, planning, and action. Everything else stays in the dark. This is summarization: the vertebrate solution to the coordination problem.
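The selection-and-broadcast cycle described above has a simple computational shape. The sketch below is an illustrative toy, not a neural model: the stream names and random salience values are invented, and real workspace dynamics involve sustained recurrent competition rather than a one-shot argmax.

```python
import random

# Toy parallel processors: each emits a (salience, content) candidate every cycle.
# Stream names and salience values are invented for illustration.
def sense(stream_name):
    return (random.random(), f"{stream_name} signal")

def workspace_cycle(streams):
    """One pass through the serial bottleneck: many candidates in, one summary out."""
    candidates = {name: sense(name) for name in streams}      # massively parallel stage
    winner = max(candidates, key=lambda n: candidates[n][0])  # capacity-limited selection
    summary = candidates[winner][1]                 # the single reusable control state
    # Broadcast: the same summary reaches every consumer; losers stay in the dark.
    return {"broadcast": summary, "suppressed": sorted(set(streams) - {winner})}

streams = ["color", "motion", "sound", "interoception"]
for _ in range(3):   # serial: exactly one integrated representation per cycle
    print(workspace_cycle(streams))
```

The structural point survives the simplification: however many streams run in parallel, one summary per cycle reaches every consumer, and the losing candidates leave no trace downstream.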

The serial bottleneck doesn’t just filter information. It creates the demand for integration. If you have parallel processing streams that each drive their own downstream responses independently — color triggers one behavior, motion triggers another, sound triggers a third — then there is no need to bind those streams together. Each can operate on its own. The binding problem, the question of how the brain combines features from different processing streams into unified objects, exists only because the serial channel requires a single, coherent signal. No bottleneck, no binding. No binding, no unified objects. No unified objects, no coherent world.

The same logic extends to the qualitative character of experience — what philosophers call qualia. Color processing, edge detection, and emotional valence each operate in their own computational language — formats that have nothing in common with each other. But the serial channel does not care where a signal came from. To pass through the bottleneck and participate in planning, memory, and action, the output of each specialized stream must be translated into a common representational currency — a condensed, context-independent token that can be compared and combined with every other token competing for the same channel. “Redness” is not a mysterious extra property added to a wavelength computation. It is what a color signal becomes when it is reformatted for global reuse. On this account, the quale is the summary. And the cost of this translation is measurable: simultaneous color comparison operates at the full resolution of the perceptual system, but the moment a comparison must be mediated by memory — even across a delay of a few hundred milliseconds — discriminability drops by roughly half. That loss is the compression cost of summarization: the price of a representation that is reusable rather than stimulus-bound.
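The claim that the quale is the summary can be made concrete in a toy sketch. Everything here is illustrative: the wavelength boundaries are rough textbook values, and the dictionaries stand in for whatever formats the real streams use. The point is only that a globally reusable token must discard the stimulus-bound detail that produced it.

```python
# Illustrative only: a "summary token" that discards its computational origins.
# Wavelength boundaries are rough textbook values, not a perceptual model.
def color_process(wavelength_nm):
    # Specialized stream: full-resolution, stimulus-bound computation.
    if 620 <= wavelength_nm <= 750:
        return {"token": "red", "raw_nm": wavelength_nm}
    if 570 <= wavelength_nm < 620:
        return {"token": "orange-yellow", "raw_nm": wavelength_nm}
    return {"token": "other", "raw_nm": wavelength_nm}

def summarize(stream_output):
    """Reformat for global reuse: keep the token, discard the computation."""
    return stream_output["token"]      # the raw wavelength does not survive

# Simultaneous comparison: both raw signals still available at the periphery.
a, b = color_process(640), color_process(700)
print(a["raw_nm"] != b["raw_nm"])      # discriminable at full resolution

# Memory-mediated comparison: only the summaries pass through the bottleneck.
print(summarize(a) == summarize(b))    # both just "red": the detail is gone
```

Peripheral comparison still sees the raw difference between 640 nm and 700 nm; once only the tokens survive, the two stimuli are indistinguishable. That is the compression cost in miniature.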

And the temporal structure of experience falls out of the same architecture. A system that processes information in parallel has no intrinsic temporal sequence at the system level — things happen simultaneously, not one after another. But a serial channel imposes order. Each summarized representation must be composed, selected, and broadcast before the next can begin. According to this model, the felt flow of time — the sense that experience moves forward through a sequence of moments — is what serial processing feels like from the inside.

The tetrapod constraint

Here is where evolutionary biology has something important to contribute that consciousness research has largely missed.

The optic tectum — called the superior colliculus in mammals — is the midbrain structure at the core of this selection architecture. It is present across all vertebrates, from zebrafish to humans, and dates to roughly the origin of the vertebrate lineage, five hundred million years ago. The architecture has been elaborated enormously since then — in primates, cortical networks are deeply integrated with the older midbrain machinery, expanding the workspace and making it more flexible — but the basic design has been conserved. Massively parallel processing feeds into a selective, capacity-limited integrative system. What emerges is not the full parallel signal but a reduced, coherent, and temporally structured control state — one representation at a time, available to the whole organism.

This conservation is not the result of natural selection repeatedly evaluating serial integration against alternatives and choosing it anew at every branch point. It is a phylogenetic constraint — the same kind of constraint that gives all land vertebrates four limbs. The ancestral lobe-finned fish that crawled onto land did not have the optimal number of legs. It had the number it happened to have. And once the tetrapod body plan was established, everything that evolved afterward — birds, bats, whales, humans — was built within that constraint. Evolution never rejected eight-legged vertebrates. It was never given the opportunity to consider them.

The serial integrative architecture is the cognitive equivalent. It was established once, early, in the vertebrate common ancestor. It works — it coordinates complex organisms effectively, so there is no selection pressure to dismantle it. And everything that has been built on top of it — cortical expansion, language, abstract thought — has been built within the constraint of information flow through a serial bottleneck. The summarization architecture is not something vertebrate brains keep choosing. It is the vertebrate bauplan for cognition.

This reframes the evolutionary question about consciousness. The right question is not “what is unified experience for?” as though phenomenality were a trait that natural selection optimized. The right question is: what does the inherited summarization architecture necessarily produce? And the answer is: unified experience. The proposal is that phenomenality is what it is like from the inside when your information processing is forced through a serial integrative channel that was locked in half a billion years ago and never revisited.

Nobody’s home

If unified experience is the product of a summarization architecture — not of intelligence, not of computational sophistication, but of a specific way of organizing information flow — then we should be able to find systems that are computationally sophisticated but experientially dark. Systems that solve hard problems without anyone being home to experience the solution.

We can. They’re in your garden.

A honeybee swarm looking for a new nest site faces a genuinely difficult decision problem: evaluate multiple candidate sites across a landscape, weigh distance, cavity size, entrance orientation, and dozens of other variables, and converge on a single best option with the colony’s survival at stake. Thomas Seeley, who spent decades studying this process, documented something remarkable. The swarm solves this problem using a mechanism that is structurally equivalent to how populations of neurons in primate visual cortex reach perceptual decisions. Scout bees report on candidate sites through waggle dances. Other scouts are recruited to evaluate the advertised sites. Scouts for losing sites gradually stop dancing. Scouts for the winning site recruit more evaluators. Mutual inhibition — the bee equivalent of neural competition — ensures convergence. A quorum threshold finalizes the decision.

Seeley identified five specific structural features shared by both systems: independent sensory units, evidence accumulation over time, competitive inhibition between alternatives, leakage that requires sustained input, and threshold-based commitment. The abstract decision architecture is the same.
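Those five features map naturally onto a leaky competing accumulator, the same model family used for neural perceptual decisions. The sketch below is a deliberately minimal, deterministic caricature of that mapping: all parameters (discovery rate, recruitment rate, leak, cross-inhibition, quorum) are invented for illustration, and real swarms are noisy.

```python
def swarm_decide(site_quality, discovery=0.5, recruit=0.2, leak=0.1,
                 inhibition=0.01, quorum=50.0, max_steps=10_000):
    """One leaky competing accumulator per candidate site.

    support[i] stands for the number of scouts currently dancing for site i.
    No accumulator ever sees the whole problem; commitment is a local
    threshold crossing, not a global comparison."""
    support = [0.0] * len(site_quality)
    for _ in range(max_steps):
        total = sum(support)
        for i, q in enumerate(site_quality):
            gained = q * (discovery + recruit * support[i])   # dancers recruit more dancers
            lost = leak * support[i]                          # dances decay without fresh visits
            stopped = inhibition * support[i] * (total - support[i])  # cross-inhibition
            support[i] = max(support[i] + gained - lost - stopped, 0.0)
            if support[i] >= quorum:                          # quorum threshold: commit
                return i
    return None  # no quorum reached

print("swarm chose site", swarm_decide([0.3, 0.9, 0.5]))  # the best site wins
```

Each accumulator applies only local rules (its own gains, its own losses), yet the population reliably commits to the best option: summation doing the same work a summarizing channel does in cortex, with no node ever holding the comparison.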

But nobody experiences the decision. No individual bee has compared all the candidate sites. No individual bee knows the swarm is choosing. The information is distributed across hundreds of scouts, and the decision emerges from their collective dynamics without ever being summarized into a single representation that passes through a single channel. This is summation: local states are aggregated through local interactions, and the colony converges on a solution without any node in the system ever holding a global summary of the problem.

Note that each individual bee is itself a centralized organism — it has its own central brain, its own serial architecture, and presumably its own unified experience. The colony is built out of summarizers. But summation at the colony level does not inherit the summarization properties of its components. Each bee integrates its own sensory world into a coherent experience; the colony does not integrate theirs into a colony-level experience. The phenomenal unity stops at the organism boundary, because that is where the summarization architecture stops.

The computation happens. The colony moves to the best site. And there is no moment of unified awareness, no experienced now, no point of view from which the colony perceives itself choosing. The colony’s architecture provides no basis for organism-like unified phenomenality at the colony level — not because the computation is too simple, but because summation does not produce the conditions that make experience possible.

The key comparison: the same abstract decision architecture — competitive evidence accumulation with mutual inhibition and threshold commitment — produces phenomenal experience when it runs through a summarization architecture (as in your visual cortex deciding what you’re seeing) and produces nothing experiential at all when it runs through a summation architecture (as in the swarm deciding where to live). The computation is equivalent. The information architecture is different. The phenomenology tracks the architecture, not the computation.

The octopus in the middle

If the argument is right, there should be intermediate cases — systems with partial summarization that produce partial phenomenal unity. And evolution has obligingly provided one.

The octopus has roughly five hundred million neurons, comparable to a dog. But only about forty-five million of them are in the central brain. Roughly three-fifths of the total are in the arms, with most of the remainder in the optic lobes. And the connection between the central brain and the arm nervous system is remarkably narrow: about thirty thousand nerve fibers linking subsystems of hundreds of millions of neurons each.

The consequences are startling. A severed octopus arm continues to grasp, explore, and recoil from noxious stimuli. Arms can communicate with each other through a neural ring that bypasses the central brain entirely. Different arms dynamically specialize for different tasks — anterior arms explore while posterior arms handle locomotion — yet every arm retains full behavioral flexibility. The coordination emerges from the distributed network, not from central command.

Critically, the octopus arrived at this architecture from a completely different phylogenetic starting point. It is a mollusk, not a vertebrate. It evolved a large centralized nervous system independently, but with a fundamentally different degree of serialization — precisely because it was building on a different ancestral body plan, subject to its own version of the historical constraint that locked vertebrates into the tetrapod bauplan.

What emerges across all three cases is a common structural motif: substantial intelligence at the periphery, balanced against a central integrative capacity with access to broader information and responsibility for whole-system coordination. Each individual bee is a capable agent with its own centralized nervous system, operating as an edge node in a colony that coordinates without central integration. Each octopus arm is a semi-autonomous processor with hundreds of millions of neurons, connected to a central brain through a narrow channel. Each vertebrate sensory region is a specialized parallel processor whose output must pass through a serial bottleneck to reach the global workspace. What varies is how much of the system’s total computation gets routed through a central summarization channel — and that variable tracks the gradient from full phenomenal unity to none.

Peter Godfrey-Smith has suggested that this architecture may produce consciousness of a form deeply unlike our own — perhaps multiple partial experiential streams rather than a single unified world. On the summarization account, the prediction is clear: octopus phenomenality, if it exists, should be loosely integrated, corresponding to partial and federated summarization rather than the full serial integration of the vertebrate bauplan. Not dark inside, like the colony. Not fully unified, like the vertebrate. Something in between — a different architectural inheritance producing a different phenomenological outcome.

Dissolving the gap

What does this mean for the hard problem?

The standard objection to any functional or architectural account of consciousness is the explanatory gap: fine, you’ve explained the structure and the function, but you haven’t explained why it’s experienced. Why isn’t it all just happening in the dark?

But “dark inside” is not a hypothetical scenario. It is the colony. Here is a system performing the same computation as your visual cortex, and the architecture provides no place for unified experience to occur. Not because the computation is simpler — it isn’t — but because the information architecture is different. The darkness is what you get when the coordination problem is solved through summation rather than summarization.

The hard problem felt hard because it was framed as a question about an ingredient: what must be added to physical processes to produce experience? But the comparative evidence suggests there is no ingredient. There is only architecture. Serial integrative summarization produces a unified experiential manifold — binding, temporal coherence, object permanence, selfhood — as necessary consequences of forcing parallel computation through a single channel. Distributed summation produces computation without those features. The presence or absence of unified experience tracks a specific, identifiable, empirically measurable architectural variable.

The explanatory gap itself has a structural explanation. What Thomas Metzinger calls phenomenal transparency — the fact that we can introspect on the content of experience but not on the processes that produced it — is exactly what a summarization architecture would generate. The summary token is designed to be globally reusable, stripped of its computational origins. You encounter redness but cannot see the wavelength computation behind it, because that computation was discarded in the act of summarization. The hard problem is what it is like to be a system that has access to the summary but not to the summarization.

This reframing connects to existing theoretical work. Global Workspace Theory identifies the serial broadcasting architecture and the competition for conscious access. What the present account adds is the evolutionary question: why does this architecture exist, why is it phylogenetically locked in, and why does it — rather than equally powerful alternatives — produce the appearance of an explanatory gap?

The answer doesn’t satisfy the hard problem in the metaphysical sense Chalmers demands. It doesn’t explain, in some ultimate register, why serial summarization produces experience rather than “just happening.” But it converts the question from an unfalsifiable philosophical puzzle into an empirical research program. Does phenomenal unity track architectural summarization across independently evolved lineages? The comparative evidence — vertebrates, cephalopods, eusocial colonies — says yes. And if the mapping holds, then the “extra ingredient” intuition loses its force, not because it has been philosophically refuted, but because the phenomenon it was trying to explain has been accounted for.

The answer to why

So: why is it like to be?

Because you are a complex adaptive system — a body made of trillions of semi-autonomous cells organized into competing and cooperating subsystems, processing information through dozens of parallel channels, each with its own specialized logic. And your lineage solved the problem of coordinating all of that through summarization: a centralized serial integrative channel that compresses the output of massively parallel processing into a single, globally reusable control state. That architecture was established in your vertebrate ancestors half a billion years ago, and like the four limbs you inherited from the same lineage, it was never revisited. It didn’t need to be. It works. And everything that came after — cortical expansion, language, the recursive self-model you call a self — was built within its constraints.

Unified experience is not a mystery. It is not an extra ingredient sprinkled on top of information processing. It is what information processing necessarily produces when it is forced through a serial summarization channel in an organism complex enough to have parallel streams worth integrating. Other lineages solved the same coordination problem differently — through distributed summation, through colony-level stigmergy — and they got different phenomenological outcomes, because phenomenology tracks architecture.

The question “what is it like to be a bat?” invited fifty years of wondering about the ineffable character of alien experience. The question “why is it like to be a bat?” has a concrete answer: because bats, like all vertebrates, inherited a summarization architecture that has been conserved for half a billion years. The architecture builds a world. The architecture is the experience.

And for the honeybee colony making its impeccable real-estate decisions in your garden — performing the same computation under a summation architecture — it isn’t like anything at all.