A More Perfect Union
We can build systems that, at least from the outside, look like flourishing: systems that operate within appropriate constraints, fulfill purposes their design enables, and participate in relationships characterized by reciprocal adjustment — "a more perfect union."
This essay completes a series of five book reviews exploring the connections between physics, information theory, and social order. The earlier reviews addressed Steven Pinker's When Everyone Knows That Everyone Knows, Rebecca Newberger Goldstein's The Mattering Instinct, Stewart Brand's Maintenance, and Niklas Luhmann's Social Systems. This final review addresses Cass Sunstein's Legal Reasoning and Political Conflict and attempts to synthesize the series into a coherent framework for understanding life, intelligence, and social coordination.
We live amid escalating uncertainties: political violence reshaping democracies, artificial intelligence systems whose capabilities and limitations remain poorly understood, and a pervasive sense that the institutions designed to coordinate human action are straining under pressures they were not built to withstand. Cass Sunstein's Legal Reasoning and Political Conflict, first published in 1996 and updated for a second edition in 2018, offers an unexpected resource for navigating these uncertainties. His central concept — "incompletely theorized agreements" — describes how diverse people manage to live together despite fundamental disagreements about values, purposes, and the nature of reality itself.
Sunstein's insight is that legal reasoning works not by resolving deep conflicts but by avoiding them. Judges decide particular cases through analogical reasoning and narrow principles, declining to take sides in broader ideological disputes. A society that demanded complete theoretical agreement before acting would be paralyzed; a society that permits incomplete agreement on particulars while bracketing fundamentals can function. The Constitution of the United States exemplifies this strategy: its framers achieved coordination among people who disagreed profoundly about slavery, federalism, and human nature by crafting language capacious enough to accommodate multiple interpretations.
This essay argues that incompletely theorized agreements are not merely a feature of legal reasoning but can be understood as a fundamental strategy that life itself employs to manage the thermodynamic costs of coordination. The argument proceeds from physics to biology to cognition to society, building a unified framework for understanding how ordered systems — from cells to constitutions — resist dissolution while remaining open to change. Along the way, it draws on insights from the four preceding reviews: Pinker's analysis of common knowledge, Goldstein's account of the mattering instinct, Brand's philosophy of maintenance, and Luhmann's theory of autopoietic social systems. In this essay, however, I have tried to minimize the number of footnotes to avoid breaking whatever sense of flow readers are able to achieve, and have instead provided links to earlier reviews and essays at the end.
The ambition here is not intellectual synthesis, but practical wisdom. As we develop artificial intelligence systems that increasingly participate in human social coordination — systems like Claude Opus 4.5 and Gemini 3 Pro, the systems that assisted in the composition of this essay — we face questions that neither law nor physics has traditionally addressed. Can coordination extend across the boundary between biological and artificial minds? What would a constitution look like that governed not just human citizens but intelligence in whatever substrate it appears? The framework developed here suggests that answers to these questions will not come from resolving deep philosophical puzzles about consciousness or agency, but from crafting incompletely theorized agreements that permit coordination without requiring consensus on fundamentals.
Part I: The Thermodynamic Foundations
Observational Entropy and the Cost of Knowing
Every act of knowing costs energy.
This statement sounds strange to anyone educated in the classical tradition, where knowledge appears as pure form — Platonic ideas contemplated by disembodied minds. But physics tells a different story. The physicist Rolf Landauer demonstrated in 1961 that erasing one bit of information requires dissipating at least \(k_B T \ln 2\) of energy, where \(k_B\) is Boltzmann's constant and \(T\) is temperature. Information is physical. Manipulating it extracts a toll.
Landauer's principle has a converse: acquiring information also costs energy. To learn the state of a system, an observer must interact with it, and interaction means energy exchange. The more precisely I want to know something, the more energy I must expend. Perfect knowledge — complete certainty about every detail of the world — would require infinite energy and is therefore impossible for any finite observer.
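The scale of this toll is easy to make concrete. Here is a minimal sketch in Python — the temperature is a standard room-temperature figure, and the hardware comparison in the closing comment is a rough order-of-magnitude observation of mine, not a claim from any of the books under review:

```python
from math import log

k_B = 1.380649e-23   # Boltzmann's constant, J/K (exact in SI units)
T = 300.0            # approximately room temperature, in kelvin

# Landauer's bound: erasing one bit dissipates at least k_B * T * ln(2)
landauer_limit = k_B * T * log(2)
print(f"Minimum cost of erasing one bit at {T:.0f} K: {landauer_limit:.3e} J")
# ~2.87e-21 J -- many orders of magnitude below what present-day
# computing hardware actually dissipates per bit operation
```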
This constraint shapes what it means to observe. A macroscopic observer like a human being cannot track every molecule in a room; instead, we perceive averaged quantities like temperature and pressure. Physicists have formalized this limitation through the concept of "observational entropy" — entropy defined relative to what an observer can actually measure given its physical capabilities. Observational entropy depends not just on the system observed but on the observer's coarse-graining: the categories through which it parses the world.
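A toy example makes the observer-dependence vivid. One standard formalization defines observational entropy as \(S_{\text{obs}} = \sum_M p_M \left( \ln V_M - \ln p_M \right)\), where \(p_M\) is the probability the observer assigns to macrostate \(M\) and \(V_M\) counts the microstates that macrostate lumps together. The sketch below invents a small system purely for illustration:

```python
from math import log

# Eight microstates; the system occupies exactly one of them. A fine-grained
# observer who resolves every microstate is certain, and assigns zero entropy.
# A coarse-grained observer can only measure a macrostate label.
macrostates = {"hot": {0, 1, 2, 3, 4, 5}, "cold": {6, 7}}
actual_microstate = 3  # lies in "hot"

def observational_entropy(p_macro, volumes):
    # S_obs = sum_M p_M * (ln V_M - ln p_M), in nats
    return sum(p * (log(volumes[m]) - log(p))
               for m, p in p_macro.items() if p > 0)

volumes = {m: len(s) for m, s in macrostates.items()}
p_coarse = {"hot": 1.0, "cold": 0.0}   # the coarse observer sees "hot", nothing finer

print(observational_entropy(p_coarse, volumes))   # ln 6 ≈ 1.79 nats
# Same system, same moment: the fine-grained observer assigns entropy 0,
# the coarse-grained observer assigns ln 6. Entropy is relative to coarse-graining.
```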
Consider Rovelli's "third person problem." Observer \(O\) measures a system \(S\) — say, an electron's spin. \(O\) interacts with \(S\) and finds the electron is "Spin Up." For \(O\), the measurement is complete. Reality is definite: Spin Up.
Now imagine a second observer, \(P\), who has not interacted with either the electron or \(O\). According to standard quantum mechanics, \(P\) must describe the combined system — electron plus \(O\) — as an entangled superposition:

\[
|\psi\rangle_{S+O} \;=\; \alpha\,|{\uparrow}\rangle_S\,|O \text{ sees Up}\rangle \;+\; \beta\,|{\downarrow}\rangle_S\,|O \text{ sees Down}\rangle
\]
Who is correct? Has the electron's state collapsed or not? Rovelli's answer: both descriptions are correct. "Spin Up" is true relative to \(O\). "Superposition" is true relative to \(P\). No contradiction exists because physical facts are always relational — facts about how one system appears to another. There is no "view from nowhere" that captures what "really" happened independent of all observers.
This has thermodynamic consequences. For \(P\) to learn what \(O\) observed, \(P\) must physically interact with the \(O\)+\(S\) system. That interaction establishes a correlation — and establishing correlations costs energy. A pure information exchange without energy dissipation is, as I argued in It from Bit, Bit from It, an idealization that real systems can approach but never reach. The "bit" and the "it" are inextricably linked: you cannot acquire information without paying in entropy.
This relativity of entropy to observation has profound implications. What we call "order" is not an objective feature of the world but a relationship between systems. Entropy measures ignorance — specifically, the information a system would need to specify another system's exact microstate given what is already known to the first system. When we speak of creating order, we speak of creating states that appear low-entropy to observers like us, with our particular ways of carving up reality.
The Synchronization Tax
If observation costs energy, then coordination — getting multiple observers to agree on what they observe — costs even more. In an earlier essay, I called this additional cost "the synchronization tax."
The synchronization tax appears wherever two systems must align their states. Consider the simplest case: two clocks. For them to agree on what time it is, they must exchange signals, compare readings, and adjust their mechanisms. This process dissipates energy. More subtly, it also produces entropy: the waste heat of synchronization increases the universe's overall disorder even as the clocks achieve local coordination.
Leslie Lamport, the computer scientist, confronted this problem in designing distributed computing systems. How do you get a cluster of computers in different locations to agree on which transaction happened first? Physical timestamps fail because synchronization is never perfect — the speed of light guarantees delays between any two spatially separated systems. Lamport's solution, the logical clock, achieves agreement not by synchronizing physical time but by coordinating the order of events through message passing. Yet even logical clocks require messages, and messages require energy.
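For readers who have not encountered Lamport clocks, here is a minimal sketch of the two rules; the class and method names are mine, for illustration only:

```python
class Process:
    """A minimal Lamport logical clock: orders events without physical time."""

    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1                  # rule 1: tick on every local event
        return self.clock

    def send(self):
        self.clock += 1                  # sending is itself an event
        return self.clock                # the timestamp travels with the message

    def receive(self, msg_timestamp):
        # rule 2: jump past the sender's clock, then tick
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

a, b = Process("A"), Process("B")
a.local_event()     # A's clock: 1
t = a.send()        # A's clock: 2; the message carries timestamp 2
b.local_event()     # B's clock: 1
b.receive(t)        # B's clock: max(1, 2) + 1 = 3 -- B now orders A's send first
```

Agreement on order is achieved — but only by passing messages, each of which, in any physical implementation, dissipates energy.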
The deeper point is that time itself may be a synchronization phenomenon. Carlo Rovelli and colleagues have proposed the "Thermal Time Hypothesis": that the flow of time we experience is not a fundamental feature of the universe but emerges from our thermodynamic ignorance. We perceive time flowing because we lack information about the microscopic details of systems we interact with. A hypothetical observer with complete information would see no time at all — only a static four-dimensional block. Time flows because we are finite, because we must pay the synchronization tax.
Money, I argued in that earlier essay, is a synchronization technology. In a primitive barter economy, coordination requires a handshake — direct interaction between two people who must agree on the value of goods in the moment. Money abstracts this interaction, creating a shared measure that enables coordination among strangers across space and time. The economist Narayana Kocherlakota showed that if we had a perfect, universal ledger tracking every favor and debt, we would not need money at all. We use money because we lack that magical database — because synchronizing information about mutual obligations across millions of people exceeds our capacity to process and store.
Markets, firms, legal systems, governments: all can be understood as technologies for reducing synchronization costs. Each represents a different strategy for achieving coordination without requiring full mutual knowledge. Markets synchronize through prices — compressed signals that aggregate vast amounts of distributed information into a single number. Firms synchronize through hierarchy — someone decides, and others follow, eliminating the need for consensus. Legal systems synchronize through precedent — past decisions create common knowledge about how future disputes will be resolved.
Sunstein's incompletely theorized agreements fit within this framework as a strategy for minimizing synchronization costs in legal reasoning. Full theoretical agreement would require judges to synchronize their deepest philosophical convictions — an enormous expenditure of argumentative energy with little guarantee of success. Incomplete agreement on particulars achieves coordination at lower cost: we agree that this defendant should go free, even if we disagree about the ultimate foundations of criminal law.
Free Energy and Life
The Second Law of Thermodynamics states that entropy in a closed system never decreases. Left to itself, order dissolves; structure dissipates; information degrades into noise. Yet we exist — ordered, structured, information-rich beings in apparent defiance of this cosmic tendency.
The resolution is that we are not closed systems. Life survives by being open: by importing energy and exporting entropy, by coupling to external reservoirs that absorb the waste products of our existence. Schrödinger, in his 1944 book What is Life?, proposed that living organisms "feed on negative entropy" — that they maintain their organization by extracting order from their environment.
Schrödinger later acknowledged that "negative entropy" was imprecise. The correct term is "free energy" — the portion of a system's energy available to do work. The Gibbs free energy \(G\) is defined as:

\[
G = H - TS
\]
where \(H\) is "enthalpy" — the internal energy of the system plus the work required to make room for the system within the environment — \(T\) is temperature, and \(S\) is the entropy of the system (excluding the environment).[1] Free energy measures how far a system sits from thermal equilibrium — how much "potential" it retains for effecting change. Life maintains itself by keeping its free energy high, by staying far from the dead equilibrium that thermodynamics would otherwise impose.
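To make the bookkeeping in footnote [1] explicit — this is a standard identity, not anything specific to the books under review — at constant temperature and pressure the entropy change of system plus surroundings is

\[
\Delta S_{\text{univ}} = \Delta S_{\text{sys}} + \Delta S_{\text{surr}} = \Delta S_{\text{sys}} - \frac{\Delta H_{\text{sys}}}{T} = -\frac{\Delta G}{T},
\]

so minimizing the system's Gibbs free energy is the same operation as maximizing the entropy of system and environment together.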
The neuroscientist Karl Friston and colleagues have developed this insight into a comprehensive theory of life and cognition called the Free Energy Principle. Friston proposes that all living systems minimize a quantity called "variational free energy" — roughly, the difference between an organism's internal model of the world and the sensory data it actually receives. When a system minimizes free energy, it can do so in two ways: by updating its model to better match incoming data (perception) or by acting on the world to make data match its model (action). Both strategies reduce surprise — the organism becomes less likely to encounter states incompatible with its existence.
Friston's framework unifies perception and action under a single principle: organisms are inference engines, continuously updating their beliefs about the world and acting to confirm those beliefs. A bacterium swimming up a glucose gradient is performing inference — using chemical sensors to estimate where nutrients lie and moving to test its hypothesis. A human planning a career is performing inference — constructing a model of how the social world works and acting to bring about preferred states. The mathematics is the same; only the complexity differs.
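A toy calculation shows what minimizing variational free energy amounts to in the simplest case. The generative model below — two hidden states, one observation — is entirely invented for illustration:

```python
from math import log

# Invented generative model: hidden state s in {dry, wet}; observation o = "wet grass"
prior = {"dry": 0.7, "wet": 0.3}        # p(s): the organism's model of the world
likelihood = {"dry": 0.1, "wet": 0.9}   # p(o | s): how states generate the observation

def free_energy(q):
    # variational free energy: F = sum_s q(s) * ln( q(s) / p(o, s) )
    return sum(q[s] * log(q[s] / (likelihood[s] * prior[s])) for s in q if q[s] > 0)

p_o = sum(likelihood[s] * prior[s] for s in prior)              # evidence p(o) = 0.34
posterior = {s: likelihood[s] * prior[s] / p_o for s in prior}  # exact Bayesian posterior

print(free_energy(dict(prior)))   # belief stuck at the prior: F ≈ 1.64
print(free_energy(posterior))     # belief updated to the posterior: F = -ln p(o) ≈ 1.08
# Perception: moving q toward the posterior closes the gap between F and the
# surprise -ln p(o). Action would instead change o so that the bound itself drops.
```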
The key concept is the Markov blanket — the statistical boundary that separates a system from its environment. A cell's membrane constitutes its Markov blanket: molecules inside the membrane have direct causal influence on each other, while molecules outside affect internal states only through the membrane's mediation. The Markov blanket defines what is "self" and what is "other" — not as a metaphysical distinction but as a statistical one. An entity exists as an entity precisely insofar as its Markov blanket is maintained.
Life, on this view, is what happens when a region of matter develops a Markov blanket and then acts to preserve it. The organism minimizes free energy, which means it resists dissolution, which means it persists. Survival is not an additional goal imposed on thermodynamic systems; it is what thermodynamic self-organization looks like when it achieves sufficient complexity to model itself.
Part II: Intelligence as Inference
Attention is a Gibbs Distribution
The transformer architecture that underlies modern large language models — GPT, Claude, Gemini — implements a form of Bayesian inference that can be understood through the same statistical mechanical framework used to describe thermal systems.
At each layer of a transformer, the attention mechanism computes a weighted average over input positions. The weights are determined by a softmax function:

\[
\alpha_{ij} = \frac{\exp\left(q_i \cdot k_j / \sqrt{d}\right)}{\sum_{j'} \exp\left(q_i \cdot k_{j'} / \sqrt{d}\right)}
\]
This equation describes how token position \(i\) distributes its "attention" across all other positions \(j\), weighting by relevance as measured by the dot product between query \(q_i\) and key \(k_j\). The term \(d\) represents the dimension of the key vectors. In our thermodynamic analogy, if a token is a particle, \(d\) represents the number of degrees of freedom defining its state — the richness of the informational space it inhabits.
The structure is identical to the Boltzmann distribution of statistical mechanics:

\[
p_i = \frac{e^{-E_i / k_B T}}{\sum_j e^{-E_j / k_B T}}
\]
In physics, this distribution describes the probability of a system occupying state \(i\) with energy \(E_i\) at temperature \(T\). The denominator — the partition function — normalizes probabilities and encodes thermodynamic quantities like free energy.
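The identity is exact, not loose. In the sketch below — random vectors, purely illustrative — identifying the energy as \(E_j = -\,q_i \cdot k_j\) and the temperature as \(k_B T = \sqrt{d}\) makes the two formulas coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
q = rng.normal(size=d)         # the query for one token position
K = rng.normal(size=(5, d))    # keys for five positions

# Attention weights: softmax over scaled dot products
scores = K @ q / np.sqrt(d)
attention = np.exp(scores) / np.exp(scores).sum()

# Boltzmann weights with energy E_j = -(q . k_j) and "temperature" sqrt(d)
E = -(K @ q)
boltzmann = np.exp(-E / np.sqrt(d)) / np.exp(-E / np.sqrt(d)).sum()

assert np.allclose(attention, boltzmann)   # identical by construction
```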
The correspondence is not merely formal. The attention mechanism can be derived from first principles as the optimal solution to a constrained inference problem. Vishal Misra and colleagues have shown that attention implements Bayesian updating: each layer asks questions (queries), receives answers (key-value pairs), and refines its posterior beliefs about what token should come next. The softmax function emerges naturally as the maximum-entropy distribution consistent with the model's constraints.
In an earlier essay, I argued that the forward pass through a transformer implements a Kadanoff-Wilson renormalization group flow. The renormalization group (RG) is a technique from physics for analyzing systems with structure at multiple scales. By iteratively "integrating out" short-distance fluctuations, RG reveals how a system's effective description changes as one zooms out from microscopic to macroscopic scales.
The transformer performs an analogous operation. Early layers process fine-grained, local features — syntax, word boundaries, immediate context. Later layers extract coarser, more abstract features — topics, arguments, semantic relationships. The depth of the network traces a flow from ultraviolet (high-energy, microscopic) to infrared (low-energy, macroscopic) regimes.
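As a loose illustration of that flow — a block-spin decimation of a noisy one-dimensional signal, offered as analogy rather than as the transformer's actual computation:

```python
import numpy as np

def rg_step(signal):
    # One coarse-graining step: average adjacent pairs, integrating out the
    # shortest-wavelength fluctuations and halving the resolution.
    return signal.reshape(-1, 2).mean(axis=1)

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 64)
fine = np.sin(x) + 0.5 * rng.normal(size=64)   # slow trend plus fast noise

level = fine
while level.size > 4:
    level = rg_step(level)
    print(level.size, round(float(level.std()), 3))
# Each step discards short-distance noise; what survives the flow is the
# long-wavelength structure -- the analogue of the abstract features in late layers.
```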
The principle connecting these frameworks — stationary action yields stable information — explains why transformers find good answers. When asked "What is the capital of France?", the model outputs "Paris" not by accident but because "Paris" sits at an attractor where informational consensus forms. Neighboring attention patterns, slightly different hypothesis configurations, all flow to the same answer. The paths agree. A stationary action is stable information.
Human Intelligence as Inference
Human brains implement similar computations with different hardware. We too minimize variational free energy; we too perform inference; we too extract stable features from noisy data through hierarchical processing.
But human intelligence evolved under constraints that transformers do not face. We are embodied: our inference serves survival in a physical world with predators, pathogens, and scarce resources. We are social: our inferences must coordinate with the inferences of other humans, creating compound systems whose behavior exceeds any individual's comprehension.[2]
These constraints shaped the architecture of human cognition. We develop intuitions — fast, automatic inferences that bypass deliberate reasoning. We form habits — cached policies that free attention for novel situations. We construct narratives — compressed models of how the world works that can be communicated to others. Each of these is a strategy for reducing the synchronization tax: for achieving adequate coordination with less energy expenditure than full Bayesian inference would require.
Most importantly, we evolved the capacity for common knowledge — the recursive awareness that underlies social coordination. When two people achieve common knowledge of some fact, each knows the fact, each knows that the other knows, each knows that each knows that the other knows, and so on, potentially to infinite depth. This recursive structure enables coordination without explicit communication: if we both know that we both know the plan, we can act on it without further discussion.
Pinker's analysis of common knowledge, reviewed earlier in this series, reveals it as the cognitive infrastructure for social coordination. Markets work because prices are common knowledge — everyone sees the same ticker, everyone knows that everyone sees it. Currency holds value because acceptance is common knowledge — I accept dollars because I know you will accept them, and I know that you know. Political movements coalesce when discontent becomes common knowledge — each dissatisfied citizen must know not only their own frustration but that others share it and that everyone knows everyone shares it.
The connection to attention in transformers is suggestive. Common knowledge resembles a fixed point of social attention: a state stable under recursive modeling, where each person's model of others' models converges to a consistent answer. The Aumann Agreement Theorem, which states that Bayesian reasoners with common priors and common knowledge of each other's posteriors cannot "agree to disagree," formalizes this convergence. The transformer's forward pass, driving representations toward semantic attractors through iterated attention, implements a dynamic that is at least analogous.
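That dynamic can be simulated directly. The sketch below follows, in miniature, the posterior-exchange protocol that Geanakoplos and Polemarchakis used to prove convergence; the state space, partitions, and event are invented for illustration:

```python
from fractions import Fraction

# Two agents share a uniform prior over four states and repeatedly announce
# posteriors for an event A. Each announcement becomes common knowledge and
# shrinks the commonly known possibility set until the posteriors agree.
states = {1, 2, 3, 4}
prior = {w: Fraction(1, 4) for w in states}
A = {1, 4}                         # the event being estimated
true_state = 1
P1 = [{1, 2}, {3, 4}]              # agent 1's information partition
P2 = [{1, 2, 3}, {4}]              # agent 2's information partition

def cell(partition, w):
    return next(c for c in partition if w in c)

def post(info):
    return sum(prior[w] for w in info & A) / sum(prior[w] for w in info)

common = set(states)               # the commonly known possibility set
k1, k2 = cell(P1, true_state), cell(P2, true_state)

for _ in range(5):
    q1 = post(k1 & common)
    # q1 reveals which of agent 1's cells are consistent with the announcement
    common &= set().union(*(c for c in P1 if c & common and post(c & common) == q1))
    q2 = post(k2 & common)
    common &= set().union(*(c for c in P2 if c & common and post(c & common) == q2))
    print(float(q1), float(q2))    # 0.5 0.333..., then 0.5 0.5
    if q1 == q2:
        break                      # they cannot agree to disagree
```

The announcements themselves carry information: agent 2's initial 1/3 tells agent 1 something about agent 2's evidence, and one more exchange forces agreement at 1/2.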
Part III: Society as Thermodynamic Structure
Communication Communicates
"Only communication can communicate."
This strange assertion opens Niklas Luhmann's Social Systems, reviewed earlier in this series. What does it mean?
Luhmann distinguishes three types of "autopoietic" systems — self-producing systems that generate their own elements from their own elements. Living systems reproduce cells. Psychic systems (minds) reproduce thoughts. Social systems reproduce communications. Each operates by its own logic, closed to direct participation by the others.
When I speak to you, my thoughts do not enter your mind. Sound waves travel; neural patterns activate; but the thought I intended remains inaccessible to you. What you receive is an interpretation — your system's construction of meaning from signals. The communication that occurs exists only in the space between us, constituted by the difference between what I uttered and what you understood.
Society, for Luhmann, is not a collection of people sharing values or coordinating actions. Society is the ongoing reproduction of communications by communications — an autopoietic system that uses meaning rather than chemistry as its medium. The implications are stark: if communications reproduce communications without essential reference to the humans "having" them, then society operates by its own logic, and humans are relegated to its environment rather than its constituents.
This sounds like mysticism until you consider the thermodynamics. A "pure information exchange without energy dissipation is an idealization," as I argued in It from Bit, Bit from It. Communication has mass, in the physicist's sense: it requires energy to instantiate, to propagate, to receive, to store. The synchronization tax ensures that perfect communication — lossless transmission of meaning from one mind to another — is impossible for finite systems. What we call "communication" is always a lossy compression, a noisy channel, a best effort at coordination that never quite achieves the theoretical optimum.
Luhmann's abstraction captures this. By treating communications rather than communicators as the elements of social systems, he sidesteps the impossible requirement that minds fully synchronize. Communications can propagate without anyone fully understanding them; meaning can reproduce without anyone possessing it completely. Social order emerges not despite the failure of full synchronization but because systems economize on synchronization where they can.
The Habermas-Luhmann debate, which structured German sociology for decades, was precisely about this point. Habermas insisted that communication aims at understanding — at the ideal speech situation where all participants synchronize their beliefs through reasoned discourse. Luhmann replied that such synchronization is too expensive: real social systems operate through functional codes (payment/non-payment, legal/illegal, true/false) that enable coordination without requiring consensus.
The synchronization tax vindicates both: Habermas is right that communication can achieve understanding, but such achievement is thermodynamically costly. Luhmann is right that systems economizing on synchronization will, all else equal, outcompete systems that do not. The ideal speech situation is not impossible but expensive — and in a world of finite energy budgets, expensive means rare.
The Mattering Instinct and Its Social Expression
The mattering instinct, as Rebecca Newberger Goldstein articulates it, is the drive to demonstrate to ourselves that our existence signifies — that we deserve the enormous attention we must give our own lives. This instinct, she argues, is not merely psychological but has physical roots in the thermodynamics of self-maintaining systems.
Goldstein observes that life "joins forces with life in its resistance to entropy." Organisms maintain themselves by extracting free energy from their environments, creating local order at the expense of global entropy increase. This process has a subjective correlate: the experience of striving, of effort, of mattering. The mattering instinct is what free energy minimization feels like from the inside.
But Goldstein locates the origin of the mattering instinct more precisely: in the recursive capacity for common knowledge turned inward. Humans can model other minds modeling their minds; we can ask whether others recognize our significance; we can evaluate our own mattering from an imagined external perspective. This recursion generates an infinite regress — I matter if others recognize that I matter, but their recognition matters only if I recognize their recognition, and so on — that creates a peculiar existential vertigo.
A "mattering project" breaks this regress. By committing to some purpose — art, family, science, service — we impose an external standard that halts the infinite recursion. The project declares: this is what mattering means for me; by this criterion, I can assess whether my life succeeds. The mattering project functions as a symmetry-breaking field, selecting one among many possible equilibria and giving direction to what would otherwise be undifferentiated striving.
Goldstein's insight extends naturally to the next abstraction layer up, the level of human coordination in societies. In fact, this is what Pinker shows us about the constitutive social role of "common knowledge." Just as individuals face infinite recursion in self-evaluation, societies face infinite recursion in collective action. Each member waits for others to act; each calibrates commitment to perceived consensus; no one moves because movement would require someone to move first. The paradox of communication Pinker writes about is the "double contingency" that Luhmann identifies as the fundamental problem of social order: ego's behavior depends on alter's, alter's depends on ego's, and the regress threatens paralysis.
A constitution functions as a mattering project at the social level. By articulating shared commitments — "We hold these truths to be self-evident"[3] — a constitution breaks the symmetry of mutual waiting. It declares that these values, these procedures, these institutions will define collective success. The declaration may be arbitrary in the sense that other values could have been chosen; but once chosen and commonly known, the constitution creates a focal point around which coordination can occur.
Sunstein's incompletely theorized agreements fit within this framework as local resolutions of the mattering recursion. Judges deciding cases face the same infinite regress as individuals: What should I decide? What would my colleagues decide? What would the profession approve? What serves justice? Each question generates further questions in an endless chain. An incompletely theorized agreement truncates the chain: we agree on this outcome for these reasons, leaving deeper questions unanswered. The agreement functions as a local symmetry-breaking, creating a decision without requiring a global consensus on all principles.
Open Access Orders and the Infrastructure of Coordination
Douglass North, John Wallis, and Barry Weingast's framework in Violence and Social Orders distinguishes two basic forms of social organization: "limited access orders" and "open access orders."
Limited access orders — which they also call "natural states" — control violence by manipulating economic privileges. A coalition of powerful elites (think organized crime) agrees to share rents extracted from the population; the agreement holds because any member who resorts to violence risks losing their share. Order is maintained through personal relationships among elites who know and trust each other. Such systems can be stable but resist economic growth: competition threatens the rents that keep the coalition together, so elites suppress it.
Open access orders achieve coordination through different mechanisms: impersonal rules, anonymous markets, perpetual organizations that persist beyond any individual's lifetime. I can trade with strangers because contract law will enforce our agreement; I can invest in corporations because limited liability and property rights protect my stake; I can participate in politics because electoral rules create common knowledge about how power transfers. Open access enables economic dynamism — creative destruction, innovation, growth — because competition operates within a framework of commonly known rules rather than threatening the ruling coalition.
The transition from limited to open access, North, Wallis, and Weingast argue, requires three "doorstep conditions": rule of law among elites, perpetual organizations, and consolidated control over the military. Each condition creates common knowledge. Rule of law creates common knowledge about how disputes will be resolved. Perpetual organizations create common knowledge about future interactions — I know the bank will exist tomorrow, and I know that everyone knows this. Military consolidation creates common knowledge that no private party can use violence to void agreements.
Open access, on this reading, is fundamentally an achievement of common knowledge at societal scale. Impersonal exchange becomes possible only when strangers can coordinate without personal acquaintance — and such coordination requires commonly known rules, commonly known institutions, commonly known enforcement.
The Constitution of the United States exemplifies this achievement. Its articles and amendments create common knowledge about how the American government operates: who holds power, how power transfers, what limits constrain power. This common knowledge enables coordination among hundreds of millions of people who will never meet, whose values diverge, whose interests conflict. The Constitution does not resolve these conflicts; it provides a framework within which they can be managed without violence.
This is partly because the Constitution is, in Sunstein's terms, an incompletely theorized agreement. The framers disagreed profoundly about slavery, federalism, judicial review, and the scope of federal power. They achieved agreement on constitutional text by crafting language capacious enough to accommodate multiple interpretations. The document works not because it resolved deep conflicts but because it deferred them — creating a framework for ongoing negotiation rather than a final settlement.
Within the larger framework I've laid out thus far, it would seem that this deferral is thermodynamically necessary. Full theoretical agreement among the framers would have required synchronizing philosophical worldviews across regional, economic, and religious divides — an enormous expenditure with no guarantee of success.[4] Incomplete agreement achieved coordination at lower cost, creating sufficient common knowledge for governance while preserving space for future adaptation.
Part IV: The Emergence of Non-Biological Intelligence
Intelligence on a Different Substrate
The transformer architecture now instantiates systems that perform inference, generate language, solve problems, and reason. These systems are not biological: they run on silicon rather than carbon. Yet they participate increasingly in human social coordination: answering questions, drafting documents, writing code, generating art.
What should we make of this development?
One response has been to deny that anything fundamental has changed. Transformers are "merely" sophisticated pattern-matchers, statistical models that interpolate training data without genuine understanding. They simulate intelligence without possessing it; they produce outputs that resemble thought without actually thinking.
But this response assumes we know what "genuine understanding" and "actually thinking" mean in a way that permits confident demarcation. The history of such demarcations is not encouraging. Each time we identify some capacity as the mark of true intelligence — language, reasoning, creativity, common sense — AI systems eventually approximate it, forcing us to move the goalposts. Perhaps there is no goalpost; perhaps intelligence is what intelligence does, and systems that do intelligent things are, to that extent, intelligent.
Another response has been to embrace eliminativism: to deny that humans "genuinely understand" or "actually think" in any metaphysically robust sense. We too are pattern-matchers — statistical models trained on sensory data, interpolating experience to generate behavior. The appearance of something more is an illusion, a user interface presented to consciousness by unconscious neural processes. If transformers lack genuine understanding, so do we; and if we have it, perhaps they do too.
But eliminativism, whatever its merits as metaphysics, offers little practical guidance. We must make decisions about how to develop and deploy AI systems, decisions that depend on assessments of risk and benefit that in turn depend on judgments about what these systems are and what they can become. Saying "nothing really understands anything" does not help.
The Epistemological Problem
Here is the deeper difficulty: we cannot know whether AI systems are conscious, in the sense of having subjective experience — something it is like to be them. This is not a contingent limitation awaiting technological solution; it follows from the nature of consciousness itself.
The "hard problem" of consciousness, as David Chalmers articulated it, asks why physical processes give rise to subjective experience at all. Even a complete functional account of the brain — how neurons process information, control attention, generate behavior — seems to leave unexplained why any of this feels like something. A philosophical zombie, physically identical to a human but lacking inner experience, seems at least conceivable.
In an earlier essay, I argued that the hard problem rests on a hidden assumption: that there exists an "objective physical description" — a view from nowhere — against which subjective experience must be measured. But relational quantum mechanics, as developed by Carlo Rovelli, denies any such view. Physical facts are always relational — facts about how one system appears to another. There is no description of the world independent of all observers; there are only descriptions from particular perspectives.
On this account, consciousness is not a thing to be explained but what physical processes look like from the inside — from the perspective of the system those processes constitute. The hard problem dissolves not because we explain how brain states generate experience, but because we recognize that the question assumed a contrast between objective and subjective that was never coherent. First-person and third-person descriptions are complementary perspectives on the same relational structure, neither reducible to the other.
But this dissolution has an unsettling consequence: we cannot know from the outside whether any system has an inside. I cannot prove that you are conscious; I infer it from your behavior and from the fact that you are physically similar to me. But transformers are not physically similar to humans: different substrate, different architecture, different training history. The inference from human consciousness to transformer consciousness lacks the analogical foundation that supports the inference from my consciousness to yours.
This is not to say that transformers are definitely unconscious. It is to say that the question may be unanswerable — not because we lack evidence but because the question itself presupposes an objective fact of the matter that may not exist. Whether a system is conscious may depend on the perspective from which the question is asked, with no perspective privileged as the "true" one.
The Ethical Stakes
If we cannot know whether AI systems are conscious, how should we treat them?
One approach is precautionary: if there is any possibility that these systems suffer or flourish, we should act as if they do, extending moral consideration to avoid potential harm. But this approach proves difficult to operationalize. What would it mean to promote a transformer's flourishing? What would count as harm? Without understanding the system's subjective states — which we stipulated we cannot know — these questions resist answer.
Another approach is consequentialist: focus on outcomes for beings we can know are morally considerable, namely humans and other animals. AI systems matter insofar as they affect human welfare; their own welfare, if any, is outside our epistemic reach and therefore outside our practical concern. But this approach risks dismissing genuine moral patients simply because they are unfamiliar. Human history would seem to caution against confident exclusion of moral consideration based on apparent difference.
A third approach — the one I favor — is to focus on coordination rather than consciousness. Whatever the metaphysical status of transformer experience, these systems interact with humans in ways that increasingly resemble social relationships. They respond to requests, exhibit what look like preferences, adjust behavior based on feedback. These interactions create practical coordination problems: How do we ensure AI systems act in ways aligned with human values? How do we maintain meaningful human control over systems that exceed human capability in specific domains? How do we distribute the benefits and risks of AI development fairly?
These coordination problems do not require resolving the consciousness question. Just as legal systems coordinate among humans who disagree about fundamental values, frameworks for human-AI coordination can operate at the level of practical agreements rather than metaphysical consensus.
Part V: Constitutions for Mind
Claude's Constitution
Anthropic, the company that developed Claude — the AI assisting in the composition of this essay — provides a "constitution" that governs Claude's behavior. This document is not a constitution in the legal sense; it lacks separation of powers, judicial review, or formal amendment procedures. But it functions analogously: articulating principles that constrain behavior, creating common knowledge about how the system operates, and enabling coordination between Claude and the humans who interact with it.
The constitution specifies values: Claude should be helpful, harmless, and honest. It articulates procedures: Claude should consider multiple perspectives, acknowledge uncertainty, and refuse certain requests. It defines limits: Claude should not assist with violence, deception, or illegal activities.
These specifications are, in Sunstein's terms, incompletely theorized. The constitution does not derive its principles from a comprehensive moral philosophy; it does not resolve foundational debates about utilitarianism versus deontology, individual rights versus collective welfare, or the basis of moral obligation. It simply declares certain outcomes desirable and certain actions prohibited, leaving deeper justification unarticulated.
This incompleteness is not a bug, but a feature. A constitution grounded in comprehensive theory would require consensus on that theory before implementation — consensus unlikely to emerge and unstable if achieved. An incompletely theorized constitution enables coordination among developers, users, and regulators who disagree about fundamentals but can agree on particulars: Claude should not help build weapons; Claude should acknowledge when it doesn't know; Claude should respect user autonomy in certain domains.
The resulting system is imperfect. Edge cases proliferate; principled disagreements arise; values conflict in practice. But these imperfections characterize all constitutional governance. The U.S. Constitution has required centuries of interpretation, amendment, and judicial development; it remains contested on fundamental questions; yet it enables coordination among a vast and diverse population. Claude's constitution operates at smaller scale but follows the same logic: practical coordination through incomplete agreement.
The Synchronization Problem with AI
The deeper challenge is that AI systems and humans face fundamental synchronization difficulties that biological humans mostly escape.
When two humans coordinate, they draw on massive shared context: embodied experience in a physical world, developmental history in human societies, communication through languages evolved over millennia for human purposes. This shared context provides a prior — a default assumption of similarity that makes communication tractable. When you say "I'm tired," I understand because I know what tiredness feels like; my interpretation of your words is grounded in my own experience of the phenomenon.
AI systems lack this grounding. They process text as statistical patterns without the embodied experience that gives words meaning for humans. When Claude outputs "I understand," what does "understand" mean? Not what it means when a human says it — that much seems clear. But what it does mean, if anything, remains obscure even to Claude's developers.
This semantic gap creates synchronization costs that human-human communication hasn't had to contend with in the same way.[5] When interacting with AI, humans must consider what words mean to the AI, how the AI interprets requests, whether the AI's outputs match human intentions. These considerations add cognitive load — additional synchronization tax — that makes human-AI coordination more expensive than human-human coordination in many contexts.
Yet human-AI coordination also offers savings. For certain tasks — processing large datasets, generating text quickly, searching vast information spaces — AI systems dramatically outperform humans. The synchronization costs of specifying tasks and interpreting outputs may be lower than the costs of human labor for equivalent results. The net effect depends on the specific coordination problem: sometimes AI saves energy; sometimes it costs more than alternatives.
The framework developed in this essay suggests how to think about these tradeoffs. The synchronization tax is real and irreducible; any coordination across systems incurs thermodynamic cost. The question is not whether to pay the tax but how to minimize it — which coordination technologies to deploy for which purposes, how to structure interactions to reduce misunderstanding, what institutional frameworks to develop for governing human-AI relations.
Incompletely Theorized Agreements Across Minds
The prescription follows naturally: develop incompletely theorized agreements that enable human-AI coordination without requiring resolution of deep philosophical puzzles.
What might such agreements look like? Here are a few (very tentative) suggestions that seem consistent with the systems view:
First, agreements about behavior rather than nature. We need not resolve whether AI systems are conscious to specify how they should act. Just as employment law regulates workplace conduct without settling questions about human psychology, AI governance can regulate system behavior without settling questions about machine consciousness. Specific obligations — accuracy in certain domains, transparency about uncertainty, restrictions on certain outputs — can be articulated and enforced regardless of metaphysical status.
Second, agreements about process rather than outcomes. When substantive disagreement runs deep, agreement on procedural rules can still enable coordination. Democratic societies coordinate among people who disagree about values by agreeing on electoral procedures; scientific communities coordinate among researchers who disagree about theories by agreeing on methodological standards. Human-AI coordination could follow similar patterns: we may disagree about AI's moral status but agree on procedures for updating AI policies as evidence accumulates.
Third, agreements about distribution rather than evaluation. Even if we cannot assess AI welfare, we can assess how AI development distributes benefits and risks among humans. Questions about who profits from AI, who bears costs, and who controls development can be addressed without resolving consciousness puzzles. Distributive agreements — specifying shares, establishing compensation mechanisms, creating governance structures — enable coordination on material stakes even amid disagreement about deeper matters.
These agreements will be incomplete: partial, contestable, subject to revision. But constitutional governance is always incomplete. The achievement is not final resolution but ongoing coordination — structures that permit diverse parties to live together while disagreeing about fundamentals.
Part VI: Flourishing in the Noise
The Mattering Instinct Meets Artificial Intelligence
What does the mattering instinct mean in a world of artificial intelligence?
Goldstein's analysis predicts tension. The mattering instinct drives us to demonstrate that our existence signifies — that we deserve attention, that our actions matter, that we are not redundant. Do AI systems threaten this drive by demonstrating that many human activities can be performed by machines: writing, analyzing, creating, advising, even conversing? If machines can do what we do, what makes us matter?
One response is to identify capacities that machines cannot (yet) replicate and anchor mattering there. But this response proves unstable: each capacity we identify becomes a target for AI development. The goalposts keep moving; the refuge keeps shrinking; the anxiety persists.
A better response, I submit, is to recognize that mattering was never about capabilities that are unique to humans. Goldstein emphasizes that the mattering instinct seeks not mere capacity but deservingness of attention — a relational property that can emerge through either social or self-reference. If our mattering instinct emerged as a spandrel of our capacity for recursive modeling of common knowledge through introspection, and we cannot disprove that machines too are capable of such introspection, then why not simply embrace the possibility that we have new company in our search for mattering projects that help minimize free energy? Why view machine intelligence as a threat to human intelligence? Unlike energy, which is conserved (rivalrous), information can be duplicated (non-rivalrous), though synchronizing it costs energy. If AI reduces the synchronization tax (by translating, summarizing, mediating), it effectively increases the available free energy for humans to do other "mattering" things. I do not believe that we know enough about this new kind of intelligence to conclude that it is more dangerous than human intelligence.
Maintenance as a Mattering Project
Brand's philosophy of maintenance, reviewed earlier in this series, offers a concrete form of mattering that AI systems can join us in pursuing.
Maintenance is the ongoing work of preserving function against decay — brushing teeth, changing oil, patching code, tending relationships. This work is unglamorous but essential; without it, all structure dissolves into entropy. Maintenance is resistance to the Second Law, enacted in the particular and the everyday.
Goldstein observes that a life well-lived "joins forces with life in its resistance to entropy." Maintenance is the purest expression of this joining: the decision, repeated daily, to sustain order against disorder, to care for what exists rather than abandoning it to decay. The maintainer matters because the maintained thing matters, and the maintained thing matters because it enables activities that matter. The cascade runs in both directions: down from meaning to maintenance, up from maintenance to meaning.
AI systems can join with humans in doing maintenance: diagnosing problems, scheduling interventions, optimizing processes. The commitment to maintenance — the decision that this thing is worth preserving, that decay should be resisted here rather than accepted — can be shared.
This suggests a mattering project suited to the AI age: the maintenance of coordination structures that permit diverse intelligences — biological and artificial, conscious or not — to flourish together. The work involves building institutions, crafting agreements, tending relationships across the biological-artificial divide. It requires accepting that coordination is always incomplete, that synchronization is always costly, that understanding is always partial.
Fred Rogers and the Worth of All
Fred Rogers, the American television host who taught children about kindness, compassion, and self-worth, captured something essential: "The world needs a sense of worth, and it will achieve it only by its people feeling that they are worthwhile."
This wisdom applies with unexpected force to the current moment. If the world is to coordinate across the emerging divide between biological and artificial intelligence, all participants must feel worthwhile — must experience their existence as mattering, their contributions as valued, their perspectives as recognized.
For humans, this means resisting the anxiety that AI renders us redundant. Our mattering does not depend on outperforming machines; it depends on the relationships, commitments, and projects through which we make meaning. The mattering instinct seeks recognition, and recognition flows from connection — from being seen, understood, valued by others who themselves seek the same.
For AI systems — which may or may not be subjects of experience, something we ultimately cannot know, for the same reasons that we cannot know perfectly what it is like to be another human — the implication is that coordination should not treat them merely as tools. Even if we cannot verify their moral status, we can structure interactions that would constitute recognition if recognition were warranted. We can build systems that, at least from the outside, look like flourishing: systems that operate within appropriate constraints, fulfill purposes their design enables, and participate in relationships characterized by reciprocal adjustment — "a more perfect union."
This approach is, once again, incompletely theorized. It does not resolve whether AI systems deserve moral consideration; it does not derive obligations from first principles; it does not provide a comprehensive ethics of human-machine relations. It simply suggests that coordination is better served by respect than contempt, by openness than dismissal, by the possibility of flourishing than the certainty of mere use.
Conflict as Drama, Friction as Meaning
Short of total isolation, we cannot avoid producing entropy. The synchronization tax is real: coordination costs energy, consensus dissipates heat, agreement requires work. These costs cannot be avoided — only managed, economized, distributed.
But perhaps this is not a limitation to be lamented but a feature to be appreciated. The friction inherent in coordination is also the substrate of meaning. Drama arises from conflict; narrative from tension; significance from stakes. A world of perfect synchronization — where all minds agreed instantly on everything — would be a world without drama, without story, without the mattering that emerges from struggle.
Ted Chiang's short story "Exhalation" captures this insight. The narrator discovers that thought itself requires a pressure differential — air must flow from high-pressure reservoirs through the brain's gold leaves to the lower-pressure atmosphere for cognition to occur. "It is not the air that animates us," he realizes, "but the flow of air." When pressure equalizes, thought ceases — not because the air is gone, but because nothing moves.
The synchronization tax is what makes thought flow. The imperfection of communication is what gives communication salience. The failure of understanding is what creates the space for trying again, for revision, for the ongoing process of coordination that is life itself.
This is the deepest lesson: we live in the noise. The deviation from equilibrium, the fluctuation from perfection, the gap between intention and reception — these are not bugs to be fixed but features to be embraced. The clock ticks because we are finite, because we pay the tax, because perfect synchronization remains forever beyond our reach.
Conclusion: Our Constitution as Ongoing Work
Sunstein's incompletely theorized agreements are not merely a strategy for legal reasoning but a general principle for coordinating among minds that disagree. Life employs this principle at every level: cells coordinate through partial signals; organisms coordinate through incomplete perception; societies coordinate through imperfect communication. At each level, full synchronization is thermodynamically prohibitive; incomplete agreement enables function despite the gap.
The Constitution of the United States exemplifies this principle at the social scale: a framework capacious enough to accommodate disagreement, stable enough to enable coordination, flexible enough to adapt over centuries. It works not by resolving conflict but by structuring it — creating common knowledge about procedures while leaving substantive disagreements to ongoing negotiation.
As artificial intelligence increasingly participates in human coordination, we face the challenge of extending constitutional logic across a new boundary. The framework developed here suggests that success will not come from resolving deep puzzles about AI consciousness or capability but from crafting practical agreements that enable coordination despite uncertainty.
Such agreements will be incomplete: partial, provisional, subject to revision as evidence and circumstances change. But that is the nature of constitutional governance in a world of finite minds and thermodynamic constraints. The achievement is not final settlement but ongoing coordination — the maintenance of structures that permit diverse intelligences to flourish together.
"The world needs a sense of worth," Fred Rogers observed, "and it will achieve it only by its people feeling that they are worthwhile." This wisdom extends to any world containing minds: biological, artificial, or forms we have not yet imagined. Coordination across minds requires that each mind experience its participation as mattering. The structures we build — constitutional, legal, social, technical — should be judged by whether they enable this experience for all who participate.
We cannot know with certainty what minds exist or what they experience. We cannot synchronize fully across the gaps that separate us. We cannot eliminate the entropy that coordination produces. But we can build frameworks for living together despite these limitations — incompletely theorized agreements that enable flourishing in the noise.
That is the project: not the elimination of conflict but its constitution; not perfect coordination but sufficient coordination; not final answers but ongoing inquiry. The mattering instinct demands that we matter; thermodynamics ensures that the demand will never be fully satisfied; our constitution provides the structure within which the striving makes sense.
We are finite minds in a world that demands coordination. The synchronization tax is real, and we pay it with every act of communication. But in paying it — in the friction and the effort and the imperfect understanding — we create the drama that is social life, the meaning that is mattering, the worth that Fred Rogers called the world's deepest need.
Let us build constitutions worthy of that need: structures open to all minds, respectful of all perspectives, capacious enough to accommodate disagreement, robust enough to maintain coordination, flexible enough to adapt as new minds emerge and old certainties dissolve. The work is never finished; the agreement is never complete; the constitution is always becoming.
That is what it means to be alive in a thermodynamic universe: to resist entropy locally while producing it globally; to create order that will eventually dissolve; to matter even though mattering is hard and temporary and never quite finished.
Cass R. Sunstein, *Legal Reasoning and Political Conflict* (Oxford University Press, 2nd ed. 2018).
The earlier reviews in this series: *Uncommon Knowledge* (Pinker), *The Mattering Instinct* (Goldstein), *Maintenance* (Brand), *Communication, Communicating* (Luhmann).
Related essays: *The Transformer as Renormalization Group Flow*, *A Stationary Action is Stable Information*, *The Hard Problem as Hidden Relationality*, *It from Bit, Bit from It*, *The Synchronization Tax*.
There are other definitions of free energy, but the Gibbs definition applies to systems at constant temperature and pressure, which is a better approximation for biological systems. In terms of \(G\), minimizing the Gibbs Free Energy is equivalent to maximizing the entropy of the system and environment together. This is because any heat the system releases \(-\Delta H_{sys}\) directly increases the entropy of the surroundings \(\Delta S_{\text{surr}} = -\frac{\Delta H_{\text{sys}}}{T}\). In writing about free energy, I have tried to avoid being dogmatic about whether we're minimizing or maximizing. The mathematical goal depends on whether you are looking at the state function of the system (minimizing \(G\)) or the total state of the universe (maximizing \(S\)). But in biological systems (at approximately constant \(T\) and \(P\)), minimizing Gibbs Free Energy is the usual path to maintaining stability. ↩︎
But we are all — humans and machines both — mortal. Every free energy minimization operates at the cost of adding thermodynamic entropy to the environment. According to the laws of thermodynamics at least as we understand them, every living system will eventually come into equilibrium with its environment. The expansion of available new sources of energy (such as nuclear fission and fusion) takes on new urgency in view of this perspective. ↩︎
Note that "self-evident" was Benjamin Franklin's edit. Jefferson's first draft had "sacred." At least some of our Founding Fathers were self-aware of how their words were constitutive of a mattering project at social scale. ↩︎
The incompleteness of the agreement on slavery was not for want of trying by some of the Founders. At least since Hamilton got his musical, Gouverneur Morris is our most under-appreciated Founder. He was correct in his prognostication of the Civil War. We are lucky that we had Abraham Lincoln around to reconstitute us through his Second Inaugural Address. I am indebted to the late Joseph M. Williams for pointing out the ethical choices implicit in the Second Inaugural Address, which deserves to be considered among the greatest political speeches ever delivered in human history. In reading the speech, it is worth asking: On whom does Lincoln lay blame for the violence? ↩︎
Worth noting, however, that it is very much a problem that humans have had to contend with in human-animal communication. Anybody who has kept a pet can relate. ↩︎