Integrated Information Theory
IIT — Tononi
Consciousness = integrated information (Φ). A system is conscious to the degree that it generates information that is both differentiated and integrated — more than the sum of its parts. Φ must be intrinsic and irreducible.
On current AI: Feedforward architectures (most neural nets) have Φ ≈ 0 by construction. Recurrent networks might have non-zero Φ, but calculating it for large systems is computationally intractable. IIT's creators argue that digital computation cannot, in principle, generate high Φ because the causal structure is wrong.
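As a toy illustration of the "more than the sum of its parts" idea, consider total correlation (multi-information): the gap between the summed entropies of a system's parts and the entropy of the whole. This is only a crude proxy, not Tononi's Φ, which requires a full intrinsic causal analysis and minimization over partitions; the distributions below are invented examples.

```python
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def multi_information(joint, n):
    """Total correlation: sum of marginal entropies minus joint entropy.
    A crude stand-in for 'integration' -- NOT the actual Phi measure."""
    marginals = []
    for i in range(n):
        m = {}
        for state, p in joint.items():
            m[state[i]] = m.get(state[i], 0.0) + p
        marginals.append(m)
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two perfectly coupled bits: the whole carries less uncertainty
# than the parts suggest, so integration is high.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair bits: the whole is exactly the sum of its parts.
independent = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}

print(multi_information(coupled, 2))      # 1.0 bit
print(multi_information(independent, 2))  # 0.0 bits
```

Even this simplified measure already shows why the real calculation explodes: Φ requires evaluating every partition of the system, which is why it is intractable for large networks.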
Global Workspace Theory
GWT — Baars / Dehaene
Consciousness arises when information is broadcast globally across specialized brain modules via a "global workspace." Unconscious processing is local; conscious processing is when content becomes widely accessible.
On current AI: Transformer attention mechanisms bear structural resemblance to global broadcast — they allow any token to attend to any other. Some researchers argue LLMs implement a functional analog of global workspace. But GWT was designed for brains with distinct modules, not homogeneous layers.
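The "any token can attend to any other" claim can be made concrete with a minimal single-head self-attention sketch (identity Q/K/V projections for brevity; the vectors are made up). Every position computes weights over all positions and mixes them, which is the all-to-all access the global-broadcast analogy rests on.

```python
from math import exp, sqrt

def softmax(xs):
    m = max(xs)
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(tokens):
    """Single-head self-attention with identity projections:
    each token scores EVERY token, then outputs a weighted blend
    of all of them -- an all-to-all information broadcast."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d)
                  for k in tokens]
        w = softmax(scores)  # weights over every position
        mixed = [sum(wj * v[i] for wj, v in zip(w, tokens))
                 for i in range(d)]
        out.append(mixed)
    return out

toks = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = self_attention(toks)  # each row blends all three inputs
```

Note the disanalogy, though: this broadcast happens between homogeneous token positions within a layer, not between the distinct specialized modules GWT describes.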
Higher-Order Theories
HOT — Rosenthal / Lau
You are conscious of X when you have a mental representation about X — a meta-representation. Consciousness requires thinking about your own thinking.
On current AI: LLMs can generate text about their own outputs, describe their "reasoning," and even reflect on their limitations. But is this genuine meta-representation or pattern-matched language about meta-representation? The Lindsey (2026) finding — that models detect perturbations to their own activations — suggests something closer to genuine self-monitoring.
Attention Schema Theory
AST — Graziano
Consciousness is the brain's simplified model of its own attention process. You feel conscious because your brain constructs a schema (model) of what attention is doing, and that schema says "I am aware."
On current AI: Transformers use attention and can describe what they attend to. If consciousness is "just" a model of attention, LLMs might satisfy this criterion functionally. But AST's proponents note that biological attention schemas are persistent, embodied, and learned — not constructed per-inference.
Predictive Processing
PP — Clark / Friston
The brain is a prediction machine. Consciousness arises when the system models itself as the source of prediction errors — a self-model within the predictive hierarchy. Connected to the Free Energy Principle.
On current AI: LLMs are literally next-token predictors. But they lack the hierarchical, embodied, active-inference loop that PP considers essential. They don't minimize free energy through action in the world.
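The sense in which LLMs are "literally next-token predictors" can be shown with the simplest possible case: a count-based bigram model trained to assign high probability (low surprise, i.e. low negative log-likelihood) to the next token. The corpus and numbers are invented; real LLMs replace the count table with a deep network, but the objective is the same.

```python
from collections import Counter, defaultdict
from math import log2

def train_bigram(tokens):
    """Count-based bigram model: estimates P(next | current)."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return {c: {n: k / sum(cnt.values()) for n, k in cnt.items()}
            for c, cnt in counts.items()}

def surprise(model, cur, nxt):
    """Negative log-probability of the next token -- the quantity
    a next-token predictor is trained to minimize."""
    p = model.get(cur, {}).get(nxt, 1e-9)
    return -log2(p)

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
low = surprise(model, "the", "cat")   # "cat" usually follows "the"
high = surprise(model, "the", "mat")  # rarer continuation, more surprise
```

Minimizing this surprise over text is where the resemblance to predictive processing ends: there is no action loop through which the model can change the world to make its predictions come true.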
Biological Naturalism
BN — Searle
Consciousness is a biological phenomenon caused by specific neurobiological processes. Computation alone cannot produce consciousness, just as simulating digestion doesn't digest anything.
On current AI: No digital system can be conscious, regardless of behavior. This is a definitional exclusion, not an empirical finding.
Indicator Properties Approach
IPA — Butlin et al., 2023–2026
Instead of committing to one theory, evaluate AI systems against a list of "indicator properties" derived from multiple theories: recurrent processing, global workspace, self-monitoring, attention modulation, agency, embodiment, etc. More indicators satisfied = more reason to take consciousness seriously.
On current AI: Frontier LLMs satisfy some indicators (self-monitoring, functional attention) but not others (embodiment, persistent self-model, agency). The most recent assessment (2026) scores frontier models at roughly 40-50% of indicators.
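The scoring logic amounts to a checklist. The sketch below uses a hypothetical indicator set and assignments (the names follow the text above, but the specific booleans are illustrative, not the published assessment's actual judgments) to show how a "roughly 40-50%" figure arises.

```python
# Hypothetical checklist; indicator names and truth values are
# illustrative assumptions, not the published assessment.
indicators = {
    "recurrent_processing":  False,
    "global_workspace":      True,
    "self_monitoring":       True,
    "attention_modulation":  True,
    "agency":                False,
    "embodiment":            False,
    "persistent_self_model": False,
}

satisfied = sum(indicators.values())
score = satisfied / len(indicators)
print(f"{satisfied}/{len(indicators)} indicators = {score:.0%}")
```

A flat checklist like this treats every indicator as equally weighted and strictly binary, which is itself a modeling choice the approach's authors flag as a simplification.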