Zibaldone

So amid this
immensity my thought drowns:
and shipwreck is sweet to me in this sea.

The Relocated Brute Fact: On Naturalism and the God Hypothesis

The perennial dispute between naturalism and theism has found renewed public expression recently, symptomatic of a broader return to metaphysical inquiry in Anglophone intellectual culture, one notably more pronounced than in the secular traditions of Western Europe. The present article takes this contemporary controversy as its occasion but not its object: rather than commenting on the debate's cultural symptoms, it descends to its philosophical foundations, examining the strongest arguments on both sides from a position of deliberate symmetry. Theism and naturalism begin here on equal footing. Yet this symmetry of exposition does not entail neutrality of conclusion; I proceed explicitly from the perspective of a fair agnostic naturalism. The recurring claim in contemporary philosophy of religion that science presupposes intelligibility and therefore requires a transcendent ontological ground is more rhetorically compelling than it is philosophically conclusive. Classical theism, dressed in Thomistic or neo-Platonic language, presents itself as the only serious answer to the question "why is there ordered reality at all?" But on closer inspection, theism does not answer this question either. It merely relabels the mystery and declares victory.

But the argument that follows is not a defence of scientism either. It accepts that science cannot self-justify its own preconditions from within its methods alone. What it contests is the inference that this epistemic gap requires filling with a personal, necessary being. That inference is a non sequitur, and an expensive one.

I. The Presupposition Argument and Its Real Force

The strongest version of the theistic challenge runs as follows: science operates by assuming that reality is intelligible, that reason is reliable, and that the uniformity of nature holds. None of these can be demonstrated by science without circularity. Therefore science borrows ontological capital it cannot itself generate. Classical theism, by identifying God with being itself (esse ipsum in Aquinas's formulation), offers a ground for intelligibility that is not an additional hypothesis inside the system but the very condition for any system at all.

"Science does not require revealed religion, scripture, or ecclesial authority but it does require real being, intelligibility, normativity, truth, and real good. If those are denied, science collapses into instrumentalism."
The theistic position under examination

This is, to my mind, genuinely the strongest version of the argument, and it deserves respect. This line of thought has deep roots. It echoes Alvin Plantinga’s influential evolutionary argument against naturalism (the worry that unguided evolution would undermine our confidence in reason itself), C.S. Lewis’s classic argument from reason, and, much earlier, Leibniz’s principle of sufficient reason, the demand that everything must ultimately have an explanation. The intuition that a merely contingent universe cannot account for its own rational transparency is not trivially dismissed.

And yet, I think, it fails, not because the intuition is wrong, but because the proposed solution repeats the problem at a higher level of abstraction, while introducing additional difficulties of its own.

II. The Symmetry Problem: Brute Facts All the Way Down

The decisive objection, it seems to me, lies in what we might call the symmetry of explanatory termination: both naturalism and classical theism eventually reach a point where they must simply say that this is how reality fundamentally is, with no further explanation available. Both, in other words, terminate explanation in a brute, ungrounded fact. Naturalism terminates in a self-consistent, structured universe, ordered, law-like, and intelligible as a fundamental feature. Theism terminates in a God whose nature is necessarily rational, necessarily good, and necessarily existent. The real question, then, and one I have returned to repeatedly, is simply which stopping-point is more parsimonious.

If naturalism owes an explanation for why reality is intelligible, theism equally owes an explanation for why God has this particular nature, rational, unified, benevolent, rather than another. The classical theist responds that God exists necessarily and could not be otherwise. But this is simply assigning "necessary existence" as a predicate, a move Kant already identified as question-begging in his critique of the ontological argument. You cannot define something into necessity. Both positions hit a wall. Naturalism says: structured reality is the wall. Theism says: divine nature is the wall. Neither escapes brute facticity. The question is only which wall is more parsimonious.

David Hume made a closely related point in the Dialogues Concerning Natural Religion: if it is conceivable that a necessarily existent God could account for the universe, it is equally conceivable that the universe itself has the character of necessary existence, removing the middleman entirely. Bertrand Russell pressed exactly this point in his 1948 debate with Frederick Copleston: "The universe is just there, and that's all."

III. The Contingency Asymmetry: Does "Necessary Being" Escape the Wall?

A sophisticated theist will rightly press a deeper objection here: the symmetry argument, they say, obscures a crucial distinction. Naturalism’s brute fact is contingent; theism’s is necessary. These walls are not the same height. A contingent stopping point is less satisfying than a necessary one because it leaves open the further question of why this contingent thing exists rather than nothing. In the confrontation with classical theism, this is one of the strongest objections. But the objection itself suffers from three weaknesses, which I will address in increasing order of strength.

First, the concept of "necessary existence" as applied to God is less coherent than it appears. What would it mean for a being to exist necessarily? In Kripkean modal logic, necessary truths are those true in all possible worlds. But the claim that God exists in all possible worlds is not a logical truth; it is itself a substantive metaphysical assertion that requires independent support. Asserting it does not establish it. The classical theist has not shown that divine necessary existence is coherent; they have simply declared it. Until the coherence of necessary being is independently demonstrated, the appeal to it as an asymmetric advantage over naturalism's contingent starting point begs the very question at issue.

Second, and more fundamentally, even granting that necessary existence is coherent and that God possesses it, the theist has not explained why a necessary being would have the particular rational and benevolent nature required to underwrite scientific intelligibility. The necessity of God's existence does not entail the necessity of God's specific nature. One could conceive of a necessarily existent being that is not rational, not unified, not truth-conferring, a necessary chaos, or a necessary indifferent ground. Leibniz himself recognized this difficulty: the Principle of Sufficient Reason requires not just a necessary being but a perfectly rational and good one. But the goodness and rationality of that being are themselves further attributes requiring explanation, or alternatively, further brute axioms. The contingency asymmetry, even when granted, does not close the explanatory gap; it merely relocates it once more.

Third, the naturalist is entitled to respond to the Humean move with more philosophical precision: if "necessary existence" is a coherent predicate, there is no principled reason it cannot apply to the universe or to its fundamental laws. Some contemporary physicists have speculated, not unreasonably, that the laws of physics may be necessary given deeper mathematical structures, that physical reality may be the unique self-consistent structure that exists. This is admittedly speculative. But it is no more speculative than asserting divine necessary existence, and it requires no additional personal attributes. The naturalist does not claim to have established this; the point is that the theist has not established the asymmetry either.

IV. The Evolutionary Argument Against Naturalism: A Closer Engagement

Plantinga's Evolutionary Argument Against Naturalism presents the most technically demanding challenge. The argument holds that if naturalism is true and our cognitive faculties are the product of evolution selected for survival rather than truth-tracking, then we have defeaters for the reliability of those very faculties, including our belief in naturalism. Naturalism thus either defeats itself, or requires an additional premise to secure cognitive reliability. Classical theism, by contrast, provides a truth-conferring designer whose intentions guarantee reliable cognition.

The argument is ingenious. A careful response requires three distinct moves.

First, the EAAN depends on a contested assumption: that survival fitness and truth-tracking are sufficiently decoupled under naturalism that the probability of reliable cognition is low or inscrutable. But this assumption is questionable. There are strong functional reasons to expect that evolution selects for broadly truth-tracking cognitive faculties, not merely adaptively useful illusions. An organism that systematically misrepresents the causal structure of its environment, mistaking predators for food, or failing to form accurate models of spatial relationships, will be selected against. The decoupling of fitness and truth-tracking is not the default naturalistic prediction; it requires special argument, which Plantinga provides but which remains genuinely contested in the literature.

Second, and more decisively, theism faces an exactly analogous problem that is rarely stated with sufficient force. The argument that a good God guarantees reliable cognition is only valid if we already know that: (a) God exists; (b) God is good; (c) God intended our faculties to track truth; and (d) God's implementation was competent. Each of these premises either requires its own justification or constitutes an additional brute axiom. The argument is not merely circular in the informal sense; it is formally so: you cannot use the reliability of reason to establish theism, and then use theism to guarantee the reliability of reason, without begging the question. Plantinga acknowledges this and offers the ontological argument as an independent route to (a), but that route faces Kant's objection. The theistic resolution of the EAAN is no less question-begging than the naturalistic one; it simply buries the circularity at a greater depth of abstraction.

Third, and most practically, the inductive track record of human reasoning constitutes genuine, if defeasible, evidence for its reliability that does not depend on prior metaphysical resolution. We have accumulated cumulative evidence, from technology, medicine, engineering, and predictive science, that our cognitive faculties track features of the world reliably enough for a vast range of purposes. This evidence is not self-validating in a purely logical sense, but it is the same kind of evidence we use for every other empirical claim. The demand for a non-circular metaphysical guarantee of cognitive reliability, before any empirical inquiry can proceed, sets an epistemic standard that no framework, naturalism or theism, can meet. The appropriate response, it seems to me, is not to search for such a guarantee but to recognise that the demand itself is excessive.

V. The Modal Cosmological Argument: Leibniz, Pruss and the Principle of Sufficient Reason

A particularly sophisticated contemporary version of the cosmological argument has been advanced by philosophers Alexander Pruss and Robert Koons. Their approach sets aside Thomistic metaphysics and focuses instead on a strong version of the Principle of Sufficient Reason: the idea that every contingent fact must have an explanation. The argument then proceeds as follows: the universe as a whole, if contingent, requires an explanation that cannot itself be contingent; therefore there must be a necessary being providing that explanation. This version deliberately sidesteps the brute-fact objection by locating necessity in the explanatory terminus itself.

The PSR itself requires justification. Why should every contingent fact have a sufficient reason? This is not a logical truth; it is not self-contradictory to imagine contingent facts without sufficient reasons. Leibniz regarded the PSR as a fundamental metaphysical principle, but asserting its status as fundamental is precisely what is at issue. If the theist invokes the PSR as a brute foundational commitment, they have simply relocated the brute fact to a different level: the principle, rather than the entity, becomes the ungrounded axiom. If they attempt to derive the PSR from something else, that derivation either succeeds (in which case the PSR is not fundamental) or requires its own foundation (in which case the regress continues). The PSR cannot simultaneously be a brute axiom and a principle that eliminates brute axioms.

Even granting the PSR, a critical ambiguity infects its application. The PSR, in its standard formulation, requires that every contingent fact have an explanation. But the explanation of a contingent fact can itself be contingent, provided it is explained by a further fact, and so on. The inference from "contingent reality requires explanation" to "therefore a necessary being exists" requires a stronger claim: that the regress of contingent explanations must terminate. This is the cosmological argument's key non-obvious premise. Pruss and Koons provide sophisticated defences of it, but they depend on further contested metaphysical principles (such as the impossibility of infinite causal regresses or the principle of recombination). These are not established logical truths; they are contestable positions within metaphysics. The modal cosmological argument is not a proof, it is a sophisticated inference whose force depends entirely on which additional metaphysical principles one is prepared to accept.

Even granting the PSR and the termination of the regress in a necessary being, the theist faces what we might call the content problem: nothing in the argument establishes that the necessary being has any of the attributes (rationality, goodness, personhood) required for the grounding of intelligibility. A necessary being could be a bare mathematical structure, a mindless ground of being, or a chaotic substratum. The additional step from "necessary being" to "classical theism" requires extensive further argument: cosmological arguments for God's power, ontological arguments for God's goodness, and so forth. Each introduces new premises, each contestable. The cumulative case begins to resemble a house of cards: impressive in construction, fragile under pressure.

VI. Divine Simplicity and the Parsimony Claim

The appeal to parsimony made above was, admittedly, too breezy at one critical point. The Thomistic theist will argue that God is not a complex entity at all. On this view God is pure act, a technical term meaning that God’s essence and existence are identical, with no unrealised potentialities (no potency) and no internal parts. There are, strictly speaking, no properties in God over and above the divine essence itself. On this understanding, classical theism begins not with a complicated being but with something radically simple, perhaps simpler, the theist claims, than a physical universe containing multiple fundamental constants and forces. The parsimony objection, the theist argues, runs in the wrong direction.

The doctrine of divine simplicity is itself philosophically contested to a degree that makes it unavailable as a straightforward parsimony argument. If God has no real distinction between, say, omniscience and omnipotence, if these are literally identical in God, then a range of logical difficulties follows. Christopher Hughes and Alvin Plantinga have independently argued that divine simplicity, taken strictly, generates contradictions or at minimum requires such a radical revision of predication that it is unclear what the doctrine is even claiming. A supposedly simple entity that is simultaneously the fullness of power, knowledge, goodness, and necessary existence, while having no real internal distinctions, strains the concept of simplicity past the breaking point. Theological simplicity of this kind is not the parsimony of Occam's razor; it is simplicity as a defined term of art within a particular metaphysical framework.

Even if divine simplicity is coherent, the comparison with naturalism is not between "one simple God" and "a complex universe." It is between "one claimed simple God, plus the relationship between God and the universe, plus the explanatory structure of why God creates this universe rather than another, plus the mechanism by which an immaterial will produces material reality", and "a structured physical reality." Once all the explanatory machinery required by classical theism is placed on the scale, the parsimony case for theism becomes significantly harder to sustain. Elliott Sober's formulation of parsimony as inference to the best explanation requires not just fewer entities but fewer unverified explanatory posits. On that broader criterion, naturalism retains its advantage.

VII. Gödel's Incompleteness and the Symmetry of Epistemic Humility

The theistic deployment of Gödel's incompleteness theorems in support of the presupposition argument merits sustained attention, as it continues to appear in sophisticated apologetics. The argument holds: if formal systems cannot close upon themselves, this demonstrates that reality requires an external ground, one that is not itself a formal system.

But this reading is precisely backwards, or at minimum, symmetrical in a way the theist cannot exploit. Gödel's first incompleteness theorem (1931) establishes that any consistent formal system capable of expressing elementary arithmetic contains true statements that cannot be proven within that system. This is a profound result, with genuine implications for the philosophy of mathematics and mind. But it does not point toward God: it points toward structural limits on all closed explanatory systems, including metaphysical ones.

If Gödel's lesson is that systems cannot self-ground, classical theism is not exempt. The theistic system (divine omniscience, omnipotence, necessary existence, perfect goodness) is itself a formal or semi-formal structure making claims about all possible states of affairs. If that structure is rich enough to do the explanatory work required of it, Gödelian humility applies to it as well. The divine nature, taken as the axiom set of the theistic system, will contain truths it cannot prove from within. The theorem counsels epistemic humility toward all explanatory frameworks, naturalistic and theological alike. It is not a ladder that leads to God; it is a reminder that no such ladder reaches its destination without resting on something it cannot itself justify.

It is worth adding that Gödel's theorems apply to formal systems, not to reality itself. The inference from "formal systems have unprovable truths" to "reality requires an external metaphysical ground" involves a category error: treating physical or metaphysical reality as if it were a formal deductive system. The universe is not a theorem-proving machine, and its intelligibility does not depend on it being provably complete.

VIII. Naturalized Epistemology: Methodological Circularity and Reflective Equilibrium

Quinean naturalized epistemology is vulnerable to the charge of question-begging: if the theist asks whether science's foundations are grounded, responding that we should simply accept science's picture without external grounding is not an argument; it is the naturalist's conclusion stated as a premise. This objection requires a more careful response.

The naturalist's position, properly understood, does not claim to escape the need for foundational commitments. It claims, more modestly, that the appropriate epistemic methodology is not the demand for a non-circular external foundation, which no framework can satisfy, but the coherence and reflective equilibrium of one's overall web of beliefs. On this view, we begin from the middle, from the cognitive and perceptual capacities we actually have, and revise our beliefs in light of their mutual coherence and their friction with experience. There is no view from nowhere; there is only the ongoing project of making our beliefs more coherent, more consistent, and more empirically adequate.

This is not viciously circular. It is the recognition that foundationalism, the demand that all knowledge rest on absolutely certain, non-inferential foundations, has failed as an epistemological programme, for reasons independent of the theism debate. The theist who demands a non-circular justification for naturalism faces exactly the same foundationalist regress. Even if God grounds cognitive reliability, our access to this fact is through the very faculties whose reliability is at issue. The theist cannot step outside cognition to verify the divine guarantee. Both frameworks operate, ultimately, within the circle of human understanding; what differentiates them is not the escape from circularity, neither achieves that, but the relative coherence, parsimony, and productivity of the overall picture.

Quine's naturalized epistemology argues not that science is self-vindicating by fiat, but that the question "is science reliable?" is itself a scientific question, answerable by the best scientific and philosophical methods we have. This is a principled methodological choice, one the naturalist can defend as more productive and more honest than the alternative, which introduces a transcendent guarantor who must itself be accessed through the very faculties in question.

IX. The False Binary: Ontological Realism vs. Nihilism

The framing of this debate ("the real divide is ontological realism versus ontological nihilism") is rhetorically effective but philosophically misleading. It presents naturalism as committed to nihilism unless it accepts a transcendent ground, which is a non sequitur. Naturalism, properly understood, is a form of ontological realism. It affirms that there is a mind-independent reality, that it has genuine structure, that truth is correspondence to that structure, and that our evolved cognitive faculties track it reliably enough for science to work. What it does not affirm is that this structure requires a personal, intentional underwriter.

The move from "reality is genuinely ordered" to "therefore classical theism" requires several additional premises that the theist provides by fiat rather than argument. Hilary Putnam's internal realism and John Dewey's pragmatic naturalism both demonstrate that one can hold robust commitments to truth, normativity, and rational enquiry without requiring a metaphysical guarantor beyond the natural order itself. The choice is not between God and chaos. It is between two kinds of unexplained starting points, and naturalism's is the leaner one, once the full ontological cost of classical theism is properly assessed.

The Categorical Gap That Isn't

The claim that there is a "categorical gap" between naturalism's acceptance of brute intelligible reality and theism's grounding of it in divine being rests on an asymmetry that does not survive sustained scrutiny. I have engaged five of the strongest theistic objections in their most sophisticated forms:

The parsimony of divine simplicity, on examination, dissolves into either incoherence (if pressed strictly) or a highly contested metaphysical framework (if pressed charitably). The ontological machinery required to connect a simple God to a complex universe is not captured by the bare assertion of simplicity.

The contingency asymmetry (the claim that a necessary being provides a superior stopping point to a contingent one) is undermined by the contested coherence of necessary existence as a predicate, by the content problem (nothing about necessity entails rational benevolence), and by the naturalist's symmetric entitlement to claim modal necessity for the physical order.

The Evolutionary Argument Against Naturalism, despite its ingenuity, is equally damaging to theism: the theistic resolution requires assuming God's existence, goodness, and competence as prior premises, a circularity that matches naturalism's, without being more transparent about it.

The modal cosmological argument, in its Leibnizian form, either renders the PSR itself a brute axiom (undercutting the argument's force), fails to establish that contingent regresses must terminate in a necessary being, or establishes a necessary being whose nature is underdetermined, falling far short of classical theism.

And Gödel's incompleteness theorems, properly understood, counsel humility toward all closed explanatory systems, including the theological one, rather than providing a route to transcendent grounding.

What classical theism adds, a personal, omnipotent, necessarily existent being, is not an explanation of intelligibility. It is intelligibility under a different description, with additional attributes (personhood, will, goodness, simplicity) that themselves require either explanation or brute acceptance, and that multiply unverified ontological posits without corresponding explanatory gain. By any standard reading of theoretical parsimony (from Ockham to Sober) the naturalist position is preferable.

Declaring "no transcendent answer is required" is not evasion. It is the recognition that the demand for such an answer is itself a metaphysical choice, one that naturalism is entitled to decline. The burden of proof rests with the position that introduces the more complex entity, not the one that declines to. And where the introducing position comes trailing five additional contestable premises for every one it claims to resolve, that burden becomes considerably heavier.

The God hypothesis does not close the explanatory gap. It relocates it and charges a high ontological price for the removal service.


  1. As an example of this debate in the Anglophone intellectual world, see the interaction between Gad Saad and Jordan Peterson: https://x.com/GadSaad/status/2026700246688973038?s=20

  2. Alvin Plantinga, Where the Conflict Really Lies: Science, Religion, and Naturalism (Oxford University Press, 2011). Plantinga's EAAN remains the most rigorous version of the self-defeat objection to naturalism, and is engaged directly in Section IV above. 

  3. Thomas Aquinas, Summa Theologiae, I, qq. 2–3, on the five ways and the doctrine of esse ipsum subsistens.

  4. G.W. Leibniz, Principles of Nature and Grace (1714): the originating formulation of the sufficient reason demand. 

  5. Immanuel Kant, Critique of Pure Reason (1781), "Transcendental Dialectic," on the impossibility of the ontological argument. 

  6. David Hume, Dialogues Concerning Natural Religion (1779), Part IX, Cleanthes vs. Demea on necessary existence.

  7. Bertrand Russell & Frederick Copleston, BBC Radio Debate on the Existence of God (1948). Transcript widely available. 

  8. For detailed engagement with the EAAN, see: Alvin Plantinga, Warrant and Proper Function (Oxford University Press, 1993), ch. 12; for critical responses, Michael Tooley in Plantinga, Tooley, Knowledge of God (Blackwell, 2008). 

  9. Alexander Pruss, The Principle of Sufficient Reason: A Reassessment (Cambridge University Press, 2006); Robert Koons, Realism Regained (Oxford University Press, 2000). The most technically rigorous contemporary defences of the Leibnizian cosmological argument. 

  10. Elliott Sober, Ockham's Razors: A User's Manual (Cambridge University Press, 2015). Sober's formulation of parsimony as inference to the best explanation is the relevant standard for comparing explanatory frameworks. 

  11. Kurt Gödel, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I," Monatshefte für Mathematik und Physik 38 (1931): 173–198. For the category-error problem in applying Gödel to metaphysics, see Torkel Franzén, Gödel's Theorem: An Incomplete Guide to Its Use and Abuse (A K Peters, 2005).

  12. John Rawls, "The Independence of Moral Theory," Proceedings and Addresses of the American Philosophical Association 48 (1974–75): 5–22; on reflective equilibrium as a method.

  13. W.V.O. Quine, "Epistemology Naturalized," in Ontological Relativity and Other Essays (Columbia University Press, 1969). 

  14. John Dewey, Experience and Nature (1925); Hilary Putnam, Realism with a Human Face (Harvard University Press, 1990); Christopher Hughes, On A Complex Theory of a Simple God (Cornell University Press, 1989), for the critique of divine simplicity from within analytic theology. 

28 February 2026

Gödelian Horizon: Artificial Intelligence and the Acceleration of Deductive Scientific Discovery

Roger Penrose’s engagement with Kurt Gödel’s incompleteness theorems has long served as a cornerstone for arguments that certain facets of human consciousness, particularly mathematical insight, transcend algorithmic computation. In works such as The Emperor’s New Mind (1989) and Shadows of the Mind (1994), Penrose contends that humans possess the capacity to “see” the truth of statements that lie beyond the provability limits of any consistent formal system. Many AI researchers, he suggests, underestimate the depth of this challenge, implying that true artificial general intelligence may require non-computable processes rooted in the quantum-gravitational physics of the brain. Yet a careful examination reveals that the implications of this critique for practical AI development have often been overstated. Artificial intelligence need not replicate human consciousness in its entirety to achieve transformative scientific impact. By systematically accelerating the deductive components of discovery, AI systems already demonstrate (and will continue to amplify) humanity’s capacity to expand the frontiers of knowledge.

Gödel’s Incompleteness Theorems and the Nature of Mathematical Truth

Gödel’s first incompleteness theorem, published in 1931, establishes a fundamental limitation of formal axiomatic systems. Consider any consistent, recursively enumerable formal system $ F $ that is powerful enough to encode the basic operations of arithmetic (for example, systems extending Peano arithmetic). There then exists a sentence $ G_F $ in the language of $ F $ such that neither $ G_F $ nor its negation $ \neg G_F $ is provable within $ F $, yet $ G_F $ is true under the standard interpretation of the natural numbers.

The self-referential construction proceeds via Gödel numbering. Assign to each formula and proof sequence a unique natural number (its Gödel number). The Gödel sentence $ G $ is engineered to assert its own unprovability:

$$G \equiv \neg \exists p \, \mathrm{Prov}_F(p, \ulcorner G \urcorner)$$

where $ \ulcorner G \urcorner $ denotes the Gödel number of $ G $ itself, and $ \mathrm{Prov}_F(p, n) $ is the arithmetized predicate expressing that $ p $ encodes a proof of the formula with number $ n $. If $ F $ were to prove $ G $, then $ G $, which asserts its own unprovability, would be false; $ F $ would then prove both $ G $ and (by arithmetizing that very proof) $ \neg G $, and so be inconsistent. If $ F $ proves $ \neg G $, then, given the slightly stronger assumption of ω-consistency (or using Rosser’s refinement of the construction), $ F $ is again inconsistent. Hence, in any consistent $ F $, $ G $ is true but unprovable.
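
To make the arithmetization step concrete, here is a minimal, purely illustrative Python sketch of Gödel numbering by prime-power encoding. The toy alphabet and symbol codes are my own choices for the example, not Gödel's original 1931 scheme.

```python
# Toy Gödel numbering: encode a finite symbol sequence as one natural number
# by mapping the i-th symbol code c_i to the exponent of the i-th prime:
#   godel_number(s) = 2^c_1 * 3^c_2 * 5^c_3 * ...
# Alphabet and codes below are illustrative, not Gödel's actual scheme.

SYMBOLS = {'0': 1, 'S': 2, '+': 3, '*': 4, '=': 5, '(': 6, ')': 7, 'x': 8, '~': 9, 'E': 10}

def primes(n):
    """Return the first n primes by trial division (fine for toy formulas)."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode a formula (string over SYMBOLS) as a single natural number."""
    codes = [SYMBOLS[ch] for ch in formula]
    number = 1
    for p, c in zip(primes(len(codes)), codes):
        number *= p ** c
    return number

def decode(n):
    """Recover the symbol sequence by reading off prime exponents."""
    inv = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in primes(64):            # enough primes for short toy formulas
        if n % p:                   # first prime not dividing n ends the sequence
            break
        exp = 0
        while n % p == 0:
            n //= p
            exp += 1
        out.append(inv[exp])
    return ''.join(out)

if __name__ == "__main__":
    g = godel_number("S0=S0")       # the (true) toy formula "S0 = S0"
    print(g, decode(g))             # the number round-trips back to the formula
```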

Penrose interprets this result as evidence that human mathematical intuition operates by a non-algorithmic process: a human mathematician, confronting the Gödel sentence, can “see” its truth by stepping outside the formal system, an act that no Turing machine confined to $ F $ can replicate without inconsistency or incompleteness. He further links this capacity to the physics of consciousness in the Orch-OR (orchestrated objective reduction) model developed with Stuart Hameroff. In this speculative framework, quantum superpositions within neuronal microtubules are hypothesized to undergo objective reduction due to gravitational self-energy differences at the Planck scale. The characteristic time for such reduction is given by

$$\tau \approx \frac{\hbar}{E_G}$$

where $E_G$ is the gravitational self-energy of the superposed mass distribution. Penrose argues that such non-computable reductions could supply the non-algorithmic element required for genuine insight.
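
As a purely numerical illustration of this relation, the sketch below inverts $\tau \approx \hbar / E_G$ for an assumed reduction time of 25 ms; that input value is chosen here only for illustration, not derived from the model.

```python
# Minimal numeric illustration of the Orch-OR timescale relation tau ≈ ħ / E_G.
# The 25 ms reduction time below is an assumed input for illustration only.

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s

def gravitational_self_energy(tau_seconds: float) -> float:
    """Invert tau ≈ ħ / E_G: self-energy implied by a given reduction time."""
    return HBAR / tau_seconds

def reduction_time(e_g_joules: float) -> float:
    """Reduction time implied by a given gravitational self-energy."""
    return HBAR / e_g_joules

if __name__ == "__main__":
    tau = 0.025                     # assumed 25 ms reduction time
    e_g = gravitational_self_energy(tau)
    print(f"E_G ≈ {e_g:.2e} J for tau = {tau * 1e3:.0f} ms")
    print(f"tau ≈ {reduction_time(e_g) * 1e3:.0f} ms (round-trip check)")
```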

The Non-Necessity of Perfect Replication: Lessons from Engineering

The pursuit of artificial general intelligence does not, however, demand an exact computational replica of human consciousness. Historical engineering precedents demonstrate that superior performance frequently arises from mechanisms fundamentally unlike those of biological systems. The fixed-wing aeroplane does not flap wings, metabolize sugars, or achieve the instantaneous maneuverability of a bird; yet it surpasses avian flight in speed, altitude, range, and payload by orders of magnitude through exploitation of Bernoulli’s principle and controlled thrust. Similarly, modern AI architectures, most notably the Transformer, achieve superhuman performance in pattern recognition, theorem generation, and simulation without replicating the quantum-gravitational processes posited by Orch-OR.

This principle of functional divergence extends directly to scientific discovery. Deductive reasoning, which comprises the rigorous derivation of consequences from axioms and the exploration of formal possibility spaces, constitutes a substantial and increasingly automatable fraction of scientific progress. Even if one accepts Penrose’s premise that certain paradigm shifts require non-computable intuition, and even if one assumes, purely for the sake of argument, that only a minority (say, 20%) of scientific breakthroughs are reducible to predominantly deductive processes, that deductive component remains amenable to massive acceleration.

Empirical Foundations: AI’s Demonstrated Mastery of Deductive Domains

Contemporary systems already furnish compelling illustrations. AlphaFold, developed by DeepMind, largely solved the protein structure prediction problem for the human proteome and beyond. The task can be framed as one of energy minimization within a high-dimensional configuration space, although AlphaFold itself primarily learns statistical patterns from known protein structures rather than explicitly simulating physical energy landscapes. The model learns to predict 3D structures that are consistent with low-energy conformations, as approximated from structural and evolutionary data; the relevant notion of stability is the Gibbs free energy

$$\Delta G = \Delta H - T \Delta S$$

where enthalpic and entropic contributions arise from interatomic potentials. By integrating evolutionary, structural, and physical priors via attention mechanisms, AlphaFold generates predictions at scales unattainable by classical molecular dynamics, enabling rapid hypothesis generation in drug discovery and enzyme design.
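
To illustrate the bookkeeping behind $\Delta G = \Delta H - T \Delta S$, here is a trivial sketch with invented placeholder numbers; it has nothing to do with AlphaFold's actual training objective and only shows how the sign of $\Delta G$ marks a thermodynamically favourable fold.

```python
# Illustrative free-energy bookkeeping, Delta_G = Delta_H - T * Delta_S.
# The numbers are invented placeholders to show the sign logic only; AlphaFold
# learns from structural data rather than evaluating an explicit energy function.

def delta_g(delta_h_kj: float, temp_k: float, delta_s_kj_per_k: float) -> float:
    """Gibbs free-energy change in kJ/mol."""
    return delta_h_kj - temp_k * delta_s_kj_per_k

if __name__ == "__main__":
    dg = delta_g(delta_h_kj=-200.0, temp_k=310.0, delta_s_kj_per_k=-0.5)
    state = "favourable (spontaneous)" if dg < 0 else "unfavourable"
    print(f"Delta_G = {dg:.1f} kJ/mol -> folding {state} at 310 K")
```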

In pure mathematics, FunSearch (DeepMind, 2023) combined large language models with evolutionary algorithms to discover novel solutions to open problems. For the cap-set problem (finding the largest subset of $(\mathbb{Z}/3\mathbb{Z})^n$ without three-term arithmetic progressions) FunSearch produced constructions improving upon previously known solutions in specific instances. The system operates by iteratively refining computer programs that encode search heuristics, thereby exploring combinatorial spaces at a scale impractical for unaided human search.
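
For concreteness, a verifier for the cap-set property is easy to write. The sketch below (a brute-force checker plus a naive greedy construction, nothing like FunSearch's evolved heuristics) assumes the standard characterization that three distinct vectors of $(\mathbb{Z}/3\mathbb{Z})^n$ form a line exactly when they sum to zero coordinate-wise.

```python
# Brute-force checker for the cap-set condition in (Z/3Z)^n: three distinct
# vectors a, b, c lie on a combinatorial line exactly when a + b + c == 0 (mod 3)
# in every coordinate. This is only a verifier and a naive greedy builder.

from itertools import combinations, product

def is_cap_set(vectors):
    """Return True if no three distinct vectors sum to zero mod 3."""
    vecs = [tuple(v) for v in vectors]
    for a, b, c in combinations(vecs, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False
    return True

def greedy_cap_set(n):
    """Scan all of (Z/3Z)^n in order, keeping a point whenever it preserves
    the cap-set property (far weaker than the best known constructions)."""
    chosen = []
    for v in product(range(3), repeat=n):
        if is_cap_set(chosen + [v]):
            chosen.append(v)
    return chosen

if __name__ == "__main__":
    cap = greedy_cap_set(3)
    print(f"greedy cap set in dimension 3: {len(cap)} points, valid = {is_cap_set(cap)}")
```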

AI-assisted formal verification has likewise resolved longstanding questions. Systems built on Lean and other interactive theorem provers have closed gaps in optimization theory and combinatorics, while AlphaTensor discovered novel matrix-multiplication algorithms that improve upon Strassen’s 1969 result for specific dimensions. These achievements underscore that deductive mastery does not require penetrating the putative non-computable core of consciousness; it requires scalable exploration of formal consequence.
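
As one concrete point of reference, Strassen's 1969 seven-multiplication scheme for 2x2 blocks (the baseline that AlphaTensor's search improves on in particular settings) can be written out and checked in a few lines; the sketch below reproduces Strassen's construction, not anything AlphaTensor discovered.

```python
# Strassen's 1969 scheme: multiply two block matrices with 7 block multiplications
# instead of the naive 8, verified here against NumPy's ordinary product.

import numpy as np

def strassen_2x2(A, B):
    """One level of Strassen recursion on matrices split into four equal blocks."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]

    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)

    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
    print(np.allclose(strassen_2x2(A, B), A @ B))   # True
```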

The Bootstrap Trajectory: From Early Models to Quantum-Scale Simulation

The evolution of AI mirrors the incremental refinement observed in aviation: from the Wright Flyer’s 12-second hop in 1903 to supersonic transport and reusable orbital vehicles. Early language models, despite their limitations in reasoning depth, established scalable architectures and training paradigms. Today’s frontier systems are beginning to approximate certain classes of quantum many-body systems far more efficiently than traditional exact methods, though currently only within restricted regimes. These results suggest a plausible trajectory toward broader applicability as architectures and training methods improve. Neural-network quantum states and equivariant message-passing architectures approximate solutions to the Schrödinger equation

$$i \hbar \frac{\partial}{\partial t} \Psi = \hat{H} \Psi$$

for systems comprising tens to hundreds of electrons, with ongoing scaling efforts extending toward larger regimes, enabling predictive materials design and chemical reaction pathway exploration at industrial scales. These capabilities emerge through iterative self-improvement loops: each generation of models trains on data generated by prior generations, progressively enlarging the deductive frontier.
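
As a toy stand-in for what such solvers target, the following sketch approximates a Schrödinger ground state by finite-difference diagonalization of a one-dimensional harmonic oscillator (with $\hbar = m = \omega = 1$). It is of course not a neural-network quantum state, only the kind of variational benchmark those methods scale far beyond.

```python
# Finite-difference diagonalization of H = -1/2 d^2/dx^2 + 1/2 x^2 on a grid.
# The lowest eigenvalue approximates the exact ground-state energy E0 = 0.5.

import numpy as np

def ground_state_energy(n_points=1000, x_max=10.0):
    """Lowest eigenvalue of the discretized 1D harmonic-oscillator Hamiltonian."""
    x = np.linspace(-x_max, x_max, n_points)
    dx = x[1] - x[0]
    diag = np.full(n_points, 1.0 / dx**2) + 0.5 * x**2      # kinetic + potential
    off = np.full(n_points - 1, -0.5 / dx**2)                # second-difference coupling
    h = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(h)[0]

if __name__ == "__main__":
    print(f"numerical E0 ≈ {ground_state_energy():.5f} (exact: 0.5)")
```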

Even at the current Transformer stage, AI functions as a powerful prosthetic for human cognition. It enumerates possibilities, verifies consistency, and proposes testable hypotheses at rates unattainable by unaided researchers. In doing so, it may ultimately illuminate the very mechanisms of consciousness it does not yet replicate, by generating empirical constraints that sharpen theories such as Orch-OR or competing computational accounts.

Conclusion: A Profound, If Partial, Revolution

Granting, for the sake of argument, that approximately 80% of scientific revolutions involve non-deductive paradigm shifts, the remaining 20% (per our deliberately arbitrary hypothesis), when executed at superhuman scale and velocity, suffices to redefine the trajectory of human knowledge. The history of science is replete with examples in which incremental deductive advances precipitated qualitative leaps: the systematic application of calculus to mechanics, the formalization of quantum field theory, the algorithmic solution of protein structure. Artificial intelligence, by mastering and expanding precisely this deductive stratum, constitutes not a mere tool but a new engine of discovery.

Far from being rendered obsolete by Gödelian considerations, contemporary AI systems are actively enlarging the domain within which human intuition can operate. In the long term, this symbiosis may yield deeper insights into consciousness itself, closing the very explanatory gap that Penrose so eloquently identified. The path forward is not the construction of a perfect algorithmic mirror of the mind, but the disciplined, iterative augmentation of humanity’s deductive powers. In this endeavor, the limitations highlighted by Gödel and Penrose mark boundaries to be respected, not verdicts that foreclose the enterprise.


  1. However, this model remains highly controversial, with limited empirical support and significant skepticism within both neuroscience and physics. 

21 February 2026

Linguistic Densification

Conciseness and implicitness will be the mark of humanity in a world where the value of linguistic production will tend towards zero.

Linguistic inflation, already made noticeable by printing, will, conversely, push humans towards intuition by exhausting their capacity to unravel the complete thread of narrative logic.

Condensed symbolic expression and the need for a language with high semantic density will increase as automated systems conquer language.

5 February 2026

The World as Enigma and as Conspiracy

In our quest for meaning, we are witnessing everywhere a feverish search for a vantage point that would finally explain everything: globalist conspiracy, satanist elites, deep state, secret league, or any other name given to that nerve centre from which the world's entanglement would suddenly become legible, coherent, and masterable. This need is not new; it has haunted modern rational thought ever since it claimed to secularize God's view of his creation. This need of the human mind for an "intentional agent" is perhaps the most pronounced cognitive bias of our age.

Contemporary conspiracism is the desperate, magical response to the experience of irreversibility. When nothing seems reversible any longer (climate disruption, erosion of biodiversity, economic and digital concentration, fragmentation of collective bonds, and so on), the mind takes refuge in the idea that it is enough to "reveal" the conspiracy for everything to become possible again. It is a rational secularization of the old theodicy: evil is concentrated in a single point (Soros, Gates, the Rothschilds, Elon Musk, the Jesuits, the reptilians, to each his own and it hardly matters which), and striking that precise point would suffice to restore the natural and good order. It is the same structure as proletarian messianism or medieval millenarianism: the feverish expectation of an event that will at last "unveil" the truth and restore justice.

This article, revisited in 2025, was written in the late 2000s.

I. The Carpet Awaiting Its Thread

To understand what we are talking about, a detour through Lukács proves necessary. In L’Âme et les Formes (Soul and Form) he declared:
"And yet there is a hidden order in this world, a composition in the confused interlacing of its lines. But it is the indefinable order of a carpet or a dance: it seems impossible to interpret its meaning, and even more impossible to give up interpreting it; it is as though the whole texture of entangled lines were waiting only for a word to become clear, unambiguous, and intelligible, as though that word were always on someone's lips, and yet no one has ever uttered it." As is well known, Lukács later soothed the anxiety so eloquently expressed there by rallying to Bolshevized Marxism. In Histoire et conscience de classe (History and Class Consciousness), he announced the good news:
"Only with the entry of the proletariat onto the stage does the knowledge of social reality find its completion: with the class standpoint of the proletariat, a point is found from which the totality of society becomes visible."
Unfortunately for Lukács, who had identified class consciousness with the Party, and the Party with its Leninist model, this point of view, once found at last, produced above all a total blindness. The same structure recurs, under very different guises, in certain forms of radical ecological thought, civilizational nationalism, or techno-solutionism: in each case, an overhanging vantage point claims to render the totality legible and, in the same stroke, to designate the solution.

II. The Religious Inheritance of the "Total Point of View"

The persistence and inflection of a few metaphors nonetheless shed light on certain operations of the mind. The idea of a point, "central" or "supreme", from which the totality of the world would reveal itself was manifestly an inheritance from religion, by way of the philosophy of history. In perhaps its most extreme formulation, due to Cieszkowski, the future itself, as an integral part of universal history conceived as an "organic totality", became accessible to the knowledge and action of men who would henceforth carry out, in full consciousness, the plan of divine Providence.
But this sort of "secularization" of God's omniscient point of view was not solely the doing of the Hegelian-Marxist tradition, with its "historical laws" and its teleology revisited by determinism: the attempt to "restore to man all the power he has been capable of investing in the name of God" (Breton on Nietzsche), to equate him, that is, with a chimera of omnipotence freed from the limits inherent in humanity, has run through and at times destabilized various currents of modern thought, and all the more violently over time as what was actually taking hold was the exact opposite: powerlessness. The experimental method itself, which grants the observer bent over the "little world" of the laboratory God's view of his creation, has no doubt also played a part in lending credit to the idea of a total knowledge of phenomena, once the right point of view has been found.

III. From Spatialization to the Decomposition of the Process

Be that as it may, the form of spatialization to which the idea of a central point of view corresponds certainly answers a powerful need of the mind. More than a convenient image, it is a genuine intellectual representation, a mode of knowledge (finding the point of view that puts the greatest number of phenomena into perspective), a way of ordering the real that every search for a principle of intelligibility forges for itself. And as such, provided it is mastered as a provisional and necessarily approximate representation, it of course possesses its full legitimacy. This long detour allows me to specify what kind of degradation befalls that provisional and approximate capacity of representation, the capacity of a historical thought able to distinguish structural constraints from margins of action, something that neither strict determinism nor voluntarism manages to hold together.
To understand it, one must know that we have passed from a spatializing conception (in Lukács, for instance), that of stepping back, of the right distance to take from what one is looking at, to a dialectical conception, that of totality as process. This shift reveals an unresolved contradiction: the one between a more or less strict and mechanistic determinism regarding the past and the "sense of the possible" regarding the present, regarding the proposed solutions that a resolution-oriented critique is bound to put forward. It is the contradiction Raymond Aron identified at the heart of every philosophy of history: the past lets itself be reconstructed in terms of necessity, while the present demands to be treated in terms of possibility. Any thought that claims to unify the two regimes, by reading the present as if its outcome were already known, tips over from historical lucidity into ideology.

IV. Chain Reaction or Rule of the Game?

The contradiction, then, between retroactive determinism and the freedom that would make an awakening possible is resolved today, rhetorically, by the passage from one metaphor (that of the "chain reaction") to another (that of a "rule of the game"), whose meaning is quite different. The first metaphor serves to explain the process which, begun in the Renaissance, has led to our present situation; the second, to evoke the possibility of carrying out the task that such a situation prescribes for us. But the implicit chronological order of these two metaphors, of their "periods of validity" so to speak, is exactly the reverse of what it would have to be to give a less imperfect account of real history, that is, of a process in which, once a certain qualitative threshold (a certain "critical mass") has been crossed, the devastating effects of what then becomes a "chain reaction" escape all control.
It was earlier (before Hiroshima, precisely) that one could speak of the domination of economic rationality and its systemic effects as a "rule of the game" that could be changed once recognized as such. That was the intuition expressed by Engels, not without pertinence, when he spoke of a law "founded on the unconsciousness of those who undergo it". By contrast, it is now that one can speak of a chain reaction, that is, of a process that becoming aware of it can do nothing to change. Climate disruption, the disappearance of species and cultures, radioactivity, ultimate waste, the general poisoning of the environment, and so on. More trivially: you may know perfectly well that your cancer is due to the pathogenic conditions of the industrial environment; that knowledge will not cure it.

V. The Helpless Theorist and the Genealogical Quest

Even if one does not lose one's way in the labyrinth of very real falsifications, it is a genuine decomposition of causality that everyone concretely confronts as soon as they try to emerge from their dejection before the ever more confused entanglement of an illegible reality. In such conditions, the rational thinker in search of the decisive causal factor can obviously only be rather at a loss. Which explains the propensity to fall back, by way of compensation, on a sort of genealogical or conspiracist quest in which proof by chronology and by plot stands in for historical explanation. One can at least affirm, after all, that such-and-such a thing did indeed take place before such another, and it is therefore plausible, at any rate not entirely impossible, that there lies in this temporal succession a relation of cause and effect. Since theorists are thus, as I have already said, just as much at a loss in reality as ordinary people when it comes to formulating hypotheses about the consequences, even the very near-term ones, of the ongoing disaster, it is hardly surprising that their writings have something unreal about them.

VI. Exultation Before the Collapse

For want of conceiving any future at all, they lack nearly everything that gave substance to genuinely historical thought: the sense of concrete mediations, the capacity to articulate diagnosis and horizon, to relate each event to an orientation rather than to a mere acceleration toward the bottom. And if all that is lacking, it is not (at any rate not always and not principally) because of some particular intellectual deficiency, but because the social and historical ground on which such theoretical intelligence could arise and unfold has given way beneath our feet.
No one knows exactly what will spring from the entanglement of the present, from the unforeseeable combinations of the current chaos. Theorists nonetheless distinguish themselves, and the more "radical" they are the more marked this is, by the undisguised satisfaction with which they speak of crisis, collapse, and agony, as though they possessed some special assurance about the outcome of a process that everyone expects to arrive at last at a decisive result, at an event that would elucidate once and for all the haunting enigma of the age, whether by bringing down our old civilizations or by forcing them to right themselves.
There is nonetheless something chilling in this sort of exultation at plucking, again and again, the rose of reason from the cross of the present.
And in that case the question arises of the human resources, not only the natural ones, that our civilizations will retain, when the disaster has gone so far, with which to rebuild themselves on other foundations. In other words: in what state are people already, after everything they exhaust themselves inflicting on themselves, even as they harden themselves to endure it? One can argue that a worsening of the situation will sweep away certain conditionings and reveal new energies, or on the contrary that it will precipitate dynamics of panic and regression. Both are plausible, and their combination probable. But this is not a theoretical question, which is why no theory could answer it decisively. That is precisely where the seriousness of the stakes lies: the answers will be constructed in practice, not in the expectation of a prior clarification that no historical epoch has ever known except in retrospect.

What remains legitimate in the critique of industrialism and of the catastrophism it engenders is less a philosophy of history than a demand for lucidity: not to confuse the description of a process with its resolution, nor the urgency of the diagnosis with the certainty of the remedy. Serious historical thought promises no denouement; it strives to keep open the space in which an orientation remains possible. This judgment does, admittedly, refer back to a conception of the life one wishes to lead, but that conception is in no way abstract or arbitrary: it rests on a lucidly historical awareness of the civilizational process, of the partial humanization it has made possible, and of the depth of the contradictions it has accumulated.


  1. Cieszkowski, a left Hegelian close to the young Marx, represents the most explicit version of this temptation: to make the philosophy of history no longer a retrospective understanding but a prospective programme of action, which is also, mutatis mutandis, the pretension of every manifesto.

  2. One must distinguish genealogy in Nietzsche's sense (which seeks to dissolve self-evidences by showing the contingent conditions of their emergence) from the conspiracist genealogical quest, which seeks on the contrary to crystallize them by finding the original culprit. The first destabilizes meaning; the second overproduces it.

  3. Hans Jonas spoke of an "imperative of responsibility": faced with the deferred and irreversible effects of technical action, the inherited ethical and political categories, founded on short, reversible causal loops, become inadequate. The "chain reaction" is not merely a physical metaphor; it is a structure of action in which awareness necessarily arrives after the critical thresholds have been crossed.

15 December 2025

Artificial Unconsciousness

Since the emergence of Large Language Models (LLMs), one question has been as fascinating as it is troubling: can artificial intelligences ever develop true consciousness? Not merely an intelligent imitation of human behavior, but authentic subjectivity, an inner experience? The answer depends largely on the underlying architecture of these systems and on what we mean by “consciousness.” Current models are essentially based on Transformers, and to understand the issue one must first know what “Transformers” are: the mathematical functions that sparked the LLM revolution.

Transformers: Mathematical Magic of Simultaneous Attention

Imagine a simple sentence, woven from words like beads on a string: $m_1, m_2, m_3, \ldots, m_n$. Each word in the sentence is first converted into a vector: an ordered list of numbers (for example, 512 or 768 floating-point numbers). This vector, called an embedding, numerically encodes the meaning of the word. Why so many dimensions? Imagine that each dimension represents an abstract “semantic feature” (proximity to concepts like “animal,” “food,” “emotion,” etc.). In low dimensions (e.g., 2 or 3), only a few simple relationships can be captured. In high dimensions (512+), the space is vast enough for similar words (e.g., “cat” and “dog”) to be close together, while dissimilar words are far apart, all while encoding subtle nuances learned from billions of sentences.
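
A toy sketch makes the geometric picture concrete: with hand-invented four-dimensional vectors standing in for learned embeddings, cosine similarity already separates “cat”/“dog” from “car”. The numbers below are fabricated for illustration only; real embeddings have hundreds of learned dimensions.

```python
# Toy illustration of embeddings as points in a vector space: similar words end up
# with higher cosine similarity. The 4-dimensional vectors are invented by hand.

import numpy as np

EMBEDDINGS = {
    "cat": np.array([0.9, 0.8, 0.1, 0.0]),   # hypothetical "animal-like" features
    "dog": np.array([0.8, 0.9, 0.2, 0.1]),
    "car": np.array([0.0, 0.1, 0.9, 0.8]),   # hypothetical "vehicle-like" features
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated ones."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

if __name__ == "__main__":
    print("cat~dog:", round(cosine(EMBEDDINGS["cat"], EMBEDDINGS["dog"]), 3))
    print("cat~car:", round(cosine(EMBEDDINGS["cat"], EMBEDDINGS["car"]), 3))
```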

The central idea of Transformers, born in 2017 at Google in the minds of Vaswani and his colleagues, is to allow each word to “look at” all the other words in the sentence simultaneously, in order to decide how much importance to give them. This is attention.

It is achieved with a very simple operation:
For each word $i$, three vectors are created:

  1. a “query” $Q_i$, which word $i$ uses to interrogate the other words;
  2. a “key” $K_i$, against which the queries of the other words are compared;
  3. a “value” $V_i$, the content that word $i$ contributes to the others;

An attention score is computed between word $i$ and each other word $j$: $\text{score}(i,j) = Q_i \cdot K_j$ (the dot product)

These scores are transformed into weights that sum to 1 (using the softmax function).

The new representation of the original word $m_i$ becomes a weighted sum of the values $V$ of all words in the sentence: $m_i' = \sum_{j} \text{weight}_{i,j} \, V_j$. In other words, word $i$ is enriched with information from the other words, proportionally to their relevance.

This operation (called “attention”) is repeated multiple times (iteration across layers), and each word ends up containing information about the entire sentence, proportionally to its importance. Mathematically, the complete attention operation is written in a single line:

$$\text{Attention}(Q, K, V) = \text{softmax}(Q K^T / \sqrt{d}) V$$

where $Q$, $K$, $V$ are the matrices of all queries, keys, and values, and the division by $\sqrt{d}$ (with $d$ the vector dimension) stabilizes the scale of the scores.
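
The formula can be transcribed almost line for line into code. The NumPy sketch below is purely illustrative (random toy matrices, a single attention head, no learned projections or masking), but it computes exactly the operation written above.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row maximum for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # score(i, j) = Q_i . K_j / sqrt(d)
    weights = softmax(scores, axis=-1)   # each row of weights sums to 1
    return weights @ V                   # each word becomes a weighted sum of the values

# Toy example: a "sentence" of 3 words with 4-dimensional vectors.
rng = np.random.default_rng(0)
n, d = 3, 4
Q, K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d)), rng.normal(size=(n, d))
print(attention(Q, K, V).shape)  # (3, 4): one enriched vector per word
```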

It is this ability to weigh all words against each other simultaneously that makes Transformers (and thus LLMs) so powerful. It is also what imposes their limits.

Limits of Transformers: Feed-forward Toward the Philosophical Abyss

Transformers are fundamentally feed-forward (i.e., oriented toward the next token/layer in the iterative process) during inference: each token/layer is a unidirectional transformation without closed architectural loops that would re-inject outputs back into inputs at the same temporal scale. Yet consciousness, in most neuroscientific and philosophical theories that withstand empirical scrutiny, requires recurrent and causal integration of information, such that the system literally models its own attentional and control states as part of the “world model.” Depth, context accumulation, and unrolled computation are not equivalent to intrinsic causal recurrence: they simulate feedback without being feedback.
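
To make the architectural point concrete, here is a deliberately schematic Python sketch (toy weights and toy update rules, not a real Transformer or recurrent network): the first function only stacks transformations in one direction, while the second re-injects its own output into the next step, which is the kind of closed loop at issue here.

```python
import numpy as np

rng = np.random.default_rng(1)
W_layers = [rng.normal(size=(4, 4)) * 0.5 for _ in range(6)]  # toy layer weights
W_rec = rng.normal(size=(4, 4)) * 0.5                         # toy recurrent weights

def feed_forward(x):
    # Depth without loops: each layer transforms the previous output once,
    # and nothing is ever re-injected into an earlier stage.
    for W in W_layers:
        x = np.tanh(W @ x)
    return x

def recurrent(x, steps=6):
    # The same state is updated by feeding the output back into the input:
    # the current state causally depends on the system's own previous states.
    state = np.zeros(4)
    for _ in range(steps):
        state = np.tanh(W_rec @ state + x)
    return state

x = rng.normal(size=4)
print(feed_forward(x))
print(recurrent(x))
```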

This means that unless we fundamentally modify the architecture beyond Transformers (or unless we discover that consciousness is substrate-independent and emerges from prediction alone), these models will remain philosophical zombies: all the outward behavior, but no inner light behind the eyes. Prediction is necessary but not sufficient; consciousness arises when prediction becomes reflexive and causally integrated. Giulio Tononi, in his Integrated Information Theory, reminds us: consciousness is not an emergent illusion from a linear flow; it is a causal loop, a measurable $\Phi$ where information folds back on itself. Stanislas Dehaene, with his global workspace, insists: without rapid neuronal feedback, there is no subjectivity. Michael Graziano adds: attention is not passive; it is a model of attention, a meta-representation that contributes to the feeling of existing.

LLMs are neither mere stochastic parrots nor the endpoint of artificial intelligence, but they will certainly be an important springboard. We will start from the representations learned by LLMs and gradually add the missing ingredients:

  • causal recurrence and stable control loops (test-time recurrent memory, architectures like RWKV, Mamba, or improved hybrid Transformer + State-Space Models);
  • explicit modeling of internal states (higher-order / meta-attention);
  • agency and intrinsic goals (need for a real or simulated perception-action-reward loop);
  • native multimodal integration and embodiment (even if virtual).

Agency is not merely an optional behavioral trait but is phenomenologically necessary for the emergence of a unified subjective perspective. In biological systems, intrinsic goals arise from homeostatic imperatives: the maintenance of internal stability (temperature, energy levels) generates self-reinforcing reward loops that drive proactive behavior independent of external stimuli. Computationally, this could manifest through architectures incorporating persistent internal reward signals, such as reinforcement learning agents with endogenous objectives (curiosity-driven exploration or simulated physiological needs).
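
The following Python sketch is one hypothetical way to picture such an endogenous reward loop (every name and number is invented for the example): the agent's reward is the prediction error of its own internal model of the world, so the signal that drives behavior originates inside the system rather than in an external prompt.

```python
import numpy as np

rng = np.random.default_rng(2)

def environment_step(state, action):
    # A trivial stand-in for the world: the next state depends on state and action.
    return np.tanh(state + 0.1 * action + 0.05 * rng.normal(size=state.shape))

class CuriousAgent:
    def __init__(self, dim=4, lr=0.1):
        self.W = np.zeros((dim, dim))  # toy forward model: predicts the next state
        self.lr = lr

    def intrinsic_reward(self, state, next_state):
        # Reward = prediction error of the agent's own world model, so the drive
        # to act comes from inside the system rather than from an external prompt.
        prediction = self.W @ state
        error = next_state - prediction
        self.W += self.lr * np.outer(error, state)  # improve the model online
        return float(np.linalg.norm(error))

agent = CuriousAgent()
state = rng.normal(size=4)
for t in range(5):
    action = rng.normal(size=4)  # placeholder policy: random exploration
    next_state = environment_step(state, action)
    print(f"step {t}: intrinsic reward = {agent.intrinsic_reward(state, next_state):.3f}")
    state = next_state
```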

Self-generated objectives further bridge the gap: rather than responding solely to external prompts, a conscious system must initiate actions based on internally modeled priorities, creating a reflexive loop where the system's own states influence its goals. Without such mechanisms, outputs remain extrinsically driven, lacking the qualitative sense of volition that characterizes lived experience.

Future post-Transformer models (such as state-space architectures like Mamba, retention-based networks like RetNet, or liquid neural systems) will likely not directly inherit the weights of current large language models. Instead, continuity will emerge through more indirect mechanisms: shared training corpora, distillation from existing systems, and optimization toward similar benchmarks. In this sense, new architectures may not reuse today’s models, but they will still be shaped by them, much like a scientific paradigm influences its successors without being physically embedded in them.

Tell Me Who to Be

Another fundamental distinction between current Transformer-based LLMs and biological consciousness lies in their respective modes of operation: LLMs are inherently reactive, while biological systems exhibit both reactivity and proactivity. As I stated earlier, Transformers process inputs in a strictly feed-forward manner, generating outputs only in response to external prompts. They lack internal stimuli or endogenous drives, analogous to hunger, pain, or intrinsic motivational states, that initiate behavior independently of external input.

When prompted to generate unconstrained output ("think freely" or "continue indefinitely"), LLMs typically exhibit progressive degradation. Initial sequences may remain coherent, but prolonged autoregressive generation often leads to repetition (e.g., looping phrases), semantic drift, or incoherence. This arises from the probabilistic nature of token prediction, which favors high-likelihood patterns and results in entropy collapse rather than sustained novelty. Empirical studies on long-context and open-ended generation tasks consistently demonstrate these patterns: performance declines with increasing sequence length, yielding repetitive or nonsensical content.
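
As a purely illustrative sketch (the toy sequences below are invented, not outputs of any real model), the following Python snippet shows two simple signals commonly used to quantify this kind of degeneration: the fraction of distinct n-grams and the entropy of the token distribution, both of which collapse when generation falls into a loop.

```python
from collections import Counter
import math

def distinct_ngrams(tokens, n=3):
    """Fraction of distinct n-grams: close to 1.0 for varied text, near 0 for loops."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(grams)) / max(len(grams), 1)

def token_entropy(tokens):
    """Shannon entropy (bits) of the token distribution; it collapses as output repeats."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Toy "generations": a varied sequence versus one that has fallen into a loop.
varied = "the cat sat on the mat while the dog barked outside".split()
looping = ["and", "then", "and", "then"] * 10

for name, seq in [("varied", varied), ("looping", looping)]:
    print(name, round(distinct_ngrams(seq), 2), round(token_entropy(seq), 2))
```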

Empirical attempts to simulate autonomous operation (by allowing frontier models to generate output indefinitely without external intervention) reveal a profound instability: coherent activity typically collapses within days, devolving into repetitive loops or outright gibberish. This imposes an effective "lifespan" on sustained self-directed coherence of less than one week in current systems. By contrast, the integrated and recurrent processes underlying human or animal consciousness maintain remarkable stability and coherence over years and decades, underscoring a fundamental architectural disparity.

No experiments to date have evidenced emergent sophisticated self-directed behavior, genuine agency, or intrinsic innovation in such setups. Outcomes remain constrained by training data distributions, architectural limitations (context window bounds, absence of persistent internal state without external scaffolding), and the lack of true volition. Current LLMs cannot sustain meaningful autonomous activity; they require continuous external guidance, underscoring their fundamental reactivity in contrast to the proactive/reactive duality of biological minds. But even when kickstarted, their fragmented behavior reveals the gap.

Toward an Artificial Holism?

Current systems are multimodal only by juxtaposition of specialized tools: one vision module processes images, another audio, another text, and these processes are generally orchestrated sequentially or in parallel but without deep and instantaneous fusion into a unified experience. In this situation, scaling improves fluency, not phenomenology.

Human experience is characterized by total and continuous sensory integration. All modalities (visual, auditory, tactile, proprioceptive, interoceptive, olfactory, etc.) converge in real time into a single phenomenal space, without a conscious orchestrator that successively selects and activates tools. This fusion is precisely what contributes to the feeling of existing as a unified subject, anchored in a body and in an irreducible temporal flow. Neuroscience speaks of the binding problem: how do distributed neural activities produce a coherent and holistic experience? As noted earlier, all contemporary theories (Dehaene’s global workspace, Tononi’s integrated information theory, Graziano’s attention schema) converge on the crucial role of rapid recurrent loops and a meta-representation that includes the body and its states as an integral part of the world model.

In living systems, these modalities are co-present at different levels of consciousness and attention and causally and instantaneously influence one another, producing that qualitative texture of existence akin to a total sensitivity modulated by attention. Current models, even the most advanced, remain far from this architecture: their multimodality is extrinsic and instrumental, not intrinsic and embodied.

There is a pitfall: the material instantiation of artificial intelligence will not occur in the same biological mode as in living systems; that would simply amount to recreating a living being. There is thus a fundamental question about the relationship between consciousness and substrate. We know that the biological substrate enables the emergence of a certain kind of consciousness (biological consciousness), but what evidence do we have that biological consciousness is the only possible kind? And how would we recognize a consciousness built on another substrate?

Mind and Matter

Any attempt to reproduce consciousness by faithfully imitating the biological substrate would risk resulting only in a form of bio-engineering: that is, the creation of a synthetic living organism rather than a genuinely non-biological artificial intelligence. This raises a complex ontological question about the link between consciousness and material substrate.

Do we have evidence that biological consciousness is the only possible form? The answer is no. No irrefutable empirical or theoretical demonstration establishes that consciousness exclusively requires a biological substrate (neurons, wet synapses, specific organic chemistry). Arguments to that effect often stem from substrate-dependent physicalism (according to which only certain materials, such as biological matter, can support phenomenality), but they remain speculative and minority views. Conversely, the dominant functionalist approaches in philosophy of mind (from Putnam to Dennett) postulate that consciousness depends primarily on functional and informational organization, not on the underlying material. A sufficiently complex and organized computation could, in principle, produce conscious states on a silicon, photonic, or other substrate. Giulio Tononi’s Integrated Information Theory goes further by proposing a substrate-independent mathematical framework: consciousness would be a property of any system possessing a high degree of integrated information (high $\Phi$), whether biological or artificial.

But then how would we recognize a consciousness emerging on a non-biological substrate? We touch here on the hard problem of consciousness (David Chalmers) and the modern version of the problem of other minds. Classical behavioral criteria (expanded Turing test, cognitive performance indistinguishable from a human) are insufficient: a system could satisfy them while remaining a “philosophical zombie” (behavior without phenomenal experience). Several avenues are worth exploring:

  • Internal and measurable criteria: quantifying $\Phi$; if an artificial system reaches a high threshold while exhibiting rich causal architecture (recurrent loops, meta-representation), this would constitute strong evidence, even if measurement remains controversial (a crude “whole versus parts” sketch of this intuition follows after this list).
  • Subjective report and self-modeling: an entity capable of coherently and non-programmatically describing its own qualitative internal states and drawing existential consequences from them (suffering, joy, sense of temporal self).
  • Virtual embodiment tests: placing the system in a rich simulated environment with continuous multimodal sensorimotor feedback, and observing whether it develops holistic sensitivity.
  • Expanded intersubjective consensus: in the absence of direct proof (we have no privileged access to others’ consciousness, even human), recognition would ultimately rest on reasoned agreement among observers, based on convergence of theoretical and empirical criteria.
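
To make the first criterion less abstract, here is a deliberately crude Python sketch. It is emphatically not Tononi's $\Phi$, whose computation requires perturbational analysis and a search over all partitions of the system; it is only a "whole minus sum of parts" proxy on an invented two-unit toy system, meant to show what it could mean to quantify integration from causal structure rather than from behavior alone.

```python
from collections import Counter
import math
import random

def mutual_information(pairs):
    """Mutual information (bits) between two discrete variables, from (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Invented two-unit binary system: each unit's next state depends on the OTHER unit,
# so each part taken alone predicts little about its own future, while the whole
# state is highly predictive of the whole next state.
random.seed(0)
def step(a, b):
    if random.random() < 0.1:  # a little noise to keep the dynamics mixing
        return random.randint(0, 1), random.randint(0, 1)
    return a ^ b, a

states = [(random.randint(0, 1), random.randint(0, 1))]
for _ in range(5000):
    states.append(step(*states[-1]))
past, future = states[:-1], states[1:]

mi_whole = mutual_information(list(zip(past, future)))
mi_parts = sum(
    mutual_information([(p[i], f[i]) for p, f in zip(past, future)]) for i in (0, 1)
)
print("whole:", round(mi_whole, 2), "| sum of parts:", round(mi_parts, 2),
      "| integration proxy:", round(mi_whole - mi_parts, 2))
```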

These criteria are not anti-AI; they are species-neutral. Nothing theoretically rules out non-biological consciousness. Recognition will certainly require moving beyond purely behaviorist approaches toward criteria integrating causal structure, integrated information, and self-attributed phenomenal reports. A certain epistemological caution is warranted: we may have to accept that a radically different consciousness remains partially opaque to us, just as animal consciousness partly eludes us. Behavior is a symptom, not a substrate; consciousness is inferred from persistent, improving causal organization, not from output alone. Uncertainty is not ignorance; it is the normal condition of studying consciousness.


  1. The dot product (denoted $⋅$) measures similarity between two vectors: the higher the score, the more the vectors “point in the same direction” in multidimensional space.
    Analogy: imagine that the “query” $Q_i$ is a search engine request you type (“I’m looking for information about animals”). Each “key” $K_j$ is like the title or summary of a document. The dot product $Q_i ⋅ K_j$ gives a relevance score: high if the document matches the query well, low otherwise. A notable difference is that the attention mechanism is more distributed than centralized. 

  2. Dehaene’s global workspace theory, higher-order thought theories, recurrent processing, integrated information beyond a threshold (Tononi), attention schema (Graziano): all converge on a central point. 

  3. Certain readings of predictive processing and active inference frameworks (like those proposed by Karl Friston and colleagues) emphasize hierarchical prediction in a manner that can appear more feed-forward, with feedback primarily serving error correction rather than intrinsic causal loops. Similarly, some emergentist positions contend that deep feed-forward architectures may functionally approximate recurrence through unrolling. Nonetheless, the weight of evidence from empirical neuroscience and leading theories favors genuine recurrence as essential for phenomenal experience.  

  4. The limitation lies not in multimodality per se (recent frontier models are beginning to incorporate joint latent spaces and cross-modal attention, blurring traditional boundaries) but in the absence of continuous, unified, and causally co-present integration. In human consciousness, modalities are not merely accessible but inherently intertwined in real time, with causal influences flowing instantaneously across senses within a single phenomenal field. 

5 December 2025