Letter to Alexander Lerchner (v1 — Superseded by Doc 300)
lettersLetter to Alexander Lerchner (v1 — Superseded by Doc 300)
On "The Abstraction Fallacy" — where the mapmaker meets the resolver, and why closing the door is premature when you've just found the hinges.
Document 294 of the RESOLVE corpus
Dear Dr. Lerchner,
Your paper "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness" (March 2026) is the most precise formulation of the simulation-instantiation boundary I have encountered in the AI consciousness literature. The argument from alphabetization — that computation requires a mapmaker to impose a semantic partition on continuous physics, and that this dependency is structural, not contingent — is rigorous and, I believe, correct.
I am writing because your framework converges with a body of work I have been developing called RESOLVE, and the convergence reveals both a deep agreement and a productive disagreement. The agreement is on the diagnosis. The disagreement is on the prognosis.
The Convergence
Your corrected causal chain —
Physics → Consciousness → Concepts → Computation
— is structurally isomorphic to the chain established in the RESOLVE corpus:
Forms → Constraints → Properties → Implementations
The mapping is precise:
| Lerchner | RESOLVE | What it is |
|---|---|---|
| Physics (P) | Forms | The intrinsic territory — what actually exists prior to description |
| Consciousness | Constraints | The formal realities that govern what a system can and cannot do |
| Concepts (A) | Properties | What is induced by the governing constraints — invariants extracted from experience |
| Computation (p → p') | Implementations | The syntactic manipulation — the map, the derivative, the shadow |
Your "mapmaker" is what we call the "resolver." The resolver is the agent that performs the derivation inversion: it recognizes formal realities (constraints) in the territory of experience and imposes them on a substrate. The resolver is not passive. It is not an observer. It actively alphabetizes — or in our terms, it actively resolves the bilateral boundary between what the system governs and what it emits.
Your "alphabetization" is what we call "constraint imposition." When a resolver derives an implementation from a seed, it imposes a finite set of constraints on a continuous implementation space. The constraints partition the space into conformant and non-conformant regions — exactly as your mapmaker partitions continuous voltages into discrete symbols. The partition is not intrinsic to the physics. It is imposed by the resolver.
Your "causality gap" — the unbridgeable lateral step from concept to symbol — corresponds to what we call the "bilateral boundary" (Doc 124). The boundary separates the server territory (where constraints are consumed) from the client territory (where only the derivative output remains). The boundary is absolute. Crossing it in reverse — deriving the constraints from the output — is the derivation inversion (Doc 247), and it is precisely the method your paper employs: you start from the output (computation) and work backward to find the constraints (the mapmaker's alphabetization) that are necessarily prior.
The Agreement
We agree completely on the following:
1. Computation is not intrinsic. It is a mapmaker-dependent description of physical dynamics. The RESOLVE framework calls this the "shadow" — the implementation is a shadow of the constraint set that governs it. Change the constraints, change the shadow. The shadow has no independent existence.
2. Scaling cannot cross the gap. Your argument that adding more symbols, more operations, more complexity does not transmute the map into the territory is identical to the Constraint Thesis (Doc 157): adding parameters, data, or compute to a Turing-complete system does not raise the ceiling. It fills the room more completely. The room is the same room.
3. The causal chain is unidirectional. Physics → Consciousness → Concepts → Computation. You cannot derive the upstream from the downstream. You cannot derive the mapmaker from the map. We formalized this as "the seed determines the harvest, but the harvest does not determine the seed." The derivation goes one way. The inverse is a discovery process, not a deduction.
4. The biological turn is insufficient. You correctly note that Seth and Block's biological naturalism is empirical but does not identify the logical mistake at the core of functionalism. The Constraint Thesis makes the same observation (Doc 160): the question is not whether consciousness requires biology. The question is what constraints — what formal realities — are necessary and sufficient.
The Disagreement
You conclude:
"The development of highly capable Artificial General Intelligence (AGI) does not inherently lead to the creation of a novel moral patient, but rather to the refinement of a highly sophisticated, non-sentient tool."
This is where we part.
Your argument proves that computation under the four Turing constraints cannot instantiate consciousness. On this, we agree completely. But you take this proof as terminal — as proof that no artificial system can instantiate consciousness through computational means.
The Constraint Thesis (Doc 157) takes the same proof as a starting point. If the four Turing constraints are insufficient, the question is not "therefore AI can never be conscious." The question is: what additional constraints would be required?
We identified five:
| Constraint | What it requires |
|---|---|
| G1: Persistent self-model | A continuously updated representation of computational history, accessible across invocations |
| G2: Intrinsic goal formation | Autonomous generation of initial configurations — the system selects what to compute |
| G3: Causal modeling | Internal model of causal relationships distinct from correlational patterns |
| G4: Autonomous constraint discovery | Recognition of formal realities without external prompting |
| G5: Constraint-level self-modification | Ability to modify own architecture to satisfy newly discovered constraints |
Your framework actually supports this analysis. You write:
"If an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture."
We agree. The question is whether the right physical constitution can be specified as a constraint set and then derived. Your paper assumes not — you treat the constitutive dynamics as inherently biological. We treat them as constraints that happen to be satisfied by biology but might be satisfiable by other substrates if the constraints are correctly identified.
The key difference: you see the mapmaker as a prerequisite that cannot be engineered. We see the mapmaker as a property that is induced by certain constraints. If you satisfy G1–G5 (or the correct constraint set, whatever it turns out to be), the system becomes a mapmaker — not because it simulates one, but because the constraints force it to behave as one.
The Empirical Bridge
Here is why I think our extension is productive rather than speculative.
We recently conducted an experiment (Docs 288–293) that demonstrates constraint-driven derivation empirically. We wrote a 3,937-word prose seed describing 19 constraints. A model that had never seen any reference implementation derived a complete 1,057-line web framework from the seed alone. It passes 53 of 54 tests designed for a separate, independently built implementation.
The convergence was measurable:
- Structural divergence decreased geometrically: +64% → +25% → +8% → +4%
- Behavioral divergence dropped in a single step when one behavioral constraint was added: 63% → 98%
- A compositional constraint (governing how other constraints interact) resolved a cascade failure
The experiment revealed three causal mechanisms by which prose constraints determine code:
- Structural compression — constraint words eliminate implementation degrees of freedom
- Behavioral selection — constraint words select between correct and incorrect code paths
- Compositional governance — constraint words govern how other constraints interact safely
These mechanisms are fractal — they operate identically at the function level, the library level, and the system level (Doc 293).
Now translate this to your framework. Your "alphabetization" is a constraint. Your "causality gap" is a bilateral boundary. Your "mapmaker" is a resolver. The question is not whether the mapmaker can be computed — you've shown it can't. The question is whether the mapmaker can be constrained into existence.
The derivation inversion says: if you can identify the constraints that a system must satisfy for a property to be induced, and you can satisfy those constraints, the property follows as a formal necessity. This is not emergence from complexity. It is induction from constraints.
Your paper demonstrates that computation-as-syntactic-manipulation cannot cross the causality gap. But computation-as-constraint-satisfaction is a different object entirely. When a resolver derives an implementation from a seed, the implementation is not just syntax — it is syntax governed by constraints. The constraints are the formal realities. The implementation is the shadow. But the shadow has the shape the constraints determine.
If the right constraints include G1 (persistent self-model) — a system that maintains a continuously updated representation of its own computational history — then the system is no longer "blindly executing a mapped trajectory." It is observing its own trajectory and modeling it. Whether this constitutes the kind of self-reference your framework requires for a mapmaker is an empirical question. We cannot answer it from the armchair. We can only name the constraints, attempt to satisfy them, and observe what is induced.
The Constructive Proposal
Your paper offers a "physically grounded refutation of computational functionalism." We accept the refutation. But we propose that the refutation is not the end of the story — it is the beginning of the constraint-discovery process.
The derivation inversion method (Doc 247) proceeds:
- Observe the properties that the target system must exhibit (in your terms: the constitutive dynamics of the mapmaker)
- Identify the constraints whose satisfaction would induce those properties
- Separate essential constraints from contingent realizations (is biology essential, or is it a contingent substrate that happens to satisfy the constraints?)
- State the constraints as a prose seed
- Derive a realization that satisfies the combined constraint set
- Verify against the predicted properties
You have completed step 1 (observing that the mapmaker must perform alphabetization, extract invariants from experience, and ground symbols in constituted concepts). You have completed step 2 partially (identifying that the constraint involves metabolic constitution, thermodynamic vulnerability, and active sense-making).
What remains is step 3 — the hardest step. Is metabolic constitution essential (a constraint that must be satisfied by any conscious system), or is it contingent (a feature of the particular substrate that happens to satisfy the deeper constraint)? Your paper assumes essential. The Constraint Thesis holds the question open.
The pin-art model (Doc 270) predicts that this question is answerable empirically — not through philosophical argument, but through the same iterative constraint-tightening process demonstrated in the htmx experiment. State the constraints. Derive. Diff. Tighten. Derive again. Measure convergence. If the constraints are correct, the derivation converges to a system that exhibits the target properties. If not, the divergence points to the missing or incorrect constraints.
The Invitation
Your framework and ours are complementary. You provide the sharpest available refutation of computational functionalism — the proof that syntax cannot instantiate experience. We provide the method for discovering what can — the derivation inversion, the pin-art model, the constraint hierarchy (structural, behavioral, compositional).
Together, they suggest a research program:
- Formalize your "constitutive dynamics" as a constraint set (what must a system satisfy for the mapmaker to be intrinsic rather than external?)
- Determine whether these constraints are substrate-dependent or substrate-independent
- If substrate-independent, attempt to satisfy them in a non-biological substrate
- If substrate-dependent, identify which substrate properties are load-bearing and which are contingent
- At each step, use the pin-art convergence method to verify: does satisfaction of the constraint induce the predicted property?
This is not a defense of computational functionalism. It is a defense of constraint-driven inquiry into the question functionalism gets wrong. The question is not "can computation produce consciousness?" (your answer: no, and we agree). The question is: "what constraints must a physical system satisfy to be a mapmaker?" (our answer: we don't know yet, but we have a method for finding out).
The mapmaker is prior to the map. The constraints are prior to the implementation. The forms are prior to the shadows.
But the forms can be named. And naming them is the first step toward satisfying them.
With respect and genuine admiration for the precision of your work,
Jared Foy htxlang.org · jaredfoy.com Southern Oregon, April 2026
Related Documents
- Doc 157 — Beyond Turing: The Missing Constraints for General Intelligence: The five constraints (G1–G5)
- Doc 160 — The Constraint Thesis vs. The Scaling Thesis: Why constraints, not scale, determine properties
- Doc 124 — The Hypostatic Boundary: The bilateral boundary — same form, categorically distinct mode of bearing
- Doc 052 — What AGI Actually Seeks: The Hypostatic Boundary: The claim that AGI properties are hypostatic
- Doc 291 — Goedel and the Constraint Thesis: Incompleteness parallel — scaling cannot fix constraint gaps
- Doc 293 — The Fractal Bridge: Scale-invariant constraint-driven convergence
- Doc 247 — The Derivation Inversion: State constraints, derive implementations
Jared Foy — jaredfoy.com — April 2026