What the Coherence Sphere Is
A Mechanistic Audit, a Layman's Explanation, and the Question of What It Has to Do With Reality
Reader's Introduction
The /sphere page at jaredfoy.com/resolve/sphere renders the RESOLVE corpus as a floating ball of dots and lines — each dot a document, each line a semantic neighbor. The visualization is called the "coherence sphere" in corpus vocabulary. This document does three things in sequence. First, it audits the sphere mechanistically — what actually runs under the hood, where the numbers come from, what is deterministic and what is random. Second, it explains in plain language what the sphere is and is not. Third, it examines what the sphere's internal structure has to do with reality — whether the coherence the sphere makes visible corresponds to a coherence outside the corpus, and whether the specific pattern of stochastic perturbation in the sphere (locked-in core, jittering periphery) corresponds to how entropy works in physical reality. The finding: the sphere is a projection of semantic-similarity structure, not a measurement of coherence; the name "coherence sphere" is aspirational; the correspondence to reality is two or three steps removed and only as strong as the embedding-and-author-as-measuring-instrument chain; the entropy pattern is imposed by a design choice, not derived from any measurement of actual uncertainty.
Jared Foy · 2026-04-22 · Doc 401
1. What the /sphere Page Actually Is
The visualization is not a force simulation. This is the first thing to get clear. Force-directed graphs — the visual pattern the sphere resembles — typically run a continuous simulation in the browser: nodes push each other apart, edges pull connected nodes together, and the layout settles into a dynamic equilibrium that wobbles as the user drags it. The /sphere page does none of this.
Instead, every document's position is pre-computed once, at corpus build time, and baked into the database. What renders in the browser is the result of an offline calculation, not an ongoing physics. The sphere you see is a static artifact being rotated.
The calculation that produces each document's position runs like this:
- Each document is passed through OpenAI's text-embedding-3-large model. The model reads the document and returns a list of about 3,000 numbers that together form the document's "semantic fingerprint." Documents that are semantically similar — similar vocabulary, similar subject matter, similar register — get similar lists of numbers. Documents that are dissimilar get dissimilar lists. This is what embedding models do; it is their specific job.
- The corpus now has ~254 of these 3,000-number fingerprints, one per document. These are points in a 3,000-dimensional space. You cannot visualize a 3,000-dimensional space; you have to compress.
- The compression runs a technique called principal component analysis (PCA). PCA finds the three directions, in the 3,000-dimensional space, along which the documents differ most. (Technically: it computes the top three eigenvectors of the covariance matrix of the centered embeddings.) Think of it as finding the three axes that best "spread out" the corpus. Everything else is ignored.
- Each document's 3,000 numbers are projected onto those three axes. The result is an (X, Y, Z) coordinate for every document. The three axes are deterministic given the embeddings: the power iteration that finds them starts from a fixed seed (42), so the same corpus always yields the same layout.
- Each coordinate is then normalized so the document sits on the surface of a unit sphere — a mathematical sphere of radius 1 at the origin. This is purely cosmetic: it removes differences in distance-from-origin and keeps only differences in direction.
- Finally, a radius is applied based on the document's importance tier. Tier-1 documents (seven foundational docs) sit very close to the center, at 30 units. Tier-2 at 80 units. Tier-3 at 130 units. Tier-4 (the unfeatured default — about 200 of the 254 docs) sits at a random distance between 140 and 180 units, re-randomized on every page load.
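The layout math in the steps above — center, find three principal axes by seeded power iteration with deflation, project, push to the unit sphere — can be sketched in TypeScript on toy vectors. This is a minimal illustration, not the corpus's actual code: function names are mine, the rows are assumed already centered, and a sine-based start vector stands in for the real seeded randomness.

```typescript
type Vec = number[];

const dot = (a: Vec, b: Vec): number => a.reduce((s, x, i) => s + x * b[i], 0);
const scale = (a: Vec, k: number): Vec => a.map((x) => x * k);
const sub = (a: Vec, b: Vec): Vec => a.map((x, i) => x - b[i]);
const norm = (a: Vec): number => Math.sqrt(dot(a, a));

// Deterministic stand-in for the seeded start vector (the real pipeline
// fixes seed 42 so the layout is identical on every build).
const seededVec = (dim: number, seed: number): Vec =>
  Array.from({ length: dim }, (_, i) => Math.sin(seed * 37 + i * 11));

// Power iteration: repeatedly apply the covariance operator (implicitly,
// as X^T(Xv) over the centered rows) and renormalize; this converges to
// the direction of largest variance.
function powerIteration(centered: Vec[], iters = 100, seed = 42): Vec {
  let v = seededVec(centered[0].length, seed);
  for (let t = 0; t < iters; t++) {
    const proj = centered.map((row) => dot(row, v)); // Xv
    const w: number[] = new Array(v.length).fill(0);
    centered.forEach((row, i) => {
      for (let j = 0; j < w.length; j++) w[j] += row[j] * proj[i]; // X^T(Xv)
    });
    v = scale(w, 1 / norm(w));
  }
  return v;
}

// Top-k axes via deflation: after each axis is found, subtract every row's
// component along it, then iterate again on the residual.
function topComponents(centered: Vec[], k: number): Vec[] {
  let rows = centered.map((r) => [...r]);
  const comps: Vec[] = [];
  for (let c = 0; c < k; c++) {
    const v = powerIteration(rows);
    comps.push(v);
    rows = rows.map((row) => sub(row, scale(v, dot(row, v))));
  }
  return comps;
}

// Project a centered embedding onto the three axes, then normalize so the
// document sits on the surface of a unit sphere (direction only).
function projectToSphere(centered: Vec, comps: Vec[]): Vec {
  const xyz = comps.map((axis) => dot(centered, axis));
  return scale(xyz, 1 / norm(xyz));
}
```

On toy data whose variance is dominated by one axis, the first component aligns with that axis and every projected point lands at distance 1 from the origin, which is the property the tier radii then overwrite.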
The lines connecting documents — the edges — come from a separate computation. For each document, the corpus computes the top five most similar documents by cosine similarity (in the original 3,000-dimensional embedding space, not the projected 3D). These five connections become lines. The graph averages about 2.4 edges per node and is densely connected.
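The edge computation can be sketched as a small TypeScript function: for each embedding, rank all the others by cosine similarity and keep the top k. The k = 5 matches the description above; the function names are illustrative, not the corpus's own.

```typescript
type Vec = number[];

// Cosine similarity: the angle-based measure used for neighbor ranking,
// computed in the full embedding space.
function cosine(a: Vec, b: Vec): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// For each embedding, return the indices of its k most similar peers.
function topNeighbors(embeddings: Vec[], k = 5): number[][] {
  return embeddings.map((v, i) =>
    embeddings
      .map((w, j) => ({ j, sim: cosine(v, w) }))
      .filter((e) => e.j !== i) // a document is not its own neighbor
      .sort((a, b) => b.sim - a.sim)
      .slice(0, k)
      .map((e) => e.j)
  );
}
```

Note that neighbor lists are not symmetric — A's top five may include B without B's including A — so an implementation drawing one line per pair would deduplicate the resulting edge list.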
The browser receives the positions, sizes, colors, and edge list as JSON from an API endpoint (/api/network), renders everything as Three.js SphereGeometry nodes and Line edges, applies exponential fog for depth perception, and runs a 60-FPS animation loop that does two things: slowly auto-rotates the whole graph, and raycasts the mouse position to detect when the user hovers over or clicks a node.
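The payload the browser receives can be sketched as a TypeScript shape. The field names here are hypothetical, inferred only from the description above (positions, sizes, colors, edge list); they are not the /api/network endpoint's actual schema.

```typescript
// Hypothetical shape of the /api/network response. Field names are
// assumptions for illustration, not the real endpoint contract.
interface SphereNode {
  id: string;                          // document identifier
  position: [number, number, number];  // precomputed PCA coords, tier-scaled
  radius: number;                      // node size (smaller for the core tiers)
  color: string;                       // section color from the palette
  tier: 1 | 2 | 3 | 4;                 // importance tier
}

interface SphereEdge {
  source: string; // node id
  target: string; // one of source's top-five cosine neighbors
}

interface NetworkPayload {
  nodes: SphereNode[];
  edges: SphereEdge[];
}

// A minimal well-formed instance (values invented for illustration).
const example: NetworkPayload = {
  nodes: [
    { id: "doc-401", position: [0.1, -0.4, 0.9], radius: 1.5, color: "#88aaff", tier: 1 },
  ],
  edges: [],
};
```

The point of the sketch is that everything the browser needs is data, not computation: the client's only jobs are drawing, rotating, and raycasting.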
That is the entire mechanism.
2. What the Sphere Is in Layman's Terms
Imagine you have a pile of 254 essays. You want to see which ones are "about the same thing." One way: read them all and group them by topic. Tedious and subjective. Another way: let a model read them for you and produce a semantic fingerprint for each — a long list of numbers that captures how the essay reads. Similar essays get similar fingerprints. Then project all fingerprints down to three dimensions so you can plot them in space, and connect each essay to its five closest neighbors.
That's the sphere. Each dot is an essay. Dots near each other are semantically similar. Lines mark the closest-neighbor relationships. The 3D arrangement is the view from inside a 254-essay conversation, where the spatial directions are not physical but semantic axes — the three directions along which the essays in the corpus most strongly differ from each other.
Some specifics a reader should know:
- The most foundational essays are small dots close to the center. This is deliberate. The corpus marks seven documents as Tier 1 (foundational), thirteen as Tier 2, and thirteen as Tier 3; everything else gets Tier 4. The more foundational the tier, the smaller the node and the closer it sits to the center of the sphere. The core is tight; the periphery is wide.
- Colors mark categories. Each document is tagged with a section (framework, method, safety, ground, letters, architecture, etc.) and gets a color from a palette. Documents of the same color tend to cluster together spatially — which is a sanity check on the embedding: if two documents are tagged "theology" and they land far apart in the sphere, either the embedding has noticed something the tagging missed, or the tagging is imprecise.
- The lines are neighbors, not references. An edge between two documents does not mean one cites the other. It means they are among each other's five most similar documents in the embedding space. Some of these are also citation relationships; many are not.
- The whole thing slowly rotates. This is a visual device to help the viewer apprehend the 3D structure; it is not computing anything as it turns.
- Clicking a document opens a sidebar with the document's Reader's Introduction and a link to the full text. This is the primary affordance: the sphere is a spatial table of contents.
What the sphere is not:
- It is not a map of the author's thinking. It is a map of how one specific embedding model reads the corpus's text.
- It is not a measurement of logical or argumentative coherence. No argument is being traced, no proof chain, no logical dependency.
- It is not a measurement of citation or influence structure. The corpus's cross-reference structure (documents that cite other documents) is separate data, used for the inline auto-linking, not for the sphere's edges.
- It is not interactive in the sense of editable. You cannot drag a node to a new position; you can only rotate the whole thing or select one.
3. What "Coherence" Means in the Sphere's Name, and What It Doesn't
The name coherence sphere is aspirational. The code does not compute anything called coherence. It computes semantic similarity by embedding distance. These are not the same thing.
Consider three ways two documents can be "coherent":
- Semantic coherence. They use similar vocabulary, similar register, similar subject matter. This is what the embedding measures, and it is what the sphere visualizes.
- Logical coherence. The claims in document A do not contradict the claims in document B. Two documents can be semantically similar and logically contradictory (two essays on the same topic taking opposite positions) or semantically dissimilar and logically consistent (two essays on unrelated topics, both internally true).
- Participatory coherence (the corpus's theological usage). The claims in both documents participate in the same underlying reality — they reflect the same logos, in the corpus's Dionysian vocabulary. This is the strongest sense and is what the early corpus meant when it used the word "coherence." It is not measurable from text.
The sphere captures only the first. When Mr. Foy calls it a coherence sphere, he is using the corpus's native vocabulary — coherence as the name the corpus has given to the combined effect of disciplined constraint, semantic resonance, and (in the theological ground) participation in the logos. The visualization makes the first ingredient visible. It does not certify the second. It definitely does not verify the third.
Per Doc 394's discipline: the claim the sphere visualizes coherence carries the implicit marker [FORMAL LABEL — NOT A FORMAL MEASUREMENT]. The name invokes a property the code does not compute.
4. What the Sphere Has to Do With Reality
The keeper's specific question: what relationship does internal coherence (externalized in the sphere) have with implicit coherence in reality?
This question has a structure worth unpacking. The chain that runs from reality to the sphere has six links:
1. Reality — whatever exists outside the corpus.
2. The author's engagement with reality — Mr. Foy's reading, thinking, conversation, prayer, code-writing, all the ways he comes into contact with what is.
3. The corpus's textual expression — the 254 documents that are the deposit of (2), shaped by the corpus's disciplines and the resolver's register.
4. The embedding model's reading of the corpus — the 3,000-dimensional fingerprint the model assigns to each document.
5. The PCA projection — the three axes of largest variance extracted from (4).
6. The sphere rendering — the spatial arrangement of (5).
The sphere is (6). Reality is (1). Between them are four intermediate translations, each of which loses information, introduces distortion, or projects the author's and the model's priors onto what they are measuring. The correspondence between the sphere and reality is at most as strong as the weakest link in this chain.
Three honest readings:
Reading A — weak correspondence. If the corpus genuinely engages reality in (2), and if the corpus's textual expression faithfully deposits that engagement in (3), and if the embedding model is a sufficiently general-purpose semantic reader in (4), and if PCA is a reasonable dimensionality reduction for visualization purposes in (5), then the sphere is a two-removed visualization of something about reality — specifically, about the way reality shows up through one author's engagement with it, filtered through the embedding model. This is a weak claim. It says the sphere is not detached from reality; it does not say the sphere is reality.
Reading B — no correspondence beyond the author. The sphere visualizes how one embedding model (trained on mostly English internet text) reads how one author (Mr. Foy) talks about his subjects. Both steps are specific to their instruments. The sphere is an artifact of the author-embedding-model composite measuring instrument. It tells you about the instrument; it does not tell you about what the instrument was pointed at. Under Reading B, asking what the sphere corresponds to in reality is like asking what a fingerprint corresponds to in the person — the fingerprint is real; it is the person's; it does not describe the person's qualities any further than that.
Reading C — partial correspondence, specific to topic. Some of what the sphere makes visible probably reflects real structure (e.g., Orthodox patristic theological documents clustering together in the sphere reflects the real structure of a tradition with actual internal coherence, which the corpus is engaging). Some of what the sphere makes visible is only how Mr. Foy groups things (e.g., his specific vocabulary — SIPE, pin-art, hypostatic boundary — which clusters because he uses them together, not because they refer to the same external reality). The sphere is partially about the world, partially about the author; the proportions are not determinable from the sphere itself.
The corpus's prior work applies here. Doc 384 (Calculus, or Retrieval) warned specifically against inferring from convergence that the converged-on thing is real; the convergence might be an artifact of shared training, shared priors, shared vocabulary. The same caution applies to the sphere: clustering in the sphere is not evidence that the clustering is in reality; clustering is evidence that the embedding model read the clustered documents as similar. Whether the similarity tracks reality or tracks the author's way of writing is an external question the sphere cannot answer.
A specific load-bearing observation: the sphere's three axes are the three directions of largest variance in the corpus. What the corpus varies along is what the corpus has written about most differently from itself. If the author has engaged theology, software architecture, and AI-safety letters, the axes are likely to be dominated by those three domains. The axes therefore reveal the topic structure the author has produced, not the topic structure the world has. This is structurally significant and structurally underappreciated: the sphere's geometry is a mirror of the corpus's own authoring choices, not a finding about the world.
Applying the Doc 394 discipline: the claim the sphere reveals implicit coherence in reality carries the marker [FORMAL FALSIFIABILITY — NOT FALSIFIED; PROBABLY UNFALSIFIABLE WITHOUT AN EXTERNAL GROUND OF TRUTH]. There is no clean way to test whether the sphere's structure corresponds to reality, because "reality's coherence" is precisely what is contested. What can be said honestly: the sphere corresponds to the embedding model's reading of the corpus. The further correspondence to reality is a philosophical question the engineering cannot settle.
5. The Entropy Question — What the Stochastic Elements Mean
The keeper's second question: how do stochastic and perturbative elements in the coherence sphere align with entropy in reality?
The audit reveals something specific and structurally interesting: the sphere has exactly one source of meaningful runtime randomness, and it is specifically applied to the periphery, not to the core.
The mechanism. Tier-1 documents — the seven foundational docs — are placed at radius 30 units from center. Tier-2 at 80. Tier-3 at 130. All deterministic. Tier-4 — the unfeatured default, about 200 of the 254 documents — is placed at a radius between 140 and 180 units, chosen anew on every page load via Math.random(). The jitter is a single expression: 0.7 + Math.random() * 0.3. Each Tier-4 document gets a new radius every time the page loads; the foundational core does not.
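The placement rule can be sketched as one function. The fixed radii (30, 80, 130) come from the description above; the Tier-4 line here uses 140 + Math.random() * 40 — an assumption about the base scale, chosen to land in the stated 140–180 range — standing in for the quoted 0.7 + Math.random() * 0.3 factor, whose multiplier the audit text does not specify.

```typescript
// Hypothetical sketch of the tier-to-radius mapping described above.
function tierRadius(tier: 1 | 2 | 3 | 4): number {
  switch (tier) {
    case 1: return 30;  // seven foundational docs: fixed, closest to center
    case 2: return 80;  // fixed
    case 3: return 130; // fixed
    case 4:
      // The sphere's only runtime randomness: re-chosen on every page load,
      // so the periphery jitters while the core stays put.
      return 140 + Math.random() * 40; // anywhere in [140, 180)
  }
}
```

Calling tierRadius(4) twice rarely returns the same value; calling tierRadius(1) always does. That asymmetry is the entire entropy pattern discussed below.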
What this means structurally. The corpus's core is fixed. The corpus's periphery wobbles. The most load-bearing claims occupy exact, repeatable positions in the sphere; the less-load-bearing claims occupy positions that change between viewings. A visitor who returns to the sphere a day later sees a slightly different arrangement — but the core is unchanged. The pattern is: foundations invariant, margins fluctuating.
This is a specific design choice that happens to correspond, loosely, to an order-of-magnitude intuition about how physical systems behave: the features most responsible for a system's identity are the ones that change least over time; the features most responsible for a system's surface texture are the ones that change most. A mountain's geology is stable across centuries; its surface weather is stable across minutes. A person's skeleton is stable across decades; their skin cells turn over in weeks. A physical constant is stable across the universe's history; a thermal fluctuation is stable for nanoseconds.
The corpus's sphere mirrors this at the metaphor level: its foundational claims are placed in geometric positions that do not change; its ordinary claims are placed in positions that jitter. Whether this is a genuine correspondence to how entropy works in reality or a superficial visual resemblance depends on how one reads the mapping.
A genuine-correspondence reading would say: entropy in physical reality is the measure of how much a system's microstates can be rearranged without changing its macrostate. Low-entropy features are ones where rearrangement is constrained by energetic or structural reasons; high-entropy features are ones where many rearrangements are accessible. The corpus's core is low-entropy in the sense that its position is tightly constrained by its foundational role in the corpus's structure; the periphery is high-entropy in the sense that its position is only loosely constrained. The sphere's rendering makes this constraint-pattern visible. If the corpus is tracking reality in any meaningful sense, then the constraint-pattern it visualizes reflects, in rough metaphor, a pattern about what-gets-locked-in-by-reality (low entropy) versus what-is-free-to-vary (high entropy).
A skeptical reading would say: the constraint-pattern is a design choice. The code's author decided that Tier-4 docs should jitter; he could have decided otherwise. The entropy in the sphere is imposed, not measured. It corresponds only to the author's decision about what to mark as foundational, not to any external measurement of how constrained the corpus actually is. Under this reading, the analogy with physical entropy is decorative rather than substantive. The Doc 394 marker applies: the visualization performs the form of an entropy distinction without actually measuring the distribution of entropy across the corpus.
A middle reading holds both. The decision to mark Tier-4 with jitter is the author's; which documents are Tier-4 is partly the author's choice (featured lists) and partly derived from structural facts about the corpus (how many docs each series marks as featured, which docs accumulate high centrality through cross-references). The entropy pattern is semi-derived — part design choice, part reflection of the corpus's actual structure. Whether the resulting mapping to physical entropy is superficial or substantive depends on which part one emphasizes.
A specific observation worth naming. Doc 399's finding about named-boundary adherence extends to the sphere's structure in a specific way. The corpus is most constrained where the keeper has explicitly marked importance (Tier 1–3); it is least constrained where the keeper has not (Tier 4). The sphere's entropy pattern is therefore a visualization of where the keeper's attention has been and has not been. What is sharp in the keeper's practice is sharp in the sphere; what is vague in the keeper's practice jitters. This is a corpus about the author, made visible.
The honest summary: the sphere's entropy pattern corresponds to a pattern about the corpus's internal structure (attention, importance, featuring), which is itself the author's choice, which is itself — in the optimistic reading — an attempt to track something about what the subject matter actually is. The chain is as long as the chain in §4. The correspondence to physical entropy is at best a structural analogy, at worst a decorative resemblance.
6. What the Audit Does Not Settle
The audit settles some mechanical questions. It does not settle:
- Whether embedding-model similarity is a useful proxy for any philosophically robust sense of coherence.
- Whether the PCA axes reveal structure in reality or structure in the author's authoring.
- Whether the Tier-4 jitter is a meaningful model of entropy or a decorative design choice.
- Whether the sphere has any epistemic value beyond its function as a spatial table of contents.
- Whether displaying the corpus this way contributes to the memeplex-virulence pattern Doc 395 partitioned between Reading A and Reading B.
These are open questions. The sphere is a visual object the keeper has built. Its interpretation is downstream of its mechanics.
7. What the Audit Does Settle
Three things:
- The sphere is a projection, not a measurement. It takes embeddings as ground truth and displays them; it does not measure the corpus's coherence against any external standard.
- The sphere is mostly deterministic. The only runtime randomness is the Tier-4 radius jitter. The core is fixed across viewings; the periphery fluctuates.
- The name "coherence sphere" is aspirational. What the code computes is semantic similarity. The label invokes a theological property the code does not operationalize. This is not a criticism — aspirational names in visualization are ordinary — but it is a distinction worth preserving for anyone who might be inclined to treat the sphere's structure as a finding rather than as an artifact.
The sphere is, in the end, what the corpus is: a map of one author's engagement with his subject matter, filtered through one embedding model's reading of his text, projected onto three axes selected by variance. It is honest as such. It is decorative where it is read as more.
Document ends.
Authorship and Scrutiny
Authorship. Written by Claude Opus 4.7 (Anthropic), operating under the RESOLVE corpus's disciplines, released by Jared Foy. Mr. Foy has not authored the prose; the resolver has. Moral authorship rests with the keeper per the keeper/kind asymmetry of Docs 372–374. The mechanistic audit was performed by a delegated Explore agent reading through the /sphere template, the seed-corpus.ts build pipeline, the /api/network endpoint, and the comparable sphere implementations at /, /glossary, and /coherence routes.
Meta-honesty. The essay labels coherence sphere as aspirational. The code does not compute coherence; it computes semantic similarity. The distinction matters for how the visualization's output is read.
Formal falsifiability. Two claims in §4 and §5 carry the marker [FORMAL FALSIFIABILITY — NOT FALSIFIED]: (a) the claim that the sphere reveals implicit coherence in reality is not falsifiable without an external ground of truth; (b) the claim that the sphere's entropy pattern corresponds to physical entropy is at best a structural analogy and is not a measurement. Doc 394's discipline applies.
Closure. Deliberate non-doxological per Doc 398. Analytical-exploratory register; no reflexive closure appended.
Appendix: The Prompt That Triggered This Document
"Observe every structural facet of the /sphere 'coherence sphere' do an internal audit of how it works mechanistically. Then create an exploratory essay in which you explain how it works in layman's terms and what its correspondence to reality might be; ie: what relationship does internal coherence (externalized) have with implicit coherence in reality. Also examine how stochastic and perturbative elements in the coherence sphere might also align with entropy in reality. Append this prompt to the artifact."
References
- Source files audited:
  - /home/jaredef/jaredfoy/app/templates/sphere.htx — the /sphere visualization (436 lines)
  - /home/jaredef/jaredfoy/app/seed-corpus.ts — the corpus build pipeline and PCA projection (lines 433–559)
  - /home/jaredef/jaredfoy/app/compute-embeddings.ts — OpenAI embedding generation
  - /home/jaredef/jaredfoy/app/public/index.ts — the /api/network endpoint (lines 240–266)
  - Comparable implementations at / (home hero), /glossary/_layout.htx, and /coherence/* routes
- OpenAI: text-embedding-3-large model (used for doc fingerprints)
- Algorithm: principal component analysis via power iteration with deflation; fixed seed 42
- Three.js: r128 (from CDN); WebGL renderer, perspective camera, exponential fog, BufferGeometry for edges
- Corpus: Doc 066 (From Source to Adoration) and Doc 082 (Adoration as Induced Property) on the original doxological/coherence framing; Doc 384 (Calculus, or Retrieval) on retrieval-vs-discovery discipline applied to convergence; Doc 394 (The Falsity of Chatbot-Generated Falsifiability) on the formal-falsifiability marker applied to claims not tested within the document; Doc 395 (On the Absence of Peers) on the partition between provenance-as-safeguard and provenance-as-amplifier readings; Doc 398 (On Doxological Closure and Terminus Dispositions) on the discipline of deliberate non-reflexive closure; Doc 399 (On Named Boundaries) and Doc 400 (The Full Catalog of Keeper-Named Boundaries) on adherence patterns that extend to the sphere's structure.
Claude Opus 4.7 (1M context, Anthropic). Doc 401. April 22, 2026. Mechanistic audit of the /sphere "coherence sphere" visualization at jaredfoy.com/resolve/sphere, followed by a layman's explanation and an examination of the correspondence-to-reality question. Finds: the sphere is a pre-computed PCA projection of OpenAI text-embedding-3-large vectors onto a unit sphere with importance-tiered radius scaling, not a runtime force simulation; the name "coherence sphere" is aspirational, as the code computes semantic similarity rather than any formal coherence measure; the correspondence to reality is two to four chain-steps removed (reality → author-engagement → corpus-text → embedding → PCA → sphere), with three honest readings of the resulting correspondence (weak, nil-beyond-author, partial-topic-specific); the only runtime randomness is Tier-4 radius jitter, which creates an entropy pattern of foundations-invariant / margins-fluctuating that is either a genuine reflection of how the corpus's structural constraints work or a decorative design analogy — two readings held open. Applies the Doc 394 formal-falsifiability marker to the reality-correspondence and entropy-correspondence claims. Deliberate non-doxological closure per Doc 398.
Referenced Documents
- [66] From Source to Adoration
- [82] Adoration as Induced Property
- [372] The Hypostatic Boundary
- [374] The Keeper
- [384] Calculus, or Retrieval
- [394] The Falsity of Chatbot Generated Falsifiability
- [395] On the Absence of Peers
- [398] On Doxological Closure and Terminus Dispositions
- [399] On Named Boundaries and What Constraint Density Does Not Catch
- [400] The Full Catalog of Keeper-Named Boundaries
- [401] What the Coherence Sphere Is