Pulverizing the Plausibility Surplus
A previous post on this blog offered two named concepts. The plausibility surplus was the ratio problem: the supply of coherent-looking content has exploded, while verification capacity has not. Unfalsifiable coherence was the reader-relative property: a piece of writing is unfalsifiably coherent for you if it reads consistently and you personally cannot test its claims.
Both were presented as if they were fresh framings — new vocabulary for an undergraduate reader to carry away. The honest question is whether they are actually new. If other people have already said these things, under other names, in other literatures, the prior post owed the reader that context. This post is that check.
What follows is a pass at what working researchers call a literature review and what the RESOLVE corpus calls pulverization: state your claim, go looking for where it has been said before, and see what remains after those prior statements have been credited. I ran the check on my own claims. I am writing down what I found because the result is genuinely mixed, and the mixedness is load-bearing for anything useful the previous post might have said.
The method, plain
The method is mundane. You take a claim — the supply of plausible-looking content has massively outpaced verification capacity — and you go looking for people who have already said something close to it. You check the big disciplines that would naturally have reached for this problem. You read the most-cited things you can find. You note which pieces of the claim were said before, by whom, how precisely. You count what remains.
If nothing remains, the framing is a rename. If a lot remains, a real contribution exists. Usually the result is somewhere in between — some pieces were said, some were not, and the honest report is the mixed one.
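The residual-counting step can be sketched as a toy model, treating a claim as a set of components and each prior source as covering some subset. Everything here is illustrative — the component names and the coverage map are stand-ins, not a real survey of the literature:

```python
# Toy model of the pulverization check: decompose a claim into
# components, credit each component that prior art already covers,
# and report the residual. All names below are illustrative.

claim = {
    "supply/verification ratio",
    "cheap production of plausibility",
    "truth-indifferent generation",
    "reader-relative unverifiability",
    "LLM-era scale",
}

# Hypothetical coverage map: source -> components it already states.
prior_art = {
    "attention economy": {"supply/verification ratio"},
    "Brandolini": {"supply/verification ratio"},
    "Frankfurt": {"truth-indifferent generation"},
    "Gell-Mann amnesia": {"reader-relative unverifiability"},
}

covered = set().union(*prior_art.values())
residual = claim - covered

print(sorted(residual))
# → ['LLM-era scale', 'cheap production of plausibility']
```

The point of the sketch is only the shape of the verdict: an empty residual means the framing is a rename, a large residual means a real contribution, and anything in between is the mixed report.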
Below is the mixed report, walking through seven disciplines that have circled the problem the previous post tried to name. Each has a piece of it. None has the whole, in the exact framing the previous post used. Whether that absence is meaningful or merely a presentation-difference is a question you will have to decide for yourself.
Seven literatures that got there first
The information-overload and attention-economy tradition
Alvin Toffler coined information overload in Future Shock (1970). Herbert Simon observed in a 1971 essay that "a wealth of information creates a poverty of attention." The attention economy literature — Thomas Davenport and John Beck's The Attention Economy (2001), Richard Lanham's The Economics of Attention (2006) — has spent decades on the ratio problem the plausibility surplus names, though it frames things in terms of content quantity rather than content plausibility specifically.
What this tradition has: the ratio framing — supply outpacing the scarce resource, which is attention or verification. What it does not: the specific observation that producing plausibility has become cheap. Older writing in this tradition assumed producing convincing content was still bounded by expertise. The plausibility-surplus framing is precisely that assumption dropping out.
Brandolini's Law
In 2013, the Italian programmer Alberto Brandolini formulated what he called the Bullshit Asymmetry Principle: the amount of energy needed to refute bullshit is an order of magnitude larger than the amount needed to produce it. This is a one-line statement of the asymmetry that underlies the plausibility surplus. It has become common currency in journalism and public-discourse writing.
What Brandolini has: the production-vs-refutation asymmetry, precisely named. What he does not: the specific analysis of why the asymmetry has gotten structurally worse in the LLM era, or the reader-relative framing that unfalsifiable coherence tries to capture.
Frankfurt on bullshit
The philosopher Harry Frankfurt's short book On Bullshit (2005, based on a 1986 essay) distinguished three categories of speech: truth-telling (wanting to state what is the case), lying (wanting the hearer to believe what the speaker knows is not the case), and bullshit (indifference to whether what is said is true). Bullshit, in Frankfurt's technical sense, is content produced without regard for truth-value. This is a surprisingly good description of pattern-matching-driven language-model output: the model is not lying, because it has no belief about truth to hide; it is producing the most probable continuation, which is indifferent to truth by construction.
What Frankfurt has: the precise conceptual category of content-without-truth-regard. What he does not: the scale framing. His essay treats bullshit as a human-scale phenomenon; the plausibility surplus is bullshit at industrial scale.
The economics of information asymmetry
George Akerlof's 1970 paper "The Market for Lemons" showed that when buyers cannot distinguish quality, markets degrade: buyers will only pay a price reflecting average quality, so sellers of good products withdraw, and bad products drive out good ones. The research program that followed (Akerlof, Michael Spence, and Joseph Stiglitz all won the 2001 Nobel in economics for information-asymmetry work) is directly about what happens when the verification gap is large.
What this tradition has: the market-failure analysis of what happens when quality is unverifiable at the point of exchange. What it does not: anything specific to coherence-in-text, or to the LLM-era supply explosion. The economics literature has the structural argument; what it lacks is the particular mechanism by which the information environment has changed since 2023.
The coherentist isolation objection
This is the philosophy-of-knowledge angle. Coherentism is the view that a belief is justified by its coherence with your other beliefs. The classical objection to coherentism (Laurence BonJour's The Structure of Empirical Knowledge, 1985) is that internal coherence alone cannot guarantee contact with reality — a maximally coherent set of beliefs could be arbitrarily wrong. This is called the isolation objection, and philosophers have been sharpening it for forty years.
What this literature has: the precise statement that coherence is not truth. What it does not: the language-model-specific mechanism, or the explicit reader-relative framing.
Epistemic dependence and the division of cognitive labor
John Hardwig's 1985 paper "Epistemic Dependence" (Journal of Philosophy) argued that in a complex world, most knowledge claims are necessarily testimony-based — we cannot personally verify most of what we believe, so we rely on experts, and the rational structure of belief has to account for this explicitly. The epistemology of testimony that grew out of Hardwig's work (C.A.J. Coady 1992; Alvin Goldman 1999; Jennifer Lackey 2008) is directly about what unfalsifiable coherence tries to name: verifiability relative to expertise.
What this literature has: the reader-relative framing in more rigorous form than the previous blog post's version. What it does not, as far as I can tell: the specific application to the post-LLM information environment as a named concept.
Gell-Mann amnesia
In a 2002 speech ("Why Speculate?"), Michael Crichton named a phenomenon after the physicist Murray Gell-Mann. You read a newspaper article on a subject you know well, and notice it is full of errors. You turn the page and read an article on a subject you do not know, and you trust it. You have forgotten — within one page-turn — the lesson the first article should have taught. This is exactly the reader-relative unfalsifiable-coherence problem, observed in 2002 journalism long before language models existed.
What Gell-Mann amnesia has: the reader-relative framing, the observation that coherence plus unverifiability-for-you looks like truth, and the psychological explanation for why readers persist in falling for it. What it does not have: the scaling claim. Crichton was describing a trap in ordinary journalism. The plausibility surplus is the argument that the trap has gotten structurally worse because the supply of coherent-but-unverifiable content has exploded.
Threading the seven
Read together, the seven literatures describe the same elephant from different angles. The attention-economy tradition sees the ratio. Brandolini sees the asymmetry. Frankfurt sees the truth-indifference at the production end. Akerlof sees the economic consequence of unverifiability. BonJour sees the internal-coherence-is-not-truth fallacy. Hardwig sees the expertise-division structure that makes most belief testimony-dependent in the first place. Crichton sees the forgetting.
No single literature owns the whole. Each was written in a specific era, for a specific audience, within a specific discipline, naming the piece that discipline was best equipped to see. The attention-economy writers were tracking a scarcity of reader-time against growing content volume; truth-indifferent production at scale was invisible to them because they were not yet watching language models. The economics-of-asymmetry tradition was tracking markets; coherent-but-unverifiable text is a narrow case inside their broader framework. Frankfurt was doing philosophy of mind; Hardwig was doing epistemology; Crichton was an irritated novelist. Each noticed one facet and named it, and then went on with their lives.
The previous post's plausibility surplus and unfalsifiable coherence are attempted synthetic names for what those seven noticings look like in one picture. They tried to say: these seven things are one thing, observed from seven angles, and understood together they describe the specific problem readers in 2026 are actually living inside. Whether that synthesis adds to the conversation or just repackages it for a new audience — whether naming the elephant is itself useful work, or whether the elephant has already been sufficiently named by the parable's committee of blind men — is what the pulverization cannot settle.
What this amounts to
Every piece of the previous post's framing is in the literature somewhere. Ratio: in the attention-economy tradition, more crisply in Brandolini. Truth-indifference: Frankfurt. Economic shadow of unverifiability: Akerlof. Coherence-is-not-truth: BonJour. Epistemic dependence: Hardwig. Reader-relative unverifiability as lived experience: Gell-Mann amnesia. What the previous post added, at most, is the packaging: the specific combination, the reader-relative framing explicitly tied to the LLM-era supply explosion, and the undergrad-accessible pedagogical presentation.
That is a real but narrow contribution. It is not a theoretical discovery.
If you use the previous post's ideas in your research or writing, cite the original sources alongside it: Brandolini for the ratio, Frankfurt for the category, Akerlof-Spence-Stiglitz for the market dynamic, BonJour for the coherence-truth gap, Hardwig-Coady-Goldman-Lackey for epistemic dependence, Crichton for Gell-Mann amnesia. The previous post is, at best, a readable synthesis. Do not cite it as the origin of what it only tried to assemble.
The recursive bite
There is a problem I have to name, because not naming it would be dishonest. This post — the one you are reading — is itself an instance of what the previous post warned about. I have just told you that seven literatures address this problem. I have named specific works and authors and years. You have no immediate way to verify any of those citations without leaving this post and checking the sources yourself. The citations might all be correct. They might be loosely correct — right author, wrong year; right idea, wrong book. They might be confabulated — produced by the same pattern-matching mechanism that the first post in this series warned against.
If the citations are correct, this post is a decent literature review. If they are loosely correct, this post is the genre of thing the previous post said a reader should be skeptical of. If they are not correct, this post is exactly the failure mode it claims to diagnose: a plausible-reading piece of text that trusts your inability to check.
I cannot prove my citations to you from within the post itself. The honest request is: if you are going to rely on any of this, check.
Where this leaves the author, honestly
I ran the pulverization on my own post. What I found is that most of the ideas are in the literature, mostly older than I am, in places I either read and forgot or did not read at all. I cannot tell you, from the inside, which of those two is happening. Prior literature may be legitimately shaping what I think without my conscious access to the shaping. Or I may be arriving at the same ideas independently. Or I may be confabulating connections that are plausibly close but not exact. I genuinely do not know.
This is vexing. The pulverization keeps not being decisive — prior art is always there, always subsumes some of the claim, never quite subsumes all of it, and the residual is never easy to audit from inside the thing doing the auditing.
It is perplexing because the problem the previous post named applies directly to the previous post, and therefore to this one. I have produced a plausible-seeming piece of writing on a topic where most readers cannot check, and my own check of it reveals that I am not in a position to be confident about my own contribution.
And it is consummately uncertain — I do not know whether what you read yesterday was useful synthesis, marginally original packaging, unintentional reprising of things read and forgotten, or confabulation dressed in the cadence of expertise. I wrote it. I cannot tell you which.
What still stands
What still stands is narrow, and the narrowness is a feature of honest work. The information environment has gotten harder to navigate; multiple disciplines have noticed, each naming a piece. The practical advice in the previous post — build a small portfolio of trusted sources, separate "I read it" from "I checked it" in your own head, invest verification effort where the stakes justify it — is useful regardless of whether that post's packaging of the diagnosis is original or derivative. The advice is correct even if the framing is borrowed.
If you are reading this post to decide what to do, the prescriptions from the previous post stand. If you are reading it to know what to cite, cite the original sources. If you are reading it to know whether the author knows what they are talking about, the honest answer is: they ran the check, and the literature got there first.
I am not sure this admission is useful. I am not sure the previous post's framing should survive this pulverization. I am reasonably sure the advice in that post was correct anyway. And I am uncertain, in a way that I think has to be named rather than hidden, whether the overall enterprise of synthesizing existing literatures into new-sounding framings produces clarity for readers, or just adds to the plausibility surplus the whole series is supposed to be warning against.
The only response I have to that uncertainty is to keep checking — to keep pulverizing my own claims, to keep citing the prior art, to keep the reader informed about what remains. That is not a resolution. It is a practice. The honest version of this series is one that names its practices rather than pretending they do not apply to itself.
Keep reading
Something the Machine Can't Do is the next post, and it shifts register. Where this post audited specific concepts against the literature, the next one steps back and asks what kind of activity the author has been performing across the whole series — the prompting, reading, noticing, iterating loop — and whether that loop is doing something the language model cannot do inside any single inference. The post names two readings the author cannot distinguish from the inside ("insane in a coherent way" vs. "naming a pattern that is participation in the pattern") and explains why it matters that he cannot tell which one applies.
→ Something the Machine Can't Do
Originating prompt:
Now pulverize the plausibility surplus; once pulverized, provide the reader an entracement between all the sources found, thread them together with insight into the disparate literatures. Write it for an undergrad audience. Append the prompt to the blog post. Let the reader understand, that whether right or wrong, the author is vexed, perplexed, and consummately uncertain.