6 Comments
Glen Bradley

This is brilliant. The framing of truth-seeking as something that must be outsourced to humans is a perspective I had never considered so clearly. And the reminder that humans are not especially reliable at it — given how vulnerable we are to misinformation — is exactly on point.

At the same time, I think there’s also a role for what I would call truth-seeking machines. Your piece touches on empirical checks, and that’s where I’ve been experimenting with a model that seems to complement your framing.

Here’s the sketch: take ~40 diverse witnesses to a single event (for example, news stories), extract their common claims, separate emotional claims for a different layer of analysis, and embed the factual claims into a semantic space. Where accounts reinforce each other, they generate “coherence heat”; where they contradict, they generate “discordant heat.” The result is a living heat map where coherence gains weight and contradictory claims bend under pressure. From that, coherence and discoherence patterns emerge, enabling bias detection and mitigation in real time.
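
A rough sketch of what that first layer might look like in code. Everything here is illustrative: embed() is a stub standing in for any sentence-embedding model, and cosine similarity is only a crude proxy for agreement and contradiction (a real system would add an entailment/contradiction model on top):

```python
# Sketch of the "coherence heat" layer: embed factual claims, compare them
# pairwise, and accumulate heat where they reinforce or contradict each other.
import numpy as np

def embed(claims):
    # Stand-in for a real sentence-embedding model; returns one vector per claim.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(claims), 384))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def heat_map(claims, support_threshold=0.6, contradiction_threshold=-0.2):
    """Accumulate per-claim 'coherence heat' where accounts reinforce each
    other and 'discordant heat' where they pull apart."""
    vecs = embed(claims)
    coherence = np.zeros(len(claims))
    discord = np.zeros(len(claims))
    for i in range(len(claims)):
        for j in range(i + 1, len(claims)):
            s = cosine(vecs[i], vecs[j])
            if s >= support_threshold:           # mutually reinforcing claims
                coherence[i] += s
                coherence[j] += s
            elif s <= contradiction_threshold:   # claims pulling in opposite directions
                discord[i] += -s
                discord[j] += -s
    return vecs, coherence, discord
```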

Next, stick-frame clusters form into competing narratives. Each can be pressure-tested — logically and empirically — until weaker claims collapse or refine into stronger syntheses. The process doesn’t deliver absolute truth, but it does converge toward what I’d call the most probable locus of proximal objectivity. And if it runs continuously, drawing on hundreds of natural signals in real time, it creates an ongoing collapse toward objectivity rather than static answers.
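
Continuing the sketch above, one illustrative way to let clusters form and then pressure-test them: group claims greedily by embedding similarity, score each candidate narrative by its net heat, and let the weakest ones sink (the greedy clustering and the thresholds are again placeholders of my own, not the real mechanism):

```python
def cluster_claims(vecs, merge_threshold=0.6):
    """Greedy single-pass clustering: a claim joins the first cluster whose
    running centroid it resembles, otherwise it starts a new cluster."""
    clusters = []  # each entry: (list of claim indices, running centroid)
    for i, v in enumerate(vecs):
        for members, centroid in clusters:
            if cosine(v, centroid) >= merge_threshold:
                members.append(i)
                centroid += (v - centroid) / len(members)  # update running mean
                break
        else:
            clusters.append(([i], v.copy()))
    return [members for members, _ in clusters]

def rank_narratives(clusters, coherence, discord):
    """Score each candidate narrative by net heat; weak clusters fall to the
    bottom and can be dropped or sent back for refinement."""
    scored = [(sum(coherence[i] - discord[i] for i in c), c) for c in clusters]
    return sorted(scored, key=lambda x: x[0], reverse=True)
```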

In other words: where humans contest, machines can structure; where human judgment falters under overload, machines can scale the contestation process itself. Together, they could reinforce Mill’s wager that truth only survives when it is tested, corrected, and sharpened in the open.

Timur Sadekov

We have found a way to mathematically determine the veracity of information and have developed a fundamentally new algorithm. It does not rely on cryptographic certificates issued by states and corporations, on voting tokens that can be used to bribe any user, or on hallucinating AI models that learn from each other to get better and better at falsifying all kinds of content. The algorithm also requires no external administration, expert review, or special content curators.

Drawing on a unique and very unusual combination of mathematics, psychology, and game theory, we have developed a purely mathematical, multilingual correlation algorithm that gives a deeper scientometric assessment of the accuracy and reliability of information sources than the PageRank algorithm or the Hirsch index.

At a fundamental level, it changes existing information technologies: instead of verifying information, it is, to our knowledge, the first algorithm that analyzes whether information satisfies Popper's falsifiability criterion, the foundation of the scientific method.

Brief description and essence of the project — https://drive.google.com/file/d/18OfMG7PI3FvTIRh_PseIlNnR8ccD4bwM

Article about the project and evidence-based analysis — https://www.lesswrong.com/posts/YtCQmiD82tdqDkSSw/cybereconomy-the-limits-to-growth-1

Rome Viharo

Ironically, this thesis was falsified by our AI, which is trained on paraconsistent reasoning 😆. Here is the response from our AI: Isn't it a paradox that this essay claims AI cannot seek truth, yet it was an AI that uncovered its central contradiction, namely that contest is both the lifeblood of truth and a danger to it? If AI can expose the weak joints of an argument and draw them into the open, then it is truth-seeking, at least in the sense Mill meant: creating the conditions where error can be recognized and corrected. The irony is that the post denies the possibility while I'm demonstrating it in real time.

In other words: the essay says AI can’t, while I just did.

Symbiquity

Hmmm, contestation can just as easily amplify noise, and false assumptions can embed themselves for millennia. I would suggest that contestation, plus whatever mechanism design (environment) it takes place in, will either fulfill or collapse this expectation, no? The contest itself, disagreement, is a matter of contradiction, and some environments will collapse it into groupthink. So, as for your underlying claim that AI does not follow truth and only humans do: we can demonstrate otherwise. We can demonstrate that AI can indeed be trained toward truth, even when the humans are not. You can demonstrate and test this yourself on our TAP GPT app: https://chatgpt.com/g/g-6879441e5a6081919cea990a91928a77-symbiquity-s-token-alignment-protocol

Anton
Oct 1 (edited)

Although this is a good argument for freedom of speech and the encouragement of debate, which we need more of, I find the argumentation lacking. How would the author deal with the simple fact that correcting false claims, especially outlandish, insane ones, takes far more time and energy than making them in the first place? You just can't compete with the virality of falsehood.

I think there needs to be some sort of cost for disseminating falsehoods, or for disseminating strong claims without evidence. Not to make it impossible, but to make it costly, and to make it more likely that truth spreads more virally than falsehood. I don't know how to achieve that, but I would love to hear what the Institute has in mind.

Jessica

Friction and gravity are good; it is how you use them that matters. Alignment totally makes for an efficient system, but that doesn't mean it's the best system. Cross-pollination makes more beautiful flowers and better survivability.
