Discussion about this post

Glen Bradley

This is brilliant. The framing of truth-seeking as something that must be outsourced to humans is a perspective I had never considered so clearly. And the reminder that humans are not especially reliable at it — given how vulnerable we are to misinformation — is exactly on point.

At the same time, I think there’s also a role for what I would call truth-seeking machines. Your piece touches on empirical checks, and that’s where I’ve been experimenting with a model that seems to complement your framing.

Here’s the sketch: take ~40 diverse witnesses to a single event (for example, news stories), extract their common claims, separate emotional claims for a different layer of analysis, and embed the factual claims into a semantic space. Where accounts reinforce each other, they generate “coherence heat”; where they contradict, they generate “discordant heat.” The result is a living heat map where coherence gains weight and contradictory claims bend under pressure. From that, coherence and discoherence patterns emerge, enabling bias detection and mitigation in real time.
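A minimal sketch of what that heat map could look like in code, under stated assumptions: the comment doesn't specify an embedding model, a similarity threshold, or how contradictions are detected, so the `SentenceTransformer("all-MiniLM-L6-v2")` encoder, the `sim_threshold` value, and the `contradicts` callable below are illustrative stand-ins (the latter could be an NLI model's "contradiction" label).

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # any sentence encoder would do

def claim_heat_map(claims, contradicts, sim_threshold=0.6):
    """claims: list of factual-claim strings extracted from the witness accounts.
    contradicts: callable (claim_a, claim_b) -> bool, left abstract here because
    the original sketch does not pin it down.
    Returns the embeddings plus per-claim coherence and discordance scores."""
    model = SentenceTransformer("all-MiniLM-L6-v2")           # illustrative model choice
    emb = np.asarray(model.encode(claims))
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)    # unit-normalize
    sim = emb @ emb.T                                         # cosine similarity matrix

    coherence = np.zeros(len(claims))
    discordance = np.zeros(len(claims))
    for i in range(len(claims)):
        for j in range(len(claims)):
            if i == j or sim[i, j] < sim_threshold:
                continue                      # unrelated claims add no heat
            if contradicts(claims[i], claims[j]):
                discordance[i] += sim[i, j]   # related but conflicting: "discordant heat"
            else:
                coherence[i] += sim[i, j]     # related and reinforcing: "coherence heat"
    return emb, coherence, discordance
```

Each claim ends up with two running totals, so claims that many independent accounts reinforce accumulate coherence weight, while claims that sit close to contradicting ones accumulate discordant weight.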

Next, these stick-frame clusters form into competing narratives. Each can be pressure-tested — logically and empirically — until weaker claims collapse or refine into stronger syntheses. The process doesn’t deliver absolute truth, but it does converge toward what I’d call the most probable locus of proximal objectivity. And if it runs continuously, drawing on hundreds of natural signals in real time, it creates an ongoing collapse toward objectivity rather than static answers.
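Continuing the sketch above: one hedged way to form and rank the competing narratives is to cluster the claim embeddings and score each cluster by net heat. KMeans, the number of narratives, and the "support minus conflict" score are my illustrative choices, not part of the original description, which leaves the clustering method and the pressure test open.

```python
import numpy as np
from sklearn.cluster import KMeans

def rank_narratives(emb, coherence, discordance, n_narratives=4):
    """emb: (n_claims, d) matrix of claim embeddings from the previous sketch.
    coherence / discordance: per-claim heat scores.
    Returns cluster labels, a net-support score per candidate narrative, and the ranking."""
    labels = KMeans(n_clusters=n_narratives, n_init=10, random_state=0).fit_predict(emb)
    scores = {}
    for k in range(n_narratives):
        members = labels == k
        # The "pressure test" here is just aggregate support minus conflict; a fuller
        # version would re-check the member claims logically and empirically.
        scores[k] = float(coherence[members].sum() - discordance[members].sum())
    ranking = sorted(scores, key=scores.get, reverse=True)
    return labels, scores, ranking
```

Run continuously over fresh accounts, the lowest-ranked narratives would be the ones that "collapse" first, while the top of the ranking approximates the most probable locus of proximal objectivity described above.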

In other words: where humans contest, machines can structure; where human judgment falters under overload, machines can scale the contestation process itself. Together, they could reinforce Mill’s wager that truth only survives when it is tested, corrected, and sharpened in the open.

Timur Sadekov

We have found a way to assess the veracity of information mathematically and have developed a fundamentally new algorithm. It does not rely on cryptographic certificates issued by states or corporations, on voting tokens with which any user can be bribed, or on hallucinating AI algorithms that learn from one another to falsify content ever more convincingly. The algorithm requires no external administration, expert review, or special content curators.

We have found a unique and very unusual combination of mathematics, psychology, and game theory, and developed a purely mathematical, international, multilingual correlation algorithm that yields a deeper scientometric assessment of the accuracy and reliability of information sources than the PageRank algorithm or the Hirsch index.

At a fundamental level, it revolutionizes existing information technologies, shifting from verifying information to what we claim is the world's first algorithm for analyzing whether content satisfies Popper's falsifiability criterion, the foundation of the scientific method.

Brief description and essence of the project — https://drive.google.com/file/d/18OfMG7PI3FvTIRh_PseIlNnR8ccD4bwM

Article about the project and evidence-based analysis — https://www.lesswrong.com/posts/YtCQmiD82tdqDkSSw/cybereconomy-the-limits-to-growth-1

4 more comments...
