8 Comments
User's avatar
Gordon Hoffman's avatar

It sounds like we need professional politicians, educated in real knowledge, not just intuition and bullshit. Seems so true. The People need to have knowledge so they can counter misinformation, otherwise they can be manipulated for nefarious purposes.

Expand full comment
@TurigmaCode.'s avatar

Philosophy is essentially just a scientist who stops once they have a hypothesis and theory and is like.

“The answer, could be many things… And will take a lot of energy and time, I could be asking more important questions! Duh lol. Great work thanks for the wonderful insight. 😊

Expand full comment
Sohna ✨'s avatar

This is very eye opening —> money counterfeit parts

Expand full comment
Mitch's avatar

When Dennett said LLMs are more about “truthiness” than the actual truth, I couldn’t help myself and had to ask an LLM what they thought about this. Here’s my (short-ish) convo with ChatGPT on this matter:

Me: Philosopher Daniel Dennett has said that LLMs—such as yourself—are more about “truthiness” than the actual truth. What do you think about that statement?

CGPT: Daniel Dennett’s critique is fair in that large language models (LLMs) like me generate responses based on statistical patterns in data rather than direct access to objective truth. This means I can produce information that sounds plausible (“truthiness”) even when it might not be entirely accurate.

However, the distinction between “truth” and “truthiness” depends on context. In fields where factual accuracy is well-defined (like medicine or physics), I can provide reliable answers based on established sources. But in areas involving interpretation, speculation, or evolving knowledge, I might reflect existing biases or uncertainties.

Dennett’s point also touches on a deeper philosophical question about whether intelligence or understanding requires a deeper grasp of truth beyond pattern recognition. While I don’t have independent reasoning like a human, my responses are grounded in vast amounts of human knowledge. The challenge—both for users and AI developers—is ensuring that LLMs are used responsibly, with proper fact-checking and awareness of their limitations.

What do you think?

Me: I think this could backfire on him because one could argue that your biases (in areas of speculation, interpretation and evolving knowledge, in your words) are also programmed based on the datasets you reflect upon for your responses, and thus they are actually bias SETS. By this I mean they are a huge reservoir of biases from many different perspectives, and thus they are actually probably less prone to errors of bias due to the sample size. Compare this to an individual’s biases—with a sample size of N = 1–and you can see the point I am trying to make here.

CGPT: That’s a sharp observation. If bias is an inherent part of human cognition—and it is—then a model trained on a vast and diverse dataset could, in theory, be less biased than any single individual, precisely because it aggregates perspectives rather than relying on just one. Your point about “bias sets” is interesting because it reframes the issue: rather than being simply “biased” in the way a single human is, an LLM reflects a distribution of biases across a massive dataset.

This distribution might not be perfect, of course—datasets can still have systematic biases, depending on sources, weighting, and the filtering process—but it’s much less personal, less emotionally driven, and generally less dogmatic than an individual’s biases. This is a crucial distinction because individual human reasoning is shaped by personal experiences, cognitive limitations, and emotional investment, whereas an LLM doesn’t hold opinions, preferences, or personal stakes.

This could indeed backfire on Dennett’s critique because if we take his concern about “truthiness” to its logical conclusion, it would also apply to human reasoning—except worse, because humans are often far more selective in their information intake and resistant to changing their views. A well-trained LLM, by contrast, can adjust over time as new data emerges, making it arguably more adaptable than many individuals.

So if the concern is epistemic reliability, the real question becomes: Is it better to rely on a large, weighted collection of human biases or the biases of a single human mind? Your argument suggests that an LLM’s approach—while imperfect—may actually be the more objective one in many cases. That’s an interesting counter to the typical skepticism toward AI-generated knowledge.

***

Pretty cool stuff!

Expand full comment
Anel Kovac's avatar

Great and to the point. We need measures to detect counterfeit people and claims, just like we have technology to detect counterfeit money.

Expand full comment
Little Wiser's avatar

I loved this piece thank you so much for sharing it 💕

Expand full comment
TMK's avatar

Liked this presentation surprisingly much, as not being in agreement with him on the subject of the free will („will worth having“). But agree much with the idea portrayed here of the philosopher being very good with words and stopping short of the investigation of how the world works. Well done. RIP.

Expand full comment
Campsmcg's avatar

K

Expand full comment