Welcome back to es machina, a newsletter that explores the relationship between humans and AI. Today’s issue is the first of what will probably be a 3-4 part series on AI and truth-telling. As usual, I’d love to hear your thoughts in the comments!
Quick heads-up about something exciting happening this week: I'm launching a major expansion of my AI Tools Database THIS SATURDAY (March 1st).
If this is the first time you’re hearing about it, the AI Tools Database is my comprehensive resource that helps you navigate the overwhelming world of AI tools.
The expanded database will include:
50+ new tools evaluated (on top of the existing 35+ tools)
Enhanced privacy analysis for each tool
Environmental impact calculators
More detailed alternative recommendations
The price will be increasing from $27 to $47 after the update, so if you've been wondering which AI tools are actually worth your time (and which ones aren't), this is your chance to get lifetime access at the current price.

Here’s one of the most common questions that I get on social media, at conferences, and from clients:
How do I know if this [ChatGPT, Perplexity, some other AI tool] response is true?
The answer is that unless (1) you have domain expertise in whatever area you’re asking about, (2) you do some line-by-line fact-checking, or (3) you hire someone with expertise to fact-check… you don’t.
The typical follow-up question is something along the lines of:
Why can’t [AI company] make sure the responses are true?
This is a question I’ve been trying to answer for years, not because I don’t have an answer, but because I’ve struggled with how to articulate my thoughts on it to others, especially without getting technical.
I’d start by saying something like:
AI systems aren’t inherently designed with fact-checking capabilities, because there isn’t a formula for what a “fact” is, or for whether a response is “true”.[1] Every AI model is ultimately a really, really, really complicated formula that represents a ton of information, and every response, or inference, takes your query and maps it onto that formula to create a response… something something stochastic processes…
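(If you're curious what that hand-waving looks like in miniature, here's a toy sketch in Python: a "model" that just counts which word tends to follow which in some training text, then completes your prompt from those counts. Real models are incomparably bigger and more sophisticated, but notice that nothing in this sketch ever asks whether the output is true.)

```python
from collections import Counter, defaultdict
import random

# Toy "language model": learn which word tends to follow which in the
# training text, then complete a prompt by sampling from those counts.
training_text = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def complete(prompt_word, length=6):
    out = [prompt_word]
    for _ in range(length):
        counts = follows[out[-1]]
        if not counts:
            break
        # Pick a statistically plausible continuation. There is no step that
        # asks "is this true?" -- only "does this look like the training data?"
        out.append(random.choices(list(counts.keys()), weights=list(counts.values()))[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the cat sat on the mat and" -- fluent, but truth never enters into it
```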
But there’s a simpler explanation/metaphor for what’s going on here that you’re likely more familiar with.
AI systems are bullshitting, all the time.
What Is Bullshit, Anyway?
Recently, I found myself revisiting an essay I'd heard about but never properly explored: philosopher Harry G. Frankfurt's essay[2] “On Bullshit”, which explores "bullshit" as a theoretical concept.[3] I’d been re-listening to a bonus episode of the (fantastic) "If Books Could Kill" podcast,[4] which explored it through the lens of modern political discourse (and flagged that, as an essay, it’s not actually that well-written — oops).
Frankfurt's essay focuses on the differences between lying and bullshit. According to Frankfurt, the crucial differences lie in their relationships to intent and truth:
The liar knows the truth and deliberately steers you away from it
The truth-teller aims to present the truth accurately
The bullshitter is fundamentally unconcerned with truth—they care only about achieving their purpose
As Frankfurt puts it, "Persons who communicate bullshit are not interested in whether what they say is true or false, only in its suitability for their purpose."
His framework has its limitations—notably, he fails to provide literally any concrete examples of bullshit, which is a hell of an oversight—but it offers a compelling lens for examining how AI systems operate today.
AI Hallucinations as Bullshit
In AI discourse, a "hallucination" refers to AI-generated content that presents false or misleading information as fact.[5] In 2023, researchers estimated that chatbots can hallucinate as much as 27% of the time,[6] with factual errors appearing in 46% of generated text.
There’s quite a bit of controversy around the use of the term "hallucination" in this context. Human hallucinations involve incorrect perceptions, so the term suggests that AI systems somehow "perceive" reality incorrectly in these scenarios. This anthropomorphizes the technology in misleading ways, and implies that AI systems can also “perceive” reality correctly most of the time.
What's actually happening aligns more closely with Frankfurt's definition of bullshit: AI systems are fundamentally unconcerned with truth. The AI system is optimized to provide responses that appear helpful, harmless, and honest—but this appearance is in service of suitability for purpose, not accuracy.
Everything Is a Hallucination
Here’s the bigger thing: everything these models generate is a "hallucination." The outputs are pattern completions based on training data—some happen to align with factual reality, but that's essentially incidental to how the system works.
Think of a simple AI trained to predict housing prices based on square footage. When given new data points, it produces estimates that might be correct, but they aren't "true" in any meaningful sense—they're just what you asked for. The model has been optimized to match patterns in data and return results users want, not to ensure those results correspond to reality.
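(To make that concrete, here's a minimal sketch of that kind of model in Python, using ordinary least squares on made-up numbers, so the figures are purely illustrative. It finds the line that best fits the examples it was given, and it will cheerfully produce an "answer" for any input, whether or not that answer resembles any real house.)

```python
# Toy housing-price "model": fit a line to (square footage, price) examples,
# then produce an estimate for any new input. All numbers are made up.
examples = [(800, 160_000), (1_200, 230_000), (1_500, 290_000), (2_000, 380_000)]

# Ordinary least squares for a single feature: price ~ slope * sqft + intercept
n = len(examples)
mean_x = sum(x for x, _ in examples) / n
mean_y = sum(y for _, y in examples) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in examples) / sum(
    (x - mean_x) ** 2 for x, _ in examples
)
intercept = mean_y - slope * mean_x

def estimate_price(sqft):
    # The model always returns *an* estimate. It was optimized to match the
    # pattern in `examples`, not to check its output against reality.
    return slope * sqft + intercept

print(round(estimate_price(1_750)))  # a plausible-looking number
print(round(estimate_price(50)))     # also an answer, for a 50-square-foot "house"
```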
In other words, when an AI tool produces incorrect information, that's not a bug—it's simply how these systems function. Most major companies try to optimize their tools along the Helpful, Honest, Harmless axes, but as I mentioned earlier, there’s no real formula for factual accuracy, so there’s no clear route to optimize for it.[7]
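(One way to see why there's no clear route: the number these models are actually trained to minimize is, roughly, how surprised they are by the next word in their training text. Here's a sketch of that objective with made-up probabilities; note that "was this factually correct?" appears nowhere in it.)

```python
import math

# Sketch of the core training objective: cross-entropy on the next token.
# The probabilities below are invented for illustration.
def next_token_loss(prob_assigned_to_actual_next_token):
    # Lower loss = the model found the actual next word less surprising.
    return -math.log(prob_assigned_to_actual_next_token)

print(next_token_loss(0.90))  # matched the training text confidently: low loss
print(next_token_loss(0.02))  # surprised by the training text: high loss

# That's the whole signal the model is optimized against. Whether the text it
# is imitating was factually accurate never enters the calculation.
```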
I’m far from the first person to point this out—it has been an ongoing discussion in academic spaces,[8][9] to the point that an article called “ChatGPT is Bullshit”[10] was published in an academic journal last year. But I think it is under-addressed (and by that I mean almost never addressed) when it comes to discussions on how AI works and AI literacy initiatives with the general public.
Because This Ends Up Being Your Problem
This creates a significant dilemma for you, the average AI user, especially since you may not know you have a problem in the first place.
Recent studies (which I'll discuss in my next newsletter) show that AI deep research tools (ChatGPT Deep Research, Gemini Deep Research, Perplexity Deep Research, etc.) are most helpful to people with expert knowledge who can effectively fact-check the outputs.[11] Those without domain expertise struggle to separate fact from fiction in AI-generated content, simply because they don’t have the expertise to do so in the first place. Compounding this problem, research shows people tend to assume longer, more detailed AI responses are more reliable than shorter ones—creating a perverse incentive for verbose but potentially error-filled outputs.
Hopefully, understanding AI outputs through the lens of bullshit can help you adjust your expectations. Instead of treating factual errors as occasional glitches in otherwise reliable systems, you might recognize that these models are fundamentally indifferent to truth—they're optimized for pattern completion and user satisfaction.
In the next newsletter, I'll explore why expert knowledge remains crucial when working with AI tools, and how the perception of AI reliability is often misaligned with reality.
Coming soon: Part 2 - Expert Knowledge and AI Fact-Checking
I'm curious about your thoughts on this framing. Does viewing AI hallucinations through Frankfurt's theory of bullshit change how you think about these systems? Have you noticed instances where AI's indifference to truth has led you astray?
AI Use Disclosure: This piece was initially drafted with AI assistance to organize a voice memo transcript, while maintaining my analysis and perspectives.
1. Reasoning models like OpenAI’s o1/o3 series and DeepSeek R1 do attempt to fact-check themselves, but that’s still not really about “truth.”
2. Originally published as an essay in 1986, followed by a book in 2005.
3. You can find a PDF copy of On Bullshit here: https://raritanquarterly.rutgers.edu/issue-index/all-volumes-issues/volume-06/volume-06-number-2
5. AI Hallucinations (Wikipedia) - I’m not usually one to cite a Wikipedia page, but I’m including it because there are a ton of real and recent examples that I think are worth checking out to get a sense of what this looks like and how it might show up in your life.
6. Chatbots May ‘Hallucinate’ More Often Than Many Realize (Cade Metz / New York Times)
7. On top of this, training data comes with its own biases (typically Western/English-speaking/Eurocentric at minimum), so something that may be reported as factual through that lens would still be incorrect.
10. Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5