Happy Sunday! This week, we’re getting into the importance of AI literacy from an angle that I hadn’t formally considered until I came across an article that, frankly, pissed me off. I’d love to hear your thoughts in the comments. And if you want to learn more about topics like this one, consider subscribing to stay informed.
Also, we have a Substack Chat now! I’ll be creating discussion threads this week for the questions that I posed to y’all at the end of the issue, so hop on over there for more info.
Sometimes academic research stops me in my tracks. Usually it happens with papers in my own field, but last week I encountered a study that left me both shocked and unsurprised – and has been nagging at me ever since.
The study, recently published as an abstract in EXPRESS and titled "Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity," aimed to understand the factors that make a consumer more or less likely to be interested in AI. I came across it via WIRED, which had republished the authors' original piece in The Conversation.
Its conclusion? “… people with lower AI literacy are more likely to perceive AI as magical and experience feelings of awe in the face of AI’s execution of tasks that seem to require uniquely human attributes.”
That finding itself wasn’t particularly surprising, although it does somewhat contradict other research in this space. What was surprising was the authors' recommendation: companies might benefit from targeting customers with lower AI literacy, and efforts to demystify AI should perhaps be avoided since they might reduce its appeal.
In other words, keep people in the dark – it's good for business.
How AI Literacy Shapes Engagement
Or is it? It depends on what you mean by “good for business.”
A recent case study of a materials science company found that AI integration significantly boosted productivity – output increased by over 40% after implementation. However, employees' reactions varied dramatically based on their roles and their understanding of the technology. Lab technicians, whose work involved implementing AI suggestions, were relatively neutral and adapted quickly to the new tools. But research scientists, who had higher levels of technical literacy and whose creative work was being partially automated, reported steep declines in satisfaction with the content of their work (-44%) and in overall wellbeing (-82%), even as they welcomed the clear productivity gains.
Another study focusing on AI education for Black students found that increased understanding led to more cautious and critical perspectives on AI technology. The researchers tracked over 200 Black high school students through a year-long AI literacy program, documenting how their perceptions evolved. The more students learned about the technical underpinnings of AI, the more carefully they considered its implications for their lives and communities. By the end of the program, 78% of participants reported being more selective about which AI tools they would use and how they would use them, with many expressing particular concern about facial recognition technologies and automated decision-making systems that could affect their communities.
Anecdotally, I’ve seen this reflected in my own experiences in AI education and consulting. In 2019, I gave a TEDx talk about the importance of AI literacy. Since then, I've worked with countless individuals and organizations to help them understand AI systems and their implications. My goal in this work is rarely to impose my own values and boundaries around AI use (which are always evolving) on others, but to make sure people are informed enough to build their own frameworks and boundaries based on their own values. Yet almost invariably, increased understanding leads to more mindful, intentional, and often more limited use of AI tools.
The pattern is clear: when people truly understand AI's capabilities and limitations, they tend to make more informed choices about its role in their lives. Sometimes this means using AI more effectively; often it means using it less.
Based on the study we started with, that would be bad for business. It would logically follow that if your goal is to get more consumers to buy your AI product, exploiting AI illiteracy is a solid place to start.
Alternatively, you could just make a product that doesn’t suck.
Why Lie When You Could Just Address Actual Problems
Let’s put the ethical ramifications aside for a moment.
Many AI companies seem to struggle with a fundamental problem – their products don't actually solve real problems. But if you can maintain an aura of technological magic, you don't have to prove your product's value. The mystique becomes the selling point.
(You’d think that this approach would be a get-rich-quick strategy, but in the AI startup space, I’ve gotten the impression that it has more to do with maintaining willful ignorance of one’s own lack of skill at “innovating.” After all, if you were the tech genius you want to think you are, you wouldn’t need to exploit someone’s ignorance to get them to buy your product – it would be worth buying.)
This strategy isn't new. Tech companies have long benefited from users not fully understanding their products. But there's something particularly troubling about actively recommending less education to maintain profitability. The paper suggests "balancing" AI literacy with marketing needs. But let's be clear about what this means: deliberately keeping people less informed to make products more appealing. It's a strategy that prioritizes adoption over understanding, growth over empowerment.
This approach might boost short-term adoption rates, but it raises serious ethical concerns. Users who adopt AI tools without understanding them are more likely to misuse them, face unexpected problems, and ultimately become disillusioned. On a practical level, it's also likely to backfire, as we’ve seen with several consumer AI products (Rabbit R1, Friend, Humane AI Pin) in the last year.
Of course, there's an alternative approach: embrace education and transparency while building truly useful products. When your AI tool solves real problems, you don't need to rely on mystique for adoption. This might mean slower growth and more modest claims. It might mean users saying "no" to products that don't serve their needs. But it also means building something sustainable – products that people use because they understand and value them, not because they're enchanted by them.
(It also increases the likelihood that you’ll actually make money. So, there’s that.)
I'm curious about your experiences with AI education. Have you found that understanding AI better has made you more or less likely to use it? And how do you feel about companies potentially targeting users based on their level of AI literacy?
AI Use Disclosure: This piece was drafted with AI assistance, in the form of voice note transcription and rough organization of raw transcripts for the initial draft.
P.S. Curious if AI is worth your time? I offer 1:1 sessions designed to help you figure out how (or if) AI fits into your life. Whether you’re exploring tools like ChatGPT, need help simplifying your daily workflows, or want to align AI with your personal values, I’ll guide you to make informed, intentional choices. Book a free discovery call here to get started.