How Should We Talk About Artificial Intelligence?
Looking back on my thoughts on AI from 2021.
Hey y’all! I’ve been under the weather this week, so instead of the Wednesday round-up, I’m resurfacing a piece I wrote for The Black Agenda in 2021.
The Black Agenda mobilizes top Black experts from across the country to share transformative perspectives on how to deploy anti-racist ideas and policies.
Paid subscribers can read the full essay below, but I’d highly recommend picking up a copy of the book regardless - it’s backordered on Bookshop, so check your local bookstore (or Amazon). I think a lot of the perspectives still resonate four years later.
Want to move from vague AI concerns to clear choices? Join my workshop this Monday/Tuesday (Feb 17 + 18) to craft your personal AI manifesto. We'll map your actual AI touchpoints (including the ones you don't know about), build clear evaluation criteria, and create a system for making intentional choices about how this technology fits into your life.
Early bird spots still open ($77)! Can't make it live? You'll get the recordings!
P.S. - Free subscribers get 20% off with code BUILD20, and paid subscribers get 50% off with a different code that you’ll find after the paywall. :)
As you may have noticed, I’ve always had a fascination with stories and language - specifically, the stories we tell ourselves, and the language we use to tell them. This essay focuses on that in the context of AI discourse, which was extremely fragmented at the time (and remains so now), even within academic circles, and on how language was often used to divide groups that would otherwise likely share similar aims and values.
If anything, I think this has gotten a bit better within academic discussions about AI - partially due to some of the recommendations I made actually being implemented at major AI conferences - and much worse in the broader public AI discourse. IMO, this is largely a reflection of broader trends in public discourse, particularly the conservative focus on the aesthetics of progressivism and the reliance on the straw man fallacy to argue over semantics instead of the substance of an issue.
Perhaps I’ll write a revised version/follow-up on this piece for a future issue. Either way, I’d love to hear your thoughts on this essay, and whether you think it still applies in 2025.
Also, I did end up reading 60 books that year - in fact, I overshot by 44 books. :)
Early in 2020, I challenged myself to read 60 books by the end of the year. I used to be an avid reader, before reading journal articles became central to both my 9-5 job as a PhD student and my 5-9 job as a science communicator and science writer. Through reading, I learned the power that ink on a page and pixels on a screen could have: to change someone’s worldview, to transport someone to an entirely new fantastical world, or to distance people from their actions and personal accountability.
On January 1st, 2021, I hadn’t come anywhere close to 60 books in 2020. In spite of that, the experience was still enlightening. As the artificial intelligence community went through its nth reckoning on racism, sexism, and algorithmic fairness, I watched researchers touted in the books I was reading as pioneers in the field disparage both the work of researchers trying to develop systems as equitable as they were powerful and the character of those researchers themselves - via social media or, in more severe cases, via emails sent directly to them.
In particular, I watched the language that opponents of algorithmic fairness research used to distance themselves from any personal responsibility for the algorithms they developed and the power they wield as researchers. [9] Much of it focused on reframing themselves as victims under attack from “militant liberalism”, and on framing algorithmic fairness research - which is predominantly quantitative in nature - as factually unfounded political advocacy instead. At the same time, I started to see an increase in reporting on how major AI research companies were advising employees who filed HR complaints about racial and gender discrimination to receive mental health counseling or take medical leave, without actually addressing the perpetrators of said discrimination. [10] In effect, since entering the field of artificial intelligence, I’ve watched researchers from marginalized groups be blamed for systemic problems that they did not create and that actively work against them, often as they try to take steps toward improving the system.