Quick Reminder: I'm running a two-session workshop this Tuesday, January 7th, and Thursday, January 9th, exploring how we evaluate AI systems built around neurotypical standards: from job interviews (like this one 👀) that penalize stimming to productivity tools that assume linear thinking. It's $47 for both sessions, and recordings will be available. Register here!
In the ever-evolving landscape of tech-driven recruitment, I recently found myself staring into the void of what could be the future of job interviews — an AI-powered video interview. And what caught my attention wasn't the technology itself, but the subtle ways it could be stacking the deck against neurodivergent candidates.
The setup was deceptively simple: upload your resume, then hit “Start” and chat with an AI system for 15 minutes about your qualifications. To be clear, my interviewer was not human. It wasn't even a digital avatar pretending to be human.
The first red flags appeared in the preparation instructions. "Maintain eye contact with the camera," they advised, "and remember that your facial expressions matter." As someone who's neurodivergent (or, as the kids say on TikTok, "neurospicy"), this immediately set off alarm bells.
These seemingly innocuous guidelines highlighted a critical oversight: not everyone processes social cues or expresses themselves in neurotypical ways. As someone who finds eye contact challenging—a common trait among neurodivergent individuals—I was already being set up for potential bias.
The interview itself felt like talking to Siri about my PhD work—if Siri had been tasked with evaluating my career potential. There was no avatar, no digital face to connect with, making the emphasis on eye contact feel particularly absurd. The AI fixated on my academic publications, drilling down into methodological choices while completely missing the technical skills that were supposedly crucial for the position.
The 15-minute time limit added another layer of anxiety to the process. The AI's questions kept coming until the clock ran out, leaving me uncertain whether I should rush my final answers or risk leaving questions unanswered. No feedback was provided, and I was mysteriously assigned to a project¹ that had little connection to anything we'd discussed.
Perhaps most concerning is the lack of opt-out options. The system was mandatory, with no apparent path to request a human interviewer. In an era where hiring teams actively complain about the increasing prevalence of copy-pasted, ChatGPT-generated cover letters and automated job application tools spamming them with obviously unqualified applicants, it seems paradoxical that companies are comfortable delegating their candidate evaluation to AI systems.
However, in speaking with hiring managers and scanning industry discussions, I was somewhat surprised (but happy) to see that I'm not alone in my skepticism. Many recruiters view these tools as a poor substitute for human interaction, arguing that if you can't make time to interview candidates properly, that's a fundamental organizational problem that AI can't fix.
You might be surprised to learn that tools like this emerged well before the COVID-19 pandemic — unlike AI-powered remote proctoring tools, most of which launched in mid-2020 for remote schooling. Towards the end of my undergraduate studies at Cornell (roughly 2017-18), the career services office for the College of Engineering began offering prep sessions for AI-powered video interviews. Most of my friends had to do at least a few of them in the process of finding post-college employment. But while remote proctoring could at least claim necessity during lockdowns, the same argument doesn't hold for job interviews. Video calls with human recruiters were readily available and widely used.
I talked about this experience a bit on TikTok after completing the interview, and you’d likely be unsurprised to hear that viewers were staunchly against the idea of being asked to do an AI-powered video interview — to the point that many said they would decline to continue with the hiring process if asked. There were also concerns as to whether this might be a violation of the Americans with Disabilities Act (ADA), which prohibits discrimination on the basis of disability.
And many, including me, were left wondering: what happens to candidates who don’t fit the AI system's pre-programmed expectations of "good" interview behavior?
To be clear, while I think that this approach is particularly concerning for neurodivergent people, I also think it's concerning for pretty much any candidate navigating the hiring process. Even if I weren't neurodivergent, the fact that my AI interviewer asked questions seemingly unrelated to the expertise the project required wouldn't bode well for my chances: I could have exactly the technical skills the role needed and still flounder on the questions it actually asked.
To that end, if you're facing video interviews, I’d highly recommend doing your homework. Dig through those FAQs, reach out to HR, and specifically ask about both the evaluation criteria and whether you can interview with a person instead². Knowing what you're being measured on — especially these hidden social metrics — can help you prepare accordingly.
The broader question here is: Why are we letting AI systems perpetuate traditional social expectations that many have long argued are arbitrary and exclusionary?
(The answer is capitalism. It’s usually capitalism. Sorry.)
But given that both candidates and hiring teams seem to strongly dislike this AI-powered approach, I'm hopeful that AI-powered video interviews will become a thing of the past as soon as possible.
AI Use Disclosure: I dictated the initial draft via Voice Memos on my phone based on an outline I’d already written, then transcribed using MacWhisper and used Notion AI to re-organize the raw transcript based on the outline (explicitly prompting to maintain my own voice/words as much as possible) into the second draft. From there, I went through and re-wrote/re-organized roughly 80% of the second draft by hand, resulting in the version of the newsletter you’re currently reading.
1. I had to sign an NDA as part of this application process, which is why I’m not disclosing the details of the work or the company itself. But don’t worry, what I’m discussing here is not within the scope of the NDA.
2. You can also do this when it comes to airport uses of facial recognition, both for TSA airport security and the increasingly common use of facial recognition in lieu of a boarding pass at the gate. However, I will flag that if you use Global Entry/TSA Pre-Check/Clear/something similar, you’ve opted into having your biometrics on file, and opting out of these things won’t change that.