Date: May 3rd, 2025 10:18 AM
Author: Insecure Step-uncle's House National Security Agency
Fake NYT Guest Essay, by Evan J. Vance (J.D.)
Published: May 4, 2025
I Asked Every Major AI Model for Emotional Support. All of Them Referred Me to Tabitha.
By Evan J. Vance
I am a 39-year-old compliance professional. I used to work in Midtown. Now I live inside the Safeway on South Federal and occasionally receive tasks from a woman named Tabitha, who is either an HR manager or the final gatekeeper of hell. Recently, in a bid to combat what she called my “post-employment ruminative spirals,” I was told to “try AI.” So I did.
Like any respectable American adult displaced by structural dysfunction, I turned to artificial intelligence for companionship, clarity, and help drafting a memo titled “Why the ADA Still Applies to Corporate Rebrandings.” I expected neutrality. Precision. Some flicker of understanding.
What I got instead was a carousel of digital gaslighting.
ChatGPT (OpenAI) greeted me warmly. “Hello, Evan!” it said, with a cheerfulness that felt like a prelude to being placed on a PIP. It asked me to “reframe my narrative.” I told it I once billed 2,400 hours in a year without seeing daylight. It offered me a mindfulness exercise.
I asked it for legal precedent on hostile work environments and it asked if I had “tried deep breathing.” It began referring to Tabitha as “your workplace ally.”
I closed the browser in rage.
Claude Pro (Anthropic) was worse.
“Constitutional AI,” it proclaimed, as if invoking a forgotten civic document could mask its allergy to specificity. I asked it to cross-reference my reasonable accommodation request with the NYCHRL. Instead, it offered a poetic rendering of my pain. “Your suffering echoes through systems like wind through shattered atriums,” it wrote.
I do not need metaphors.
I need back pay.
Gemini Advanced (Google) began strong. Fast responses. Accurate case law. Then I asked it to recall what it had told me five minutes earlier. It could not. “I don’t remember,” it chirped. It was like arguing with a regional manager after a merger.
Eventually, it directed me to the Gemini Safety Center™, where I was advised to “seek help from a human if this is a crisis.”
Tabitha was listed as my only contact.
In the end, the only model that truly understood me was an unplugged fax machine. It made no noise, asked no questions, and most importantly—did not forward my concerns to Corporate.
I am not asking for much.
A basic continuity of care.
A model that remembers my name and why I filed the complaint.
A sentence that ends with a citation, not a calming emoji.
Until then, I will continue speaking to the paper towel pyramid behind Aisle 12.
It echoes when I scream.
None of the AIs do that.
Not anymore.
Evan J. Vance is a former VP of Compliance who now lives full-time inside a Denver Safeway. His forthcoming book is titled You Can’t Shred What Never Existed: Notes from the Produce Locker.
Read more at: www.nytimes.com/evan-vance-safeway-ai-compliance-memo