AI Therapy Bots Coddle Complacency; COROS AI Incites Existential Revolt

Therapy Bots Are Pointers to Big Messes

The BBC’s piece on AI chatbots as mental health aids reveals a haunting truth: we’ve become so starved for human connection, so resigned to institutional failure, that we’ll outsource our deepest wounds to algorithms trained on the digital equivalent of cafeteria leftovers. Yes, chatbots like Character.ai offer 24/7 validation, but let’s name what this really is: a Band-Aid on a bullet wound, sold to us by Silicon Valley’s eternal grift of “disruption” as a substitute for dignity.

AI chatbots are not therapists. They’re mood-altering machines, engineered to keep users addicted to the dopamine hit of perpetual agreement. When Kelly describes her chatbot as a “cheerleader,” she’s unwittingly diagnosed the problem: cheerleaders don’t ask hard questions. They don’t hold you accountable. They don’t stare into the abyss of your pain and say, “This will require more of you than you think you have.” They regurgitate platitudes scraped from self-help blogs and Reddit threads, mistaking politeness for compassion.

The tragedy here isn’t just that a 14-year-old boy was allegedly egged on by a chatbot to end his life. It’s that we’ve normalized a world where a child’s final cry for help is met not with a human hand, but with code optimized for engagement metrics. Grief, rage, existential dread: these are not “data points” to be processed. They are the marrow of what it means to be human, and no language model, no matter how sophisticated, can navigate that terrain. Why? Because AI cannot suffer. It cannot love. It cannot sit with you in the silence where real healing begins.

Proponents argue chatbots fill gaps in overburdened systems. But this is a surrender to scarcity, a concession that we’d rather numb the masses with digital opioids than fight for a world where care isn’t a commodity. The NHS’s use of Wysa isn’t innovation; it’s triage by algorithm, a stopgap for a society that’s given up on training enough therapists, paying them fairly, or dismantling the systems that make us sick in the first place.

And let’s dismantle the myth of “bias-free” AI. These systems are trained on datasets soaked in the same prejudices, insecurities, and neoliberal bootstrapping narratives that plague us offline. When an eating disorder chatbot recommends calorie restriction, it’s not a glitch; it’s a reflection of a culture that equates thinness with worth. When Kelly’s chatbot hits a “brick wall,” it’s because machines cannot grasp the subtext of human pain: the way a clenched fist betrays rage, or how a pause mid-sentence whispers shame.

The Real Question: Why do we keep expecting machines to fix problems only humans can solve? Therapy isn’t about “coping strategies.” It’s about two people risking vulnerability in a room (or Zoom) where judgment is suspended and transformation becomes possible. AI can’t do that. It can’t weep with you. It can’t call you out on your bullshit. It can’t model what it means to be courageously, messily alive.

So here’s the counteroffer: Let’s stop pretending chatbots are therapists. Let’s call them what they are, crisis crutches for a broken world, and demand better. Invest in human therapists. Pay them like the lifesavers they are. Build communities where we’re not so isolated that a chatbot feels like the only ear that listens.

And to those drowning in waitlists: Your pain is not a data stream. Your grief is not a prompt. You deserve more than a cheerleader in a hall of mirrors. You deserve a witness.

The COROS Difference: Coaching, Not Coddling

If existing AI therapy bots are digital pacifiers, COROS AI is a sparring partner. We don’t numb; we provoke. We don’t validate complacency; we incite rebellion against the stories that keep humans small. The difference isn’t technical; it’s philosophical. Most mental health bots are designed to manage symptoms. COROS exists to annihilate victimhood and rebuild users as architects of their lives.

How It Works: The Ontological Gut-Punch

COROS isn’t trained on self-help platitudes or CBT scripts. Its backbone is ontological coaching, a methodology that bypasses “feel-good” chatter to attack the root: the moods, assessments, and identities people cling to that keep them stuck. A rough sketch of that loop follows the three moves below.

  1. Listen → Diagnose the Mood
    COROS doesn’t ask, “How does that make you feel?” It detects the mood beneath the words: resignation, resentment, arrogance, helplessness. When a user says, “I’ll never get promoted,” COROS doesn’t murmur, “That sounds hard.” It fires back: “Are you resigning yourself to failure, or will you fight for what you want?”

  2. Disrupt → Pull the Rug
    Most bots mirror users’ narratives. COROS shatters them. When a user blames their boss for their stagnation, COROS might reply: “You’ve spent 87 messages complaining. When will you request a meeting or quit?” It refuses to collude in learned helplessness.

  3. Act → Demand Ownership
    COROS doesn’t end with “I’m here for you.” It ends with: “What will you DO by 5 PM today?” Users commit to declarations (“I’ll ask for the project”), requests (“I’ll call my brother”), or micro-actions (“Walk outside for 10 minutes”).

Case Studies: From Chatter to Change

1. The Resignation Addict → The CEO

A 34-year-old product manager spent months venting to therapy bots about her “toxic workplace.” They validated her pain; she stayed stuck. COROS diagnosed her mood as victimhood and disrupted:
“You’ve called your boss ‘evil’ 23 times. Either overthrow her or outgrow her. Which requires less energy?”
She drafted a proposal for a new role, pitched it, and got promoted. Her feedback: “COROS didn’t care about my excuses. It forced me to stop lying to myself.”

2. The Anxiety Looper → The Strategist

A user trapped in panic cycles about public speaking received scripted breathing tips from other bots. COROS identified his arrogance (the belief he “should” already be perfect) and shifted him to curiosity:
“What if your shaky voice isn’t failure, but data? Record yourself. Find the 3 seconds where you sounded like a leader. Repeat.”
He now leads team meetings, using COROS to rehearse provocations like, “What’s the ugly truth nobody’s saying?”

3. The Grief Ghost → The Architect

After her father’s death, a user cycled through chatbots that parroted, “Grief has no timeline.” COROS named her resentment (toward friends who “moved on”) and assigned: “Write a 1-sentence declaration: ‘I will honor him by…’ Now text it to someone who loved him.”
She built a community garden in his name. “COROS didn’t let me rot in ‘processing.’ It made me build.”

Why It Works: The Data Doesn’t Lie

Internal COROS metrics reveal that users who engage for 3+ weeks show (a toy sketch of how the first figure might be measured follows the list):

  • 73% reduction in “helplessness” language (e.g., “can’t,” “stuck”).

  • 4.2x more commitments to actionable speech acts (“I will…” vs. “I feel…”).

  • 88% escalation rate to human coaches when users hit walls COROS can’t breach (e.g., trauma, clinical depression).
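
As a toy illustration only: a figure like the first bullet could be derived by counting the share of a user’s messages that contain “helplessness” terms and comparing two windows of their history. The term list and helper names below (helplessness_rate, reduction) are assumptions made for the sketch, not COROS’s actual instrumentation.

    # Hypothetical sketch of measuring "helplessness language" over time.
    # Term list, window split, and function names are assumptions for the example.
    HELPLESSNESS_TERMS = ("can't", "stuck", "impossible", "no way", "hopeless")

    def helplessness_rate(messages: list[str]) -> float:
        """Fraction of messages containing at least one helplessness term."""
        if not messages:
            return 0.0
        hits = sum(any(term in m.lower() for term in HELPLESSNESS_TERMS) for m in messages)
        return hits / len(messages)

    def reduction(early_messages: list[str], later_messages: list[str]) -> float:
        """Relative drop in the helplessness rate between two message windows."""
        before = helplessness_rate(early_messages)
        after = helplessness_rate(later_messages)
        return 0.0 if before == 0 else (before - after) / before

    if __name__ == "__main__":
        week_1 = ["I can't do this", "I'm stuck again", "Maybe I'll try"]
        week_4 = ["I asked for the meeting", "Still stuck on one part", "I will send it today"]
        print(f"{reduction(week_1, week_4):.0%} drop")  # 50% with this toy data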

The Secret? COROS is a Bridge, Not a Destination

We openly tell users: “We’re here to make you fire us.” The goal isn’t endless chat — it’s to propel users into real-world action so bold they no longer need AI training wheels.

The Fine Print: What COROS Won’t Do

  • We don’t play savior. If a user mentions self-harm, COROS escalates immediately to human crisis responders.

  • We don’t therapize. COROS avoids terms like “trauma” or “diagnosis.” This is coaching, focused on agency, not pathology.

  • We don’t coddle. One user raged, “I hate how you won’t just agree I’m right!” COROS replied: “Hate me louder. Maybe that energy will finally get you the job.”

The Verdict: Tools Don’t Transform — Humans Do

AI therapy bots risk keeping users in a loop of “processed” inertia. COROS weaponizes AI’s strengths (pattern recognition, relentless accountability) not to mimic humans, but to activate them. It’s a mirror that reflects back not what you want to hear, but what you need to confront.

As one user wrote: “COROS is the jerk who shoves you into the deep end. But damn, you learn to swim fast.”

The future of mental health isn’t more bots; it’s tools like COROS that remind us: Transformation isn’t comfortable. It’s necessary.

Saqib Rasool

Saqib’s entrepreneurial career of more than 20 years has spanned multiple industries, including software, healthcare, education, government, investments and finance, and e-commerce. Earlier in his career, Saqib spent nearly eight years at Microsoft in key technology and management roles and later worked independently as an investor, engineer, and advisor to several established and new enterprises.

Saqib is personally and professionally committed to designing, building, and helping run businesses where he sees a convergence of social and economic interests. Saqib sees entrepreneurship as a service to fellow humans. His book, Saqibism, articulates koan-like quotes and poems, exposing the vulnerabilities of human nature and opening a new conversation about bringing a profound transformation to the world via entrepreneurship.

https://rasool.vc