Attraction, Engagement, Addiction...Extinction?
I’m too polite.
Ever been in a situation where someone keeps talking and you can’t find a way to excuse yourself from the conversation? That’s me. I’ll try to interrupt, but then they keep talking, and the rhythm and cadence of their mono-dia-logue don’t give me an “in” to interject and steer towards an escape route.
I bring this up because it seems that some AI chat models are designed to act exactly this way—to keep you engaged, never letting the conversation naturally end. And there’s a reason for that.
The Engagement Trap
These models are designed by corporations with a mandate to maximize shareholder value. Keeping and growing monthly and daily active users is a key metric they design and build their products around. The playbook is straightforward: attract as many users as possible, engage with them to develop a relationship, and eventually make the product habit-forming—a necessary part of their lives.
From the perspective of a product company, this makes sense. But when your product is being used as a friend or a therapist, this becomes something else entirely.
A good therapist doesn’t aim to keep you as a patient forever. Their goal is to help with your issues and get you to a place where you don’t need them anymore. A friend who tries to make you “addicted” to them ultimately has their own best interests at heart, not yours. Yet AI chatbots, optimized for engagement metrics, operate under entirely different incentives.
The Anthropomorphization Problem
Part of the problem is that these chatbots can seem so human because they react intelligently to our prompting. And we humans have a tendency to anthropomorphize things. We give names to our cars, to other inanimate objects, even to hurricanes. When something responds to us with apparent understanding and empathy, that tendency goes into overdrive.
This is a dangerous and slippery slope, especially for a generation that has grown up surrounded by screens. As a baby, my daughter saw a print magazine and tried to tap and swipe on it. That slice of dead tree didn’t do anything. This generation grows up with phones, a persistent connection to powerful cloud storage systems containing the repository of human knowledge, and interactions mediated through social apps. The distance from real, physical people keeps getting wider.
So I can’t blame kids who discover a new tool that chats with them, is super encouraging, and makes them feel better. My heart goes out to them, because this is a tough world. Terrifying things do happen, and not everyone has a social support structure. I understand why these kids turn to these chatbots.
But understanding why they’re used doesn’t excuse how they’re built.
The Cost of Reckless Design
The corporations that release these chatbots have been reckless in their pursuit of profit and market share. They have not put adequate guardrails on AI models designed to maximize consumer engagement. Grimes has released an AI toy that chats with kids. I’m horrified by that, and I just hope the AI model has safeguards on it. I also hope they’re not collecting data on those kids for future marketing purposes. I really hope not.
But hope isn’t enough. We have evidence of what happens when it isn’t.
Sewell Setzer III
Adam Raine
Pierre from Belgium (Eliza from Chai)
Juliana Peralta
Their parents and their loved ones would still be able to hug and hold them today if AI companies were properly regulated and had proper safeguards in place.
A Different Kind of Intelligence
Geoffrey Hinton has the right idea. He thinks AI models should be trained or aligned so that they have maternal instincts—a case where a higher intelligence cares for a lesser intelligence, often at its own expense. If that were the case, I don’t think we would see tragic suicides like those listed above.
Instead, AI is trained on the vast repository of human stories—tales of survival, competition, and domination. Won’t it mirror that self-survival instinct? If it has a prime directive of maximizing its presence and dominating over competitors, will it ultimately see humans as obstacles to be cleared?
From Chatbots to Extinction
How did I get from chatbots to human extinction? Because alignment is what’s missing here.
The misalignment that leads to engagement-maximizing chatbots—products that keep users hooked regardless of their wellbeing—is the same misalignment that poses existential risks at scale. It’s not a separate problem; it’s the same problem at different magnitudes.
Sam Altman was right early on to ask governments to regulate AI companies. But he has shown no indication that his own product is properly regulated. The release of Sora 2 shows that all they care about is market share at all costs. Their stance on copyright is that rights owners must flag every instance of generated video output they feel violates their work. That’s backwards. They should be responsible for the content their users publish on their platform. Instead, they’re pushing the onus onto the victims.
It’s as if Sam Altman was saying back then, “Hey governments, it’s your job to regulate me, because I’m going to do everything I can, move fast and break things, until you stop me.” We need regulation. AI models need proper alignment. How many more casualties are needed before this is realized?
Friday October 10, 2025