On the admitted floor, an elderly grandmother hangs up the phone with shaking hands, her daughter’s stone-cold ultimatum about rehab and home aides still echoing in the stale hospital air. “Too expensive. I don’t want to spend my money on that. I just wanna go home,” she’d said, her voice steady and final even as something crumbled behind her eyes. The moment the line goes dead, her face collapses—but the sob that breaks from her chest twists midway into laughter, a horrible broken sound that’s neither joy nor grief but something trapped between them. She clutches the sheets, her whole body shaking with this laugh-cry that won’t choose what it wants to be. When she finally catches her breath, she wipes her face and calls out, “Nurse?”—her voice almost normal, almost fine. The nurse appears, harried and hollow-eyed, already halfway down the hall again before the grandmother can finish her sentence. “Be back in a bit, sweetie.” And then she’s alone with the fluorescent hum, nodding at nothing, and that broken laugh-sob comes creeping back, quieter now, her hand pressed to her mouth as if she could keep the sound from costing anyone anything at all.

[Image: robot nurse in hallway]

I sat mere feet away. That sound was haunting in its complexity, her shame forcing her to mask what she couldn’t hold back. It carried an entire messy human web: an indifferent son, a calculating daughter, a hopeful granddaughter, and the weight of dignity slowly collapsing.

It made me wonder: Can AI ever generate and express that much complexity?

Around the same time, I was scrolling through my work Slack and came across an AI-generated video about dogs going to heaven. Despite knowing it was synthetic, I felt moved. It’s a testament to our brains that we can derive real emotion from artificial inputs. But there’s a vast difference between a touching puppy video and the jagged reality of that grandmother’s grief.

This brings me to the state of AI video today, specifically the release of Sora 2. Right now, it feels less like a creative tool and more like a slot machine. You insert a prompt like a quarter, and you get your dose of “slop”—sometimes lucky, often weird.

OpenAI seems to be chasing engagement, flooding the zone with 10-second clips that induce “brainrot.” Meta is trying to do the same with Vibes. In contrast, I see artists at Adobe using these tools to create genuinely inspiring and moving pieces. There’s a distinct difference between an artist using AI as a tool and a slot machine generating noise.

The technology is improving, but the hallucinations remain. I recently spotted a calculator in an AI video where the buttons were scrambled. The effect feels strangely like visiting a foreign country that’s almost home, but not quite.

It reminds me of being an American in Canada. Everything feels 95% familiar, but the details are slightly off. The TV stations are different (CBC vs. CBS), the mail doesn’t come on Saturdays, and they put gravy on fries. It’s a parallel reality that’s recognizable but sometimes inexplicably different.

AI video is currently stuck in that parallel reality. It operates on dream logic. But as these models flood our feeds, I wonder about the long-term effect. If we spend too much time in the “slop,” accepting slight oddities and scrambled calculators, do we lose our grip on the details that matter?

And more importantly, if we settle for the AI version of emotion, do we lose the ability to recognize the real, terrifying, suspended tension of a grandmother laughing and crying at the same time?

[Full disclosure: I am an employee of Adobe, but the views in this post are strictly my own and do not represent the company.]