"I'm the lyrical Jarvis James"
Most people set alarms on the hour, the half, or the 15-minute mark. Seems normal. But it terrifies me: the thought of all these alarms, perfectly synchronized to atomic clocks and accurate to hundredths of a second, going off in homes around the world at the exact same moment. So I set my alarms at times that don’t end in zero or five. This mass coordination might seem trivial; it’s basically a “read” operation if you think in terms of I/O or data flow. We all read the time data, but we don’t send any data back to be recorded. We probably already have systems that massively coordinate “write” operations. Mass surveillance. Paired with automated, large-scale intelligence, that is an Orwellian nightmare.
A few years ago, I was at an innovation show where a company was showing off an in-store monitoring system that gave real-time feedback on customer sentiment. Video cameras inside the store constantly streamed their feeds to an offshore facility, where cheap labor would monitor facial expressions and log them into a system. Today’s AI lets us do the same thing with far more sophistication and ease. I could vibe code something like that literally in my sleep.
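To show how little effort that takes now, here’s a rough Python sketch of that “product.” Everything in it is a hypothetical stand-in: capture_frame() for the camera read and classify_sentiment() for a call to some hosted vision model; no real vendor API is implied.

```python
# A toy sketch of the in-store sentiment monitor described above.
# All names are hypothetical stand-ins, not a real camera or model API.
import time
from datetime import datetime, timezone

def capture_frame(camera_id: int) -> bytes:
    """Stub: a real system would grab a JPEG from an in-store camera."""
    return b"\xff\xd8 fake jpeg bytes \xff\xd9"

def classify_sentiment(frame: bytes) -> str:
    """Stub: a vision-capable model would label the dominant facial expression."""
    return "neutral"

def monitor(camera_id: int, interval_s: float = 2.0, max_polls: int = 3) -> None:
    """Poll a camera, classify sentiment, and log it. That's the whole pipeline."""
    for _ in range(max_polls):
        frame = capture_frame(camera_id)
        label = classify_sentiment(frame)
        print(f"{datetime.now(timezone.utc).isoformat()} cam={camera_id} sentiment={label}")
        time.sleep(interval_s)

if __name__ == "__main__":
    monitor(camera_id=1)
```

Swap the two stubs for a real camera feed and a real vision model and you have, in an afternoon, what that company needed an offshore labeling floor to deliver.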
Recently, Anthropic stood up to the Department of Defense on two points. They refused to let the DoD use their systems for (1) autonomous killing without a human in the loop and (2) mass surveillance. You’d think these are reasonable conditions. If you’re going to take a human life, there should be a responsible human making that decision, not a computer algorithm. The DoD wants to take the “friction” out of killing and pass the responsibility to a third party. Combine autonomous killing with mass surveillance, put that power in the wrong (or uneducated) hands, and this is how Skynet gets created.
The real danger is the humans. AI is just making human intention more powerful and far-reaching. Just as our forefathers tried to anticipate the need for checks and balances in our government, we need checks on humans wielding massive, barely controllable systems. Anthropic is trying to enforce this in whatever ways it can, but I wonder how effective that can be when other frontier AI labs chase market-share dominance over ethical behavior. At this point it seems inevitable that Skynet will happen.
I don’t think AI will become sentient, per se. It doesn’t have to; there are going to be humans either blatantly evil or selfish enough, or just plain ignorant enough, to make it happen. Allow me to take us off-road for a nerdy detour here. LLMs are trained on a large set of text, mathematically correlating bits of words (tokens) with other bits of words in spaces with thousands of dimensions, then refined with human and synthetic feedback to adjust the text they predict. The model can take that output and expand on it with more chained predictions, which is what we call “thinking.” Agentic AI takes those predicted tokens as a plan, as instructions for what to do next, now that the model has access to tools. The training safeguards are what Anthropic is trying to put into place to prevent extinction-level events. And the DoD wants the opposite.
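If that sounded abstract, here’s a toy sketch of the agentic loop in Python. It isn’t any real framework; predict_tokens() is a stand-in for whatever LLM you’d call, and the single toy tool just echoes text, but the shape is the whole trick: predict, act on the prediction, feed the result back, predict again.

```python
# A minimal sketch of the agentic loop described above.
# predict_tokens() and the "tools" are hypothetical stand-ins, not a real agent API.

def predict_tokens(prompt: str) -> str:
    """Stub for an LLM call: returns the model's next chunk of predicted text."""
    # A real model might emit "TOOL: search some query" or finish with "DONE: ..."
    return "DONE: (model output would go here)"

def run_tool(name: str, arg: str) -> str:
    """Toy tool dispatcher; a real agent might expose search, code execution, etc."""
    tools = {"echo": lambda s: s}
    return tools.get(name, lambda s: f"unknown tool {name}")(arg)

def agent_loop(task: str, max_steps: int = 5) -> str:
    """Chain predictions: each model output either calls a tool or finishes."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        output = predict_tokens(transcript)   # "thinking": one more chained prediction
        if output.startswith("DONE:"):
            return output[len("DONE:"):].strip()
        if output.startswith("TOOL:"):
            name, _, arg = output[len("TOOL:"):].strip().partition(" ")
            result = run_tool(name, arg)      # acting on the predicted plan
            transcript += f"{output}\nRESULT: {result}\n"
        else:
            transcript += output + "\n"
    return transcript

if __name__ == "__main__":
    print(agent_loop("summarize why alarm times ending in 0 or 5 are suspicious"))
```

The safeguards everyone argues about live in the training and in what you wire into run_tool(); the loop itself is trivially easy to build.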
Recently, one of Anthropic’s top safety experts left the company. I’m not sure why, but he said he wanted to study poetry. Maybe he just gave up on humanity and decided to fulfill his passion in the remaining years we have left. (If I had the means, I’d probably do the same and become a middling amateur street photographer.) Alternatively, poetry has been shown to be an effective technique for influencing LLMs. So maybe he’s going into training to come back as an elite lyrical warrior, battling LLMs with cunningly slick tactical verse.
Monday March 2, 2026