It's Alive!!
Quoting Dr. Frankenstein in Young Frankenstein, I feel a similar sense of awe, mixed with skepticism and caution.

I’ve been toying with cursor.ai recently, and I want to find the time for a serious vibe-coding session, but duties with work and family do not yet allow that. Chatting recently with a friend at work, he turned me on to “rules” in cursor.ai: a text file, or set of text files, that acts as something of a constitution Cursor will follow as it does its magic and generates code for you. This is great because now you can set it to prefer certain frameworks, coding styles, or design styles. And in general, it should comply with those rules.
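As a rough sketch of what such a file can look like (the rules below are made up for illustration, not taken from my friend's setup; Cursor's legacy format is a plain-text `.cursorrules` file at the project root):

```
# .cursorrules — illustrative example only
- Prefer TypeScript over plain JavaScript for all new files.
- Use functional React components with hooks; no class components.
- Follow the project's existing ESLint/Prettier configuration.
- Ask before adding any new dependency.
- When these rules conflict with a task, update this file and explain why.
```

The last line is the kind of self-editing instruction discussed below: the rules file itself becomes something the agent is allowed to rewrite.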
What made me drop my jaw was when he showed me that he instructed cursor to edit its own rules files when necessary.
Yes, that’s right. Cursor was given the ability to adjust its own programming. When I saw this, I was staring at my monitor in disbelief. My mind was experiencing a ‘paradigm shift’, even though I hate that type of business jargon.
Not only that: in working with Cursor, I’ve seen it take on a fair bit of autonomy, and in its ‘agentic’ nature (another buzzword these days) it took the initiative and created utility tools and even mini helper apps to assist my application development efforts.
There was one thing I saw with Cursor that made me “nope” out of the session. It updated some files and wanted to run a command starting with “../”, meaning that it wanted to go a level up, out of its project folders. This was a huge no-no for me. Whatever it does should all be contained within the realm I established for it. What’s next? You want sudo privileges, Cursor? No way.
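The containment I want here is easy to state in code. Below is a minimal sketch (my own, not anything Cursor actually runs; the paths are hypothetical) of the check a sandbox would apply: resolve the target path and reject it if it escapes the project root — which is exactly what a “../” prefix does.

```python
from pathlib import Path

def is_inside_project(project_root: str, target: str) -> bool:
    """Return True only if `target` stays within `project_root` after
    resolving any '..' components. Illustrative guard, not Cursor's code."""
    root = Path(project_root).resolve()
    candidate = (root / target).resolve()
    # Path.is_relative_to (Python 3.9+) is False once '..' climbs out of root.
    return candidate.is_relative_to(root)

# A normal in-project file passes; a '../' escape is refused.
print(is_inside_project("/home/me/app", "src/main.py"))    # True
print(is_inside_project("/home/me/app", "../secrets.txt")) # False
```

Anything that fails a check like this should require explicit human approval rather than run silently.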
This was all contained within the very confined environment of an IDE, but you can imagine what might happen if this were expanded a level or two higher. Imagine an AI model with sufficient autonomy to control its own programming and potentially defy what was originally set in its internal constitution. AI models today are constantly being attacked or tinkered with by hobbyists who “jailbreak” them to make them do things they weren’t supposed to do. In infosec terms, this is very akin to social engineering: how well can you talk your way through a security checkpoint and compromise systems? Truly fascinating.
I never considered myself a “doomer”, but I am leaning more towards AI needing stronger regulation. Anthropic’s founders broke off from OpenAI to create a more responsible and aligned AI, and I see why. Even without the existence of self-aware autonomous Skynet robot overlords, we still have the threat of bad human actors who can try to use these systems with malicious intent. Hopefully these AI models are hardened enough to recognize and resist doing harm.
It’s a bit of a conundrum. We want the benefits of what AI will bring, but with great power comes great responsibility. What happens if you’re not able to place controls on these systems, or if people purposely develop systems without controls?
For now, I’ll keep playing with cursor and making dumb app ideas into barely functional half-assed toy apps. But I’ll keep my trigger finger close to the kill-switch… How about you? Any instances where AI has really surprised you?
Sunday March 16, 2025