Agentic agents, agency and Her
majestic magenta magnets and err… sorry, I haven’t had my coffee yet and my mind is wandering.
Agentic is a term that comes up a lot these days. The vision is well understood, but I haven’t seen much on the execution side. At least not much that is useful. I think agentic functionality is where Apple Intelligence will be a game-changer if they can pull it off. I think Gemini could be culturally transformative in this way as well.
The idea is that an AI model can do things for you. Let’s say on your iPhone (I’m team Android, btw) you let Apple Intelligence or Google Gemini have access to your apps so that it can read your email, read your calendar, browse the web, make appointments on your calendar, send text messages to your contacts, and auto-fill forms for you. Wouldn’t it be nice if you could ask your AI agent to be proactive: look through all the emails your kid’s school has sent, automatically identify forms that need to be filled out, start pre-populating them for you, and contact any necessary parties, say, having a doctor’s office provide a letter for such and such? Then your job is basically to review, make minor corrections, and do the final submit!
Rabbit R1, while a cool idea with an interesting design, failed to live up to the hype. I’ve played with a browser extension that could handle some tasks, but it needed to be told very specific things. For example, I could ask it to look at my calendar, find the movie we’re going to see this week, search for family-friendly restaurants in the area, preferably with gluten-free options, and provide some suggestions. If there were some memory built into that AI service, it could be useful, but it is still far from a personal assistant. And it wouldn’t have suggested, out of nowhere, that I look for dinner options for that night. I think once this form of AI gets refined, it can be a huge help to many people.
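To make the idea concrete, here is a minimal sketch of the kind of tool-calling loop these assistants are built around: a planner (in practice, an LLM) emits a list of tool calls, and a dispatcher runs them against the user’s apps. Everything here is hypothetical illustration — the tool names, the plan format, and the canned calendar/restaurant data are my own stand-ins, not any real assistant’s API.

```python
from typing import Any, Callable

# Stub "apps" the agent has been granted access to. In a real
# assistant these would be actual calendar and search integrations.
def read_calendar() -> list[dict]:
    return [{"event": "Movie night", "day": "Friday"}]

def search_restaurants(near_event: str, gluten_free: bool) -> list[str]:
    # Canned results; "(GF menu)" marks gluten-free-friendly options.
    options = ["Trattoria Verde (GF menu)", "Burger Barn"]
    return [r for r in options if not gluten_free or "GF" in r]

# Registry mapping tool names to callables the dispatcher may invoke.
TOOLS: dict[str, Callable] = {
    "read_calendar": read_calendar,
    "search_restaurants": search_restaurants,
}

def run_agent(plan: list[dict]) -> dict[str, Any]:
    """Execute each planned tool call in order and collect results."""
    results: dict[str, Any] = {}
    for step in plan:
        fn = TOOLS[step["tool"]]
        results[step["tool"]] = fn(**step.get("args", {}))
    return results

# The plan a planner might emit for "find dinner before the movie,
# gluten-free options preferred":
plan = [
    {"tool": "read_calendar"},
    {"tool": "search_restaurants",
     "args": {"near_event": "Movie night", "gluten_free": True}},
]
results = run_agent(plan)
```

The hard parts, of course, are the ones this sketch skips: producing a good plan from a vague request, carrying memory across sessions, and deciding when to act proactively rather than waiting to be asked.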
Another interesting form of assistance is relational AI. I haven’t tried it out, but Replika is one that seems to provide chat services. Supposedly some people use it for emotional support, like a virtual girlfriend, and some have even felt like they’ve developed relationships with these chatbots. I believe Replika is also providing counselor-type chatbot services. This makes me think about how humans form relationships with inanimate objects, like that favorite sweater or that trusty backpack. I think counseling can be transformative because of the relationship formed between the patient and the counselor, but it is a somewhat transactional relationship: the client pays for time, and the counselor is required to spend that time. It is also like a mentor relationship, where there is an agreement about the time, and, by nature of having access to a skilled individual, the need for compensation is understood.
In a typical human relationship, there is a bit more free will: either party can reject the relationship. They have the agency to do that. In an AI-chatbot-to-human “relationship,” the chatbot is always there and never leaves. I think that produces a different dynamic, with advantages and disadvantages. An ideal parent will never leave a child; an ideal friend will never leave you when you need them the most. So the AI chatbot is nice in that way. But that level of trust is typically earned, no?
What I find interesting about the movie Her is that the AI has the freedom to disappear and leave. Was that intended to make the AI more human, to have agency? Or was it because the story would be pretty boring if the AI couldn’t do these things? Will researchers eventually build an AI with autonomy and agency? If so, will the proper guardrails and safety systems be in place? Or is this the beginning of Skynet? And is that why so many safety- and security-minded researchers are leaving OpenAI? Too much to think about… I should really get that coffee…
Tuesday September 10, 2024