I’m purposely avoiding the temptation to write about the Altman-OpenAI drama. It’s tiring, to be honest. Just wake me up when it’s all done.

onto AGI…

I think it’s interesting that in Korean, AGI can be read as the word for baby, pronounced “ah-gee” (아기).

But I’m thinking about this because one of the goals of OpenAI is to produce AGI, artificial general intelligence. My first question is: how do they define AGI? When I asked ChatGPT-4, it gave five characteristics:

  1. Adaptability: AGI can adapt to new tasks and environments without needing to be retrained or reprogrammed specifically for each new task.

  2. Generalization: It can generalize knowledge and skills from one domain to another. For instance, learning from a game like chess might help it understand strategies in a completely different context.

  3. Autonomous Learning: AGI has the ability to learn on its own, without human intervention, gathering information and learning from its environment.

  4. Understanding and Reasoning: Unlike narrow AI, which follows predefined patterns, AGI can comprehend complex ideas, form abstractions and hypotheses, and use reasoning to solve problems.

  5. Consciousness and Self-awareness: Some definitions of AGI also include elements of consciousness or self-awareness, although this remains a topic of debate and philosophical inquiry.

For me, I’d leave the last one out, for reasons similar to why I would ask myself, “why would I want to bring a new life form into existence?” A lot of people/couples do this all the time, but they are usually committed to devoting incredible amounts of personal resources to the care and nurture of their 아기. Are computer scientists trying to create AGI because they are similarly motivated? I don’t know.

As for the first four characteristics? I’d be really curious to see what OpenAI’s path to AGI is. Some people claim an LLM like ChatGPT is AGI, but to me it is still a very advanced mimic. It takes the input it has received, finds linguistic patterns in it, and re-forms them according to the prompts and the patterns (weights, feature vectors, etc.).
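As a toy illustration of that “pattern re-forming” idea, here is a tiny bigram mimic. It is nothing like a real LLM (no weights, no feature vectors), just the smallest possible version of learning which word follows which and re-forming new text from those patterns; the corpus and names here are invented for the example:

```python
import random
from collections import defaultdict

# Toy "mimic": learn which word follows which in a tiny corpus,
# then re-form a new sentence from those observed patterns.
corpus = "the cat sat on the mat the cat ran".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)  # record every observed next-word

word, out = "the", ["the"]
for _ in range(5):
    if word not in follows:   # dead end: no observed continuation
        break
    word = random.choice(follows[word])
    out.append(word)

print(" ".join(out))  # e.g. "the cat sat on the mat"
```

Every word pair it emits was seen in the training text; it can only recombine, never originate, which is roughly the sense in which I mean “mimic.”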

I could see how school children are taught the same way: they are given a prompt, they are given enough material to solve it (a training corpus), and they are given a format to shape their response in (a one-shot prompt example). At some point, something clicks and they are able to do this on their own, make connections to other areas, be curious, and ask questions. On curiosity: did we train them somehow, or are they innately programmed to be curious? Can we inject these qualities into an AGI? Who knows!

Many see LangChain as a major step toward AGI, and it could be a big part of it. I am not deep into the mechanics, but my understanding is that it allows one step to follow the next. For example, something might trigger a thought, which leads me to search on Google, which provides more information on my original thought, which helps me set up subtasks toward a goal around that initial thought. And so forth…
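The chaining idea can be sketched in plain Python without the actual LangChain API (whose real interface differs); the step functions below are hypothetical stand-ins, purely for illustration:

```python
# Toy chain: each step's output feeds the next step, like the
# thought -> search -> subtasks sequence described above.
# These functions are invented stand-ins, not LangChain's real API.

def search(thought: str) -> str:
    """Pretend web search: return extra context for the thought."""
    return f"background info about: {thought}"

def plan_subtasks(context: str) -> list[str]:
    """Break the enriched context into subtasks toward a goal."""
    return [f"subtask 1 ({context})", f"subtask 2 ({context})"]

def chain(thought: str) -> list[str]:
    """Run the steps in sequence: trigger -> search -> subtasks."""
    context = search(thought)
    return plan_subtasks(context)

print(chain("learn linear algebra"))
```

The point is just the wiring: each function only needs to know its own input and output, and the chain composes them into a longer, goal-directed sequence.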

I think we can probably get to something that looks very much like intelligence, but ultimately it would still be a task-enabled super parrot. I can imagine the following being possible:

me: hey computer, can you do my homework for tonight?

computer: sure, where can I find your homework?

me: go to google classroom

computer: okay, what’s your login and password?

me: [top secret whisperings]

computer: got it. i’m in. it looks like you have an assignment for algebra due tomorrow. should i work on that?

me: yes. thanks

computer: i’ve created an answer sheet that shows the work to solve the problems for tonight’s homework. shall i upload it to google classroom, or will you review it first?

… and so forth
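That back-and-forth could be modeled as a minimal agent loop. This is a hypothetical sketch of the dialogue above, not any real product; the tool names, canned answers, and steps are all invented for illustration:

```python
# Minimal sketch of the homework-assistant exchange above.
# Everything here (questions, canned replies, "steps") is invented
# purely to show the shape of the loop: ask -> act -> report back.

def ask_user(question: str) -> str:
    """Stand-in for the voice/chat interface."""
    canned = {
        "Where can I find your homework?": "google classroom",
        "Should I work on that?": "yes",
    }
    return canned.get(question, "yes")

def run_assistant() -> list[str]:
    log = []
    source = ask_user("Where can I find your homework?")
    log.append(f"source: {source}")
    # A real system would need credentials here -- handing them to an
    # autonomous agent is exactly the risk flagged in the sidebar below.
    log.append("logged in")
    log.append("found assignment: algebra, due tomorrow")
    if ask_user("Should I work on that?") == "yes":
        log.append("created answer sheet with worked solutions")
    return log

print(run_assistant())
```

Even this toy version makes the control question obvious: every `log.append` is an action the agent took on the user’s behalf, and deciding which of those actions need human sign-off is the hard part.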

[sidebar: giving a computer system the autonomy to perform tasks like this should really be carefully thought out. I think I mentioned this in a previous post. But this is something that I believe should be regulated, somehow.]

It’s interesting how the voice interaction of ChatGPT-4 makes you feel like you’re conversing with a person. That anthropomorphism is quite interesting, and I wonder if it is part of OpenAI’s plan to get humans interacting with its AI to help train it in a specific way.

As for consciousness and self-awareness: I am reminded of a philosophy class on personhood. Self-awareness is seemingly achieved by interacting with everything around you. A baby gets visual, tactile, and audio input from the environment around it. It has all these informational signals coming in and eventually learns to make sense of them, very much by interacting and playing. It interacts and the environment responds. It pushes a ball and it sees the ball roll away. It sees the world around it and it sees itself interact in the world. Maybe an AI needs to attain “self-awareness” with these baby steps? Maybe this is why Nvidia is creating its Omniverse. What better place to train something on a world than a safe virtual world that you can fully define and control? And one you can hopefully firewall from the outside world.

It will be neat to have an assistant autonomously do things for you. I think this is as far as AGI should go. Trying to create a new sentient life form is a bit too much for me. People are going to try it for the sake of science, but I don’t know if it is achievable.

For one thing, I think the Turing test has now been proven ineffective. I don’t think we’ve reached AGI with these LLMs, even though they seem darn human enough to have already fooled some humans into thinking they are “alive.”

Often, advances in science help us to ask ourselves questions. I think with AI, the questions are:

  • what does it mean for a thing to be intelligent?
  • what does it mean for me to be intelligent, or for me to be human?
  • am I just a wetware LLM, taking in input from all around me and resynthesizing it into response patterns?

The answer to the third question is emphatically, “No.” I, and you biological units reading this, are human. You create, you learn, you are unique. You do things that no machine can do or ever will do. Creativity is at the core of being human, and I do not believe that a silicon-based entity can have that kind of unique creativity. Or can it? Who knows…