This Halloween, kids young and old will be consuming a bit more candy than usual. It brings to mind the adage, “you are what you eat.” While somewhat true, our bodies are incredibly adept at breaking things down into raw materials and re-synthesizing them into proteins, enzymes, cells, and so forth. On a more metaphysical level, we are a product of nature and nurture: our genetic makeup and our environmental factors, such as our upbringing, the neighborhood around us, and all manner of inputs we receive every day.

To me, the AI models of today are trained on somewhat skewed inputs, and we should keep that in mind. Several years ago, an AI chatbot was placed on Twitter and it “learned” to become quite representative of some of the more vocal parts of Twitter, namely trolls. While that experiment attracted a lot of trolls who purposely fed it hateful content, it is a reminder that a machine with a base corpus of training needs guidance and a way to properly process what it is taking in.

A lot of what is used to train these models seems to be whatever you can get your hands on. Much of the public text out there is actually very good; there is a lot of great expert discussion. This helps these models generate really smart, confident, assertive cover letters and essays. But there is a certain difference between what people post publicly and what might be considered more normal language. There’s also a difference between the images presented on the internet and what we see in our everyday lives.

An AI model tends to present the most average result, with some tweaks for variation. This is the goal of an LLM: to re-sequence tokens (words) into a statistically likely order so that the output reads as a natural response. The result is a well-thought-out, carefully constructed five-paragraph essay on a topic like “free speech and net neutrality.” But it is still a very sophisticated statistical average.
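That “statistically likely order with some tweaks for variation” can be sketched in a few lines. This is a toy illustration, not how any production model is actually implemented: the scores, the vocabulary, and the `sample_next_token` helper are all made up, and real models do this over vocabularies of tens of thousands of tokens. The “tweaks for variation” knob here is temperature.

```python
import math
import random

def sample_next_token(scores, temperature=1.0):
    """Pick the next token from a dict of raw model scores.

    Higher temperature flattens the distribution (more variation);
    lower temperature sharpens it toward the single most likely token.
    """
    # Softmax with temperature: convert raw scores into probabilities.
    scaled = [s / temperature for s in scores.values()]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token at random, weighted by those probabilities.
    return random.choices(list(scores.keys()), weights=probs, k=1)[0]

# Invented scores for the word after "free":
scores = {"speech": 3.0, "lunch": 1.5, "fall": 0.5}
print(sample_next_token(scores, temperature=0.7))
```

With the temperature near zero this always picks “speech,” the most likely token; turned up, it sometimes picks the less likely options — which is all the “variation” amounts to.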

A friend once called me “the master dot connector.” I really like that title. I take pride in having a cross-disciplinary view of things, seeing things in a different light, and making relevant connections where others may not have seen them. I wonder how much “dot-connecting” AI models can do, and whether they can generate novel, insightful points of view.

LLMs are a powerful tool, but I think it’s important to understand what they are and how they work, and not just be amazed that they can produce conversations and images and code like a human can. It’s a bit sad that some humans have developed relationships with AI chatbots, even fallen in love with them, and committed illegal acts spurred on by these chatbots. The line is very blurry, but these are still machines. Incredibly cool machines, but still machines, not people.

I’ve rambled, but I’m too lazy to edit and rewrite this to be more cohesive. I’m human.