UX UI U-me U-you U-mami
I can tell when I haven’t eaten for a while. Food just creeps into my thoughtstream. (Would that happen to an AI, given certain motives and reward systems? …hmm, interesting.)
We’re seeing AI assistance added to a lot of applications: co-pilots, twinkly stars, pop-up help boxes. Sometimes these can help guide the user through a complex sequence of steps to achieve their goal. What does this mean for application design? Will it lead to a trend where application UI designers can get away with being a little lazier? I can envision the scenario: the product MVP is set to launch; user feature testing is not quite hitting the mark; lightly supported research shows that users can lean on the AI assistant to use the feature and avoid friction that is baked into the product. The product update ships without a UI correction. The UI issue never gets looked at again because of an “if it ain’t broke, don’t fix it” mentality.
Hopefully it doesn’t come to that, but I think it’s possible.
Should AI in UX aim to cover things up or make things disappear? To quote a notable fruit-named company leader:
“Great technology is invisible” - Steve Jobs
For the past 50 years (or more?), humans have interacted with computers via punch cards, keyboards, and mice. But now, with advances in AI and LLMs, the computer is learning “human”. To quote Andrej Karpathy, “The hottest new programming language is English”. That was just about a year ago, and since then we’ve seen ChatGPT explode precisely because you interact with it in natural language.
Taken to the extreme, will we ever get to the point where the computer is invisible? Or where the application interface is invisible?
I’m excited for the kinds of innovative product interfaces we’ll experience in the next couple of years, and I’m hoping designers will take fuller advantage of AI, using it as the foundation of their UX design. Extracting the user’s intent is key, though, and that can be challenging, especially when the user is trying to perform precise actions, such as setting up an exact sequence of procedural variations on a material surface texture to be overlaid on a 3D mesh group. Some things will simply require exact precision. Maybe Elon Musk’s Neuralink is onto something? What better way for a computer to understand intent than a direct connection to the brain?
There’s also a bit of serendipity in manual controls. You can dial in some wild parameters and get surprising results, ones an AI would probably filter out as falling outside the expected ‘normal’ range. So there are pros and cons to both manual UI and invisible UI.
Another thing that comes to mind is the UX of a restaurant. In the extreme “invisible” approach, I would sit down and tell the kitchen exactly what I want. In the normal UX, I get a menu and see what’s available. Maybe I’ll try the scorched rice cube with spicy tuna and avocado? I wouldn’t have thought of that without a menu. Having open sky isn’t always a good thing.
So, kids, next time you’re designing an interface, remember to have AI in the UX, but don’t forget the Umami :)
Wednesday, January 3, 2024