I’m actually kind of intrigued by the Meta Ray-Ban glasses. I’d been watching Snap Spectacles and Meta’s previous Ray-Ban collab, the Ray-Ban Stories. I think this latest attempt might be useful now, given the acceleration in multimodal AI models. With ChatGPT, I find myself pulling out my phone more often to take pictures of things I want to ask about: rough calorie estimates, translations, price estimates. There’s a bit of friction in getting the phone out, taking the pic, sending it to the app, and typing or speaking a question. I’d much prefer a flow where I look at something, say a trigger phrase, ask my question about what I’m seeing, and hear the response through the speakers. I’m not too interested in posting or streaming to Insta (that’s a bit niche), but if they can put together a smooth AI-assistant user experience, this could be a useful product.