Meta made some pretty major moves in AI this week, and some of them were supremely expensive. According to Bloomberg, Meta spent a whopping $200 million to poach a single person from Apple: Ruoming Pang, the former head of the team developing the large language models behind Apple Intelligence. And Pang is just one hire in Meta’s flurry of AI talent acquisitions; several researchers from OpenAI also jumped ship this week.
In one way, it’s to be expected. AI is all the rage these days, and it seems like every company with a big enough war chest is pouring its spare change into becoming the next big thing in chatbots or AI slop. But for Meta in particular, those moves could amount to more than ho-hum spending to get its AI efforts on track. They could actually be a huge boon for one of the most exciting gadget categories out there: smart glasses.
AI is obviously being thrown at a lot of things right now (movies, games, web search), and it’s not always ideal for all of them, but there’s one thing I know it could have a major impact on, and already has started to: smart glasses. As fun as smart glasses like Meta’s Ray-Bans have been in these early stages, they also feel incredibly limited and, at worst, downright aggravating. A lot of those drawbacks come down to UI. Unlike a device with a display, smart glasses have only one real option for native input, and that’s a voice assistant. The problem is that lots of voice assistants suck. They’re fine for basic tasks, but ask for anything beyond “play some music,” and things tend to get choppy real fast.
Advancements in large language models (LLMs) like the one that powers ChatGPT, however, might change all of that. LLMs are inherently good at parsing natural language prompts, which makes them more adept at handling advanced, multistep commands. If there’s one way to make a pair of smart glasses feel more advanced right now, improving the voice assistant would be it. In fact, it feels like one of the only viable ways to do that at the moment. As great as a complex UI like the Vision Pro’s is (it uses a pretty amazing mixture of eye- and hand-tracking), shrinking all of the hardware needed to make that UI work into a form factor that could even remotely be considered “glasses” is still a faraway prospect.
For proof of that problem, look no further than Meta’s Orion concept, which still requires a fairly large compute puck so the glasses themselves don’t have to bear the (literal) weight of being a computer. For now, the problem of shrinking down smart glasses while also advancing their capabilities remains unsolved, and as a result, hardware companies are going to have to get creative in how they approach that quandary. In this case, that approach might be all about AI, and Meta, after this week’s spending spree, might just have a leg up.