Source: Reuters
Days after OpenAI released its Advanced Voice mode, Meta may have already closed the gap. It’s launching a natural chat feature that lets you interact with iconic voices or create your own. It works similarly to OpenAI’s, except you don’t need a subscription — and you can try it out today. But that’s just the start.
Here’s every major announcement from Meta’s AI-focused Connect showcase: |
Llama 3.2: Powered by Nvidia-supplied GPUs, Meta’s latest Llama model is the first to be fully multimodal, meaning it can seamlessly move between images and text. |
Smart Glasses: Meta’s Ray-Bans will soon be able to “see” and “hear” everything you experience, helping you do things like find your car in a busy parking lot or pick out an outfit for a party. A live translation feature, meanwhile, lets you talk with others across languages in real time. |
AI Studio: You can now build a custom avatar that looks like you and speaks in your voice — it can even engage in full conversations on your behalf. The company is also introducing automatic dubbing and lip-syncing, so you can publish videos in multiple languages. |
Orion: Perhaps the most important news from the event. Meta showed off a prototype that has all the power of an AR headset packed into the compact form factor of ordinary glasses. You’ll be able to interact with 3D holograms of friends and family. And a wrist-based neural interface will let you perform different actions just by thinking and making hand gestures. |
Though still far from release, the first-of-its-kind device could be Mark Zuckerberg’s answer to the iPhone. Developed in secret for nearly a decade, Orion isn’t just a new gadget but the foundation of an entirely new general-purpose platform that could transform how we interact with the world.