Meta has launched its first standalone Meta AI app, offering a voice-first conversational assistant powered by its latest Llama 4 model. The app is designed to provide a personalized and context-aware AI experience, aiming to make interactions feel more natural and intuitive.
Built on Llama 4, Meta AI delivers smarter, more tailored responses by learning from individual users’ preferences and context. A key feature is voice-first interaction: users can speak to Meta AI naturally, and an experimental full-duplex speech mode replies in real time, generating speech directly from conversational training rather than reading a text response aloud. The feature can be toggled on or off, and Meta cautions that users may hit technical issues while it is in testing.
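“Full duplex” here means listening and speaking overlap, so a user can interrupt the assistant mid-reply instead of waiting for strict turns. Meta has not published its implementation; the minimal sketch below only illustrates the concurrency pattern, and every component in it (the simulated microphone, the canned reply) is a hypothetical stub.

```python
# Conceptual sketch of a full-duplex voice loop: capture and playback run
# concurrently, and incoming speech "barges in" on an ongoing reply.
# All components are hypothetical stubs, not Meta's API.
import asyncio

async def listen(inbox: asyncio.Queue, interrupted: asyncio.Event) -> None:
    # Simulated microphone: the second utterance arrives while the
    # assistant is still talking, triggering a barge-in.
    script = [(0.5, "plan a weekend trip"), (1.0, "actually, somewhere warm")]
    for delay, utterance in script:
        await asyncio.sleep(delay)
        interrupted.set()            # signal any ongoing speech to stop
        await inbox.put(utterance)
    await inbox.put(None)            # end-of-session marker

async def speak(text: str, interrupted: asyncio.Event) -> None:
    # Stream the reply word by word, yielding the floor on interruption.
    for word in text.split():
        if interrupted.is_set():
            print("\n[assistant yields to the user]")
            return
        print(word, end=" ", flush=True)
        await asyncio.sleep(0.3)     # stand-in for audio playback
    print()

async def main() -> None:
    inbox: asyncio.Queue = asyncio.Queue()
    interrupted = asyncio.Event()
    listener = asyncio.create_task(listen(inbox, interrupted))
    while (utterance := await inbox.get()) is not None:
        interrupted.clear()
        reply = f"Sure, here are a few ideas for '{utterance}'."  # stub reply
        await speak(reply, interrupted)
    await listener

asyncio.run(main())
```

Running the sketch shows the assistant cut off mid-sentence by the second utterance, which is the user-facing behavior full-duplex enables.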
Meta AI’s personalization deepens over time through continued interaction. The assistant remembers user preferences, such as travel interests or hobbies, and draws on context from ongoing chats. Users who link their Facebook and Instagram accounts through the Meta Accounts Center also let it use signals like liked content, profile details, and past interactions to improve relevance and accuracy.
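Meta has not disclosed how this memory is implemented. As a rough illustration only, the toy sketch below shows the general idea of a per-user preference store whose remembered facts are folded into later prompts; all names and structure are hypothetical.

```python
# Toy illustration of preference memory: facts remembered per user are
# prepended to later prompts. Hypothetical, not Meta's implementation.
from collections import defaultdict

class PreferenceMemory:
    def __init__(self) -> None:
        self._facts: dict[str, list[str]] = defaultdict(list)

    def remember(self, user_id: str, fact: str) -> None:
        self._facts[user_id].append(fact)

    def build_prompt(self, user_id: str, message: str) -> str:
        # Fold whatever has been learned about this user into the prompt.
        known = "; ".join(self._facts[user_id]) or "nothing yet"
        return f"Known preferences: {known}\nUser: {message}"

memory = PreferenceMemory()
memory.remember("ana", "enjoys budget travel")
memory.remember("ana", "is vegetarian")
print(memory.build_prompt("ana", "Suggest restaurants in Lisbon."))
```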
The app supports image generation and editing through voice or text commands, and the same tools are integrated across Meta’s other AI surfaces. A built-in Discover feed lets users explore and remix AI prompts shared by others; Meta emphasizes that nothing appears in the feed unless a user actively chooses to post it.
Meta AI is now embedded across Meta’s ecosystem, including Facebook, Instagram, WhatsApp, Messenger, and Ray-Ban smart glasses. In supported countries, users can start a conversation on the glasses and pick it up later in the app or on the web via the history tab; moving a session the other way, from the app or web to the glasses mid-session, is not yet possible. Existing Meta View users will find their devices, settings, and media automatically migrated to the new Devices tab in the Meta AI app after updating.
For desktop users, Meta AI’s web interface has been upgraded with voice interactions, an improved Discover feed, and enhanced image generation tools offering controls for mood, style, lighting, and color. A document editor for producing text-and-image documents, with PDF export, is being tested in some regions, along with the ability to import documents for the AI to analyze.
User control is central to the app’s design. Voice input is optional, and a “Ready to talk” toggle enables users to keep the voice assistant active by default. An on-screen mic icon indicates when voice capture is in progress. Meta plans to refine the assistant continuously based on user feedback.
The Meta AI app is now available on iOS, Android, and the web. Voice conversations, including the full-duplex demo, are currently supported in the US, Canada, Australia, and New Zealand, while personalized responses are available in the US and Canada.