---
title: "Mango and Avocado: Meta’s New AI Models Aim to Take on Google Gemini and OpenAI’s Generative AI Tools"
description: "Meta is launching two new AI models, Mango and Avocado, in the first half of 2026. Mango focuses on high-quality image and video generation, while Avocado is a text model with enhanced reasoning and coding skills."
type: "news"
locale: "en"
url: "https://longbridge.com/en/news/270241766.md"
published_at: "2025-12-19T02:38:49.000Z"
---

# Mango and Avocado: Meta’s New AI Models Aim to Take on Google Gemini and OpenAI’s Generative AI Tools

> Meta is launching two new AI models, Mango and Avocado, in the first half of 2026. Mango focuses on high-quality image and video generation, while Avocado is a text model with enhanced reasoning and coding skills. This marks a strategic shift away from open-source models, aiming to compete with Google and OpenAI. The models are developed by Meta Superintelligence Labs, led by Alexandr Wang, following a $14 billion investment in Scale AI.

## Meta Bets Big On Mango And Avocado As The AI Image War Heats Up

The race to own the most-used AI image and video tools is pulling Meta back into the centre of the fight. After months of questions about its AI direction, Mark Zuckerberg is now steering the company toward a more closed, more competitive path, anchored by two new models designed to go head-to-head with Google and OpenAI.

At the heart of the plan are Mango, an image and video model, and Avocado, Meta’s next-generation text model.

> 🚨🇺🇸 META LAUNCHES AI ARMS RACE OFFENSIVE: MANGO AND AVOCADO COMING H1 2026
>
> Meta just announced two new AI models hitting in the first half of 2026.
>
> Mango: a video and image generator. Avocado: a large language model focused on coding.
>
> The message is clear: Meta's not… pic.twitter.com/AaTG2ikuz9
>
> — Mario Nawfal (@MarioNawfal) December 18, 2025

Both are expected to launch in the first half of 2026, according to details shared internally by Chief AI Officer Alexandr Wang during a companywide Q&A with Chief Product Officer Chris Cox.

## A Reset After Llama And A Shift Away From Open Models

Meta’s strategy marks a clear break from its Llama open-source lineage. Internally, Llama 4 has been viewed as a disappointment, prompting leadership to rethink whether openness still offers an edge as rivals push faster and more polished systems into consumer apps.

> Just in: $META is focusing on a new proprietary AI model codenamed "Avocado". The company is giving up on open source AI. 🥲
>
> Their last open model, Llama 4, was a disappointment that illustrated the company's lack of progress in the space. pic.twitter.com/dDTSLZpjvr
>
> — Markets & Mayhem (@Mayhem4Markets) December 10, 2025

Mango and Avocado are positioned as proprietary models, built to compete directly with Google’s Gemini line and OpenAI’s expanding image tools. Mango is expected to focus on high-quality image and video generation, while Avocado is designed as a frontier text model with stronger reasoning and coding skills, areas where Meta has lagged in the past.

## Inside Meta Superintelligence Labs

The new models are the first major outputs from Meta Superintelligence Labs, a division created during a major restructuring over the summer. Zuckerberg personally recruited Alexandr Wang, founder of Scale AI, to lead the unit, following Meta’s $14 billion investment in Scale that brought key data and talent in-house.
*Alexandr Wang is the founder of Scale AI and the current Chief AI Officer at Meta, recognised as the world’s youngest self-made billionaire for building the data infrastructure that powers modern artificial intelligence.*

Since then, Meta has hired more than 20 researchers from OpenAI and assembled a team of over 50 specialists with deep experience in large models and generative media. The focus is deliberate: image and video generation has become one of the most competitive battlegrounds in AI.

During the internal session, Wang also revealed that Meta has begun early work on world models, AI systems that learn by observing and understanding visual environments rather than just predicting text. The effort signals a longer-term ambition to move beyond chat-based systems into models that can reason about the physical world.

## Image Generation Becomes The Stickiest Feature

Meta’s push comes as rivals double down on visual AI. In September, Meta released Vibes, a short-form video generator built with Midjourney.

> Meta launched Vibes
>
> AI-generated video feed in the Meta AI app.
>
> Create short videos from text prompts, remix what you see, or scroll through creator content.
>
> Powered by Midjourney and Black Forest Labs. pic.twitter.com/Jt9qbgvjzf
>
> — Manish (@manishxraj) September 30, 2025

Days later, OpenAI launched Sora, showing how quickly each player now reacts to the other. Google had already raised the pressure earlier in the year with Nano Banana, a move that helped boost Gemini’s monthly users from 450 million in July to over 650 million by late October.

> Nano Banana is now in Search 🍌 Open Lens in the Google app for Android or iOS and tap the new Create mode to get started. pic.twitter.com/JwcEPOmN8I
>
> — Google (@Google) October 27, 2025

> Mixboard just got a major tune up!
>
> Today, we’re introducing a bundle of updates (with a ribbon on top 🎁) to help you keep exploring your ideas:
>
> \- Nano Banana Pro: Create presentations with the content found directly on your boards using our latest image generation model
> \- New… pic.twitter.com/02YDK8jITo
>
> — Google Labs (@GoogleLabs) December 8, 2025

The competition intensified again in November when Google rolled out Gemini’s third generation. Inside OpenAI, executives reportedly declared a code red to reclaim top benchmark scores. Soon after, the company released an updated version of ChatGPT Images. Speaking to journalists later, Sam Altman said image creation is now one of the main reasons users keep coming back, calling it a sticky feature.

## Google Pushes AI Into The Mass Market

Google is not slowing down. On Wednesday, it announced Gemini 3 Flash, a faster and cheaper model designed for broad use.

> Gemini 3 Flash is rolling out to developers now ⚡
>
> 3 Flash is our latest model with frontier intelligence built for speed. With strong multimodal, coding and agentic features, it not only enables everyday tasks with improved reasoning, but also is our most impressive model for… pic.twitter.com/woWX0ZFFys
>
> — Google (@Google) December 17, 2025

While smaller than Gemini 3 Pro, it carries many of the same reasoning abilities and is aimed squarely at everyday apps rather than premium tiers. Alphabet CEO Sundar Pichai said,

> “With this release, Gemini 3’s next-generation intelligence is now rolling out to everyone across our products including Gemini app + AI Mode in Search. Devs can build with it in the Gemini API, Google AI Studio, Gemini CLI, and Google Antigravity and enterprises can get it in Vertex AI and Gemini Enterprise.”
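For readers curious what "build with it in the Gemini API" looks like in practice, here is a minimal sketch using Google's `google-genai` Python SDK. The model identifier `gemini-3-flash-preview` and the `GEMINI_API_KEY` environment variable are placeholder assumptions, not details from this article; check Google AI Studio for the current model IDs.

```python
# Minimal sketch: one text-generation call through the google-genai Python SDK.
# Assumptions (not from the article): the placeholder model ID
# "gemini-3-flash-preview" and an API key exported as GEMINI_API_KEY.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-3-flash-preview",  # placeholder; use the ID listed in AI Studio
    contents="Summarise Meta's Mango and Avocado announcement in two sentences.",
)

print(response.text)
```

The same SDK can also be pointed at Vertex AI by constructing the client with `vertexai=True` plus a Google Cloud project and location, which roughly mirrors the developer-versus-enterprise split Pichai describes.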
With scale becoming essential, keeping powerful models behind enterprise paywalls may no longer be a winning strategy.

> We’re rolling out Gemini 3 Flash starting today. Here’s where you can find it: pic.twitter.com/xzcbc1wE6A
>
> — Google AI (@GoogleAI) December 17, 2025

## Internal Tension And High Stakes At Meta

The shift to closed models has not been seamless. Reports of internal friction have emerged as teams move away from Llama and reallocate resources toward Avocado. Some engineers see the pivot as necessary to stay competitive, while others worry about losing the goodwill and momentum built through open development.

Meta’s spending reflects the stakes. Billions are being poured into compute, data and hiring, with Wang’s leadership now under close watch. Avocado, in particular, is widely seen inside the company as a make-or-break test of whether Meta can truly match the best models on the market.

## Can Meta Win The Image Arms Race?

Meta’s return to proprietary AI is a calculated risk, not a guaranteed comeback. Mango enters a market where Google and OpenAI already move at speed, with massive user bases and tightly integrated products. Avocado faces even tougher odds in text and reasoning, where benchmarks shift quickly and loyalty is thin.

From Coinlive’s perspective, Meta’s biggest challenge may not be talent or funding, but timing. By 2026, image and video AI could already be commoditised, with success depending less on raw quality and more on distribution, cost and trust. Mango and Avocado may be powerful, but survival in this market will hinge on whether Meta can turn technical strength into daily habit, not just headlines.

## Related News & Research

| Title | Description | URL |
|-------|-------------|-----|
| Indian AI lab Sarvam’s new models are a major bet on the viability of open-source AI | Indian AI lab Sarvam has launched new large language models, betting on open-source AI to compete with larger rivals. An | [Link](https://longbridge.com/en/news/276230007.md) |
| What's next for Meta amid its massive AI spending projection? | Meta Platforms (META) saw a stock rally after strong Q4 earnings but has since declined. Investors are concerned about a | [Link](https://longbridge.com/en/news/275985012.md) |
| Prediction: These will be the best-performing AI stocks in 2026 | The AI buildout is still going on at a strong pace. | [Link](https://longbridge.com/en/news/276183329.md) |
| Infosys Unveils AI First Value Framework | Infosys Ltd: Infosys unveils AI First Value Framework | [Link](https://longbridge.com/en/news/276131501.md) |
| 1 artificial intelligence (AI) stock investors are buying on the dip | This key AI stock got hit hard, and smart investors saw a big opportunity. | [Link](https://longbridge.com/en/news/275996696.md) |

---

> **Disclaimer**: This article is for reference only and does not constitute any investment advice.