--- title: "MiniMax conference call: Focusing on \"all-modal\" and \"high quality,\" bidding farewell to purely \"卷模型\" and evolving towards an AI platform ecosystem" description: "In the conference call, founder Yan Junjie proposed the platform value formula: intelligent density × Token throughput. The M2.5 model set an industry record in programming benchmarks, with daily Toke" type: "news" locale: "en" url: "https://longbridge.com/en/news/277476273.md" published_at: "2026-03-02T14:03:59.000Z" --- # MiniMax conference call: Focusing on "all-modal" and "high quality," bidding farewell to purely "卷模型" and evolving towards an AI platform ecosystem > In the conference call, founder Yan Junjie proposed the platform value formula: intelligent density × Token throughput. The M2.5 model set an industry record in programming benchmarks, with daily Token consumption increasing by more than 6 times. The international market contributes over 70% of revenue, and the M3 full-modal model is expected to be launched in the second half of 2026. The company is evolving from a large model company to an AI platform enterprise China's AI unicorn MiniMax has released its first annual performance report since going public, delivering an unexpectedly strong growth performance. The total revenue for the year 2025 reached $79.04 million, a year-on-year increase of 159%, exceeding Bloomberg's aggregated market expectation of $71.4 million by approximately 10.6%. The annual recurring revenue surpassed $150 million in February 2026, indicating that **the pace of commercialization is significantly accelerating**. Profitability has also improved. The gross profit for the year increased by 437% year-on-year to $20 million, with the gross margin rising from 12.2% in 2024 to 25.4%. 
**The adjusted net loss for the year was $250 million, roughly in line with the previous year.**

The company's founder and CEO, Yan Junjie, elaborated on the strategic direction during the earnings call: **MiniMax is transitioning from a large model company to a platform company of the AI era. Its core logic is that platform value equals intelligence density multiplied by token throughput; when both dimensions are strong enough, platform value emerges naturally.**

On the product front, the M2.5 model has achieved globally leading performance in multiple productivity benchmarks. As model capabilities have improved, daily token consumption of the M2 series models in February 2026 reached more than six times the December 2025 level, **validating market acceptance of the high cost-performance route**.

Key points from the earnings call:

> - **Model iteration speed and commercialization validation**: Three versions, from M2 to M2.5, were completed in the past 108 days, with daily token consumption of the M2 series increasing more than sixfold since December 2025, validating market acceptance of the high cost-performance route.
> - **Phase results of the multimodal strategy**: Multimodal integration has been established as the inevitable path to AGI, with independent refinement of each modality completed. The M3 series models are expected to launch in the second half of 2026, showcasing the results of collaborative evolution across modalities. Video generation has become the third-largest sub-market by API call volume, with multimodal capabilities seen as a core barrier for capturing this market.
> - **Judgment and layout of agent evolution**: L3-level agents have clearly arrived, with the distinction between L4 and L5 lying in "single task" versus "multi-agent collaboration." Programming scenarios were the first to be validated, but the potential market for office scenarios (data analysis, document writing, slide production) is considered much larger than programming.
> - **Differentiated competitive strategy**: Strategically, there are areas to pursue and areas to forgo; in 2023, the company abandoned the general personal intelligent assistant for mobile devices, focusing resources on differentiated products such as editors and Hailuo Video. The R&D strategy does not pursue across-the-board leadership but aims to open the market with "speed" and "specific capabilities."
> - **Underlying logic of R&D efficiency**: The essence of AI competition is not burning money and resources but model iteration speed and marginal efficiency. The cost of full-modal training under a unified architecture is far lower than building independent systems separately, and the synergy effects have been continuously validated.
> - **Spillover effects of ecosystem building**: Ecosystem-level spillovers have emerged, from contributions within Google Cloud's ecosystem to leading call volumes on developer platforms such as OpenRouter. Going forward, the company plans to further lower the usage threshold through multimodal capabilities at the product level, building a more complete platform ecosystem.

## Revenue Accelerates, International Market Contributes Over 70%

Breaking down MiniMax's $79 million annual revenue, both major business segments grew rapidly. The open platform for enterprises and individual developers contributed approximately $26 million, up 198% year-on-year; consumer-facing AI products, including MiniMax Agent, Hailuo AI, Talkie, and Xingye, contributed approximately $53 million, up 143% year-on-year.
**Internationalization has become a defining feature of the company's revenue structure.** In 2025, international markets accounted for over 70% of total revenue, with international revenue also exceeding 50% of open-platform revenue. As of December 31, 2025, MiniMax had cumulatively served over 236 million users in more than 200 countries and regions, as well as 214,000 enterprise clients and developers from over 100 countries and regions.

The expense side confirms that scale effects are beginning to emerge. **Sales and marketing expenses decreased 40% year-on-year, while R&D expenditure increased 33.8% year-on-year, a growth rate far below that of revenue.**

Entering 2026, commercialization momentum has strengthened further. Yan Junjie revealed that in February 2026, new user registrations on the open platform reached more than four times the December 2025 level.

## Model Matrix Iteration Accelerates, M2.5 Sets New Programming Benchmark

On the technical level, the call highlighted that **MiniMax has demonstrated rapid model iteration capabilities**. In the fourth quarter of 2025, it released three large language models in quick succession (M2, M2.1, and M2-her), completing the three-generation evolution from M2 to M2.5 in just 108 days.

M2.5, released in February 2026, achieved globally leading performance in productivity scenarios. In programming, the model set a new industry record on the SWE-bench Verified benchmark, with a 37% efficiency improvement over the previous-generation M2.1. Cost breakthroughs followed as well: at an output rate of 100 tokens per second, running complex agents with M2.5 costs only $1 per hour. On this basis, the company estimates that a $10,000 budget can keep an agent running continuously for a whole year.
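The yearly figure follows from simple arithmetic. As a sanity check, the sketch below uses only the numbers stated on the call (a flat $1/hour agent cost at a sustained 100 tokens per second of output); the implied per-million-token price is derived, not something the company disclosed.

```python
# Back-of-the-envelope check of the cost figures cited on the call.
# Assumptions (taken from the call, not independently verified):
# a flat $1/hour agent cost at a sustained 100 tokens/s of output.

OUTPUT_TOKENS_PER_SEC = 100
COST_PER_HOUR_USD = 1.0

# Implied price per million output tokens at this throughput.
tokens_per_hour = OUTPUT_TOKENS_PER_SEC * 3600  # 360,000 tokens per hour
cost_per_million = COST_PER_HOUR_USD / tokens_per_hour * 1_000_000

# Cost of running one agent around the clock for a full year.
hours_per_year = 24 * 365  # 8,760 hours
annual_cost = COST_PER_HOUR_USD * hours_per_year

print(f"~${cost_per_million:.2f} per million output tokens")  # ~$2.78
print(f"~${annual_cost:,.0f} per always-on agent-year")       # ~$8,760
```

At these assumed rates, a year of continuous operation comes to roughly $8,760, consistent with the $10,000 budget cited on the call.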
Since its release, M2.5 has quickly topped the OpenRouter leaderboard.

Multimodal capabilities are advancing in parallel. The Hailuo 2.3 video model, Speech 2.6 voice model, and Music 2.0/2.5 music models have launched in succession. By the end of 2025, the video models had helped creators generate over 600 million videos, and the voice models had generated over 200 million hours of voice content.

**Inference-efficiency optimization has delivered significant results.** As of February 2026, the inference computing cost of the M2.5 series per million tokens had decreased by over 50% compared with December 2025, and the inference latency of the Hailuo video generation model had been reduced by over 30%.

## Ecosystem Layout Accelerates, Leading Cloud Platforms and Toolchains Connected in Succession

MiniMax has made a series of key advances at the commercial-ecosystem level. Major global cloud platforms are accelerating adoption of its models: Google Vertex AI, Azure AI Foundry, Fireworks AI, and NetViews AI have all deployed MiniMax models. In programming tools, MiniMax has become the default model for mainstream platforms such as OpenCode and Kilo Code.

At the beginning of 2026, Notion announced integration of the M2.5 model, making it the first and only open-source model option on the platform. Yan Junjie said **this marks a further deepening of MiniMax's penetration into productivity scenarios.**

Synergy with the OpenClaw project has also released ecosystem effects. Yan Junjie noted that OpenClaw founder Peter had previously stated publicly that M2.1 is his preferred open-source model. MiniMax subsequently launched MaxClaw, further lowering the entry barrier for users and promoting adoption of the model within the developer community.
## Organizational AI Integration Accelerates, Internal Agents Cover Ninety Percent of Employees

On organizational transformation, Yan Junjie disclosed that **internal Agent "interns" now support nearly 90% of employees, covering scenarios such as software development, data analysis, operations management, talent recruitment, and sales and marketing.** He described this practice as one of the core sources of the company's competitive advantage.

Large-scale internal deployment of agents brings dual benefits. On one hand, the feedback loop between model iteration and product innovation has accelerated significantly; on the other, real deployment environments clearly expose the current models' shortcomings, directly guiding R&D priorities for the next generation of models.

Yan Junjie observed that **the company is undergoing a noticeable shift, with employees gradually moving from "teaching agents how to work" to "observing how agents work."**

## Outlook for 2026: Betting on the M3 Full-Modal Model, Transitioning to a Platform Company

In the outlook for 2026, Yan Junjie proposed three core judgments: the software development field will see a leap from L4- to L5-level intelligence, as **AI evolves from a tool into a colleague-level collaborator**; **workplace productivity scenarios will replicate last year's rapid penetration in the programming field**; and **multimodal content creation will enter the stage of directly generating medium- to long-form, production-grade content, with output formats increasingly approaching streaming real time.** He expects these trends to drive platform token demand up by one to two orders of magnitude.
To meet this demand, the next-generation flagship M3 and Hailuo 3 series models have been architected for these scenarios, **with multimodal integration capabilities planned for launch in the second half of 2026.** Yan Junjie stated that **MiniMax is one of only three companies in China to have achieved leadership in every modality, and one of the few independent companies capable of executing in parallel at both the product and model levels.**

On strategic positioning, Yan Junjie redefined platform companies of the AI era as those that can define and drive new intelligent paradigms and continuously capture the business value created by paradigm shifts. This definition is clearly distinct from the internet-era platform paradigm centered on traffic entry points. Management stated that **MiniMax's goal is to become a platform enterprise of the AI era, with its core driving forces being the continuous enhancement of model capabilities and the deep exploration of customer value.**

At the execution level, **the company insists on focusing on the two keywords "all-modal" and "high quality," knowing what to pursue and what to forgo.** Yan Junjie revealed that in 2023 the company decided not to develop a general personal intelligent assistant for mobile devices, judging that it could not form unique value there; instead, it concentrated resources on differentiated products such as editors and Hailuo Video.

Below is the full transcript of the conference call (generated with AI assistance):

> 2025 Annual Performance Conference Call
>
> Operator:
>
> Hello, ladies and gentlemen. Thank you for your patience. Welcome to the MiniMax 2025 Annual Financial Performance Conference Call. Please note that the management's keynote speech and the Chinese Q&A session will be provided with simultaneous interpretation in English.
>
> The English line will be in listen-only mode.
> I will now turn the call over to Ms. Yu Meiqi, Director of Investor Relations at MiniMax.
>
> Unnamed Speaker:
>
> Thank you, operator. Good evening and good morning, everyone. Welcome to the MiniMax 2025 Annual Financial Performance Conference Call. Before we begin, please note that today's discussion may contain forward-looking statements that involve various risks and uncertainties. Actual results may differ from those discussed. Except as required by law, the company does not undertake any obligation to update any forward-looking information.
>
> For important information regarding this conference call, including forward-looking statements, please refer to the company's earlier public disclosures or the 2025 Annual Performance Announcement and the statement of financial condition as of December 31, 2025. In today's call, management will also discuss certain non-International Financial Reporting Standards (IFRS) financial metrics. These metrics are provided as supplementary information only and should not replace financial results based on IFRS. For definitions of non-IFRS financial metrics, reconciliations of IFRS and non-IFRS financial results, and related risk factors, please refer to our 2025 Annual Performance Announcement.
>
> In today's call, management will primarily speak Chinese. A third-party interpreter will provide simultaneous interpretation in English during the keynote speech and Q&A session. Please note that the English interpretation is for convenience only; in case of any discrepancy, the original language spoken by management shall prevail. Finally, unless otherwise stated, all currency amounts are in US dollars. I will now turn the call over to our founder, Chairman of the Board, CEO, and CTO, Dr. Yan Junjie.
>
> Yan Junjie, Founder, Chairman of the Board, CEO, and CTO:
>
> Esteemed investors and analysts, good evening. I am Yan Junjie. Thank you for joining our first earnings conference call since the IPO.
> I would like to take this opportunity to share our progress over the past year and our strategic focus for the next phase of growth.
>
> First, a review of 2025. For MiniMax, the theme of this year was "solidifying the foundation." In 2025, we established full-modal R&D capabilities, with globally competitive models in key modalities such as language, video, voice, and music. At the same time, we continuously upgraded our products through ongoing technological innovation, including an open platform for enterprises and developers, as well as end-user products such as MiniMax Agent, Hailuo, and Xingye. We further deepened our global footprint. In large language models, in the fourth quarter of last year we launched three updated models: M2, M2.1, and M2-her. M2 redefined the balance between performance, cost, and speed, integrating three key capabilities: programming, tool use, and deep search.
>
> Its performance is approaching global leading levels. After its release, M2 quickly gained adoption in the global developer community, becoming the first Chinese model on OpenRouter to exceed daily token consumption of 50 billion, while ranking first on the Hugging Face global trending chart. Building on M2, we rapidly launched M2.1, focused on enhancing performance on complex real-world tasks, particularly in programming and workplace scenarios, where it demonstrated stronger capabilities in understanding and executing multi-step instructions. Additionally, M2-her serves as the foundational model supporting our AI interactive product, Xingye.
>
> It is designed to provide a more natural and personalized conversational experience, ranking first globally in comprehensive performance on long-context dialogue tests. In February, we released M2.5, achieving globally leading performance in key productivity scenarios, including programming, tool use, and workplace applications.
> In programming, M2.5 set a new industry record on the SWE-bench Verified benchmark, achieving a 37% efficiency improvement over the previous-generation M2.1.
>
> More importantly, M2.5 made the operation of complex agents economically feasible. Running continuously at an output speed of 100 tokens per second for one hour costs only $1. This means a budget of $10,000 can keep an agent running for an entire year. The breakthrough in model capabilities also drove rapid growth in usage, with M2.5 quickly topping the OpenRouter leaderboard after its release.
>
> From M2 to M2.1, and now to M2.5, each generation of models has achieved significant improvements in capability and application adoption. By February 2026, the average daily token consumption of the M2 model series was over six times that recorded in December 2025, with token consumption from programming scenarios increasing by more than ten times. In multimodal capabilities, we have now established model coverage of video, voice, and music.
>
> In October last year, we released the video model Hailuo 2.3, achieving significant improvements in character movement, visual quality, and style expression. We also launched a faster model that can reduce batch content creation costs by up to 50%, and further upgraded the media agent in Hailuo AI to support full-modal content creation, generating final outputs with one click. By the end of 2025, our video models had helped global creators generate over 600 million videos. In October last year, we also released the speech model Speech 2.6, optimized for voice agent scenarios, significantly enhancing voice interaction performance, achieving globally leading ultra-low latency, and supporting over 40 languages. By the end of last year, our speech models had helped global users generate over 200 million hours of voice content, becoming one of the core infrastructure platforms of the voice intelligence ecosystem.
> Our newly released music models, Music 2.0 and 2.5, have also made significant progress, reliably handling a wide range of vocal styles and emotional expressions.
>
> In the process of developing these models and products, we have also continuously advanced our AI-native organizational evolution. Internally, our intelligent agent "interns" now support nearly 90% of employees, with application scenarios covering software development, data analysis, operations management, talent recruitment, and marketing and sales. We see ourselves as a testing ground for the evolution of AI-native organizational capabilities, which will steadily enhance our R&D efficiency. In January of this year, we productized these accumulated capabilities and released MiniMax Agent 2.0, enabling agents to directly access users' local workspaces. At the same time, we launched the expert agent feature, allowing users to create domain-specific agents for professional use cases.
>
> By the end of February, professional users had created over 50,000 expert agents to address professional challenges through deep knowledge and capability integration. Even before the OpenClaw project gained widespread attention, its founder Peter had highly praised MiniMax's models, calling M2.1 his preferred and best open-source model. After the official launch of OpenClaw, the performance and cost advantages of the M2 series made it possible for more developers to adopt the model at significantly lower cost. Our agent products also actively support OpenClaw: we launched MaxClaw, further lowering the entry barrier for users.
>
> Next, I would like to talk about our progress in commercialization. For the full year 2025, we achieved revenue of $79 million, a year-on-year increase of 159%. Of this, revenue from AI products reached $53 million, up 143% year-on-year; revenue from our open platform was approximately $26 million, up 198% year-on-year. Please see the next slide.
> We see revenue growth accelerating through 2025. For example, the number of new user registrations on the open platform by enterprise clients and individual developers in February 2026 was more than four times that recorded in December 2025. As of December 31, 2025, we had served a total of 236 million users from over 200 countries and regions, as well as 214,000 enterprise clients and developers from more than 100 countries and regions. International market revenue accounted for over 70% of our total revenue in 2025, and international revenue accounted for over 50% of our open-platform revenue. Since the release of M2.5, we have seen strong traction in the international market, with new global customer interest and positive user word-of-mouth continuing to build. Leading global cloud providers and AI-native cloud platforms, including Google Vertex AI, Microsoft Azure AI Foundry, Fireworks AI, and NetViews AI, have all deployed MiniMax models. We have also become the default model for leading programming platforms such as OpenCode and Kilo Code. Earlier today, Notion launched M2.5, its first and only open-source model option.
>
> In addition to providing the above services, we have further improved computational efficiency through engineering optimization, with meaningful gains. Thanks to algorithm optimization, operator implementation, and iterative improvements in encoding and decoding engineering, as of February 2026 the inference computation cost of the M2.5 model series per million tokens had decreased by over 50% from the December 2025 level. Over the same period, the inference latency of our high-fidelity video generation models also decreased by more than 30%.
>
> With the continuous iteration and improvement of our model capabilities, new economies of scale have emerged.
> For the full year 2025, gross profit reached $20 million, a year-on-year increase of 437%, with gross margin rising to 25.4%, up 13.2 percentage points from 12.2% in 2024. On the expense side, sales and marketing expenses decreased 40% year-on-year, while R&D expenses increased 33.8%, significantly below our revenue growth rate. For the full year 2025, adjusted net loss was $250 million.
>
> As commercialization continues to advance and model optimization brings cost benefits, our adjusted net loss margin has narrowed significantly. In the first two months of 2026, we have already seen strong growth momentum. As of February 2026, our annual recurring revenue has exceeded $150 million. Next, I would like to share our outlook for the future.
>
> We believe that in 2026, the level of intelligence will be significantly enhanced. Our efforts will focus on three areas. First, in software development, we expect to see the emergence of L4- to L5-level intelligence, marking AI's transition from a tool to a colleague-level collaborator. Second, in professional workplaces, we expect a pace of progress similar to what the programming field achieved last year.
>
> In particular, the delivery capability and penetration rate of AI agents in workplace scenarios will be significantly enhanced. Third, multimodal creation will move toward directly generating medium- and long-form content for immediate use this year, with formats increasingly approaching streaming real-time output. Overall, these three developments point to new technical challenges, a significant expansion of intelligence supply, and a huge innovation window at the application layer. They also mean that demand on our platform will increase significantly, with token volumes potentially growing by one to two orders of magnitude.
>
> Our next-generation M3 and Hailuo 3 model series are designed to meet these demands.
> Meanwhile, we are rapidly strengthening our infrastructure and continuously attracting top talent, shifting our focus from optimizing training efficiency to raising overall R&D and iteration efficiency.
>
> At the strategic level, we are evolving from a large model company into a platform company of the AI era. In the internet era, platform companies primarily served as gateways for traffic.
>
> In the AI era, however, platform companies are those that define and advance new intelligent paradigms and can capture the product and business value created by these paradigm shifts. This requires the ability to shape emerging intelligent frameworks, continuous innovation in technology and products, and the provision of scalable infrastructure and high-efficiency token throughput. We believe we are one of the few companies that have established and are continuously strengthening these capabilities. Therefore, the value of an AI-era platform company can be summarized simply as the intelligence density provided multiplied by the token throughput. When both dimensions are strong enough, platform value emerges naturally. Standing at this historic turning point for the industry, our confidence stems from two factors: the accelerating development of the AI industry is becoming increasingly evident, and breakthroughs in model capabilities, deployment of intelligent applications, and maturation of monetization models continue to expand across the industry.
>
> As a result, we have already seen strong growth momentum. We are confident in becoming core builders of the AI platform ecosystem. This concludes our prepared remarks. We are now ready to answer your questions.
>
> Q&A Session
>
> Operator:
>
> We will now begin the Q&A session of the conference call. (Operator instructions) Your first question comes from Gary Yu of Morgan Stanley.
>
> Gary Yu:
>
> Thank you, management. Thank you for your insights.
> Your vision is to become an AI platform company. How do you define a platform company in the AI era, and why do you believe a startup like MiniMax can become one? Thank you.
>
> Yan Junjie, Founder, Chairman of the Board, CEO, and CTO:
>
> Thank you for your question. This is something we have long discussed and contemplated internally. As we mentioned earlier, when the boundaries of intelligence are pushed outward, many new scenarios, new customers, and new users are generated, forming a new ecosystem and creating new commercialization dividends. For example, in fields such as coding or visual and image generation, some companies have already emerged. So why does MiniMax have the opportunity to become a platform company of the AI era? I believe there are several reasons. First, the AI market is not a zero-sum market.
>
> The incremental market each year is larger than the existing market. It is also not a winner-takes-all market. As long as you have unique and differentiated innovations, you can find your market fit. We believe that within the next two to three years, our model development and infrastructure capabilities are very likely to create new scenarios, and there is huge room for innovation in areas such as coding, office efficiency, and interactive entertainment.
>
> In such a high-growth, rapidly changing market, we believe opportunities exist at three levels. The first is the model level. A key factor is our long-term accumulation in models and our faster iteration. For example, within 108 days we released M2, M2.1, and M2.5, with each release bringing rapid growth in user numbers and API call volumes. Moreover, from the first day of the company, we have been accumulating cross-modal capabilities. We are the only company adopting this strategy, which positions us favorably for the inevitable trend of multimodal integration. The second level is the product layer.
> MiniMax was the first company in China to focus on products and models simultaneously. "Model + product" therefore creates a stronger barrier to entry, and this approach of treating models as products is difficult for our peers to replicate. The third level is the ecosystem. We leverage differentiated capabilities to build an open system, for example in the OpenClaw ecosystem: OpenClaw uses many of our models for development, and our models are also very well suited to high-throughput product scenarios. Through further integration, we have also lowered the usage threshold for users. This is why we see a large number of code contributions. We have the ability to help the ecosystem grow rapidly. Looking ahead, this is just the beginning of our ecosystem building. Next, we will focus on building the next-generation all-modal models, the M3 series, to establish clear model differentiation. At the same time, we hope to build unique products and ecosystems around the intelligence we provide. We believe that, apart from a few large companies, we are the only company in Asia that can achieve results in both products and models simultaneously. Thank you.
>
> Yan Junjie, Founder, Chairman of the Board, CEO, and CTO:
>
> Next question, thank you.
>
> Operator:
>
> The next question comes from Alex Vovk of JP Morgan. Please go ahead.
>
> Alexander Vovk:
>
> Thank you for taking the time, management. Congratulations on your strong performance. I would like to ask about multimodality, which you have emphasized as the ultimate goal of AI. If competitors focus on perfecting a single modality first and then shift to cross-modal, they might move faster than you. Could your cross-modal focus from the beginning become a burden and slow you down?
>
> Yan Junjie, Founder, Chairman of the Board, CEO, and CTO:
>
> Thank you for your question. This is a question we have been asked since the day the company was founded.
> I would like to take this opportunity to explain why we focus on cross-modal development. We believe the integration of multiple modalities is the fundamental prerequisite for continuously raising the level of intelligence. In the past six months, several models have achieved breakthroughs through multimodal integration, validating this trend. For example, models such as Nano Banana Pro integrate visual understanding and generation, further expanding the boundaries of intelligence. For us, we adopt a two-phase approach, and we are currently in the second phase. The past four years were the first phase: we steadily built industry-leading models in each modality, gaining positive word-of-mouth and market recognition. We now have models serving various modalities, with significant accomplishments in their respective fields. Next, the key is to integrate and merge them for greater breakthroughs. The M3 model in the second half of this year is aimed precisely at this goal. In this process, I want to emphasize two points. First, the accumulation of each modality is a long process.
>
> From data to a single modality, and then to multimodal fusion, the entire chain takes considerable time. This is the foundation of our long-term capabilities and what sets us apart. We are one of only three companies in China to have achieved a leading position across all modalities. The second point is that video generation, aside from coding and agent tasks, is the largest market. We believe we will see medium- and long-form content in near-real-time formats; we believe we can achieve this capability, and this is a huge opportunity for us. As for your question of whether our strategic approach might hinder our R&D progress: challenges exist, but they are inevitable.
>
> Since the company's establishment, our view has been that AGI must span multimodal input and output.
Therefore, we built an organizational structure that allows cross-modal foundational capabilities to be reused. Under this AI-native structure, our cost of building full modality is no higher than that of other startups and is far lower than the investment of large tech companies. Moreover, each of our individual modalities has produced competitive models; in some cases we have even surpassed companies that focus on a single modality. Our technical judgment and forward-looking positioning have been validated repeatedly over the past few years, and this will only become clearer in the future. Thank you.

Unnamed Speaker:

Next question, thank you.

Operator:

Our next question comes from UBS's Kenny Fong.

Kenneth Fong:

Congratulations on your strong post-IPO performance. You mentioned that programming intelligence at the L4 to L5 level is on the way, and there are many claims that software companies may be replaced by agents. How should we view this transformation, and where do you position yourselves in it?

Yan Junjie, Founder, Chairman of the Board, CEO, and CTO:

This is a very important question. Let me first explain what L4- to L5-level intelligence means, the future direction of programming intelligence, and our position in this transformation. L3 is the intelligence we are using today, while L4 and L5 represent colleague-level and organization-level intelligence, respectively. For example, building a world-leading model requires collaboration among many people: algorithm innovation and experiments, program optimization, data processing, and technical operations, a huge workload. We believe L4 will be able to handle many innovative tasks, such as running experiments based on a research paper and proposing efficient solutions to the challenges the paper raises, thereby producing many innovations.
L5-level intelligence requires not just one person's work but the collaboration of many people.

I believe programming is just one part of intelligence; it is simply the earliest-validated productivity capability. Beyond programming, I believe office productivity will, in the coming year, replicate the rapid progress the programming field has seen over the past year. That market is growing, and I believe it is even larger than programming. So how do we view and position ourselves? There is a huge market in front of us. Programming models enable more people to write code, and to write it better. But I want to emphasize again that programmers make up only a small part of the labor market; a larger portion of workplace output is produced with non-coding software. Use cases such as data analysis, financial modeling, or creating slides, the kind of work needed to support a financial performance meeting, represent a market far larger than programming. We have made initial progress in programming and agents, occupying a unique market position with minimal resources.

So the larger market penetration has only just begun. We act quickly: as I mentioned, the evolution from M2 to M2.5 took only 108 days. We maintain the fastest iteration speed in the industry, with each generation of models achieving significant improvements in capability and usability, which highlights our R&D capabilities and our ability to operate at scale.

We built M2 with limited resources, but our resources are expanding. As model improvement accelerates, better models will further raise the ceiling. Our past performance is based on the M2 series; we expect the M3 series to unleash greater potential, forming a positive flywheel effect.
In addition to moving fast, we are able to create differentiated models, which has been validated repeatedly over the past few months. As I said, the market is huge and the technological paths will diverge, so the question for us is whether we can define the technology roadmap. Our goal is not to win on every dimension; instead, we focus on defining model capabilities that showcase our unique advantages. The M2, Conch 2, and Voice 2 series models have each established clear differentiation and can quickly gain market appeal, with low latency and high cost-effectiveness among their characteristics. These features make us stand out and help us gain a larger market share. As our organization and resources expand, our deep understanding of model evolution and technology routes will further strengthen this differentiation and its value. In summary, we are confident that, through programming agents and the broader productivity market, faster iteration and stronger differentiation will let us capture a larger market and achieve more breakthroughs. Thank you.

Operator:

Your next question comes from Goldman Sachs.

Analyst:

Thank you for your sharing. We know this industry includes tech giants, startups, and open-source models. Where do you compete, and what are your priorities?

Yan Junjie, Founder, Chairman of the Board, CEO, and CTO:

As mentioned earlier, we are building, and hope to become, a platform company in the AI era, driven by the continuous enhancement of intelligent density combined with scalable business growth. Compared with other AI companies, we differ in several respects. The first is our strategic positioning.
From day one, we have focused on multimodal models to enhance intelligent density, expand boundaries, and create differentiated value. At the same time, we build scalable products and businesses around model intelligence density, concentrating resources on areas where we can create differentiated value. For example, in 2023 we decided not to build a general mobile assistant (products like Doubao or ChatGPT), because we did not believe we could create unique value in that field. Instead, we focus on differentiated model research and product innovation rather than burning cash; our HaiLuo and MiniMax Agent products are examples. These are our priorities, and this strategic decision reinforces our differentiation and increases our win rate. Another example is our commitment, from day one, to cross-modal development of foundational models. As mentioned earlier, the accumulation of each modality is crucial, and we have now reached the critical stage of cross-modal integration. This positions us favorably in the inevitable trend toward full-modal integration.

Second, I want to talk about our R&D efficiency. In the AI era, success ultimately depends not on how much money or how many resources you burn, but on the speed of intelligence improvement. That speed comes from R&D efficiency, and it translates into greater market share and higher efficiency. We have consistently emphasized and executed on this, applying it to every stage of R&D: algorithm optimization, experimental design, iteration cycles, and so on. We fully leverage our flexible organizational structure, combining top-down and bottom-up approaches while reusing experience and infrastructure across modalities. This ensures we always stay ahead. In the long run, we believe only a few AI platform products globally will naturally lead the industry.
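The "platform value" framing that recurs throughout the call can be written out as the formula Yan Junjie cites in the call summary (the symbols here are illustrative shorthand, not the company's own notation):

```latex
\text{Platform value} \;=\; \underbrace{\rho}_{\text{intelligent density}} \;\times\; \underbrace{T}_{\text{Token throughput}}
```

On this reading, the two growth levers discussed in the call map onto the two factors: model-quality work (M2.5, the forthcoming M3 full-modal series) raises $\rho$, while ecosystem and product adoption (the more-than-six-fold rise in daily Token consumption) raises $T$.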
We are one of the few independent companies with significant advantages and a clearly differentiated position to win in this competition.

Operator:

Your next question comes from Yu Zhonghai of CITIC Securities.

Analyst:

Thank you, and congratulations on your strong performance. You mentioned that in the first two months of 2026, Token consumption of the M2 series already reached six times the level of December last year. We also noticed a surge in Token consumption on OpenClaw. Is this explosive growth a one-time bonus, or the beginning of a sustainable long-term trend?

Yan Junjie, Founder, Chairman of the Board, CEO, and CTO:

Thank you for your question. We believe this is the beginning of a long-term trend rather than a one-time bonus. Of course, industry growth often follows a step function rather than a linear path, and our ability to continuously launch new models lets us seize industry opportunities as they arise. The core is our R&D strategy: prepare resources and capabilities in advance, and define each generation of models based on our understanding of how intelligence evolves. Beyond the M2 model, the next wave of growth is supported by several factors. In fact, we have been actively preparing capabilities since the second half of 2025 to capture multiple high-impact productivity opportunities emerging in 2026. We believe growth will become increasingly diversified. Programming still has enormous room for development: it has already proven quite good as an assistive tool, and we believe it will keep improving, evolving from an assistant-level tool to a colleague-level collaborator, and eventually to a more advanced intelligent operator. Based on our technological reserves, R&D progress, and judgment, we believe this is likely to happen this year.
The second point concerns workplace scenarios, a larger and broader market than programming. It involves many professions using a wide variety of tools, and the problems are more complex; many of the tasks these professions perform cannot be validated by conventional means, and we have been actively preparing for such challenges. We expect the workplace to see rapid progress similar to what we have seen in programming. Turning to the multimodal field, we believe we will significantly lower the adoption threshold and build better models, enabling the generation of longer videos that can be used directly.

Of course, competition among models involves both winning and losing; every company faces this reality, and no company can guarantee permanent leadership. But we are confident in our ability to keep winning in the most critical areas. One of our key strategies is to push the boundaries of technology, use that to achieve breakthroughs, and build a larger ecosystem through our products and models, ultimately capturing the resulting dividends. We are confident in growing alongside this industry, turning our capabilities, R&D efficiency, product innovation, and global monetization into a lasting organizational advantage.

Operator:

Your next question comes from Thomas John of Jefferies.

Thomas John:

Good evening, and thank you for taking my question. You mentioned that internal intelligent-agent "interns" now cover nearly 90% of employees. What insights has this change brought you, and how does it feed back into your product and technology development?

Yan Junjie, Founder, Chairman of the Board, CEO, and CTO:

Thank you for your question. We are not just an AI company; our goal is to build a truly AI-native platform company. While researching AI models, we aim to transform ourselves into an AI-native company.
So this is one of our key organizational goals. We focus on two things. The first is speed, our pace of progress. The fundamental reason we are becoming an AI-native company is that, as a startup, we have limited resources and must maximize efficiency to survive and succeed. We have therefore been using AI agents internally, and many employees use them in their daily work.

We have observed a clear trend: in many cases, the dynamic is shifting from humans teaching agents how to work to humans observing how agents work, and sometimes the agents even surprise us. This not only shortens our organizational workflows but also lets every link in the chain benefit from rising intelligence. From model iteration and product innovation to customer service, our feedback and iteration loops are accelerating, and our employees can focus on higher-value work, further accelerating our thinking and innovation as an organization.

This also feeds back into model development, because it helps us define the goals of model intelligence. For example, with agents deployed internally, we can clearly observe that even today's best models still make mistakes or fail to complete tasks correctly. Those gaps are precisely where the highest economic value lies: they indicate the priorities for the next generation of models and agents and let us define our goals more clearly. The more agents we deploy, the clearer the direction for model iteration becomes.

Over the past few months, we have improved our model iteration speed, revenue growth, customer service capabilities, and token throughput, which lets us define new model goals more quickly. We are maximizing the value of AI within the company, and in building an AI-native company we have already seen a positive internal flywheel effect.
I believe this will become one of our organization's key competitive advantages. Thank you all for participating today. If you have any further questions, please feel free to contact our investor relations team. Thank you.

### Related Stocks

- [00100.HK - MINIMAX-WP](https://longbridge.com/en/quote/00100.HK.md)

## Related News & Research

| Title | Description | URL |
|-------|-------------|-----|
| New Buy Rating for MiniMax Group, Inc (0100), the Technology Giant | In a report released yesterday, Guotai Haitong maintained a Buy rating on MiniMax Group, Inc, with a price target o | [Link](https://longbridge.com/en/news/277143854.md) |
| China's Minimax reports strong revenue growth, charts broader AI ambitions | Chinese AI startup MiniMax reported 159% revenue growth to $79 million in 2025, with over 70% of sales from internation | [Link](https://longbridge.com/en/news/277484992.md) |
| The top artificial intelligence (AI) stocks to buy with $1,000 right now | The market is gifting investors some sale prices on leading AI stocks. | [Link](https://longbridge.com/en/news/277390794.md) |
| Chinese AI firm MiniMax more than doubles revenue in first post-IPO results | Annual revenue surged, though losses widened, a result likely to be welcomed by investors, who have piled into the | [Link](https://longbridge.com/en/news/277459562.md) |
| The alleged Claude leak: Anthropic says Chinese rivals scraped millions of answers | Anthropic has accused three Chinese AI firms, DeepSeek, Moonshot AI, and MiniMax, of using 24,000 fake accounts to generat | [Link](https://longbridge.com/en/news/276640069.md) |

---

> **Disclaimer**: This article is for reference only and does not constitute any investment advice.