Meituan's large model is here! Open-source "LongCat" delivers performance on par with DeepSeek V3.1 while also focusing on "computing power savings."

Wallstreetcn
2025.08.31 00:50

Meituan has open-sourced LongCat-Flash, a Mixture-of-Experts (MoE) large model with 560 billion total parameters that pursues both strong performance and computational efficiency. Through a "zero-computation" expert mechanism, the model allocates compute dynamically, activating only 18.6 billion to 31.3 billion parameters per token and thereby saving substantial computing power. A shortcut-connected Mixture-of-Experts design (ScMoE) further raises training and inference throughput. LongCat-Flash was trained in multiple stages and is positioned as an agentic model for solving complex tasks.
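The idea behind "zero-computation" experts can be illustrated with a toy router: alongside ordinary feed-forward experts, the router can also select identity experts that simply pass a token through, so some tokens activate fewer parameters than others. The sketch below is a simplified, hypothetical illustration in PyTorch, not Meituan's implementation; the class name, dimensions, and expert counts are invented for the example.

```python
# Minimal sketch (not LongCat-Flash's code) of an MoE layer whose router can pick
# "zero-computation" identity experts, so activated parameters vary per token.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ZeroComputationMoE(nn.Module):
    """Toy MoE layer: real FFN experts plus zero-computation (identity) experts."""

    def __init__(self, d_model=64, n_ffn_experts=4, n_zero_experts=2, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.n_ffn_experts = n_ffn_experts
        self.n_zero_experts = n_zero_experts
        # Real experts: small feed-forward networks that actually cost FLOPs.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
            )
            for _ in range(n_ffn_experts)
        )
        # The router scores real experts and zero-computation experts together.
        self.router = nn.Linear(d_model, n_ffn_experts + n_zero_experts)

    def forward(self, x):  # x: (tokens, d_model)
        scores = F.softmax(self.router(x), dim=-1)
        weights, idx = scores.topk(self.top_k, dim=-1)  # per-token expert choices
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(self.n_ffn_experts + self.n_zero_experts):
                mask = idx[:, slot] == e
                if not mask.any():
                    continue
                if e < self.n_ffn_experts:
                    y = self.experts[e](x[mask])  # real compute
                else:
                    y = x[mask]  # zero-computation expert: identity, no extra FLOPs
                out[mask] += weights[mask, slot].unsqueeze(-1) * y
        return out


tokens = torch.randn(8, 64)
layer = ZeroComputationMoE()
print(layer(tokens).shape)  # torch.Size([8, 64])
```

In this toy version, a token routed to two identity experts costs almost nothing, while a token routed to two FFN experts pays the full price, which is the intuition behind the varying 18.6-31.3 billion activated parameters described above.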