
Meituan's large model is here! The open-sourced "LongCat" performs on par with DeepSeek V3.1 while also focusing on saving compute.

Meituan has open-sourced LongCat-Flash, a Mixture-of-Experts (MoE) model with 560 billion total parameters that targets both strong performance and computational efficiency. Through a "zero-computation" expert mechanism, the model allocates compute dynamically, activating only 18.6 billion to 31.3 billion parameters per token, which significantly reduces compute cost. A Shortcut-connected MoE (ScMoE) design further improves training and inference throughput. LongCat-Flash was trained in multiple stages and is positioned as an agentic model for solving complex tasks.
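The zero-computation idea can be illustrated with a toy sketch: alongside ordinary FFN experts, the router can also send a token to "experts" that simply pass the input through unchanged, so easy tokens consume no FFN compute. This is a minimal illustration, not LongCat-Flash's actual architecture; all sizes, weights, and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8       # toy hidden size (hypothetical)
N_FFN = 6   # ordinary FFN experts
N_ZERO = 2  # zero-computation (identity) experts
TOP_K = 2   # experts activated per token

# Toy FFN experts, each a single linear layer (stand-in for a real FFN).
W = rng.normal(size=(N_FFN, D, D)) * 0.1
# The router scores every token against all experts, real and zero-computation.
W_router = rng.normal(size=(D, N_FFN + N_ZERO))

def moe_layer(x):
    """Route each token to its top-k experts. Zero-computation experts
    return the token unchanged, so tokens routed there cost no FFN FLOPs;
    this is how per-token compute becomes dynamic."""
    out = np.zeros_like(x)
    for t, tok in enumerate(x):
        scores = tok @ W_router
        top = np.argsort(scores)[-TOP_K:]          # top-k expert indices
        gates = np.exp(scores[top])
        gates /= gates.sum()                        # softmax over chosen experts
        for g, e in zip(gates, top):
            if e >= N_FFN:
                out[t] += g * tok                   # identity: zero computation
            else:
                out[t] += g * np.tanh(tok @ W[e])   # ordinary expert FFN
    return out

tokens = rng.normal(size=(4, D))
y = moe_layer(tokens)
print(y.shape)  # (4, 8)
```

Tokens whose router score favors a zero-computation expert skip part or all of the FFN work, which is the mechanism behind activating only a fraction of the parameters per token.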

