
The 20-billion AI unicorn strikes back: MiniMax's first reasoning model rivals DeepSeek, with a compute cost of only $530,000

AI startup MiniMax has released M1, its first reasoning model, trained in three weeks on 512 NVIDIA H800 GPUs at a rental cost of $537,400. In multiple benchmarks, M1 surpassed DeepSeek's latest R1-0528 model while requiring only 25% of the compute DeepSeek uses to generate 100K tokens.

