The 20 billion AI unicorn strikes back: MiniMax's first reasoning model rivals DeepSeek, with computing costs of only 530,000 USD

Wallstreetcn
2025.06.17 11:56

AI startup MiniMax has released its first reasoning model, M1, which was trained in three weeks on 512 NVIDIA H800 GPUs at a rental cost of $537,400. In multiple benchmark tests, M1 surpassed DeepSeek's latest R1-0528 model, and when generating 100K tokens it requires only about 25% of the computing power that DeepSeek's model consumes.