
DeepSeek is trending: is NVIDIA finished?

DeepSeek v3's training compute requirements have dropped sharply, thanks to algorithmic advances and data distillation, which make training latecomer models far more efficient. Although High-Flyer's DeepSeek reached a comparable level with roughly one tenth of the compute after GPT-4o's release, any accounting of training cost must also include the upstream research investment. Going forward, synthetic data will be an important way to break through data limitations, and overall demand for training compute is still rising, with labs such as OpenAI and Anthropic likewise facing compute shortages.
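The article mentions distillation only in passing, so as a rough illustration of the general technique (not DeepSeek's actual training pipeline, which is not described here), below is a minimal sketch of classic soft-label knowledge distillation in PyTorch, where a student model is trained to match a stronger teacher's output distribution. The function name, toy tensor shapes, and temperature value are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation: the student matches the teacher's
    softened output distribution (Hinton et al., 2015).
    Illustrative sketch only; not DeepSeek's actual method."""
    t = temperature
    # Soften both distributions with the temperature.
    soft_targets = F.softmax(teacher_logits / t, dim=-1)
    log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL divergence from student to teacher, scaled by t^2
    # to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * (t * t)

# Toy usage: a batch of 4 examples over a 10-class output space.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
loss = distillation_loss(student, teacher)
loss.backward()
print(loss.item())
```

The intuition behind the compute savings is that the teacher's soft labels carry much more signal per example than hard labels, so a latecomer model can reach a similar capability level with far fewer training tokens and FLOPs.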

