
NVIDIA unveils its next-generation Rubin platform, cutting inference costs to one-tenth of Blackwell's, with shipments planned for the second half of the year

The Rubin platform delivers 3.5 times the training performance of Blackwell and a 5-fold improvement when running AI software, while cutting the number of GPUs needed to train mixture-of-experts models by a factor of 4. Jensen Huang said that all six Rubin chips have passed key tests, showing they can be deployed on schedule. NVIDIA announced that the platform is now in full production, with cloud service providers including Amazon AWS, Google Cloud, Microsoft, and Oracle Cloud among the first to deploy it.

