
Amazon CEO: Over one million Trainium chips in production, generating billions of dollars in annual revenue

Amazon's CEO disclosed for the first time that the company's self-developed AI chip Trainium2 has reached an annualized revenue of billions of dollars, with over 1 million chips in production. He attributed this mainly to the chip's cost-performance advantage and to large-scale adoption by key customer Anthropic. Amazon also unveiled the next-generation chip, Trainium3, which delivers roughly a 4-fold performance improvement.
Amazon is carving out a new revenue path worth billions of dollars in a market dominated by NVIDIA, thanks to its self-developed AI chips.
During the recent AWS re:Invent conference, Amazon revealed the specific scale of its self-developed AI chip business for the first time. Amazon CEO Andy Jassy stated in his social media post that its second-generation AI training chip Trainium2 business "has already gained tremendous traction, generating annual revenues in the billions, with over 1 million chips in production."
This statement quantifies Amazon's commercial success with its self-developed chips for the first time, directly addressing market concerns about its competitiveness in the AI infrastructure space. Jassy emphasized that the core reason customers choose Trainium is its "attractive cost-performance advantage compared to other GPU options," which aligns with Amazon's classic business model of providing self-developed technology at lower prices.
He revealed that over 100,000 companies are currently using Trainium, and this chip supports the majority of usage on Amazon's AI application development platform, Bedrock.
Bedrock is a key AI service offered by Amazon that allows enterprise customers to choose and build applications across multiple AI models. The Trainium chip becoming the main computing hardware for this platform signifies Amazon's successful vertical integration from hardware to services within its ecosystem.

Strong Support from Key Customer Anthropic
The adoption by key customers has played a decisive role in the billions of dollars in revenue from Trainium chips. According to tech media CRN, AWS CEO Matt Garman revealed in an interview that AI startup Anthropic is one of the largest users of Trainium chips.
Garman stated, "We see Trainium2 gaining tremendous traction, especially from our partner Anthropic." He further pointed out that in a project called "Project Rainier," Amazon deployed over 500,000 Trainium2 chips to help Anthropic build its next-generation Claude series AI models. Project Rainier is Amazon's largest AI server cluster to date, designed to meet Anthropic's rapidly growing computing power demands.
Amazon is a major investor in Anthropic, and as part of the investment agreement, Anthropic has chosen AWS as its primary cloud service provider for model training. This strategic partnership not only provides a stable revenue source for Trainium chips but also offers Amazon a benchmark case to showcase the performance and scale of its chips.
New Generation Trainium3 Released, Challenging NVIDIA's Ecosystem
While consolidating its existing market, Amazon is accelerating its catch-up through technological iteration. The company officially launched the new generation AI chip Trainium3 at the re:Invent conference. According to Jassy, the new chip has achieved a significant leap in performance, "compared to Trainium2, Trainium3 will provide at least 4.4 times the computing performance, 4 times the energy efficiency, and nearly 4 times the memory bandwidth."
Despite significant progress, challenging NVIDIA's market position remains a daunting task. NVIDIA's moat lies not only in its GPU hardware but also in its proprietary CUDA software platform, which has become the de facto standard for AI development. According to TechCrunch, rewriting AI applications for non-CUDA chips is a complex and costly endeavor.
However, Amazon seems to be planning a response. Reports indicate that its next-generation Trainium4 chip will be designed to work in conjunction with NVIDIA's GPUs within the same system. Whether this move will ultimately weaken NVIDIA's market share or further consolidate its dominance on AWS cloud remains to be seen.

