
SanDisk Return Rate

SanDisk's stock price keeps rising and is undergoing a valuation re-rating.

Why is SanDisk rising? Many retail investors still view SanDisk through a speculative lens, or attribute the stock's rise to a cyclical shortage in storage. These factors are real, but they are not the whole story. The essence is that demand for storage in the AI era is growing exponentially, and the need for technological innovation in storage products is driving SanDisk's valuation reconstruction. HBF is a SanDisk invention and will in the future be as important as HBM. I have said repeatedly since last December that SanDisk is undergoing a valuation reconstruction. We look forward to Micron Technology's stellar earnings report next week, and to the key role SanDisk's HBF will play in NVIDIA's future products at the GTC conference (as an intermediate storage layer for the KV cache), solving the storage bottleneck.
The following section is AI-generated: an outline of the core schedule of NVIDIA's GTC 2026 conference, the new technologies and products to be announced, and a detailed interpretation of HBF technology.
🗓️ Main Schedule of GTC 2026 Conference
The GTC 2026 conference will open on March 16 (local time) in San Jose, California.
GTC Live Pre-event
- Time: March 16, 8:00 AM local US time
- Content: CEOs from companies like Perplexity, LangChain, and Mistral AI are invited to discuss the five-layer stack architecture of accelerated computing and AI infrastructure.
Jensen Huang's Keynote Speech
- Time: March 16, 11:00 AM local US time
- Location: SAP Center, San Jose
- Content: This is the most anticipated session of the conference, expected to cover the latest advancements in areas such as accelerated computing, AI factories, open models, agentic systems, and physical AI.
In addition to the on-site events, the conference will feature more than 1,000 sessions, training courses, and workshops, running through March 19.
🚀 New Technologies and Products NVIDIA Will Release
The core focus of this GTC conference is not the refresh of single-chip parameters, but how NVIDIA is driving the entire AI industry from "buying GPUs" to a new stage of "deploying AI factories." Main points of interest include:
Vera Rubin Computing Platform
This is an integrated AI supercomputing platform, not just a single GPU. It consists of CPUs, GPUs, interconnects, networking, and system components, aiming to solve the bottlenecks in computing power, networking, and storage in AI training and inference.
- Core Components: Include the Vera CPU with the new Olympus architecture and the Rubin GPU equipped with the third-generation Transformer Engine.
- Performance Improvement: The inference computing power of the Rubin GPU is expected to be 5 times that of the previous Blackwell platform. When running large Mixture of Experts (MoE) models, the token generation cost can be reduced to one-tenth.
- Mass Production Information: The platform entered full-scale production in early 2026 and is expected to reach the market through cloud service providers (such as AWS, Google Cloud, and Microsoft Azure) in the second half of 2026.
Feynman Architecture Preview
This will be one of the most strategically significant highlights of the conference. Feynman is NVIDIA's roadmap preview for the post-Rubin era and may become one of the first chips to adopt TSMC's A16 process, with production expected to start in 2028.
AI Factory Infrastructure Reconstruction
To support the growing demand for AI computing power, NVIDIA will showcase new solutions in interconnect, power supply, and cooling:
- Interconnect: Transition from traditional copper interconnects to higher-bandwidth, lower-loss optical interconnects (CPO and silicon photonics technology).
- Power Supply: Showcase solutions like 800V HVDC and highly integrated modular power supply to ensure efficient and stable power delivery to every computing node.
- Cooling: Liquid cooling technology will shift from an optional solution to a standard configuration to address the cooling challenges of ultra-high-power chips.
💡 The Significance and Prospects of HBF and KV Cache
The "intermediate storage layer for the KV cache" mentioned above is highly relevant to the storage technology direction NVIDIA is currently pursuing, but two concepts need to be distinguished: one is the platform concept proposed by NVIDIA, and the other is the HBF technology standard being promoted by the industry.
Relationship and Differences Between the Two
- NVIDIA's Platform Concept: At CES, Jensen Huang announced the "NVIDIA Inference Context Memory Storage Platform," a revolutionary architecture designed specifically for AI inference to create a high-speed, low-energy "context memory" storage layer, expanding the GPU's available memory capacity and solving the storage bottleneck of KV cache (Key-Value Cache).
- HBF Technology Standard: HBF (High Bandwidth Flash), regarded by the industry as the "NAND version of HBM," is a new type of storage technology positioned between HBM (High Bandwidth Memory) and traditional SSDs (Solid State Drives). Storage giants such as SK Hynix and SanDisk are collaborating to promote it as a global standard.
In simple terms, HBF is precisely a highly promising technological path to achieve the kind of "intermediate storage layer" described by NVIDIA.
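The tiering idea behind such an intermediate layer can be sketched as a simple cache-promotion policy. This is a toy model: the tier names echo the article, but the capacities, the FIFO eviction, and the promote-on-hit policy are illustrative assumptions, not how any vendor's platform actually works.

```python
from typing import Optional, Tuple

# Illustrative three-tier hierarchy: hot KV blocks in HBM, warm blocks in
# HBF, cold blocks on SSD. Capacities are toy numbers, not vendor specs.
TIERS = [("HBM", 2), ("HBF", 4), ("SSD", 10**6)]
stores = {name: {} for name, _ in TIERS}

def put(key, value) -> None:
    """Insert at the fastest tier, demoting overflow down one tier at a time."""
    for name, capacity in TIERS:
        store = stores[name]
        store[key] = value
        if len(store) <= capacity:
            return
        # Tier overflowed: evict its oldest entry (FIFO) into the next tier.
        key, value = next(iter(store.items()))
        del store[key]

def get(key) -> Optional[Tuple[str, object]]:
    """Search tiers fastest-first; promote a hit back to the top tier."""
    for name, _ in TIERS:
        if key in stores[name]:
            value = stores[name].pop(key)
            put(key, value)
            return name, value  # tier where the block was found
    return None
```

The point of the sketch is the access pattern: a block found in the slower flash tier is served and promoted, so frequently reused context stays close to the GPU while bulk capacity lives in the cheaper tier.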
The Significance of HBF
The emergence of HBF is to solve a core contradiction in the current development of large AI models: the model's demand for memory capacity far exceeds the growth rate of HBM capacity.
| Characteristic | HBM (High Bandwidth Memory) | SSD (Solid State Drive) | HBF (High Bandwidth Flash) |
| --- | --- | --- | --- |
| Positioning | Ultra-fast cache for the most active data | Large-capacity bulk storage | Fills the gap between HBM and SSD |
| Advantages | Extremely high bandwidth, extremely low latency | Huge capacity, low cost | Combines high bandwidth with large capacity, cost-effective |
| Limitations | Small capacity, extremely high cost | Low bandwidth, high latency | — |
Its core value lies in providing storage space far exceeding HBM capacity at speeds close to HBM, specifically for handling the massive contextual data (i.e., KV cache) in AI inference, thereby significantly improving the processing capability and energy efficiency of AI systems without a substantial increase in cost.
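To make the capacity pressure concrete, a back-of-the-envelope calculation shows how KV-cache size scales with context length and concurrent users. All model dimensions here are illustrative assumptions for a 70B-class model with grouped-query attention, not figures from the article.

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, batch: int, bytes_per_elem: int = 2) -> int:
    """Each layer stores one key and one value vector per KV head per token."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Illustrative 70B-class model: 80 layers, 8 KV heads, head_dim 128, FP16,
# 128K-token context, 32 concurrent requests.
total = kv_cache_bytes(80, 8, 128, seq_len=128_000, batch=32)
print(f"{total / 1e9:.0f} GB of KV cache")  # ~1,342 GB for the cache alone
```

Even under these modest assumptions the cache runs to well over a terabyte, an order of magnitude beyond a single GPU's HBM, which is exactly the gap a cheaper, high-bandwidth flash tier is meant to absorb.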
The Future Prospects of HBF
The commercialization process of HBF is accelerating, with broad prospects:
- Clear Market Demand: As AI shifts from "training" to "inference," the surge in user concurrency makes the demand for efficient storage systems particularly urgent. HBF precisely meets the dual needs of capacity scalability and high energy efficiency in inference scenarios.
- Industry Ecosystem Formation: Storage giants such as SK Hynix and SanDisk have begun collaborating to promote its standardization and productization, which will accelerate the maturation of the entire industry chain.
- Clear Commercialization Timeline: According to the plan, the first HBF product samples are expected to be delivered in the second half of 2026, and the first AI inference servers integrated with HBF are expected to be launched in early 2027.
Overall, HBF is regarded as one of the key technologies in the AI inference era and is expected to see large-scale application in the coming years.

