Investing.com -- According to Bernstein, the opportunity for storage in AI datacenters remains limited compared with servers, particularly as large language models (LLMs) require far less storage than compute.
In a recent webinar with David Hall, former VP of Infrastructure at Lambda, Bernstein gathered insights about trends in the AI cloud market.
Hall estimated that storage accounts for only “8-12% of the cost of a model-training GPU cluster,” reinforcing Bernstein’s view that storage will play a smaller role than servers in AI infrastructure.
While models focused on images and videos need higher storage due to larger datasets, the overall storage demand in AI datacenters still trails far behind other components, like servers, said the firm.
Bernstein also noted that selecting a storage provider involves balancing features against cost.
Hall explained that Lambda has chosen to work with providers such as DDN, Vast, and WEKA, depending on customer needs, while bypassing solutions from NetApp (NASDAQ:NTAP), Dell (NYSE:DELL), and Pure Storage (NYSE:PSTG), citing more comprehensive features from the former group.
In the discussion, Hall is also said to have touched on GPU lifespans and upgrades.
According to Bernstein, Hall stated that GPUs today have lifespans of around “7-9 years,” meaning even fully depreciated chips can still provide value.
Hall pointed out that the new Blackwell GPUs offer performance gains of “60-200% at a 30-40% pricing bump” compared to Nvidia’s Hopper, but not all use cases require the latest technology; many tasks can be performed with older-generation GPUs such as Ampere or P-series models.
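As a rough illustration of what those quoted ranges imply, the short Python sketch below computes the relative performance per dollar of the newer GPUs at the bounds Hall cited. The figures are the ranges from the quote only; actual SKU pricing and workload-specific performance are not given in the source.

```python
def perf_per_dollar_ratio(perf_gain: float, price_bump: float) -> float:
    """Relative perf-per-dollar of a newer GPU vs. an older one.

    perf_gain and price_bump are fractional (0.60 = +60%).
    """
    return (1 + perf_gain) / (1 + price_bump)

# Worst case from the quoted ranges: +60% performance at a +40% price bump
low = perf_per_dollar_ratio(0.60, 0.40)   # ~1.14x
# Best case: +200% performance at a +30% price bump
high = perf_per_dollar_ratio(2.00, 0.30)  # ~2.31x

print(f"Blackwell perf/$ vs Hopper: {low:.2f}x to {high:.2f}x")
```

Even at the low end of the quoted ranges the newer generation delivers more performance per dollar, which is consistent with Hall's broader point: upgrades are attractive where performance matters, while fully depreciated older chips remain economical for less demanding workloads.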
Bernstein said the webinar further emphasized Nvidia (NASDAQ:NVDA)'s dominance in the software layer through CUDA and cuDNN, a key differentiator in the AI space.
Although emerging startups with custom GPUs may chip away at Nvidia’s market share, the firm says the software remains critical to success.
The firm noted that the software layer will be “the most important component” in determining the competitive landscape for GPUs.