AI microchip supplier Nvidia, the world’s most valuable company by market cap, remains heavily dependent on a few anonymous customers that collectively contribute tens of billions of dollars in revenue.
The AI chip darling once again warned investors in its quarterly 10-Q filing with the SEC that it has key accounts so crucial that each one's orders crossed the threshold of 10% of Nvidia's consolidated global revenue.
Three particularly deep-pocketed customers, for example, each purchased between $10 billion and $11 billion worth of goods and services over the nine-month period that ended in late October.
Fortunately for Nvidia investors, this is unlikely to change any time soon. Mandeep Singh, global head of technology research at Bloomberg Intelligence, says he believes founder and CEO Jensen Huang's prediction that spending will not stop.
“The data center training market could hit $1 trillion without any real pullback,” Singh said. By that point, Nvidia's share will almost certainly have dropped markedly from its current 90%, but that could still translate into hundreds of billions of dollars in annual revenue.
Nvidia remains supply constrained
Outside of defense contractors living off the Pentagon, it's highly unusual for a company to have such a concentration of risk among a handful of customers—let alone one poised to become the first worth the astronomical sum of $4 trillion.
Looking strictly at Nvidia's accounts on a three-month basis, four anonymous whales together accounted for nearly every second dollar of sales in the second fiscal quarter. This time at least one of them has dropped out, since only three still meet that criterion.
Singh told Fortune the anonymous whales likely include Microsoft, Meta, and possibly Super Micro. But Nvidia declined to comment on the speculation.
Nvidia only refers to them as Customers A, B, and C, and all told they purchased a collective $12.6 billion in goods and services. This was more than a third of Nvidia’s overall $35.1 billion recorded for the fiscal third quarter through late October.
Their share was also divided equally, with each accounting for 12% of revenue, suggesting they were likely receiving the maximum allocation of chips rather than as many as they might ideally have wanted.
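The figures reported above are internally consistent, as a quick back-of-the-envelope check shows (the numbers below are those cited in this article, not taken directly from the filing; rounding is approximate):

```python
# Sanity-check the customer-concentration arithmetic cited in the article.
total_q3_revenue = 35.1   # $ billions, fiscal Q3 through late October
combined_abc = 12.6       # $ billions, Customers A, B, and C combined

# Combined share of quarterly revenue: ~35.9%, i.e. more than a third.
share = combined_abc / total_q3_revenue
print(f"Combined share: {share:.1%}")

# Each customer at 12% of revenue works out to roughly $4.2 billion apiece.
per_customer = 0.12 * total_q3_revenue
print(f"Per customer: ${per_customer:.1f}B")
```

Note that the ~$4.2 billion per-customer figure also matches the purchase attributed to “Customer A” later in the piece.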
This would fit with comments from founder and CEO Jensen Huang that his company is supply constrained. Nvidia cannot simply pump out more chips, since it has outsourced wholesale fabrication of its industry-leading AI microchips to Taiwan’s TSMC and has no production facilities of its own.
Middlemen or end users?
Importantly, Nvidia's designation of major anonymous customers as “Customer A,” “Customer B,” and so on is not fixed from one fiscal period to the next. They can and do change places, with Nvidia keeping their identities a trade secret for competitive reasons—no doubt these customers would rather their investors, employees, critics, activists, and rivals not see exactly how much money they spend on Nvidia chips.
For example, one party designated “Customer A” bought around $4.2 billion in goods and services over the past quarterly fiscal period. Yet it appears to have accounted for less in the past, since it does not exceed the 10% mark across the first nine months in total.
Meanwhile “Customer D” appears to have done the exact opposite, reducing purchases of Nvidia chips in the past fiscal quarter yet nevertheless representing 12% of turnover year-to-date.
Since their names are secret, it's difficult to say whether they are middlemen like the troubled Super Micro Computer, which supplies data center hardware, or end users like Elon Musk's xAI. The latter, for example, came out of nowhere to build its new Memphis compute cluster in just three months' time.
Longer-term risks for Nvidia include the shift from training to inference chips
Ultimately, however, only a handful of companies have the capital to compete in the AI race, as training large language models can be exorbitantly costly. Typically these are cloud computing hyperscalers such as Microsoft.
Oracle, for example, recently announced plans to build a zettascale data center with more than 131,000 of Nvidia's state-of-the-art Blackwell AI training chips, which would be more powerful than any individual site in existence today.
It’s estimated the electricity needed to run such a massive compute cluster would be equivalent to the output capacity of nearly two dozen nuclear power plants.
Bloomberg Intelligence analyst Singh sees only a few longer-term risks for Nvidia. For one, some hyperscalers will likely reduce orders eventually, diluting its market share. One likely candidate is Alphabet, which has its own training chips called TPUs.
Secondly, its dominance in training is not matched in inference, the stage in which generative AI models are run after they have already been trained. Here the technical requirements are not nearly as cutting-edge, meaning there is much more competition, not just from rivals like AMD but also from companies with their own custom silicon, like Tesla. Eventually, inference should become a much more meaningful business as more and more companies utilize AI.
“There are a lot of companies trying to focus on that inferencing opportunity, because you don’t need the highest-end GPU accelerator chip for that,” Singh said.
Asked whether this longer-term shift to inferencing was a bigger risk than eventually losing market share in training chips, he replied: “Absolutely.”