Did Elon Musk Just Give Nvidia Investors 40 Billion Reasons to Cheer?

Key Points

  • Elon Musk's xAI, a rival to ChatGPT, has been building a supercomputer leveraging clusters of Nvidia's chips.

  • The company is reportedly eyeing 1 million additional GPUs, which could cost an estimated $40 billion.

  • While Nvidia is facing more competitive pressures from its own customers, demand trends suggest it shouldn't have much of a problem with growth.

When it comes to training generative AI models, Nvidia's (NASDAQ: NVDA) graphics processing units (GPUs) are hailed as the gold standard among industry experts. That's not exactly a novel conclusion considering the semiconductor powerhouse has amassed an estimated 90% or more of the GPU market.

The more subtle idea here is how exactly Nvidia built such a gigantic lead over the competition. While the company does not explicitly disclose which customers buy its GPUs, it's widely speculated across Wall Street that cloud hyperscalers Microsoft, Alphabet, and Amazon, as well as Meta Platforms, are among Nvidia's largest customers.

Considering that these players are forecasting AI infrastructure spending well in excess of $300 billion this year alone, I wouldn't be shocked at all if these "Magnificent Seven" members are repeat customers of Nvidia.

But beyond the typical mega-cap AI names is another company quickly emerging as a top client. Let's explore how Elon Musk's new start-up, xAI, is deploying Nvidia's hardware and assess just how much it could be spending on the industry's best chip architecture.

How are xAI and Nvidia working together?

Musk's xAI is a start-up building a large language model (LLM) called Grok, which is meant to compete with the likes of other popular LLMs such as ChatGPT, developed by OpenAI.

For the past year, xAI's primary focus has been building a supercomputer to train its AI applications. The initial stage of development for the supercomputer, called Colossus, used 100,000 Nvidia GPUs. Shortly thereafter, Musk and his team scaled up the GPU cluster to 200,000 chips.

[Image: An AI chatbot prompt bar open in a browser. Source: Getty Images.]

What's next for xAI?

A couple of months ago, Musk sat down with a team of developers and talked shop about how Grok is trained and what's next for xAI. Perhaps unsurprisingly, he doubled down on securing more GPUs.

At the time, he implied that the next training cluster would be five times larger than the current infrastructure. In other words, Colossus 2 would need 1 million chips (five times the 200,000 referenced above). He estimated the total cost for this project would fall between $25 billion and $30 billion.