Better Artificial Intelligence Stock: Nvidia vs. AMD

Key Points

  • Nvidia is the GPU market leader that has created a wide moat through its CUDA software program.

  • However, as the market shifts toward AI inference, AMD has the potential to take market share.

  • The stocks currently trade at similar valuations.

Even with new export controls cutting off a vital market in China, demand for advanced chips used to power artificial intelligence (AI) infrastructure remains high. While there is a growing market for custom AI chips, the most commonly used chips for running AI workloads are graphics processing units (GPUs). This name stems from the fact that these chips were originally designed to speed up graphics rendering in video games.

Due to their powerful processing speeds, GPUs are now used for a variety of high-power computing tasks, such as training large language models (LLMs) and running AI inference. The GPU market is basically a duopoly at this point, headed by Nvidia (NASDAQ: NVDA) and Advanced Micro Devices (NASDAQ: AMD). The question many investors ask, though, is: Which stock is the better buy?

The leader versus the challenger

The unquestioned leader in the GPU space is Nvidia, which commands more than 80% of the market. Not only is Nvidia larger than AMD, but it has also been growing its data center revenue more quickly. Last quarter, Nvidia grew its data center revenue by 73% to $39.1 billion, while AMD's data center revenue jumped 57% to $3.7 billion.

Nvidia's advantage comes from its software platform, CUDA. It launched the free platform back in 2006 as a way to let developers program its GPUs for tasks beyond video games. The company pushed the software into universities and research labs, making CUDA the platform on which generations of developers learned to program GPUs.

While AMD made some half-hearted efforts with software, it didn't launch a true CUDA competitor until around 10 years later with ROCm. By that time, CUDA had already become the default software used to program GPUs, and ROCm lagged behind with less hardware support, limited documentation, and a more difficult installation and setup process. Meanwhile, Nvidia has since extended its software lead with CUDA-X, a collection of AI-specific libraries and tools built on top of CUDA that helps bolster the performance of its chips for AI tasks.

Ultimately, CUDA has given Nvidia a big network effect advantage. The more CUDA is used, the more tools and libraries are built for it, making Nvidia GPUs all the stickier. While ROCm continues to improve, it still trails CUDA, especially for use in LLM training.