Nvidia Investment Thesis

Published on 04/28/21 | Saurav Sen | 4,504 Words

The BuyGist:

  • This thesis was originally posted on November 6th, 2018. 
  • We go through the company's competitive advantage, durability of competitive advantage, and its management quality. 
  • Since we wrote this, a lot has changed. We regularly revisit our thesis to see what we got right and what we got wrong.

Thesis Summary

Dominant GPU company that's front-and-center in the AI revolution. The main threats are AMD and FPGAs, but Nvidia should carve out a space in markets beyond Gaming and datacenters, such as Autonomous Vehicles and Virtual Reality.

Competitive Advantage: The Castle

  • Core Competency? Graphics Processing Units (GPUs) for Gaming and AI Computing.
  • Products: Better or Cheaper? Better. Improving GPUs and making them more AI-friendly.
  • Evidence: Growth? Revenue growing at 40%-plus. EBITDA margins expanding.

Durability of Competitive Advantage: The Moat

  • Competition/Threats? AMD for GPUs. Xilinx/Intel for FPGAs. And ASICs.
  • Protection? Complexity, IP rights, and CUDA – Nvidia’s software stack for GPUs.
  • Cash Flow Resiliency? Resilient. Common Architecture and CUDA extend product cycle.

Management Quality: The Generals

  • Strategy & Action? Positive. Founder Huang focused on the right end-markets.
  • Financial Productivity? High. ROE > 50%, due to high margins.
  • Sustainable Free Cash Flow? ***Redacted

Why Nvidia?

I looked at Nvidia for 2 main reasons: 

  1. As a potential call-option on Cryptocurrencies. 
  2. Its price correction in October 2018.

On point #1: It turns out that Nvidia’s crypto business is declining. And going forward, Nvidia says it won’t assume any revenue from cryptocurrencies. So, my plan didn’t quite work out. Maybe cryptos will come back, but I wouldn’t bet on it. That’s why I wanted to explore a lateral way of playing the crypto market instead of a direct bet.

Competitive Advantage: The Castle

Core Competency? Graphics Processing Units (GPUs) for Gaming and AI Computing.

If there’s one company that’s been front-and-center of the AI revolution, it’s Nvidia. And that’s because it makes these specialized chips that few others have mastered. Graphics Processing Units (GPUs) are different from Central Processing Units (CPUs), which do most of the work on your laptop. I’ve explained this in some detail in Investing in AI Hardware, but I’ll reiterate a few points here. I’ll be brief.

At this point you may ask, “why are ‘graphics’ chips used for Artificial Intelligence?”. Good question. Graphics require very compute-intensive processes. Bringing a videogame to life is hard work – for engineers and for the processor. If a CPU were to perform this compute-intensive process on top of the thousand other processes it has to run, it would make your computer or game console too slow, and it would consume too much power. It makes sense to have a separate processor that just focuses on the image-rendering side of things. It turns out that image rendering is so compute-intensive because it requires a ton of parallel computing. For those of you who’ve studied some higher-level mathematics – GPUs are Linear/Matrix Algebra workhorses. For the rest, it means GPUs can solve tons of simultaneous equations, quickly and efficiently. By efficiently, I mean with less power than a CPU. For image rendering, which basically means simulating real-world images (light and shadows), a ton of Matrix Algebra is required. Think of it as coordinating and superimposing millions of pixels, all at the same time.
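To make that “Matrix Algebra workhorse” point concrete, here's a tiny illustrative sketch in Python (using NumPy on a CPU, so it's purely for intuition): transforming a million 3D points for rendering and pushing a batch of data through a neural-network layer are, mechanically, the same kind of big matrix multiplication. That is exactly the kind of operation a GPU spreads across thousands of cores.

```python
import numpy as np

# Toy illustration: graphics and AI both boil down to large matrix multiplies,
# and every output element can be computed independently, i.e. in parallel.

num_points = 1_000_000
points = np.random.rand(num_points, 3).astype(np.float32)   # e.g. 3D vertices

# A single 3x3 transform (a rotation here) applied to all points at once.
angle = np.pi / 4
transform = np.array([
    [np.cos(angle), -np.sin(angle), 0.0],
    [np.sin(angle),  np.cos(angle), 0.0],
    [0.0,            0.0,           1.0],
], dtype=np.float32)

transformed = points @ transform.T   # one matrix multiply, a million points

# A neural-network layer is the same shape of work: inputs times weights.
inputs = np.random.rand(1024, 512).astype(np.float32)    # a batch of 1024 samples
weights = np.random.rand(512, 256).astype(np.float32)    # the layer's weights
activations = np.maximum(inputs @ weights, 0.0)           # matmul + ReLU

print(transformed.shape, activations.shape)
```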

It turns out that this sort of parallel computing is perfect for Artificial Intelligence operations. At their heart, AI operations are complex math problems with tons of parallel equations to solve. Today, Nvidia’s co-founder and CEO Jensen Huang claims that they saw this years ago (which may be true). Regardless of whether they foresaw this revolution or not, the company really struck gold around 2016. As Cloud Computing became mainstream, AI became easier to disseminate. All the heavy parallel computing would be done in giant server farms or datacenters. The AI itself – in the form of speech or image recognition – would be made available on “edge devices” such as phones or speakers. All this has happened in the last couple of years.

When AI first started going mainstream, engineers knew that CPUs wouldn’t cut it. The best thing they could use at the time was GPUs. So, the point is that GPUs were built for graphics, but they stumbled upon a revolution called AI. Whether Jensen Huang knew this all along or not is irrelevant. At the moment, his company Nvidia dominates the GPU space and, in effect, dominates the AI Computing landscape.

There is one nuance that I had mentioned in Artificial Intelligence is Real, but it’s worth repeating here. So far, the dominant flavor of AI has been Machine Learning. At the risk of oversimplifying, I’ll state that Machine Learning is the simplest form of AI. Don’t get me wrong, Machine Learning itself is very complicated and there are several sub-flavors. But compared to some other frameworks like Convolutional Neural Networks and Recurrent Neural Networks, Machine Learning is not as computationally intensive. In the world of Machine Learning, GPUs are probably the most efficient type of hardware. But as AI processes get more complex, other types of chips are starting to make their mark. More on this in the Competition section.

But GPUs, thanks mostly to Nvidia, showed up to the AI party before anyone else. And they have proliferated. They’re powering most of the AI applications out there. Nvidia has harnessed a lot of the Machine Learning growth in the last couple of years. But the company refuses to rest on its laurels. It’s investing heavily in new-generation GPUs that can do more than graphics rendering and Machine Learning. Nvidia’s newest GPUs are targeting industries like Autonomous Vehicles and Medical Imaging. In my view, when Virtual Reality applications blossom, they will be right in Nvidia’s wheelhouse.


Products: Better or Cheaper? Better. Improving GPUs and making them more AI-friendly.

As far as GPUs are concerned, Nvidia doesn’t have much competition. Their main competitor today really is AMD. Are their products better than AMD’s? It depends on the application. But the comparison is usually objective rather than a matter of taste. If you google “best GPUs” or something similar, you’ll be inundated with performance stats that can be really confusing. But what I do understand is that I should look at two broad categories of performance metrics: Speed and Energy Consumption. Nvidia and AMD compete on permutations and combinations of these two performance categories.

There are two kinds of innovation going on at Nvidia.

  1. Make GPUs even better.
  2. Make GPUs more palatable to AI applications beyond Machine Learning.

Gaming is Nvidia’s biggest moneymaker. I’m sure you’ll agree that Gaming has come a long way since the PS1 in the 1990s. In some cases, graphics are almost as good as visual effects in movies. Nvidia has a big hand in that. As their GPUs started being used for AI applications, Nvidia didn’t neglect its main breadwinner. First of all, the chips used for both applications have been virtually the same. Secondly, it’s not as if they have no competition, so they can’t just sit idle. Gaming is still a big and growing market. A great (and timely) example of Nvidia’s relentlessness in Gaming is the recently released Turing chip. CEO Jensen Huang calls it the greatest innovation in graphics in the last 15 years. I wouldn’t know, but even if it’s not a breakthrough, it looks very impressive. Apart from improvements in the usual Speed and Energy Efficiency metrics, Nvidia claims to have conquered something called Ray Tracing, otherwise known as the “holy grail of computer graphics”.

I won’t write a whole thesis on Ray Tracing; you can read more about it here, but the gist is this: Computer Graphics is essentially about manipulating light in different ways to make images look more realistic. Our eyes are incredibly sensitive to the smallest details on the computer or TV screen, even if we don’t consciously think about it. That’s a big problem for programmers. Ray Tracing, a method that took Nvidia 10 years to master, makes the programmers’ jobs a bit easier. The dumbed-down explanation I use for myself is that Ray Tracing comes orders of magnitude closer to mimicking how light behaves out in the real world (the world that our eyes normally see). And so, the images on screen look a lot more like the ones in the real world. The applications of Ray Tracing go beyond Gaming, into movies and (eventually) Virtual Reality.
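For the curious, here's a toy sketch (plain Python, and emphatically not Nvidia's RTX implementation) of the core idea: shoot a ray from the camera through each pixel, check what it hits, and shade the pixel based on how the surface faces the light. Real ray tracers trace many rays and bounces per pixel, which is why doing this in real time is such a big deal.

```python
import numpy as np

# Toy ray tracer: one ray per pixel, one sphere, one light. Illustration only.
WIDTH, HEIGHT = 160, 120
sphere_center = np.array([0.0, 0.0, -3.0])
sphere_radius = 1.0
light_dir = np.array([1.0, 1.0, -1.0])
light_dir = light_dir / np.linalg.norm(light_dir)

image = np.zeros((HEIGHT, WIDTH))

for y in range(HEIGHT):
    for x in range(WIDTH):
        # One ray per pixel, from a camera at the origin looking down -z.
        u = (x / WIDTH - 0.5) * 2.0
        v = (y / HEIGHT - 0.5) * 2.0
        direction = np.array([u, v, -1.0])
        direction = direction / np.linalg.norm(direction)

        # Does this ray hit the sphere? (solve a quadratic for the hit distance t)
        oc = -sphere_center                      # ray origin minus sphere center
        b = 2.0 * direction.dot(oc)
        c = oc.dot(oc) - sphere_radius ** 2
        discriminant = b * b - 4.0 * c
        if discriminant > 0:
            t = (-b - np.sqrt(discriminant)) / 2.0
            if t > 0:
                hit_point = t * direction
                normal = (hit_point - sphere_center) / sphere_radius
                # Brightness: how directly does the surface face the light?
                image[y, x] = max(normal.dot(light_dir), 0.0)

print("Rendered", image.shape, "pixels; brightest value:", round(image.max(), 2))
```

Notice that every pixel's calculation is independent of every other pixel's, which is why this workload maps so naturally onto a GPU's thousands of cores.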

Innovations in the Graphics world clearly have ripple effects in the AI world. Better, faster rendering of images should mean better, faster image-recognition processing. As Nvidia thinks about AI beyond Machine Learning, it seems to be doing a good job of carrying its graphics expertise into other AI use-cases. Two use-cases stand out:

  1. Autonomous Vehicles
  2. Medical Imaging

Both these industries are amongst the most intensive AI applications in production now. They’re still in their early stages, but Nvidia wants a big part of the action. Over the last year or so, Nvidia has been releasing specialized GPUs and platforms for these specific applications. For example, Nvidia released the DRIVE AGX platform earlier this year for Autonomous Driving. Companies like Daimler and Volvo are apparently building their Autonomous Driving systems on it. When Nvidia makes industry-specific products like this, it does so partly through hardware configuration (altering the actual GPU) and partly via its software stack – CUDA.

CUDA is a big deal, and it really separates Nvidia from the pack. Think of it as a layer of software that talks to the GPU to run these AI applications. The applications themselves send and receive data to and from the GPU, so something must do the translation. That middle layer is CUDA, which was invented by Nvidia. CUDA makes it relatively easy for Nvidia’s clients (like a Daimler) to build AI applications that can harness the power of Nvidia’s GPUs. The same goes for GE, which makes Medical Imaging equipment. They may use a pre-built Watson API, but it needs the parallel computing power of GPUs. Engineers can connect the Watson AI API to the GPU using CUDA.
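To give a flavor of what “talking to the GPU through CUDA” looks like in practice, here's a minimal sketch. It assumes an Nvidia GPU with CUDA drivers and the third-party numba package (which compiles Python functions into CUDA kernels); it's illustrative only, not how a Daimler or a GE actually builds its systems.

```python
import numpy as np
from numba import cuda   # compiles the function below into a CUDA kernel

# Each GPU thread computes one element of the output, all in parallel.
@cuda.jit
def scale_and_add(a, x, y, out):
    i = cuda.grid(1)                  # this thread's position in the launch grid
    if i < x.size:
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
scale_and_add[blocks, threads_per_block](2.0, x, y, out)   # launch on the GPU

print(out[:5])
```

The point of the sketch is the division of labor: the application code stays in a familiar language, and CUDA is the layer that turns one function into work spread across thousands of GPU threads.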

Nvidia has other such specialized chips, which all look and feel similar because they’re all GPUs. But slowly and surely, Nvidia is tailoring them towards the specifics of industries that it has chosen very carefully – industries where its expertise in graphics ports over almost seamlessly into AI applications.


Evidence: Growth? Revenue growing at 40%-plus. EBITDA margins expanding.

Nvidia’s revenue and EBITDA growth charts are a great proxy for the growth and proliferation of “Big Data” – the killer combo of Cloud Computing and AI. 


To say that the last couple of years have been good for Nvidia is a giant understatement. It’s important to see where the growth came from. You’ll notice the datacenter business, in particular, has seen tremendous growth. Here’s a look at how their business has progressed, by end-market.


The key question is, of course, whether this growth will continue. If you’ve taken a look through the Buylyst Worldviews, and The Buy List in particular, you’ll know that “the house view” is that Cloud Computing and AI are in the early innings. It’s hard for me to see growth slowing down in these end-markets. The same goes for Autonomous Driving. The second-order-thinking question is: will these datacenters and Autonomous Vehicles use GPUs? Then the third-order-thinking question is: if they do keep using GPUs, will they use Nvidia GPUs?

My answers to the second-order and third-order questions are: Probably and Probably. But there are some threats to that assessment, which I’ll discuss in the next section.

Durability of Competitive Advantage: The Moat

Competition/Threats? AMD for GPUs. Xilinx/Intel for FPGAs. And ASICs.

I’ve mentioned that AMD is a formidable competitor in the GPU arena. And Intel is keen on getting into GPUs as well. It will also be a tough competitor when it finally launches some comparable GPU products. But I doubt whether AMD and Intel keep CEO Jensen Huang up at night. Here’s why:

  1. The end-markets are big enough for 3 GPU-makers. Comfortably. 
  2. Intel and AMD’s base business is CPUs, not GPUs.

Point #2 is a big advantage for Nvidia, in my opinion. This is particularly true when compared to Intel, which seems to have a “see what sticks” approach now. They’re trying to do everything – CPUs, GPUs, FPGAs, ASICs, Memory Chips. CPUs have always been their core business, and now AMD is encroaching on their dominant market share. That’s happening partly because of Intel’s ambivalence in trying to do it all: stay dominant in what it does best (CPUs) and not miss the AI party. AMD has done GPUs for a while. But they’re fighting dominant forces on both sides – Intel in CPUs and Nvidia in GPUs. They seem to be doing fine now, but those are tough battles (tougher if Intel refocuses on CPUs). Nvidia is already laser-focused on GPUs.

FPGAs and ASICs are bigger threats to GPUs. The Buylyst is invested in Xilinx, which is an FPGA specialist. I went through the difference between GPUs and FPGAs in a bit of detail in Investing in AI Hardware, but here is the gist: FPGAs are programmable chips, which means they can be customized, on the fly, according to the type of workload. GPUs are more rigid – they’re good at parallel computing, which lends itself well to Machine Learning but not as well to Neural Networks. Because FPGAs can be programmed, they can be faster and more energy-efficient, depending on the type of workload.

In the AI world, there are 2 main types of processes:

  1. Training
  2. Inference

Training means that the AI algorithm is refined by feeding it more data. The AI program learns to process this data in the most efficient way possible. Inference means the AI system is in the field, in the real world – data comes in real-time, and the AI system needs to rely on its training to infer useful insights from that raw, unstructured data. Experts who know more about this stuff than I do say that GPUs are better at Training while FPGAs are better at Inference. As I understand it, the key variable there is power consumption. If the AI program resides on a car, for example, power consumption matters a lot. Once the FPGA has been customized for a particular kind of workload, it would be better “in the field” because it’s more efficient. So far, it’s been hard for Nvidia and its GPUs to break into the Inference game.
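Here's a toy illustration of the two workloads, in plain Python with NumPy standing in for a real AI framework: Training grinds through the whole dataset many times to refine the model's weights, while Inference is one cheap pass over a single new observation, which is why power-per-prediction matters so much in the field.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))                              # historical training data
true_w = rng.normal(size=20)
y = (X @ true_w + rng.normal(size=10_000) > 0).astype(float)   # labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Training: repeated, heavy, parallel math (where GPUs shine) ---
w = np.zeros(20)
learning_rate = 0.1
for epoch in range(200):
    predictions = sigmoid(X @ w)
    gradient = X.T @ (predictions - y) / len(y)    # averaged over the whole dataset
    w -= learning_rate * gradient

# --- Inference: one new observation arrives "in the field" ---
new_sample = rng.normal(size=20)
probability = sigmoid(new_sample @ w)              # a single cheap dot product
print(f"Predicted probability: {probability:.2f}")
```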

ASICs are another threat – they are prebaked chips designed specifically for a certain type of workload. If the device and type of data is known in advance, ASICs can be much faster and more energy-efficient than GPUs and FPGAs. In the arena of Cryptocurrencies, which is the reason I first started to look into Nvidia, ASICs are quickly eating into GPU territory. The reason is simple – Cryptocurrency workloads are predictable, unlike AI workloads. It makes sense to have a prebaked chip for those types of math puzzles that miners need to solve in order to mine Bitcoins. As I mention in Bitcoin’s merits are inflated, mining is a very compute-intensive activity. The unwanted side-effect is that the biggest, richest miners win. And they can buy more computing power, and the cycle snowballs. The biggest miners can afford to design and build their own prebaked chips. Many of them don’t need GPUs any more. 


Protection? Complexity, IP rights, and CUDA – Nvidia’s software stack for GPUs.

When I first wrote Investing in AI Hardware, one of the things I touted about FPGAs was their programmability. When I read that worldview now, it seems to me that I was almost dismissive of GPUs. But as I dug into Nvidia, it became clear to me that GPUs are programmable too, just not to the extent that FPGAs are. My initial take on the AI Hardware landscape nudged me into Xilinx (I’m glad it did). Besides, Nvidia was too expensive back then. But the more I dig in, the wider its Moat seems to be.

The indisputable fact is that GPUs are hard to make, especially the new, powerful ones that Nvidia makes. Mass-producing GPUs with Ray Tracing technology is even harder. I can be very confident in saying that Nvidia doesn’t have to worry about scrappy upstarts – it’ll be very, very hard for them to match Nvidia in scale and market-reach, even if they can match it on design. A laser-focused mission to make just GPUs and more than two decades of snowballing expertise, when put together, make for a very wide Moat.

Nvidia has 2 main threats as discussed above:

  1. Other GPUs
  2. FPGAs and ASICs

But they both have one big Moat to cross: CUDA. I’ve mentioned CUDA above, but it’s worth repeating – CUDA is Nvidia’s software layer that allows engineers to connect the capabilities of the GPU to the AI program. Think of it as a translator.

Regarding threat #1: Nvidia’s main GPU competition is AMD, and maybe Intel down the line. It’s conceivable that they beat Nvidia’s GPUs on speed and energy-efficiency. But they don’t have CUDA. AMD uses something called OpenCL as its GPU software stack, but if you google “compare OpenCL and CUDA”, CUDA does seem to have an edge. Obviously, I’m not in a position to explain why. However, as an investor, it’s enough for me to know that Nvidia thinks CUDA is one of its big differentiators, and that’s somewhat corroborated by external sources. One of CUDA’s main advantages seems to be its compatibility with almost any high-level or low-level programming language.

Regarding threat #2: Again, CUDA allows Nvidia’s GPUs to be somewhat programmable. FPGAs made by Xilinx may be more programmable, but you need to be a “Hardware Programmer” to do it. That’s an extremely specialized skill. In fact, Xilinx’s Achilles Heel is the fact that it’s extremely hard to program. Of course, Xilinx has a newfound commitment to making “its own CUDA” but that’s not easy to do. ASICs, of course, aren’t programmable at all.

The software “pull-in” that Nvidia has developed over the last decade or so – CUDA – is also a “lock-in”. If AI processes are built on CUDA, it becomes cumbersome to physically replace Nvidia’s chips with an AMD GPU or a Xilinx FPGA. Of course, software can be re-written, but with what? OpenCL and whatever Xilinx offers? As far as I know, neither has the compatibility and ease-of-use of CUDA, which means that AMD’s or Xilinx’s chips would need to be much faster or much more energy-efficient than Nvidia’s for engineers to even think about switching.


Cash Flow Resiliency? Resilient. Common Architecture and CUDA extend product cycle.

CUDA acts as a lock-in, as I mentioned above. But there’s another peculiarity of Nvidia that amplifies that lock-in. Nvidia is laser-focused on GPUs. As Nvidia keeps adding new end-markets to its repertoire, it needs to keep tweaking its GPUs. However, as it does this, it doesn’t need to reinvent the wheel. All its GPUs share the same architecture. To the extent that they need to be tweaked to suit the company’s end-markets, a lot of that can be done with just some incremental R&D expenditure and a whole lot of programming on CUDA.

The effect on cash flow is twofold: 

  1. The common hardware architecture keeps R&D costs manageable. 
  2. CUDA makes it hard for customers to switch to AMD or Xilinx when Nvidia launches a new GPU, which can be installed with minimal friction in the customer’s AI workflow. This should reduce Nvidia’s cash flow volatility.

Management Quality: The Generals

Strategy & Action? Positive. Founder Huang focused on the right end-markets.

Full marks to Management. Well, “as good as it gets” is probably more accurate. Jensen Huang co-founded Nvidia with two others back in 1993, and he’s still CEO. And he’s a bona fide engineer. Nvidia’s performance speaks for itself – it’s one of the last men standing in the GPU arena, and it has fueled much of the AI revolution.

I had written A Comfortable Company a long time ago. In it, I listed a few attributes that I look for in top Management. Jensen Huang has many of these attributes, which seems evident from watching and listening to hours of his presentations and earnings calls.

  1. He’s a master of his craft – GPUs. And he seems to be embracing the role of being one of the main spokespersons for Accelerated AI Computing.
  2. He’s laser-focused on Nvidia’s core competency – GPUs.
  3. At the same time, he has the ability to zoom out and see the megatrends of the world. In other words, he has the ability to figure out what sports Nvidia should play, and which ones it should avoid.
  4. He’s focused on reinvesting in the business to widen Nvidia’s Moat. 

On the last point, one of the interesting tidbits that came out of one of the presentations was that Nvidia now has more software engineers than hardware engineers. This is a great data point about Management knowing where Nvidia’s strengths are. For all the reasons mentioned above, CUDA is one of their biggest differentiators. And they seem to be investing heavily in it to widen the moat even more. 


Financial Productivity? High. ROE > 50%, due to high margins.

Nvidia is the highest-ROE company I have ever analyzed. It seems to be at a run-rate of around 50% ROE. And that’s with negligible help from debt – Nvidia barely has any. Almost all of that 50% comes from Nvidia’s remarkable margins (a rough sketch of the arithmetic follows the list below). And the reason for that is everything mentioned in the sections above:

  1. Nvidia makes a specialized product that’s hard to replicate.
  2. It’s taking that product and applying it to growing end-markets. 
  3. It’s reinvesting in its Moat to stave off competition from other GPUs and other new technologies. 
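To see mechanically why the margins, and not leverage, do the heavy lifting in that ROE number, here's the standard DuPont decomposition with hypothetical round-number inputs (placeholders for illustration, not Nvidia's reported figures):

```python
# DuPont identity: ROE = net margin x asset turnover x financial leverage.
# The inputs below are hypothetical placeholders, not Nvidia's reported numbers.

net_margin = 0.32        # net income / revenue
asset_turnover = 1.0     # revenue / total assets
leverage = 1.6           # total assets / shareholders' equity (low, i.e. little debt)

roe = net_margin * asset_turnover * leverage
print(f"ROE = {roe:.0%}")    # 51% -- driven almost entirely by the margin term
```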


Sustainable Free Cash Flow? About $5.9 billion. Translates to roughly $194/share.

The assumptions are as follows: 

  1. Assumed 20% growth in Gaming.
  2. Assumed 20% growth in Professional Visualization.
  3. Assumed Datacenter business doubles before it stabilizes.
  4. Assumed Autonomous Vehicles business doubles before it stabilizes. 
  5. Assumed OEM and IP business declines by 50%. 
  6. EBITDA margin improves from 39% to 42%. Compared to the trend over the last 12-24 months, that’s a modest increase.
  7. No benefits from Working Capital swings.
  8. Similar tax rate assumed.

Among these, I suspect assumptions #4 and #5 are too conservative. Yes, I do believe that the Autonomous Vehicles business can more than double. Jensen Huang believes ALL vehicles will be Autonomous in the future. Even if he’s wrong by 90%, the upside is still massive. And it’s only just begun.

The end-result is a Free Cash Flow estimate of almost $6 billion. Alas, at this point, it seems like Nvidia is fairly valued. There seems to be no Margin of Safety even after the October correction.
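For readers who want to see how assumptions like these combine mechanically, here's a simplified roll-up in Python. Every input below is a round, hypothetical placeholder rather than the actual segment data or model behind the $5.9 billion estimate; the point is the arithmetic, not the output.

```python
# Hypothetical, simplified roll-up -- placeholder figures, not the thesis's actual model.
segments = {                         # assumed current revenue, $bn (placeholders)
    "Gaming": 6.2,
    "Professional Visualization": 1.1,
    "Datacenter": 2.9,
    "Autonomous Vehicles": 0.6,
    "OEM & IP": 0.8,
}
growth = {                           # mirrors assumptions #1-#5 in the list above
    "Gaming": 0.20,
    "Professional Visualization": 0.20,
    "Datacenter": 1.00,              # doubles before stabilizing
    "Autonomous Vehicles": 1.00,     # doubles before stabilizing
    "OEM & IP": -0.50,               # declines by half
}

future_revenue = sum(rev * (1 + growth[name]) for name, rev in segments.items())

ebitda_margin = 0.42                 # assumption #6
tax_rate = 0.12                      # placeholder for "similar tax rate"
reinvestment_rate = 0.05             # placeholder for capex / working-capital needs

ebitda = future_revenue * ebitda_margin
free_cash_flow = ebitda * (1 - tax_rate) - future_revenue * reinvestment_rate

print(f"Revenue ${future_revenue:.1f}bn -> EBITDA ${ebitda:.1f}bn -> FCF ~${free_cash_flow:.1f}bn")
```

Swap in the real segment figures and the thesis's own tax and reinvestment estimates, and the same mechanics produce the sustainable Free Cash Flow number quoted above.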

