Invest in AMD?

Published on 05/18/21 | Saurav Sen | 5,201 Words

The BuyGist:

  • This is the full-fledged investment thesis on AMD. 
  • The thesis presentation was posted earlier - subscribers have access here.
  • We go through the following sections: 
    • Competitive Advantage
    • Strategy & Moat
    • Growth Drivers
    • Competition
    • Strategy Risk
    • Key Risk/Threats
  • We end with a definitive conclusion - Buy or Watch.
  • Subscribers have full access.

Competitive Advantage: The Castle

Summary: With the Xilinx merger, AMD becomes a unique one-stop shop for a full stack of logic chips – CPU + GPU + FPGA + ACAP – for an AI-heavy, Heterogeneous Computing world.

AMD is the third-largest semiconductor company by market capitalization (we’re leaving TSMC and Samsung out of this for now because they’re mostly just semiconductor fabs, not designers). There are only 3 dominant companies that design CPUs (Central Processing Units) and GPUs (Graphics Processing Units) – AMD, Nvidia and Intel.

AMD makes both CPUs and GPUs, which means they’re competing with both Intel and Nvidia. That’s why we’ve never invested in AMD before – they were going up against the undisputed dominators in CPUs (Intel) and GPUs (Nvidia). We always like a scrappy outsider story but trying to outcompete two global dominators is a tough ask.

In the last couple of years, AMD has proved us wrong. They’ve not only gained market share from the two Goliaths, but they’ve also become a free-cash-flow-generating company.

So, what did AMD get right? In CPUs, they simply executed better than Intel. They remained focused when they had to – in 2017-18 – when Intel was distracted with non-CPU acquisitions. Intel’s distraction was AMD’s advantage. AMD has a different execution model – it focuses on designing chips, not manufacturing them. Intel does both. AMD outsources manufacturing to companies like TSMC (our top holding since inception). This has served them well. They moved fast in design. TSMC kept up in production. They outpaced Intel in the latest generation CPUs – the transition from 10 nanometer chips to 7nm.

In GPUs, AMD’s success has been more surprising. There is no apparent execution advantage here. Nvidia follows the same model – focuses on design and leaves production headaches to TSMC. But Nvidia (another one of our successful holdings since 2018) has also been distracted, somewhat. They’ve spent a lot of energy and resources on capturing the Cloud Datacenter market, maybe at the cost of their original bread-and-butter business – GPUs for PCs and Gaming Consoles. This is the part of the GPU market where AMD has been able to gain market share. Why? Are their GPUs better? It’s hard to tell because each company – AMD and Nvidia – has a myriad of chips in production. But maybe the answer lies in cost. We don’t have exact pricing data – as in, we don’t know what AMD’s contract with, say, Sony PlayStation stipulates – but maybe this gross margin comparison paints a clear picture.
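For reference, gross margin is simply revenue minus cost of goods sold, divided by revenue. A minimal sketch of the comparison, using approximate fiscal-2020 figures from the two companies' public filings (treat the exact numbers as illustrative):

```python
def gross_margin(revenue, cogs):
    """Gross margin = (revenue - cost of goods sold) / revenue."""
    return (revenue - cogs) / revenue

# Approximate figures in $ billions, from public filings:
amd_gm = gross_margin(9.763, 5.416)     # AMD, FY2020
nvda_gm = gross_margin(16.675, 6.279)   # Nvidia, FY ending Jan 2021

print(f"AMD gross margin:    {amd_gm:.0%}")   # roughly mid-40s
print(f"Nvidia gross margin: {nvda_gm:.0%}")  # roughly low-60s
```

The gap of roughly 15-plus percentage points is consistent with AMD competing, at least partly, on price.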

We do expect this Gross Margin gap to close in the next few years (more on that later). But so far, we’re not convinced about AMD’s competitive advantage and Moat. We don’t really put a lot of stock in the “nanometer wars”. AMD boasts of beating Intel to 7nm chips using TSMC’s cutting-edge technology. But we’ve read reports that put Intel’s 10nm chips and TSMC’s 7nm chips on equal footing. So, maybe it does mostly come down to price and, maybe, some marginal improvement in performance vs. competition. We can’t hinge our AMD thesis on this.

Going forward, we think it’s dangerous to hang our hats (or our investment thesis) on a perceived technological edge in AMD’s CPUs and GPUs. We think these advantages are temporary. As long-term investors, we need to look beyond point-in-time competitive advantages that are not necessarily durable. Let’s assume that Intel and Nvidia know what they’re doing even if there have been temporary hiccups. Let’s assume that Intel’s node “disadvantage” will be rectified. Let’s focus on competitive advantages that are durable. Let’s focus on things that competitors don’t do or can’t or won’t.

AMD’s acquisition of Xilinx (yet to be set in stone due to pending regulatory approvals) caught our attention late last year. We’ve wanted to find a palatable entry point, and the recent sell-off in Tech stocks may be just the opportunity. The Xilinx acquisition sets up AMD (or AMDX, as we like to call the combined entity) to deliver something others don’t or won’t or can’t – a full-stack logic chipset.

AMD’s competitive advantage as a stand-alone company was unclear to us. AMDX’s competitive advantage is much more convincing. This is a subjective call. There are 2 main concepts that underpin this subjective call:

  1. Xilinx’s dominance in FPGA chips.
  2. The advent of Heterogeneous Computing.

First, Xilinx dominates the FPGA world. FPGAs are Field Programmable Gate Array chips or, for us non-engineers, programmable chips. These chips have been around for a while but it’s only recently – with the advent of AI, Machine Learning, and 5G – that these programmable chips finally have a big enough raison d’être. The only other credible competitor to Xilinx is Altera, which was acquired by Intel a few years ago. Here’s how they stack up:

Why are programmable chips suddenly hot? The simple answer is that computing is changing, workloads are changing. There’s a lot more data to handle, especially a lot more unstructured data (like literature, images, sounds etc.). Not so long ago, a CPU was good enough for everything we needed to do on a computer or even on a server. Intel dominated this world of all-purpose, one-size-fits-all CPUs.

Then in the 1990s, videogames evolved by leaps and bounds with the advent of gaming consoles like the PlayStation. PCs became more powerful as well. And so, games for PCs became more powerful. It was a virtuous cycle. It turned out that gaming was too much for good old Intel CPUs to process. So, the GPU (Graphics Processing Unit) was invented. Nvidia dominated this market for a long time. It still does.

GPUs are built differently. They’re good at specific types of computations, unlike a CPU that’s a jack of all trades. A few years ago, it turned out that GPUs were good for some types of computations that are needed for Machine Learning. This gave GPUs (and Nvidia) a massive new lease of life. Concurrently, FPGAs, which are the third kind of chip, piggybacked off the success of GPUs, and had their moment too.

Another massive development took place at the same time – Cloud Computing. We can’t stress this enough – AI and Machine Learning applications have grown exponentially over the last 3-4 years because of Cloud Computing. All that heavy computation happens on a heavy-duty server, and insights from that computation can be distributed instantaneously, anywhere, anytime. Some of that computation, however, can happen on our laptops or phones (Edge Devices). This is the concept of Heterogeneous Computing.

At the end of the 20th century, almost all computing used to be done locally on our desktops and laptops. Then some of it shifted to server racks, but mostly for storage. Most of the computational tasks were still done on PCs. All that is changing now. Computing can now be done either on a Cloud server or on your laptop or phone. On a Cloud server, most of the data processing can be done on a CPU, a GPU, an FPGA, or an ASIC (Application Specific Integrated Circuit). That’s Heterogeneous Computing – computing distributed over many chips, on many devices. In 2018, we took a crack at explaining the spectrum of logic chips out there – this article should help you figure out where Xilinx stands in the gamut.

AMD is making a gutsy move. They’re envisioning the world 10 years ahead and course-correcting to ensure that they thrive in this new world. Heterogeneous Computing is the future. AMD made strides in CPUs and GPUs for PCs and Consoles. More recently, they’ve gained market share in the Cloud Datacenter CPU battle. They’re also trying to break into Nvidia GPU strongholds in Cloud Datacenters.

Until last year, AMD seemed to be content with GPUs for Gaming. But they soon realized what a huge market Cloud Computing is. Sure, Gaming GPUs can be used for AI workloads in the Cloud. But as these applications get more complicated, more “intelligent”, Gaming chips won’t cut it. We were, therefore, glad to see in their 2020 Investor Day presentation that AMD is determined to build specifically for the Cloud. This slide shows that AMD will now have 2 different types of GPUs – each specific for a certain kind of AI workload:

With the Xilinx acquisition, AMD (or AMDX rather) would take this one step further in the race to offer a full suite of logic chips for a heterogeneous computing world. Intel and Nvidia are also trying to get there (more on that in the Competition section) but AMDX looks very promising now.

We believe AMDX would have a serious competitive advantage over competitors, especially Intel, but the battle has just begun. Maintaining that competitive advantage in, say, 5 years from now is going to be tough. It’s up to Management to widen the Economic Moat. How can they do that?

Here’s what we’ll discuss in the next few sections:

  1. Management Strategy & Moat
  2. Growth Drivers
  3. Competition
  4. Strategy Risks
  5. Key Risks

Management Strategy & Moat

Summary: Investing in new generations of chips for a new heterogeneous computing world – merger with Xilinx is a step towards programmable system-on-chips.

The key to offering a “full stack solution” for a heterogeneous computing world is interoperability. CPUs, GPUs, FPGAs and whatever other chips should be able to talk to each other easily. Then AMDX would have a serious economic moat.

Fortunately, CEO Lisa Su and her top managers know this. They know that they can’t keep hoping to outflank Intel and Nvidia at every node transition. That’s not a sustainable strategy. They know it’s not just about a marginally more powerful CPU anymore. They need to offer something the others can’t or won’t or don’t. So, they acquired Xilinx – the world’s leading FPGA company. Now they have the makings of a full stack of logic chips. Now, the real work begins – to make sure that the 3 types of chips combine to become something greater than the sum of the parts.

There are two technologies that AMDX management needs to get right:

  1. Interconnect
  2. Software

Interconnect refers to the wiring between these distinct types of chips. There’s a hardware and a software component to this. The hardware component literally refers to chip design, circuit design, materials and other considerations that straddle the worlds of physics and chemistry. AMD boasts of something called “Infinity Fabric” to ensure interoperability between various components. We’re not hardware engineers, so we’ll need to trust AMD management’s confidence in this technology. Let’s assume it works fine.

The most important moat-widener is Software, which is important to make sure that all chips talk to each other to get a job done. Let’s say the job is pattern recognition for an Autonomous Vehicle. This heavy computational workload will probably need CPUs and GPUs and possibly even an FPGA. AMDX should be able to offer up a solution to this problem. But they need good software to do this. David Wang, who heads up AMD’s GPU division, has been working hard to achieve this:

Now let’s add Xilinx to the mix. How will they fit in? Xilinx’s chips are already programmable. They spent years developing a programming language so that hardware engineers can program FPGAs to cater to specific tasks. It’s called Vitis. AMDX’s challenge is to ensure that its programming platform – ROCm – can talk to Vitis. If AMDX can make that happen, we can see a wide Moat forming. This is our rough depiction of what they want to achieve (from our thesis presentation):

Ultimately, AMDX is shooting for a unified fabric between CPU, GPU, FPGA, or any other type of chip with a common memory cache (for faster processing) and a common software language. This strategy can pay massive dividends in the future, especially if the other competitors slack off on their full stack offering. AMD’s advantage is that they’ve got the best and biggest programmable chip company in their stable now.

Growth Drivers

Summary: Exponential Growth in Cloud, AI, 5G and Heterogeneous Computing. Management expects a 20% revenue CAGR over the next 5 years. Utterly believable.

We can pull up all sorts of data and projections on the big thematic tailwinds behind AMD – AI, Big Data, 5G etc. But we won’t make any lofty projections. Instead, we’ll stick to AMD’s track record, Management’s projections about revenue growth, and their perceived TAM over the next few years. It turns out that Management’s expectation of a 20% annual revenue growth rate is quite reasonable.

Firstly, AMD has achieved more than 20% revenue growth in the last few years.

Can they do it again? Yes. We believe that the thematic tailwinds are in their early innings. Whatever tailwinds existed over the last few years will likely remain intact in the next few years, if not accelerate. 5G and IoT haven’t yet taken off at scale. AI programs will get more, well, intelligent. The need for more specialized chips – because of the growth in Heterogeneous Computing – will increase. If you need a data point to prove this, the current chip shortage is an unmistakable symptom of this phenomenon. If AMDX can offer customizable “full-stack” chips for complex AI applications better and faster than Intel and Nvidia can, the sky’s the limit. But since these stalwarts are not likely to hang up their boots anytime soon, we’ll stick with Management’s expectation of 20% revenue growth per year.

The other source of confidence is the combined TAM of AMD and Xilinx. At a $110 billion TAM, the combined entity’s revenue of $14.5 billion makes up about 13% of TAM. If revenue doubles – which is what we need to believe to buy this story – then they would make up about 26% of total TAM. To put this in context, Intel’s trailing 12-month revenue is about $77 billion. Nvidia’s is about $17 billion. To assume that AMDX can get to $29 billion in revenue in the next 4-5 years is not a slam dunk, but it is as believable as growth stories are going to get.
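The TAM arithmetic above is simple enough to sketch out explicitly (the $110 billion TAM and $14.5 billion combined revenue are management's figures from the source, not ours):

```python
tam = 110.0          # combined AMD + Xilinx TAM, $ billions (management estimate)
revenue_now = 14.5   # combined trailing revenue, $ billions

share_now = revenue_now / tam            # current share of TAM
share_doubled = (2 * revenue_now) / tam  # share if revenue doubles

print(f"Current share of TAM:     {share_now:.0%}")      # ~13%
print(f"Share if revenue doubles: {share_doubled:.0%}")  # ~26%
```

Even the doubled figure leaves roughly three-quarters of the TAM unclaimed, which is why the growth story does not require heroic share-gain assumptions.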

Ultimately, the bet is on a successful merger with Xilinx in a world that’s moving towards Heterogeneous Computing. Put another way, we need to believe a 20% CAGR to buy this AMDX story. We believe it. This is a subjective call based on a track record of historical performance, the total addressable market in the next few years AND the massive shift in the way computing is done. AMDX has all the parts it needs to succeed. The bet is on execution. AMD CEO Lisa Su and Xilinx CEO Victor Peng have a great track record of building top-notch products and businesses.


Competition

Summary: Intense. Intel and Nvidia are the main competitors. Both companies are trying to build the “entire stack” of logic semiconductors.

The idea of offering the full stack – CPUs + GPUs + Programmable chips – is unfortunately not an original one. Intel and Nvidia are thinking along the same lines. Intel had bought Altera a few years ago in the hope that it could capture some “synergies” between Intel’s core CPUs and Altera’s FPGAs. But that didn’t quite play out as planned. The main roadblock was that Intel never really invested in building a software platform so that CPUs and FPGAs could talk to each other. But frankly, Intel probably made this acquisition a little too early. Even a couple of years ago, the idea of Heterogeneous Computing was mostly conceptual. Today, it’s alive and growing.

Nvidia’s approach has been different. They’ve been GPU dominators for a while, but about 5 years ago they stumbled upon a gold mine called Machine Learning (ML). ML – or any AI process – basically refers to a computer’s ability to self-program. It turned out that GPUs, originally designed for Gaming, were well-suited for ML. Nvidia seized the opportunity and cornered the ML market. Intel made some attempts at GPUs for ML but was a day late and a dollar short.

AMD, to its credit, executed much better. They seized on Intel’s scatter-brain years of 2017-2020 and took some market share in CPUs. While they did that, AMD was also quick to come out with top-notch GPUs to compete with Nvidia. So far, AMD has seen success in Gaming. The jury is still out on AMD’s success in GPUs for Machine Learning applications in Datacenters. But we see positive signs.

In both cases – CPUs and GPUs – AMD moved fast and executed well. They started with CPUs for PCs and Notebooks. Then they moved on to CPUs for datacenters. Finally, they started manufacturing GPUs for Gaming Consoles. More recently, they’ve started targeting GPUs for datacenters – for Machine Learning applications. In their last Investor Day in 2020, GPU boss David Wang revealed that AMD is adapting its GPU architecture to optimize for Cloud Datacenters. This is a positive step forward for the scrappy AMD to compete with the GPU dominator Nvidia.

Wang’s presentation revealed another interesting detail – AMD has spent a lot of time and money on developing ROCm, which is the software layer that engineers need to optimize AMD’s GPUs. Nvidia has long had something called CUDA, which we’ve long argued is Nvidia’s main Moat. Wang revealed that ROCm is now compatible with CUDA. This is a big deal if true. It means that engineers previously using Nvidia’s GPUs via CUDA can now – theoretically – switch over to AMD’s chips. In practice, we’re not qualified to opine on how easy or difficult this is.

With the Xilinx acquisition, AMD has a good chance to build something unique – faster and better than the others. That’s the crux of this thesis. Both Intel and Nvidia are working hard at offering a full-stack offering – CPUs + GPUs + Programmable Chips. But here are the facts:

  1. Intel’s never really seen any success in GPUs. The company has been too distracted with other projects, and on keeping up with AMD’s faster transition to 7nm CPU chips.
  2. Nvidia has just now made the commitment to building CPU chips. This is a first for them.
  3. AMD, right now, seems to be the only company that proficiently makes top-notch CPUs and GPUs. Adding the world’s most prolific FPGA maker should be a massive competitive advantage.


Strategy Risk: Is it smart to stick with an x86 chip architecture? If the world moves towards ARM CPU designs, AMD may not catch up to Nvidia.

We find it a bit strange that there is almost no talk of the x86 vs. ARM debate in AMD’s presentations or literature. This is an important issue in the world of Heterogeneous Computing. The x86 vs ARM debate is about CPU architecture. x86 is an Intel invention but is also used by AMD in its own CPUs. ARM – a British CPU design firm owned by SoftBank (and bid for by Nvidia) – presents a different CPU architecture.

The last time this debate flared up was in 2020 when Apple made the decision to furnish its high-end MacBooks with its own custom M1 CPUs. These CPUs are built on an ARM architecture. For a long time before, Apple used Intel CPUs, which were, unsurprisingly, x86-based. The difference between these two architectures is not subtle. At the risk of oversimplifying, all other things being equal (they rarely are), x86 has more computational horsepower but also consumes more electrical power. ARM CPUs tend to be less computationally powerful but are also more power-efficient.

It’s unclear whether – in a Heterogeneous Computing world – the ARM RISC-based architecture will replace x86 CPUs. We’re not smart enough to call it. But it’s a massive risk for Intel. In fact, we used to hold Intel for a short while. We got rid of our position because of this risk, which was heightened with the Apple decision. See our rationale here. In Datacenters and in Edge devices like smartphones and laptops, power consumption is a very important variable. ARM tends to win out in that metric. The risk is that if computing can be spread out between datacenters and edge devices, and between CPUs, GPUs and FPGAs, then we may not need the power-hungry x86 CPU anymore. If that happens, Intel is in big trouble.

But this is also a risk for AMD. Their bread-and-butter CPU business is x86-based. They have entertained ARM designs in the past, but they no longer make those chips. When analysts have asked CEO Lisa Su about the ARM threat, she usually brushes it off with a generic “we’ll do whatever our clients want” answer. The risk is that Nvidia will go full-steam ahead with ARM-based GPUs and CPUs, making Nvidia’s full stack more appealing in an AI-heavy, Heterogeneous Computing world. On the flip side, AMD is likely to ramp up ARM-based production much faster than Intel will be able to swallow its ego and adapt to a new world.

If ARM-based chips do replace x86 designs, Nvidia would have a serious advantage. They’ve bid to take over ARM from SoftBank. They’re likely to have their stack – GPUs and (recently announced) CPUs – built on ARM architecture.

Incidentally, Xilinx’s chips are ARM-based. That’s another good reason for AMD to acquire them. We would, however, like to see Lisa Su and team be more serious and do something about the ARM threat, unless they don’t believe it is a legitimate threat. If it’s the latter, then we’d hope they delineate exactly why that is.

Other Key Risks/Threats: Fighting two stalwarts – Intel in CPUs and Nvidia in GPUs – may be a tough proposition, long-term. Other risks: custom chips, geopolitical (TSMC in Taiwan).

AMD is still the scrappy fighter. Their performance over the last couple of years has been astounding for precisely that reason. They took on two giants of the industry and managed to carve out their own niche. For a long time, we stayed away from AMD because we always knew this was going to be a tough proposition. They’ve proved us wrong. And now they’re financially strong enough to flex some muscles. The flexing has begun with the acquisition of Xilinx.

Our bet is that AMDX – what we call AMD + Xilinx – has a very good chance of becoming a third (and equal) pillar of the logic chip industry. What is the probability of this happening? We can’t be precise, but we believe it’s high – way north of 50% - for all the reasons discussed in the previous sections. However, it won’t be a smooth path.

The x86 vs. ARM debate remains a threat. But there are two others that we will keep a close eye on – 1) ASICs and 2) Geopolitical tensions between US and China affecting Taiwan.

The ASIC threat is omnipresent. ASIC refers to Application Specific Integrated Circuits. These are super-custom chips designed for super-specific computational tasks. Recall our rough guide to logic chips – ASICs are like hard-coded FPGAs:

ASICs can be very power efficient because they do only a specific thing. But they’re expensive to make. Firms like Google and Microsoft can afford to spend the money to research, design and build them. Not many others can. But that’s the potential risk. The “Big 3” Cloud Datacenter companies – Amazon, Microsoft and Google – all have deep pockets. Will they switch over completely to custom ASICs? If so, AMDX’s tailwind won’t be as powerful as we think. We still believe that if AMDX can offer these companies semi-custom chips – now more customizable with Xilinx on board – the Big 3 will be reluctant to build logic chips from scratch, barring very specific AI tasks.

The US-China cold war is an ongoing threat. We never know how to calibrate it. But we believe that it is neither in USA’s nor China’s best interest to have a prolonged skirmish over Taiwan. The reason Taiwan is so important is because it’s home to TSMC – arguably the most important company in the world. TSMC manufactures chips for pretty much every major semiconductor company not named Intel or Samsung. AMD and Xilinx both use TSMC to manufacture their chips.

It’s hard to quantify all these risks. But the prospect of a powerful AMDX in the next few years far outweighs the risks mentioned above.

Valuation: What is Sustainable?

Our valuation assumptions are straightforward:

  1. We’ve done a rough valuation for the combined entity – AMD + Xilinx.
  2. As per deal terms, AMD shareholders will own roughly 74% of the combined company. We’ve factored that into our target price.
  3. Revenue growth: We’ve put faith in management guidance of a 20% CAGR over the next few years – for all the reasons mentioned in the previous sections. Over a 4-year timeframe, this would imply that revenue roughly doubles. Considering the last 4 years, we find this quite believable.
  4. Gross margin improves to 50% - as per Management expectations. As AMDX veers towards more high-end products like System-on-Chips, we find this scenario very believable.
  5. R&D expense grows by 20%. SG&A grows by 10%. In fact, Management expects “cost synergies” with the Xilinx acquisition. We’ve not given them the benefit of the doubt in this regard.
  6. Capital Expenditure increases to $500 million from $383 million.
  7. Cash Interest Expense reduces to $0 – we assume AMDX will pay off debt in the same way as AMD management has been doing.
  8. Cash Tax Rate assumed at 10%. AMD Management assumed its Cash Tax Rate will be 3% over the foreseeable future. Combined with Xilinx, we believe a 10% rate is closer to the truth.

Here’s how our assumptions stack up:

For more details on Valuation, see Page 11 of our AMD thesis presentation.

Overall, we believe AMDX will be a formidable story over the next few years. Our thesis hangs on the merger approval between the two companies. We assume that regulatory bodies around the world will follow USA’s lead on approving this merger. Together, AMD and Xilinx will be an even more formidable competitor to the hegemony of Intel and Nvidia. We believe that with the recent sell-off in technology stocks, there is finally enough Margin of Safety to invest in the AMDX story.
