Is Apple killing Intel?

Published on 07/05/20 | Saurav Sen | 4,454 Words

The BuyGist:

  • This analysis was triggered by Apple’s decision to move its Macs to ARM-based custom chips.
  • In isolation, this is not a big blow to Intel. Apple accounts for only 5-7% of Intel’s revenue.
  • But the long-term implications could spell trouble for Intel.
  • Will the ARM chip architecture conquer Cloud Datacenters? If so, Intel will be in trouble.
  • We analyzed the situation and made a decision on our Intel position.
  • NOTE: This analysis is not written by an engineer, but by an investor and a life-long student of Economics. This is not a technical report.

Thinking Different

This has been bothering us for 2 weeks.

Apple dropped a bombshell in June 2020. During its Worldwide Developers Conference (WWDC), Apple announced that it will be furnishing many of its Macs (laptops and desktops) with custom-made CPUs – designed in-house and manufactured by TSMC (our largest holding, by the way). The first of these new computers is expected to ship by the end of 2020. At the moment, Apple still uses Intel CPUs.

This is the second time Apple has made such a game-changing decision. The first time was in 2006, when Apple switched from PowerPC chips to Intel processors. That decision coincided with Apple’s stupendous renaissance under Steve Jobs. That was huge. But this time, we believe, the implications are even bigger because the decision gets right to the heart of the future of computing. Apple isn’t just changing vendors. It’s opting for a different CPU architecture altogether, which has ripple effects far beyond its own products.

But let’s start with some simpler reasons for our analysis before we get to the future of computing. Intel and Apple are both Buylyst holdings. We’ve held Apple since 2018. Needless to say, it’s been one of our top return-contributors. Our current holding stands at roughly 3% of the portfolio – our target weight was reduced from 5% to 3% once Apple’s stock exceeded our valuation. We started buying Intel in February 2020, which sounds scary because it was right before the epic Covid-19 crash. But we eased into our 5%-ish position over the course of a few weeks in March. Our cost-basis is, therefore, quite low and attractive. Now, Intel stock is back up to pre-Covid levels.

Our Intel thesis was this: Transitioning from CPU to AI company with the “XPU” approach. Target Price assumes Cloud+AI business grows 10% per year while legacy PC business declines by 2% per year. Achievable if execution is better than last 2 years.


The first part of our thesis is still true. After a few missteps in the last couple of years, Intel is now trying very hard to become an AI “XPU” company. They’re not idiots – they see the way computing is evolving. They know that their CPU business is a declining one, so they’re rushing to build on their CPU expertise and make AI-ready “XPUs”. But Intel’s CPUs were making a last stand in Cloud Datacenters, where Intel enjoys a whopping market share of >90% (AMD is a distant second). In the last 12 months, the Cloud business has more than offset Intel’s decline in PC CPUs (which was expected because PC sales have been declining). But with this Apple announcement, our thesis assumption that Intel’s Cloud business will grow by “10% per year” suddenly looks to be in jeopardy. We knew that good old CPUs for PCs – Intel’s raison d’être – are just not the bee’s knees anymore. But their datacenter business still is. Or so we thought – now, we’re not so sure. Apple’s decision is about its own products – laptops, desktops, phones and tablets. But the long-term effects of this decision could be felt in Cloud Datacenters.

Are we overreacting? That’s what this analysis is about. We finish with a definitive decision about Intel and our other AI & Big Data holdings.

Built Different

Why is Apple making the switch? Most analysts focused on Apple’s “end-to-end” model – how it likes to be in complete control of its hardware and software, so they work optimally in unison. In fact, this was a massive differentiator for Apple, until Microsoft released their Surface “lap-let”. Sure, that’s part of the thinking, but Apple’s transition has deeper reasons.

First of all, we need to define exactly what the transition entails. Sure, they’re different chips, but the salient point is that Intel’s CPUs and ARM’s CPUs (ARM is a Softbank company that simply designs chips) are built differently. Intel’s CPUs are built on the X86 architecture, while ARM’s architecture is completely different. I’m not an engineer, so I’ve relied on experts to explain the difference. Here’s a good one from Stanford University’s Computer Science department. If you’re not in the mood to study computer science, here’s the gist:

Intel’s X86 is a CISC (Complex Instruction Set Computer) architecture. ARM’s is a RISC (Reduced Instruction Set Computer) architecture. The important differences between the two architectures are:

  1. Intel’s X86 CISC processors can handle more complicated computations per instruction, but they use more electric power.
  2. ARM’s RISC architecture uses simpler instructions and is therefore less powerful – both in raw computational muscle and in electric power drawn.

This set of differences is why servers and PCs (desktops and laptops) have stuck with Intel’s X86 architecture while mobile devices carry ARM-based processors. Computation is heavier on servers and PCs, while mobile devices are more concerned with power consumption.

There is one other major difference between the two architectures: the role of software. Intel’s X86 CISC architecture can be “called” on with less code, because X86 processors come “hardwired” to do certain things, like liaising with memory. An ARM processor requires more code because it is rather “linear” in the way it processes computations. You could say that most software – programs and operating systems – has been developed around the X86 architecture. That dependency may change with Apple’s decision.
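To put the “less code” point crudely in code: the C snippet below adds a number sitting in memory to a running total, and the comments sketch how a CISC machine can do that in a single instruction while a RISC machine needs a separate load and add. The instruction sequences in the comments are illustrative assumptions on our part, not actual compiler output.

```c
/* Toy illustration of the CISC vs. RISC difference described above.
 * The instruction sequences in the comments are illustrative
 * assumptions, not real compiler output. */
#include <stdio.h>

static int add_from_memory(const int *p, int total) {
    /* CISC (x86):  add eax, [rdi]      ; read memory and add in one instruction
     * RISC (ARM):  ldr w8, [x0]        ; load from memory...
     *              add w1, w1, w8      ; ...then add, as a separate step */
    return total + *p;
}

int main(void) {
    int value = 40;
    printf("%d\n", add_from_memory(&value, 2));   /* prints 42 */
    return 0;
}
```

In practice, of course, it’s the compiler – not the programmer – that writes these instruction sequences, which is why compilers come up again below.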

The ARM architecture falls short in the “computational power” department. But that’s a fixable problem, because ARM is inherently more programmable in the way it handles computations. That is annoying to software engineers. But here’s the thing: software is inherently more compoundable than hardware. The fact that it is more annoying and laborious to write software for ARM architectures kept the X86 intact in boxes that were connected to a power socket. In mobile devices, where power consumption was a key variable, ARM became the norm despite its coding-heavy requirements. But this is a software problem more than a hardware problem. If there is enough incentive to build software that makes ARM as computationally capable as the X86, it’s only a matter of time. We’ll be looking into this issue.

Apple’s decision to transition away from the X86 to its custom-made ARM chips is not primarily about power consumption. It’s nice to sell laptops with better battery life. But Apple’s main reason was software-congruency. Apple wanted to make it easier for developers to program Apps for all its operating systems in one shot – for macOS, iOS, iPadOS, watchOS etc. All will be ARM-based environments, which makes it much easier for developers to deploy their programs despite ARM’s coding-intensive instruction sets. To alleviate that problem, Apple has special compilers – liaisons between the software program and the hardware processor – that make it easier for programmers to “call” on processors to compute. This is huge.
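As a crude sketch of what that software-congruency looks like in practice: the same source code is compiled once per instruction set, and the compiler absorbs the difference. The clang and lipo commands in the comments are the standard macOS toolchain way of producing a “universal” binary – shown here purely as an illustration, not as a claim about how Apple builds its own software.

```c
/* hello.c – the same source can be compiled for either instruction set.
 *
 * Illustrative build steps using the standard macOS toolchain
 * (an example of the general idea, not Apple's internal process):
 *
 *   clang -target x86_64-apple-macos11 -o hello_x86 hello.c
 *   clang -target arm64-apple-macos11  -o hello_arm hello.c
 *   lipo -create -output hello hello_x86 hello_arm    # one "universal" binary
 *
 * The compiler, not the application developer, absorbs the difference
 * between the two instruction sets. */
#include <stdio.h>

int main(void) {
    printf("Same source code, different silicon underneath.\n");
    return 0;
}
```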

Apple’s business is to sell more devices. That’s their main goal. They made it easier for developers to make killer apps for their devices. Inadvertently (or maybe not), Apple has kicked the hornet’s nest. They’ve accelerated the demise of the X86 processor – not because Apple is a major Intel customer, but because Cloud Datacenters are, and they may transition away from the X86 as well. Why? Because of Apple? No.

The Big Cloud Datacenter companies were already thinking in this direction because of the way computing is changing. And this may have been in the back of Craig Federighi’s (Apple’s software chief) mind as well. In 2, 3 or 5 years, CPUs will be just so “basic”. Welcome to the brave new world of Heterogeneous Computing.

21st Century Computing: Anywhere, Any Device

We used to live in a world of Homogeneous Computing. In the 70s, computing was centralized. Computing was done on mainframes in prominent universities and in organizations like NASA. Then came the Personal Computer. Suddenly, computing became decentralized. But it was still tethered. People had to go to a computer workstation connected to a power socket. Then came laptops. Computing could now – at least theoretically – be done anywhere, anytime. A little over 10 years ago (yes, it was that recent!) came smartphones, followed by tablets and smart watches. What made all this work was high-speed connectivity. Data could travel fast enough for it to be useful and actionable. Computing was now truly decentralized.

We’re about to enter a new paradigm called Heterogeneous Computing. In a strange way, computing will now be a little more centralized than what we’ve been used to in the last 5-10 years. But it won’t go back to the mainframe days. A myriad of computing devices will still roam the streets. High Performance Computing will still be done anywhere and on any device. But the type and level of computing will probably be optimized between Cloud Datacenters and “Edge” devices like our phones or (hopefully in the near future) Autonomous Cars.

About 2 years ago, we laid out the foundation for our investments in AI & Big Data in a worldview piece called “The Grand Inflection Point”. Our main point was this: Just as data has compounded at an alarming rate in the last 10-15 years, the software required to gain insights from it – AI – is catching up. But the main roadblock is hardware – the processing power to implement those convoluted programs to gain actionable, profitable insights from all that data. This is a real doozy – just when we have the fuel, we’re running into engine problems.

The hardware roadblock, specifically, is the impending end of Moore’s Law. This refers to the grim reality that engineers are reaching the limits of Physics when trying to squeeze more transistors into an integrated circuit. Scientists and engineers have been shrinking transistors and exponentially increasing processing power for decades. To put it crudely, they’re finally running out of physical space. But they’ve come up with some workarounds.

The biggest workaround is Heterogeneous Computing. The big idea is this: let’s split computing – which is simply a bunch of calculations – across different kinds of chips and devices. This is massive.
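To make “splitting a bunch of calculations” a little more concrete, here’s a toy sketch – our own illustration, not anyone’s actual scheduler. A dispatcher looks at each job and routes it either to a general-purpose engine (the CPU) or to a stand-in for a specialized accelerator (a GPU, FPGA or ASIC).

```c
/* Toy model of heterogeneous computing: route each job to the "engine"
 * best suited for it. Both engines here are ordinary C functions, so the
 * program runs anywhere; in real life the second one would be a kernel
 * running on a GPU, FPGA or ASIC. This is an illustration, not a real
 * scheduler. */
#include <stdio.h>

typedef enum { SMALL_CONTROL_TASK, LARGE_PARALLEL_TASK } job_kind;

/* General-purpose engine: fine for small, branchy jobs. */
static double run_on_cpu(int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += i * 0.5;
    return sum;
}

/* Stand-in for a specialized engine that crunches big, regular
 * workloads in parallel. Same math here, different silicon in reality. */
static double run_on_accelerator(int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += i * 0.5;
    return sum;
}

/* The "heterogeneous" part: pick the engine per job. */
static double dispatch(job_kind kind, int n) {
    return (kind == LARGE_PARALLEL_TASK) ? run_on_accelerator(n)
                                         : run_on_cpu(n);
}

int main(void) {
    printf("control task  -> %.1f\n", dispatch(SMALL_CONTROL_TASK, 100));
    printf("parallel task -> %.1f\n", dispatch(LARGE_PARALLEL_TASK, 1000000));
    return 0;
}
```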

Why would we want to split computing in such a complicated way? Because of AI. Artificial Intelligence – or (to put it crudely) the ability of a computer to self-program – is finally coming of age because of 2 simultaneous technological advances:

  1. Cloud Computing.
  2. Advancement in Semiconductors.

These two advancements are feeding off each other. And the end result is that AI is being more widely dispersed than ever before. We touched on this issue in Investing in the Cloud a long time ago. Companies (and startups) like Cloud Computing because it allows them to “rent” infrastructure. IT infrastructure like servers becomes a variable cost because companies don’t have to maintain a full-fledged IT staff and server rooms on their premises.

But this movement is spawning a nice little marketplace effect – between developers, startups and companies. Being on the Cloud allows companies to access the myriad software packages and apps the Cloud platform has to offer. The Cloud Provider – like Amazon or Microsoft – wants to entice companies to join its Cloud platform. It also wants developers to launch apps on its platform – AWS or Azure or whatever – so that companies can incorporate those apps into their workloads.

Now, because hardware – semiconductors – is catching up to software – AI – Cloud biggies like Amazon and Microsoft are lapping up all sorts of new chips so they can offer companies, developers, startups, and their own engineers the best processing power available. This has been great for chipmakers – both in Logic and Memory. Intel has also been a beneficiary – they sold about $37 billion worth of chips (primarily CPUs) to datacenters in 2019, out of a total revenue of $72 billion.

But datacenters aren’t just interested in CPUs. They’ve kickstarted a marketplace between companies, regular people, and developers (they are also people, just to be clear), where there’s a race to build and deploy the most cutting-edge AI software that today’s hardware allows. The thing is: CPUs aren’t well suited to that kind of AI workload.

In another worldview analysis called Investing in AI Hardware, we had delineated the 4 main types of Logic Semiconductors that datacenters are lapping up. Here’s the crude diagram we’d drawn back then:

We won’t go into the details of each kind of chip here. But the important point is this: GPUs, FPGAs and ASIC chips – not CPUs – are taking computing to a whole new level. They are purpose-built chips, and they can be tailored to specific kinds of computing tasks – such as the Convolutional Neural Network (CNN) programs that are taking AI to a whole new level.

CPUs are generalists. They’re just not built for the high-performance, demanding logic of AI programs. These other chips have their advantages and disadvantages, but they can do things the CPUs can’t. All this has happened in the last 5 years.

The moral of the story is that computing is shifting from a CPU-only approach to a multi-chip or “system-on-chip” approach, where different types of chips will pitch in to make sense of data. And these chips may reside in the Cloud or in an edge device. It’s not just about the CPU anymore. Computing – especially high-performance computing – is shifting away from CPUs.

This shift is why ARM CPUs are suddenly in vogue. That’s not good for Intel.

Will ARM conquer the Cloud?

The short answer is: We don’t know. As of today, nobody really does.

The answer is technical and depends on thousands of factors. In investing, we deal with probabilities, not certainties. In our view, the probability that Intel’s Datacenter business will keep growing just decreased significantly with Apple’s ARM move.

Apple has embraced ARM. Microsoft may embrace ARM. And the reasons – as we described above – are compelling:

  1. CPUs just won’t be the only brains in a computer.
  2. Processing itself will be distributed not only among different chips, but also different devices.
  3. ARM relies on software rather than “hardwired” logic in the processor.
  4. ARM is power-efficient.

Try to envision a situation in which the bulk of computation shifts away from CPUs on servers, desktops or laptops to GPUs, FPGAs and ASICs on other Edge devices. Betting on Intel – as the company stands today – is betting on growth in good old desktop CPU demand. That’s a tough bet.

Now, it’s not so easy to just plug ARM chips into Datacenters – that stickiness was a big part of our bet when we took a position in Intel. ARM chips have tried to break into datacenters before, but without success. Why? A couple of reasons:

  1. All software development was being done on X86 machines – whether on Windows or Mac.
  2. CPUs are still the main brains in a computer. GPUs and the like are a recent phenomenon that is yet to fully take off.

This is the big counterpoint to the ARM movement. In fact, this counterpoint is best described by Linus Torvalds – the creator of Linux:

“Some people think that “the cloud” means that the instruction set doesn’t matter. Develop at home, deploy in the cloud. That’s bullshit. If you develop on x86, then you’re going to want to deploy on x86, because you’ll be able to run what you test “at home” (and by “at home” I don’t mean literally in your home, but in your work environment). Which means that you’ll happily pay a bit more for x86 cloud hosting, simply because it matches what you can test on your own local setup, and the errors you get will translate better…

Without a development platform, ARM in the server space is never going to make it. Trying to sell a 64-bit “hyperscaling” model is idiotic, when you don’t have customers and you don’t have workloads because you never sold the small cheap box that got the whole market started in the first place…

The only way that changes is if you end up saying “look, you can deploy more cheaply on an ARM box, and here’s the development box you can do your work on”. Actual hardware for developers is hugely important. I seriously claim that this is why the PC took over, and why everything else died…It’s why x86 won. Do you really think the world has changed radically?”

Here’s where the Apple announcement is massively important – they just announced that they will sell “the small cheap box” in which ARM development can take place. This changes the game a little bit. But it’s still not a sure thing because most development work is still done on Windows PCs.

Microsoft played with the ARM architecture a while ago but was unsuccessful in transitioning to it. Steven Sinofsky, a venture capitalist who was on that Microsoft project, was generous enough to publish a full tweetstorm about Apple’s ARM decision. He ended his thoughts with:

“The least important measure in moving from Intel to ARM is the performance of the processor. Executing instructions per second is the commodity. What matters is the software and device ecosystem on top, no matter how few/many threads/processes are used.”

Both Torvalds and Sinofsky make the same point: why would datacenters move to ARM at all? They won’t do it unless Microsoft moves Windows to ARM, even if Apple is already moving.

They’re obviously a lot smarter than I am but I’m a little skeptical of their conclusion. They ignored two issues, which I believe are massively important:

  1. Heterogeneous Computing and the proliferation of AI using other types of chips.
  2. Why are major Cloud Datacenter companies already investing in ARM chips?

Amazon AWS is the biggest Cloud Datacenter company in the world. They’ve invested a lot of money and energy into developing their in-house Graviton chip, which is ARM-based and is said to outperform Intel’s latest processors on a variety of metrics. Microsoft is reported to have been testing out an ARM-based chip itself – one made by former Intel employees.

I agree with Sinofsky that performance is not the main issue. The ecosystem is what’s important. To put it crudely, ecosystem matters because there’s less lost in translation between developing a program and deploying it on any device. And that’s why Intel still generates a ton of Free Cash Flow. But if performance is not the issue, then will Cloud Datacenters keep paying a ton of money (about $37 billion in 2019) to Intel? Doesn’t it make sense to spend some initial capital and make their own chips? If that’s the case, then ARM will win out – not because of their performance but because of their business model.

ARM’s business model is licensing out designs to chip designers and fabricators. Intel’s business model is a closed loop. Amazon could license some designs from ARM and design its own Graviton chip. Amazon couldn’t (and can’t) license X86 designs and make its own chips. Intel wouldn’t allow that – its profit machine would be in peril. But now, ironically, that profit machine could be in danger because big Cloud companies with deep pockets want to take more control of their base business. And to do that, they want to make their own chips. The only design available to them is ARM. This is bad for Intel.

A lot depends on Microsoft now. If they go the ARM route, Intel is in big trouble. Will they follow Apple? It’s hard to say. But Microsoft is not shy about developing its own chips. Check out Project Catapult – Microsoft has been working on custom chips for a while. Here’s an interesting line from the Project Catapult page:

“Since the earliest days of cloud computing, we have answered the need for more computing power by innovating with special processors that give CPUs a boost. Project Catapult began in 2010 when a small team, led by Doug Burger and Derek Chiou, anticipated the paradigm shift to post-CPU technologies.”

With that worldview, it’s hard to be optimistic about Intel. They don’t dominate GPUs – Nvidia does. They don’t dominate FPGAs – Xilinx does. They don’t dominate contract fabrication for ASICs – TSMC does. Intel dominates CPUs. If we enter a post-CPU world, how will Intel grow?

What do we do about Intel?

We will be selling out of our position in Intel. We were partially betting on the transformation of the company before Apple’s announcement. Now that transformation looks to be much more challenging.

Can Intel develop a whole new architecture to beat ARM? Yes, it’s possible. But will it do it in good time? Will it become that “XPU” company in the next 2-3 years, before every major datacenter decides to custom-make its own chips? It’s a tough call. But, as investors, it’s not prudent for us to bet on some sort of glorious transformation (a la Microsoft under Nadella). Intel needs to be led by a crack team of AI engineers and scientists, not by finance people like Bob Swan. I’m sure Swan is a highly competent manager, but does he have the visceral understanding of how Intel can beat ARM? Microsoft had Nadella, a bona fide engineer (and Gates in the background), to lead the company out of its slumber. Who’s that person at Intel?

By the way, we’re not investing in AMD either. They’ve got the same X86 problem. Besides, we always thought AMD was overvalued. And if the post-CPU world becomes a reality, AMD will be hit hard too. Our 5% Intel position will be slowly replaced by some well-known companies that will gain from an ARM transition:

  1. TSMC (already our largest holding and a huge gainer from Apple’s decision since they fabricate Apple’s chips)
  2. KLA (more custom chips means more manufacturing process control and yield management)
  3. ASML (their machines are helping keep Moore’s Law alive in a different way)
  4. Nvidia (our most profitable position to date)
  5. Xilinx (we’ve been holders previously, gainfully)

We will consider increasing our position in some of these companies that are already in our portfolio – we know them well and they are each playing their part in advancing heterogeneous computing.

We will also take a position in FLIR Systems, which was our first Industry 4.0 investment idea – a theme we’d been investigating before we got sidetracked by Apple’s announcement. We’ve got a long list of potential investment ideas in the Industry 4.0 theme. The point is: filling Intel’s 5% slot won’t be a problem.

Since the cost-basis of our Intel position was low due to the March 2020 market crash, we won’t be selling Intel at a loss. This action is not a referendum on Intel. This is about Risk Management. We’d rather bet on companies that will gain from Heterogeneous Computing than bet on corporate turnarounds in a fast-changing industry.

