Competitive Advantage: The Castle
Core Competency? Specialized chips (FPGAs, SOCs) for Cloud/AI/IoT/5G/Edge devices.
Xilinx is a Field Programmable Gate Array (FPGA) specialist. They invented this type of Integrated Circuit back in the 1980s. They are now building on that legacy to create products for the New Information Revolution of Cloud/AI/IoT/5G/Edge computing. They are very much at the center of one of the biggest inflection points of our civilization.
I like companies that specialize and do one thing better than most. First, I believe that competitive advantages from product differentiation tend to be more sustainable than those from cost leadership. Second, companies with distinct products are just simpler to analyze. As for Xilinx, their product is anything but simple. It took me a while to understand FPGAs, but a good place to start was to zoom out and notice where they fit into the spectrum of computer processors. I took a crack at it in my “AI Hardware” overview.
FPGAs are programmable processors – that’s their most distinguishing feature. They provide “programmable logic” right at the hardware level. The most elemental consequence of this is that an application saves power and time while doing its thing. Power consumption and speed of operation are the two most important variables in determining processing efficiency. Obviously, we want to run our applications on processors that consume little power and are fast. As you might have guessed, that’s the tradeoff that the makers of these processors constantly grapple with. With FPGAs, an application developer can program the hardware to cater to the specific kind of logic and memory usage that the particular application demands. In this new post-PC era of Artificial Intelligence on the Cloud and Edge devices, this programmability can be a massive advantage.
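To make “programmable logic at the hardware level” a little more concrete: the basic building block of an FPGA is a small lookup table (LUT) that can be configured to behave like any logic gate, and thousands of these get wired together by a configuration file (the “bitstream”). Real FPGAs are programmed with hardware description languages and vendor toolchains, not Python, but here’s a toy sketch of the idea:

```python
# Toy illustration (not a real FPGA toolchain): an FPGA is a grid of small
# lookup tables (LUTs) plus programmable routing. "Programming the hardware"
# means loading a bitstream that fills in these tables, so the same silicon
# can implement an AND gate today and part of a neural-network layer tomorrow.

class LUT:
    """A k-input lookup table: one truth-table entry per input combination."""
    def __init__(self, num_inputs: int, truth_table: list):
        assert len(truth_table) == 2 ** num_inputs
        self.num_inputs = num_inputs
        self.table = truth_table

    def evaluate(self, *bits: int) -> int:
        # The input bits form an index into the configured truth table.
        index = 0
        for bit in bits:
            index = (index << 1) | bit
        return self.table[index]


# "Configure" the same LUT hardware as two different logic functions.
and_gate = LUT(2, truth_table=[0, 0, 0, 1])   # outputs 1 only for inputs (1, 1)
xor_gate = LUT(2, truth_table=[0, 1, 1, 0])   # outputs 1 when the inputs differ

print(and_gate.evaluate(1, 1))  # -> 1
print(xor_gate.evaluate(1, 0))  # -> 1
```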
In the spectrum of processors – CPU-GPU-FPGA-ASIC – modern FPGAs like the ones made by Xilinx tend to be more power-efficient than CPUs and GPUs and more flexible than ASICs. Is it the best of both worlds? It depends. It used to be that CPUs would get most of the job done. For most of the 30+ years that our civilization has been using personal computers, CPUs were good enough for most of our computing needs. For you and me, CPUs are still good enough. But for Cloud Datacenters and AI application developers, this power-speed spectrum means a lot. As they sell their services, they’re always looking for competitive advantages. Power efficiency and speed are two major sources of these advantages. In this new era of AI-infused applications run out of Cloud Datacenters, suddenly (in the last 3 years) a CPU doesn’t cut it.
GPUs, like the ones made by Nvidia, became massively popular. Many AI applications run on servers equipped with GPUs, because GPUs have (almost accidentally) proven to be adept at the parallel computing that AI and Machine Learning applications need to perform. Google, as might have been expected, went the ASIC route. They made their own processor, specifically suited to their AI needs. Microsoft, Amazon, Alibaba, Baidu and Tencent – the other massive Cloud Datacenters (also called Hyperscalers) – went with a combination of GPUs and FPGAs. Of these, Xilinx is the preferred FPGA supplier to Amazon AWS and all the Chinese Hyperscalers. Microsoft went with Altera (now part of Intel), which is Xilinx’s main competitor.
To these Hyperscalers, the flexibility of FPGAs is a massive draw. In this post-PC, post-CPU world of computing, flexibility is attractive because we’re still in the early innings of Artificial Intelligence. Machine Learning, for example, is just an early iteration of AI. As AI logic evolves, so must the hardware required to support it. Nobody wants a cool retina-scan application on their smartphone that’s slow and uses up 30% of their phone battery. Different kinds of applications require different kinds of logic, which require different combinations of memory and networking. For many such applications, FPGAs reduce power consumption and increase processing speed by multiples. ASICs would give them even better power efficiency and speed, but they’d have to be specifically manufactured (in volume) for that specific type of logic. That’s limiting and cost-prohibitive. So, a few of those Hyperscalers – like Amazon and Alibaba – sell something called FPGAs-as-a-Service (FAAS). They make the inherent advantages of a Xilinx FPGA available to their customers to test and run their AI applications. This is a new thing – Amazon was the first to do it, and they started only in April 2017. In the new AI-computing world, FPGAs are yet to become mainstream.
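To give a sense of how accessible this has become: Amazon’s offering is the “F1” family of EC2 instances, each with Xilinx UltraScale+ FPGAs attached, and a customer can rent one with a few lines of code. The rough sketch below uses the boto3 AWS library; the AMI ID and key pair are placeholders, and in practice you’d start from an FPGA Developer AMI and load an FPGA image built with Xilinx tools:

```python
# Hedged sketch: requesting an FPGA-backed EC2 "F1" instance - the AWS
# FPGAs-as-a-Service offering mentioned above - via boto3. The AMI ID and
# key pair below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-XXXXXXXX",        # placeholder: an FPGA Developer AMI
    InstanceType="f1.2xlarge",     # comes with one Xilinx UltraScale+ FPGA attached
    KeyName="my-key-pair",         # placeholder SSH key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```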
So far, in these early innings of AI, FPGAs haven’t been as popular as GPUs. It appears there are 2 big reasons for this:
- Most AI applications are still Machine Learning applications. For this, GPUs are plenty efficient.
- FPGAs are flexible, but that flexibility comes with a cost. They’re not easy to use.
On the first point, I believe that as we move through this inflection point, the main buyers of FPGAs – Datacenters – will keep expanding at exponential rates. As that happens, the AI logic needed to analyze this exponential growth of data is bound to get more convoluted, and a general-purpose, parallel-computing GPU will fall short. For complicated logic with ever-changing data structures, FPGAs show significant power and speed advantages over GPUs. Neural Network computing is just taking off.
On the second point, FPGAs require hardware expertise, and most software engineers don’t have that. FPGAs require some level of hardware programming, which needs hardware engineers, and there aren’t too many of those. As long as FPGAs have been around, this has been a problem. Xilinx, it appears, is doing something about it.
Products: Better or Cheaper? Better; 1 of 2 dominant FPGA makers. Innovated new ACAP product.
I mentioned Altera in the previous section – the company that, along with Xilinx, holds most of the market share in the FPGA world. So far – over the last 25-30 years – it seems that they’ve tried to outcompete each other by coming up with the smallest, fastest, least power-hungry processor. But most of the game was being played in the Moore’s Law arena, which means that most of the competition was about how many transistors they could fit into the smallest surface area (usually measured in nanometers). So, Xilinx and Altera tried to beat each other in nanometers for the longest time. But they’re reaching the physical limits of Moore’s Law now. The “mine’s smaller than yours” battle is passé. 3D stacking of Integrated Circuits is one way to solve the issue. But architectural ingenuity is another way. Xilinx has chosen the latter.
Remember the two big impediments to FPGA adoption: Use-Cases and Ease-of-Use (or lack thereof). Xilinx is using its technological legacy to address both issues with a new innovative product category – the ACAP.
ACAP stands for Adaptive Compute Acceleration Platform. Xilinx announced this product recently, in March 2018. It’s in production now and will be ready to ship in 2019. Xilinx’s CEO Victor Peng seemed to be very optimistic about ACAP adoption among Datacenters, based on his discussions with “all” the hyperscalers. I assume that includes Microsoft. I won’t get into a full-fledged description of the product, but there is a wonderful summary of it here. The important point is that it combines the advantages of both of Xilinx’s core competencies – FPGAs and SOCs. We’ve discussed FPGAs at length; SOCs, which Xilinx has popularized over the last few years, deserve a quick introduction.
SOC stands for System-on-a-Chip. These are multipurpose chips that hardwire together the different components of computing – logic, memory, and networking – on a single chip. For many types of applications, this saves power and increases processing speed. How? With CPUs or GPUs, a lot of power and time is wasted “reading and writing” data to and from memory (DRAM or Flash) and the hard drive (or solid-state drive). In SOCs, the link between logic and memory is deeper and more intertwined. Data moves in smaller “batches” and moves faster. Power and time are saved. The new, much heavier AI-infused computing needs of today have put SOCs front-and-center. Those nanoseconds worth of extra speed, when compounded over thousands of AI applications and hundreds of millions of users, make a massive difference. FPGAs are a kind of SOC, as are ASICs.
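A crude back-of-the-envelope helps show why this matters. The numbers below are illustrative assumptions, not measurements, but the shape of the math is the point: when data has to travel off-chip, the transfer time can dwarf the compute time, and keeping memory close to the logic shrinks that term.

```python
# Back-of-the-envelope sketch with illustrative (not measured) numbers:
# total processing time = time to move the data + time to compute on it.

def processing_time(bytes_moved: float, bandwidth_gb_s: float,
                    compute_seconds: float) -> float:
    transfer_seconds = bytes_moved / (bandwidth_gb_s * 1e9)
    return transfer_seconds + compute_seconds

DATA = 100e6        # assume 100 MB of data per batch of AI inference
COMPUTE = 0.5e-3    # assume 0.5 ms of raw compute on that batch

off_chip = processing_time(DATA, bandwidth_gb_s=25, compute_seconds=COMPUTE)
on_chip = processing_time(DATA, bandwidth_gb_s=500, compute_seconds=COMPUTE)

print(f"data shuttled to off-chip DRAM: {off_chip * 1e3:.2f} ms")  # ~4.5 ms
print(f"data kept close to the logic:  {on_chip * 1e3:.2f} ms")    # ~0.7 ms
```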
Xilinx has a rich legacy in SOCs. Their latest SOCs, like the Zynq UltraScale+, are heavily used in several industries – most notably Wireless Communications. I think this is a useful legacy because of the role 5G will play in this new computing paradigm. It’s clear to me that IoT cannot flourish without 5G. And data from IoT, via 5G, won’t be that useful unless there is massive computing capability in the Cloud or right in the IoT device. This brings me to an important point: Xilinx’s legacy in SOCs and FPGAs across several industries (like Wireless Comms and Aerospace) gives it a unique vantage point.
ACAP is the culmination of all that experience. It took Xilinx about $1 billion, 1,500 engineers and 5 years to build the ACAP. And they did it because, finally, the new computing paradigm of AI-IoT-5G justifies that sort of R&D and innovation. The use-cases are plenty now. But with the ACAP, Xilinx is also attempting to break through that second (and more potent) impediment that has always held back FPGAs and SOCs from mainstream computing – ease-of-use. Xilinx is investing heavily in creating a rich, easy-to-use software stack for the ACAP so that software engineers can finally use this hybrid SOC-FPGA without fear. They mentioned at their Investor Day that software developers can use Python and OpenCL to program ACAPs, using various pre-built apps. This is a big step in the usability of Xilinx products. And they must invest in usability if they want the ACAP to be adopted as widely as its capabilities would suggest.
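For a flavor of what “program it from Python” could look like, Xilinx’s existing PYNQ framework already lets a software developer load an FPGA design and push data through it from regular Python (the ACAP toolchain itself wasn’t public at the time of writing, so treat this only as an analogy). A minimal sketch – the bitstream file and the DMA block name are placeholders from a hypothetical pre-built design, and it would only run on a PYNQ-enabled board:

```python
# Minimal PYNQ-style sketch (runs only on a PYNQ-enabled Xilinx board).
# "accelerator.bit" and "axi_dma_0" are placeholder names that would come
# from a pre-built hardware design.
from pynq import Overlay, allocate
import numpy as np

overlay = Overlay("accelerator.bit")   # load the FPGA configuration (bitstream)
dma = overlay.axi_dma_0                # a DMA engine exposed by that design

in_buf = allocate(shape=(1024,), dtype=np.uint32)
out_buf = allocate(shape=(1024,), dtype=np.uint32)
in_buf[:] = np.arange(1024, dtype=np.uint32)

# Stream data through the hardware accelerator and wait for the result.
dma.sendchannel.transfer(in_buf)
dma.recvchannel.transfer(out_buf)
dma.sendchannel.wait()
dma.recvchannel.wait()

print(out_buf[:8])
```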
The new demands of computing (which are increasing exponentially), Xilinx’s unique vantage point (across several industries) and its newfound focus on ease-of-use should make the ACAP a massive driver of revenue and cash flow growth. And when 5G rolls around, that growth could hockey-stick. Xilinx, it seems to me, has moved on from the nanometer wars. They are now all about architectural advantage.
Evidence: Profitability? High, stable margins. But low revenue growth over last 5 years.
Xilinx’s numbers are much simpler than their products. Revenues have chugged along at about 5% increments over the last 5 years or so, with low volatility. Margins are very good – 70% gross margins and 30-35% EBITDA margins. Since they expense most of their R&D in the Income Statement, the conversion from EBITDA to Free Cash Flow is 80-90%. The numerical story of Xilinx suggests stability and a strong core business.
They have 2 main revenue segments: Core and Advanced. You could think of them as FPGAs (Core) and SOCs (Advanced). As of March 2018, Core Products made up about 46% of revenue, while Advanced Products made up 54%. As you might expect, Advanced Products is fast becoming the main Revenue (and EBITDA) bucket. The segment grew by 45% from 2016 to 2017 and by 28% from 2017 to 2018. And, as you might suspect, Core Products revenues have been declining, which explains the 5%-ish revenue growth rate.
None of these numbers include ACAP (not in distribution yet), IoT, 5G and Autonomous Cars, all of which are right in Xilinx’s wheelhouse. And all those products – SOCs and ACAPs – will fall under Advanced Products. And obviously, Data Centers themselves have a lot of room for growth. Xilinx has a lot of upside left there even though they’re already deeply involved with Amazon AWS, Alibaba, Baidu, Tencent, IBM and Huawei. Needless to say, those players are all expanding compute capacity by double digits every year. But they’re also gravitating more towards FPGAs, away from GPUs. This is a massive opportunity for Xilinx. They estimate (as per their Analyst Day presentation) that Data Centers alone present a market opportunity that’s growing by 67% a year! Currently, Data Centers make up about 21% of their revenue. Even if they capture half of that growth rate, the revenue models look encouraging – it would amount to roughly 6-7% revenue growth per year from Data Centers alone. Of course, how much market share they capture in high-growth areas like Data Centers, Autonomous Cars and the impending 5G deployment depends on how well they compete. Obviously, I can’t predict the future, but Xilinx’s core competency and product suite suggest that it’s got all the chops to compete not only with FPGA rivals but also with emerging technologies.
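Here’s the quick arithmetic behind that Data Center back-of-the-envelope, using the round numbers quoted above (my figures, not company guidance):

```python
# Sanity check of the Data Center back-of-the-envelope above.
datacenter_share_of_revenue = 0.21        # ~21% of current revenue
market_growth_rate = 0.67                 # Xilinx's stated Data Center TAM growth
captured_growth = market_growth_rate / 2  # assume Xilinx captures half of that growth

contribution_to_total_growth = datacenter_share_of_revenue * captured_growth
print(f"{contribution_to_total_growth:.1%}")  # ~7%, i.e. the roughly 6-7% cited above
```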
Durability of Competitive Advantage: The Moat
Competition? Limited. Intel/Altera for FPGAs/SOCs. Hyperscalers may in-source AI Chips.
I discovered that Xilinx vs. Altera is quite the hot topic in the dark alleys of the computing world. But it highlights the fact that FPGAs and their variants are highly specialized products, which have traditionally been used by a very specialized group of people.
Altera is now part of Intel, and their big win is Microsoft Azure. This is huge because Microsoft is the second-largest Cloud datacenter in the world. Xilinx is involved with most of the others – Amazon, Alibaba, IBM etc. As I’ve argued, so far the battle between Altera and Xilinx has been primarily about nanometers. But this competitive landscape has now changed, for 2 main reasons:
- The introduction of ACAP.
- Intel’s priorities.
I’ve discussed ACAP at some length in the previous sections. But Altera’s relationship with Intel adds an interesting dimension to this competition. Intel has some issues now, since they’ve missed the GPU boat completely. Their CPU business is bound to decline as we move into this post-PC computing paradigm. They acquired Altera, probably to secure the lucrative contract between Altera and Microsoft and to have a foot in the FPGA door. They acquired Nervana to have a foot in the ASIC door. They recently announced plans to revive their GPU business. It seems like they’re trying out a “see what sticks” strategy. It’s tempting to think that it’s a good strategy, given that we’re in the early innings of AI. But their lack of focus and “R&D confusion” could cost them dearly. They’ve certainly got deep pockets, but a lack of clarity and focus has led many big companies towards mediocrity. It seems to me that Altera’s relationship with Microsoft and Intel will dominate what they do from now on. What will be their response to Xilinx’s ACAP?
As I scrolled through many of the chatrooms of the “know-somethings” of the computing world, I came away with these two impressions about the contrast between Xilinx and Altera:
- Xilinx FPGAs seem to be harder to use than Altera’s. Their software support is lacking.
- But Xilinx is (by most measures) the #1 FPGA company because it’s found a way into a more diverse set of industries, like Wireless Communications, Aerospace & Defense, and Automobiles.
On the first issue, Xilinx is trying to solve the problem with ACAP, as I’ve already mentioned. The second point, I believe, will prove to be a strength in this new “Heterogenous Computing” era. For now, it seems to me, Xilinx has the upper hand over Altera. The Altera threat will never go away but they’ve never really been able to become the #1 FPGA maker.
The other big threat for Xilinx is in-sourcing of chip-making by the big Datacenters like Amazon and Alibaba. Google has already done it with their invention of the TPU, which is an ASIC. Microsoft’s relationship with Intel/Altera is highly bespoke with Project Brainwave. But I don’t think it’s easy to replicate Xilinx’s expertise in FPGAs and SOCs, even with deep pockets. It took focused effort for 5 long years, $1 billion, and 1,500 engineers at a bona fide chip company like Xilinx to come up with the ACAP. Google pulled off something similar with the TPU. But I’m not sure the other hyperscalers will find it optimal to divert that much intellectual and monetary capital towards complicated ASIC or FPGA architectures. This is my assumption, but I could be wrong.
In short, the main threats to Xilinx are:
- Altera coming up with some cooler version of ACAP in 2019/2020.
- Hyperscalers move towards making their own chips, because…
- …like Google, they may think that customized ASICs are better for their customers.
Protection? High barriers to entry – very specialized, patented technology.
The more complicated and differentiated a product is, the harder it is for competition to catch up. By combining their expertise in FPGAs and SOCs to make the ACAP, they’re ramping up the “architecture battle”. For the likes of Altera, it’s possible to catch up. For others, like startups in the US, China or Korea, it’s very difficult.
Again, I’d like to emphasize the point that Altera and Xilinx are both facing the physical limits of Moore’s Law. They need to either start “stacking” chips or come up with new architectures to deal with the Moore’s Law problem. In this new era of the “Architecture Battle”, protection from competition is more solid. It’s not about shrinking chips and packing in more transistors anymore. It’s about finding solutions to an exponentially more dynamic computing world. It’s possible that Altera and Xilinx will find their own niches in the future. As of now, Xilinx has struck first in this new generation of “solutions” with the ACAP. Altera can’t just shrink nanometers to respond.
Resiliency of Cash Flows? Resilient. Slow Cycle product with limited competition.
One of the other big advantages of FPGAs is that they are fast-to-market. This has to do with their flexibility – it doesn’t take much to spin a new version of the design and make it optimal for a new application. Compared to GPUs or ASICs, this is massive. Their hardware is fixed, and it takes time to get a new design into fabrication and finally out into the market. FPGAs can simply be iterated.
What does this mean in terms of cash flows? Combine the fast-to-market nature of FPGAs with the fact that things are gravitating towards “architectural advantage”, and cash flows become more resilient to competition. In my view, something like the ACAP is a long-cycle product, because it’s flexible and it’s not easily replicable. AI is in its early innings. IoT, Autonomous Cars and Virtual Reality haven’t even taken off, which complicates any forecasts regarding logic, data structures and memory usage. Well, that’s precisely the situation for which FPGAs, SOCs and, now, ACAPs have been designed. ASICs and GPUs will probably need to be replaced much more often as computing evolves exponentially. That would probably make them more expensive in the long run, if their replacement schedule is, say, 18 months long.
Management Quality: The Generals
Strategy & Action? Positive. New Management centers strategy on Datacenters/AI/IoT/Edge.
Victor Peng took over as CEO a few months ago. He was previously Head of Products and had a deep engineering background before that in companies like AMD. Investments in the ACAP preceded his tenure as CEO, but as Head of Products, there is no doubt he played a huge role in that decision. Once he became CEO, the announcement of the ACAP coincided with another announcement: Xilinx is now a Datacenter-first company.
At Xilinx’s investor day, Peng presented all sorts of charts on the tremendous growth in Datacenters and AI, and where Xilinx fits into this new paradigm. It didn’t take him long to convince me. If anything, I think many of the forecasts about data and AI run the risk of underestimating it, despite all the hyperbole. But proclaiming to be a datacenter-first company was music to my ears.
The other nudge by Peng seemed to be a new focus on ease-of-use. As I’ve mentioned, this has been a big impediment to the widespread adoption of FPGAs. I believe that being a “product guy” for almost 10 years at Xilinx gives him a visceral appreciation for customer needs. Peng seems to be pushing the firm – and its investments – towards creating “customer solutions” for the data tsunami that its customers will face in the coming years. One example of this type of thinking is Xilinx’s new investment in software engineers to make the “user interface” for its products a lot more intuitive, so customers can use “high level” languages like Python to do most of the customization of their FPGAs and, eventually, their ACAPs.
On a purely financial note, Xilinx has slowly and steadily been redirecting funds from SG&A to R&D. It seems to me Victor Peng, being a non-financial, techy guy, will continue that trend, if not accelerate it. But he needs to balance that with investments in software engineers. I think he has his head in the right place.
Alignment of Incentives? Average. > 80% in long-term incentives. But management ownership low.
Xilinx’s long-term compensation bucket for its Executive Management team seems to have a lot of granularity. Buckets are linked to specific products such as “Xnm products”, as well as more nebulous concepts such as “technology leadership”. Sadly, there doesn’t seem to be much talk of Return-on-Capital type measures. But the long-term incentive payout, which is designed to be more than 80% of total compensation, is awarded in restricted stock units which vest over time. This is decent alignment with shareholders like us.
On the negative side, it doesn’t look like the Executive Team has a significant amount of stock ownership. I’m not sure whether that’s because they’ve sold shares or because the anemic share performance over the last few years hasn’t resulted in big stock payouts. It’s probably the latter.
But overall, I don’t have major complaints with the structure.
Financial Productivity? High ROE/ROIC company. Mostly driven by high margins.
Xilinx is a highly productive company. By any productivity measure – ROE, ROIC – their numbers are high. According to Management, they’re industry-leading. My rough calculations, based on estimates of Free Cash Flow, seem to support that claim. I peg their ROE at about a 25-30% level. I think that is sustainable. And I think that, with the introduction of the ACAP, chances are there is much more upside.
I believe part of the reason they are such a productive company is that their cash flows are more resilient to competition. I’ve discussed this in the sections above, but the nature of FPGAs and SOCs makes them longer-cycle products that are more immune to the nanometer battles that plague the CPU world.
Sustainable Free Cash Flow? Roughly $1.1 billion. Translates to a value of roughly $89/share.
To me Xilinx has two opposing revenue trajectories. Core Products – traditional FPGAs – will decline. Bigly. Advanced Products like SOCs and ACAPs will increase. Bigly. The question is whether the increase in Advanced Products sales will be enough to compensate for the decrease in Core Products. I guess that’s the bet I’m making – that it will. And that view is based on two underlying assumptions:
- Massive growth in addressable markets: Data Centers, 5G, IoT, and Autonomous Cars.
- Increased adoption of FPGAs, SOCs and ACAPs over the next 3-5 years.
Numerically, I’ve made the following assumptions on Revenue:
- Advanced Products sales will double over the next 3-5 years. This is based on the following underlying assumptions:
  - Data Centers Revenue will double. I’ve made similar assumptions for Microsoft. Assuming no growth here would be foolishly conservative. In fact, I think I’m underestimating this. Happy to be wrong.
  - Autonomous Cars will involve ACAPs and SOCs. This market has just about taken off with ADAS (Advanced Driver Assistance Systems), in which Xilinx has a big market share.
  - 5G MIMOs will use SOCs, and this market will grow at double digits every year starting 2020.
- Core Products will decrease by 33%.
  - Traditional, older-generation FPGA sales will decrease by about 10% a year.
  - As some SOCs get older, they will move to the “Core Products” bucket.
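Putting those assumptions together – with the roughly 46%/54% Core/Advanced revenue split from the Evidence section as the starting mix – the arithmetic is simple:

```python
# Quick check that the ~40% figure below follows from the revenue-mix
# assumptions above (segment shares are the ~FY2018 split quoted earlier).
advanced_share, core_share = 0.54, 0.46   # share of current revenue

future_advanced = advanced_share * 2.0    # Advanced Products roughly double
future_core = core_share * (1 - 0.33)     # Core Products shrink by about a third

future_revenue_vs_today = future_advanced + future_core
print(f"{future_revenue_vs_today - 1:.0%} higher than today")  # ~39%
```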
The end-result is that I expect Xilinx’s revenue to be about 40% higher than it is now. Based on that assumption, there is some Margin of Safety in the stock price.