Xeon 4th Gen is Intel’s chance to make a comeback in the data center

It’s no secret that Intel has been struggling for years to keep up with its rivals in the data center, which principally include AMD but also Arm-based CPU designers like Ampere and Amazon. The company’s Data Center and AI (DCAI) group reported an operating margin of 0% in the third quarter of last year, which basically means it broke even; just a year earlier, the same group made $2.3 billion. The root of the problem is that Intel’s products simply haven’t kept pace with the competition, but the arrival of brand-new CPUs and GPUs might change that. With its 4th Gen Xeon Scalable processors and the Max Series of CPUs and GPUs, Intel aims to reverse its years-long decline.

4th Gen Xeon is an important step forward but not quite a winner

Ever since AMD launched its second-generation Epyc Rome CPUs in 2019, Intel has been on the back foot. Efficiency is king in the data center, and Epyc Rome used TSMC’s 7nm process, which was far more efficient than the aging 14nm node Intel was still using at the time. Rome also came with 64 cores, while Intel could only muster 28 on typical Xeon CPUs, with a 56-core option existing mostly on paper and never catching on. It wasn’t just the 7nm node that made Rome possible, but also a chiplet design, which allowed AMD to really crank up the core count without wasting tons of silicon.
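
To see why chiplets waste less silicon, here’s a minimal back-of-the-envelope sketch using the classic Poisson yield model. The defect density and die areas are purely illustrative assumptions, not figures for any real process or product.

```python
import math

# Rough Poisson yield model: yield ~ exp(-defect_density * die_area).
# Defect density and die areas are illustrative assumptions, not real
# process data; they exist only to show the shape of the trade-off.

DEFECT_DENSITY = 0.1    # assumed defects per square centimeter
MONOLITHIC_AREA = 7.0   # cm^2, hypothetical large monolithic 64-core die
CHIPLET_AREA = 0.8      # cm^2, hypothetical small 8-core chiplet

monolithic_yield = math.exp(-DEFECT_DENSITY * MONOLITHIC_AREA)
chiplet_yield = math.exp(-DEFECT_DENSITY * CHIPLET_AREA)

print(f"Hypothetical monolithic die yield: {monolithic_yield:.1%}")
print(f"Hypothetical single chiplet yield: {chiplet_yield:.1%}")

# A defect on the big die scraps the whole thing; a defect on a chiplet
# scraps only that one small piece, so far less good silicon is wasted.
```

Under these assumed numbers, roughly half of the big monolithic dies come out defective, while over 90% of the small chiplets are good, and a bad chiplet only costs a fraction of the silicon a bad monolithic die does.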

In many ways, the 4th Gen Xeon CPU (codenamed Sapphire Rapids) is Intel’s take on Epyc. It uses the Intel 7 process (a rebranding of Intel’s 10nm node), which is roughly comparable to TSMC’s 7nm, and it has four chiplets, or tiles, each carrying 15 cores plus all the other functionality a CPU needs. That each tile is essentially a complete CPU in its own right is a key difference from the most recent Epyc CPUs, which use two types of dies: compute dies for the cores and a separate die for I/O. In that respect, Sapphire Rapids is actually most similar to first-generation Epyc Naples, which Intel mocked in 2017 for having “glued-together” dies.

Intel is undeniably still behind in the chiplet game even with 4th Gen Xeon, but the company has one trick up its sleeve: HBM. High bandwidth memory is a compact, high-speed form of memory most often used as superfast VRAM on GPUs, but top-end Sapphire Rapids CPUs (officially the Xeon CPU Max Series) pack 64GB of HBM2e that can act as a sort of L4 cache. AMD’s brand-new Epyc Genoa chips won’t feature HBM because the company believes it’s simply not necessary, but Intel disagrees, and in time we’ll see who’s right.

Sapphire Rapids brings plenty of architectural improvements, and Intel claims 4th Gen Xeon is about 53% faster on average than 3rd Gen Xeon Ice Lake in “general purpose compute,” which is basically the kind of performance you’d see in a benchmark like Cinebench. Other workloads see larger uplifts, ranging from two times to ten times. Perhaps most importantly, Intel touts a 2.9x improvement in performance per watt over Ice Lake, which matters enormously for reducing a data center’s total cost of ownership (TCO). Additionally, 4th Gen Xeon supports DDR5 and PCIe 5.0, both of which are essential for high-end servers.
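
As a rough illustration of why the performance-per-watt figure matters for TCO, here’s a quick sketch that applies Intel’s claimed 2.9x uplift to a hypothetical fleet; the server count and per-server wattage are assumptions invented purely for the arithmetic.

```python
# Back-of-the-envelope power math for a fixed amount of throughput.
# The 2.9x figure is Intel's own performance-per-watt claim; the fleet
# size and per-server wattage below are assumptions made up for this sketch.

ICE_LAKE_SERVERS = 50        # hypothetical fleet delivering the required throughput
WATTS_PER_SERVER = 700       # assumed wall power per dual-socket server
PERF_PER_WATT_GAIN = 2.9     # Intel's claimed Sapphire Rapids vs. Ice Lake uplift

old_power_kw = ICE_LAKE_SERVERS * WATTS_PER_SERVER / 1000
# At equal total throughput, power draw scales inversely with perf/watt.
new_power_kw = old_power_kw / PERF_PER_WATT_GAIN

print(f"Ice Lake fleet power:        {old_power_kw:.1f} kW")
print(f"Sapphire Rapids equivalent:  {new_power_kw:.1f} kW")
print(f"Power saved:                 {old_power_kw - new_power_kw:.1f} kW")
```

Under those assumptions, the same throughput drops from 35 kW to roughly 12 kW of wall power, and power and cooling are typically among the biggest line items in a data center’s operating costs.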

While Sapphire Rapids is certainly a big improvement for Xeon CPUs, it probably won’t dominate the data center. AMD hasn’t rested on its laurels, and its newest Epyc Genoa CPUs use TSMC’s 5nm process and the Zen 4 architecture, just like Ryzen 7000. Top-end Genoa has 96 cores, up from the previous generation’s 64 and well beyond Sapphire Rapids’ 60, which means Intel is still at a big disadvantage, and it wouldn’t be surprising if Genoa were also more efficient, since TSMC’s 5nm is a much newer node than Intel 7.

As a side note, Intel hasn’t announced any workstation Xeon CPUs based on Sapphire Rapids, but rumor has it that those are coming later. These Xeon W chips allegedly won’t offer the full 60 cores of Sapphire Rapids, topping out at 56 instead, but they could still prove worthy competitors to AMD’s Ryzen Threadripper chips.

The empire strikes back?

It’s been about three years since Intel last held the advantage over AMD, and now the company finally has a chance to mount a counterattack. Intel is also on the offensive in data center GPUs with Ponte Vecchio, which it has generically branded the Data Center GPU Max Series. Intel didn’t offer many concrete details about its general performance, but the GPU packs over 100 billion transistors spread across 47 tiles. It’s a two-front attack against AMD, which recently announced its massive MI300 server APU, and against any other company with data center processors.

It’s easy to be skeptical about Intel’s chances given the company’s recent history, and I’m sure 4th Gen Xeon and Ponte Vecchio will have teething issues, but AMD was able to transform itself from nearly bankrupt into one of the world’s leading processor designers. If AMD could do it, why not Intel? This could be the springboard that lets Intel regain performance leadership, perhaps not with this generation, but with the next.
