On the surface, the results for the Intel Core i9-13900K look terrific: The single-core score would dethrone the i9-12900KS as champion in that particular chart (as should be expected), while the multi-core score leaves the Raptor Lake part rubbing shoulders with the Ryzen Threadripper 3990X and Ryzen Threadripper 3970X. But closer scrutiny leaves the i9-13900K looking like a lukewarm upgrade over the Intel Core i9-12900K at the moment. The single-core score is only about 8% better (per Geekbench’s charts; our median scores give an even lower 7% difference), and the multi-core score looks impressive at +38.74% only because of the significant core-count difference: 24 cores (i9-13900K) vs. 16 cores (i9-12900K).
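As a rough sanity check on that claim, the figures quoted above can be compared directly. The calculation below naively treats all cores as equal, which ignores the P-core/E-core split on both chips, but it shows how much of the multi-core uplift is accounted for by core count alone:

```python
# Numbers taken from the Geekbench comparison above.
multi_uplift = 1.3874        # i9-13900K vs. i9-12900K, multi-core (+38.74%)
core_ratio = 24 / 16         # 24 cores vs. 16 cores

# Multi-core uplift normalized by core count; < 1.0 means the gain
# comes from extra cores rather than faster ones.
per_core_scaling = multi_uplift / core_ratio
print(f"{per_core_scaling:.3f}")  # → 0.925
```

In other words, per core the multi-core result scales slightly below parity, consistent with the +38.74% being driven by the eight additional E-cores rather than a large per-core improvement.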
The first Zen 4 parts have also started cropping up on benchmark sites, revealing high clock rates and potentially excellent (if clearly misread) performance results. With Intel’s recent financial report rocking investor confidence, there will likely be many hoping that Raptor Lake can take on Ryzen 7000 and win minds if not so many hearts. It’s still early days for chips like the Intel Core i9-13900K, although it’s unlikely that results like the recent Geekbench scores will climb much higher. Real-world tests such as gaming and power efficiency, combined with an aggressive pricing strategy and fluctuating consumer confidence in the brand, will decide whether Intel’s Raptor Lake chips face extinction against the upcoming Zen 4 Raphael series.
Obviously let’s wait for production shipments of both before drawing conclusions, but…
I would be surprised if Intel’s N7 product could outperform AMD’s N5 product… but it looks like they will at least be close.
I expect Intel to pass AMD again when they move the CPU to N4. Since Meteor Lake is actually four dies, it is a combination of many technologies. The CPU is Intel N4. Rumor has it the GPU is TSMC N3, but I expect the first Meteor Lake to use the TSMC N6 ARC GPU. It now appears that the mystery very large 14nm tile is a VPU (versatile processing unit), which sounds like all the different accelerators lumped into one chip and removed from the CPU and GPU tiles.
Alan
I would actually welcome competitive performance from “Raptor Lake”, with Intel pushing the limits of their process and CPU architecture. It keeps the AMD chip designers on their toes. They have already responded somewhat by increasing the power budget from 105 W for AM4 to 170 W for the new AM5 socket, allowing them more headroom to chase Intel on frequency and absolute performance. Competition is good!
AMD “Zen 4” is likely to stay in a clear lead on performance per watt, though.
PS. I am a little confused by your recent new convention for naming Intel processes. N7 is a TSMC process name. But you now use it to refer to Intel’s “Intel 7” process. Is it deliberate? May I suggest using an underscore, i.e. Intel_7, if you find the bare process name to be awkward/ambiguous and quotes too tedious? That said, happy to see no use of “nm” anymore!
Today, power consumption is a very important aspect of a processor’s performance.
Intel says that the new VPU is a CPU-integrated inference accelerator for Computer Vision and Deep Learning applications. As for the VPU device itself, it will have the following components:
- Buttress: provides CPU-to-VPU integration, interrupt, frequency and power management.
- Memory Management Unit (based on Arm MMU-600): translates VPU to host DMA addresses, isolates user workloads.
- RISC-based microcontroller: executes firmware that provides the job execution API for the kernel-mode driver.
- Neural Compute Subsystem (NCS): does the actual work, provides Compute and Copy engines.
- Network on Chip (NoC): network fabric connecting all the components.
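To make the division of labor in that component list concrete, here is a toy sketch of how a job might flow through the parts described. All class and method names here are invented for illustration; only the roles (MMU address translation, a firmware job-execution API on the microcontroller, the NCS doing the actual compute) come from Intel’s description:

```python
# Toy model of the VPU job flow described above. Everything here is
# hypothetical; it only mirrors the component roles in the text.

class MMU:
    """Translates VPU-visible addresses to host DMA addresses (MMU-600 role)."""
    def __init__(self, page_table):
        self.page_table = page_table  # per-workload mapping for isolation

    def translate(self, vpu_addr):
        return self.page_table[vpu_addr]

class NCS:
    """Neural Compute Subsystem: does the actual work via its engines."""
    def run_compute(self, host_addr, data):
        return sum(data)  # stand-in for an inference kernel

class Firmware:
    """Runs on the RISC microcontroller; exposes the job-execution API
    that the kernel-mode driver calls."""
    def __init__(self, mmu, ncs):
        self.mmu, self.ncs = mmu, ncs

    def submit_job(self, vpu_addr, data):
        host_addr = self.mmu.translate(vpu_addr)      # address translation
        return self.ncs.run_compute(host_addr, data)  # dispatched over the NoC

# The kernel-mode driver would call submit_job() on behalf of userspace:
fw = Firmware(MMU({0x1000: 0xDEAD0000}), NCS())
print(fw.submit_job(0x1000, [1, 2, 3]))  # → 6
```

The point of the sketch is the layering: userspace never sees VPU addresses, the firmware API is the only entry point for the driver, and the NCS is the only component that touches the workload data.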
One assertion is that it uses Atom-derived CPU cores rather than RISC cores, but it otherwise sounds consistent: a machine-learning (etc.) accelerator plus supporting “stuff”, derived from the Movidius acquisition…
Supporting the link to Movidius is that Intel has used “VPU” to describe those “vision processing and deep learning” parts: https://www.arrow.com/ais/intel/wp-content/uploads/sites/6/2…

“A Compute-Efficient SoC for Edge AI: To support the next generation of IoT edge devices, Intel has developed a line of processors to help deploy AI from edge-to-cloud with best-in-class silicon and optimized software with the OpenVINO™ toolkit. Built for vision and media deep learning operations, it has a unique power-efficient architecture that includes custom accelerator engines, programmable processors, and a central scratchpad memory. The Intel® distribution of OpenVINO™ toolkit offers frameworks and APIs to develop applications and solutions that use deep learning intelligence.”