Intel betting on RISC-V

With the Nvidia acquisition scuppered, Arm appears headed toward an IPO, a long and laborious process that will take many months to complete. That’s not to say that Arm’s engineers won’t keep sharpening their designs, but it does give Intel a small opening.

Intel’s investment in RISC-V isn’t just calculated—it’s complementary. The company’s existing processors are as powerful as they are power-hungry, and they run the decades-old x86 architecture. RISC-V, on the other hand, is relatively new, having been developed at UC Berkeley just over a decade ago. As an instruction set, it’s fairly trim, and chips that use it tend to be smaller than competing ARM designs. Still, RISC-V is immature compared with ARM instruction sets, which have been refined over decades with the help of feedback from myriad customers. While RISC-V is not quite ready to challenge ARM in smartphones, it has begun to make inroads in embedded systems, another market where ARM excels.

In pushing RISC-V, Intel appears to be ceding the current smartphone market to Arm (it wasn’t much of a competition anyway) while betting on simpler, smaller, and even lower-power chips that promise to be in everything from cars to smart lightbulbs. In other words, Intel is trying to squeeze Arm from the top and the bottom.

While RISC-V chips tend to be small, their production numbers are not. RISC-V supplier Andes Technology said more than 3 billion SoCs using its intellectual property shipped last year alone.

The new fund will no doubt bolster Intel’s relationships with promising RISC-V designers, including Taiwan-based Andes Technology and US-based SiFive. The fabless companies have been making their designs with TSMC, though the new partnership will probably help Intel peel off some of that business. If the bet pays off, Intel will gain experience manufacturing low-power chips, an area TSMC has plenty of experience in.

Volume from RISC-V chips could not only help Intel fill its newly announced fabs—two in Arizona and two in Ohio—it could also help the company refine its manufacturing processes. TSMC was able to push ahead of the competition in part because it made enormous quantities of chips. That allowed the Taiwanese company to work out the kinks in ever more advanced nodes, and by reaching the most advanced nodes first, TSMC put itself in a better position to win new orders, giving it even more volume. It’s a positive feedback loop that has allowed the company to become a juggernaut in the foundry world.

Intel’s nascent foundry operation is small by comparison, but by starting with low-power and embedded systems, where all-out performance isn’t a requirement, the company can establish a beachhead that will let it win some orders in a new corner of the market. Intel is clearly hoping to combine revenue and process learnings from its foundry and IDM operations to create a similar sort of feedback loop.

Given enough time, Intel may be able to use its expertise with RISC-V to push into other markets, just as Arm’s designs are now found in everything from automotive brake controllers to laptops and data centers. That’s a big “if,” but to Intel CEO Pat Gelsinger, it’s likely an opportunity that’s too tempting to pass up.

But at $1 billion, is Intel’s bet big enough?

https://arstechnica.com/tech-policy/2022/02/intels-strategy-…

I found the Intel announcement here:
https://www.intel.com/content/www/us/en/newsroom/news/intel-…

It appears from the linked article that the $1B fund can also be used for ARM development. Mostly it seems to be an attempt to bring customers to Intel Foundry Services and build that business.
Alan

Antonio,

So how does betting on RISC-V differ from Intel’s bet on Itanium, popularly dubbed “Itanic,” a couple decades ago?

(For those who don’t remember, Itanium was Intel’s spectacularly innovative 64-bit architecture that was going to blow AMD’s x86-64 architecture out of the water and completely take over the PC market. Yeah, right…)

Those who fail to learn the lessons of history are doomed to repeat them.

Norm.

It is substantially different from Itanium.
The Itanium ISA was Intel/HP proprietary, while RISC-V is open source and free to use.
RISC-V shipped about 3 billion units last year.
One way to think about it is RISC-V is to ARM as Linux is to Windows: open source and free to use but with limited infrastructure, versus standard but more expensive. While ARM will certainly dominate cell phones, I suspect there are many high-volume, cost-sensitive applications that will migrate to RISC-V when the infrastructure is there.

Intel will offer x86, ARM, and RISC-V cores in their foundry IP library.
Alan

Itanium was Intel’s spectacularly innovative 64-bit architecture that was going to blow AMD’s x86-64 architecture out of the water and completely take over the PC market

Itanium was targeted at the server market, and was around before AMD’s x86-64.

I think at the time Intel had plans for Itanium products to eventually migrate down to the PC market. But when Itanium launched, the expectation was that x86 would continue to be the PC market for many years.

Itanium launched in 2001 - but had been in development for years at that point. I recall there being compilers for Itanium/IA-64 in 1998 (probably earlier)

AMD’s x86-64 architecture was IMO a response to Intel’s IA-64. A very successful response.

So how does betting on RISC-V differ from Intel’s bet on Itanium,

1> Itanium/IA-64 was a much larger investment than what is being talked about for RISC-V.
2> RISC-V is already being used commercially
3> RISC-V is targeting a very different market segment than IA-64
4> RISC-V is a very different architecture from IA-64. It is much closer to the basic pipeline architectures used as examples in college classes.

Itanium launched in 2001 - but had been in development for years at that point. I recall there being compilers for Itanium/IA-64 in 1998 (probably earlier)

I feel… pretty confident… that investigations into the approach that became EPIC/Itanium had started by 1993-ish. (Based on conversations with an Intel employee at the time who was pretty elusive about what he was working on. I can’t remember if it was 1992 or 1994 when we chatted.)

Alan,

… I suspect there are many high-volume, cost-sensitive applications that will migrate to RISC-V when the infrastructure is there.

That’s a pretty big “when.” Historically, new processors that were not backward compatible with existing software have consistently been caught “dead in the water” between software developers having no reason to migrate software to a platform with an insignificant installed base of potential customers and users having no reason to adopt a platform that lacks the application software that they use. I doubt that there’s anything about RISC-V to overcome that dynamic.

Norm.

foo1bar,

AMD’s x86-64 architecture was IMO a response to Intel’s IA-64. A very successful response.

And there’s one critical difference between x86-64 and IA-64 (Itanium). The x86-64 design was fully backward compatible with existing 32-bit Windows software, whereas Itanium (IA-64) was not. Thus, users could buy x86-64 machines and continue to run all of their existing applications with an immediate gain in performance, creating an installed base sufficient to drive software developers to migrate their products to the x86-64 instruction set to take full advantage of the new architecture.

Note that the same thing happened fifteen years earlier, when the 80386 microprocessor took the x86 family from 16 bits (8086/8088/80186/80286) to 32 bits.

Norm.

As I pointed out, though, volume was 3 billion units last year… a pretty good head start, with significantly more volume than x86.
Makes sense for a foundry to support an ISA with that sort of volume.

It will mostly be used in embedded applications like printers, routers, cameras, automobiles, garage door openers, etc.
Alan

So how does betting on RISC-V differ from Intel’s bet on Itanium, popularly dubbed “Itanic,” a couple decades ago?

RISC-V is a RISC architecture, generally defined as

  • open architecture
  • load/store architecture; no complex ops that combine a load, an add, and a store in one instruction
  • fixed instruction width (easy decoding; see the sketch after these lists)
  • lots of fixed-size registers, all 32 bits (or 64 bits in 64-bit mode)

Itanium was a VLIW processor

  • closed architecture. Meant to keep out competitors.
  • VLIW = very long instruction word
  • sort of added on top of a RISC-ish design
  • one word (bundle) contained several operations;
  • multiple parallel bundles could be executed at once, flagged with a stop bit
  • a new, unique compiler was needed to find ops to slot into each portion of the instruction bundle(s) to keep the processor busy
  • new rotating register file (how do you debug and optimize this?)
  • delayed exceptions
  • had x86 compatibility module to run old CISC software (software and hardware penalty, IIRC)
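
To make the “easy decoding” point above concrete, here is a minimal C sketch (mine, not from either architecture’s manuals) that pulls the fields out of one fixed-width 32-bit RISC-V R-type instruction with plain shifts and masks. The field positions come from the published RV32I base encoding, and the sample word encodes add x3, x1, x2. An Itanium decoder, by contrast, had to crack 128-bit bundles, template fields, and stop bits before it could even locate the operations.

```c
#include <stdint.h>
#include <stdio.h>

/* Decode one RV32I R-type instruction: every instruction is exactly
 * 32 bits, and every field sits at a fixed bit position. */
int main(void) {
    uint32_t insn = 0x002081B3;            /* add x3, x1, x2 */

    uint32_t opcode = insn         & 0x7F; /* bits  6:0  */
    uint32_t rd     = (insn >> 7)  & 0x1F; /* bits 11:7  */
    uint32_t funct3 = (insn >> 12) & 0x07; /* bits 14:12 */
    uint32_t rs1    = (insn >> 15) & 0x1F; /* bits 19:15 */
    uint32_t rs2    = (insn >> 20) & 0x1F; /* bits 24:20 */
    uint32_t funct7 = (insn >> 25) & 0x7F; /* bits 31:25 */

    printf("opcode=0x%02x rd=x%u rs1=x%u rs2=x%u funct3=%u funct7=%u\n",
           (unsigned)opcode, (unsigned)rd, (unsigned)rs1,
           (unsigned)rs2, (unsigned)funct3, (unsigned)funct7);
    return 0;
}
```

That the whole decoder is a handful of shifts is exactly why small, cheap RISC-V implementations are practical.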

Mike

And there’s one critical difference between x86-64 and IA-64 (Itanium). The x86-64 design was fully backward compatible with existing 32-bit Windows software, whereas Itanium (IA-64) was not.

Itanium/IA-64 was fully backward compatible with existing 32-bit Windows software as well.
Your implication that it was not is false.

There were a number of differences that resulted in x86-64 being far more successful than IA-64. But not having backward compatibility isn’t one of them. I would include price, 32-bit performance, being there for “free” on a machine that was being bought/used for 32-bit tasks, and the increased use of x86 with Linux for what had been proprietary Unix workstations/servers.

Historically, new processors that were not backward compatible with existing software have consistently been caught “dead in the water” between software developers

You seem to be thinking the RISC-V market is similar to what you see with the Windows environment: situations where large, diverse software stacks are controlled by a different company than the one that controls the hardware.
Instead, the software and the hardware are likely within the same company - probably working hand-in-hand to create a product, so likely within the same team.
Many of the types of products that will use RISC-V are not as sensitive to what kind of instruction set is being run. For example, there is already an Arduino that uses RISC-V. So you could take many things that were done on an Arduino with an ARM chip, recompile them for the RISC-V board, and start running them. Or if you’ve been doing your development with a RISC-V soft-core on an FPGA board, you could potentially go to production using a physical RISC-V core with some extra logic. Or maybe that’s your second generation lower-power version. Or maybe you can reduce chip-count by being able to have a RISC-V core, some application-specific logic, and an FPGA for future features/upgrades all in one package.
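
A rough sketch of that “just recompile” point, using invented code and the common GNU cross-toolchain names (arm-none-eabi-gcc and riscv64-unknown-elf-gcc are the usual triplets, but exact names and -march options vary by board and distribution): the C source itself carries no ISA assumptions, so only the compiler invocation changes between targets.

```c
/* Portable C of the sort that ships on small embedded boards.
 * Hypothetical build commands (also host-runnable via plain gcc):
 *   arm-none-eabi-gcc       -O2 -o app-arm   debounce.c
 *   riscv64-unknown-elf-gcc -O2 -o app-riscv debounce.c */
#include <stdint.h>
#include <stdio.h>

/* Toy button-debounce filter: report "stable" only after
 * 8 consecutive identical samples. */
static int debounce(uint8_t *history, uint8_t raw_sample) {
    *history = (uint8_t)((*history << 1) | (raw_sample & 1u));
    return *history == 0xFF;
}

int main(void) {
    uint8_t history = 0;
    for (int i = 0; i < 10; i++)
        printf("read %d -> stable=%d\n", i, debounce(&history, 1));
    return 0;
}
```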

$1B is a “cheap” investment when there are projections of 62 billion RISC-V cores by 2025.
If this $1B helps get even a tiny fraction more of those cores made in Intel fabs, it is a successful business decision.

So how does betting on RISC-V differ from Intel’s bet on Itanium, popularly dubbed “Itanic,” a couple decades ago?

Where Intel went wrong with Itanium was that it solved a problem that other Intel hardware and software architects were solving better at the same time. Instead of building “pure” RISC or “pure” CISC machines, they designed chips that were 90% or more RISC but accepted CISC instructions through an instruction-decoding phase that took a few clock cycles. The important detail here is that what really mattered for overall throughput was 1) bandwidth of reading instructions, 2) latency of acquiring data, and 3) IPC/throughput of execution. The amount of addressable memory matters a lot, but if anything it negatively affects throughput by requiring more bits for addresses.

How Intel took its eye off the ball was not so much the Itanium as the Pentium 4. The Pentium 3 was a very nice chip for its day and a close competitor to the (AMD) Athlon. Then Intel released the Pentium 4 and AMD brought out the Thunderbird. The Thunderbird reduced the number of clocks between an integer (or boolean) instruction and when that value could be used as an operand to a following instruction. This increased IPC. The Pentium 4 had a horrible kludge called replay: if an operand wasn’t ready in time, the entire CPU went into a six-clock-cycle loop. And there was no guarantee that the operand would be ready even then. (Yes, more replay.)

Then a few years later, Microsoft basically said, gee, what we really wanted as an x86 follow-on was something like this… AMD said that sounded great, and AMD64 was born. (Technically, Intel’s EM64T is very slightly different, but only in ways that matter when creating hypervisors for virtual machines.) But this is important: you have a 64-bit addressable machine, but only infrequently do you have to pay the cost in instruction bandwidth for a 64-bit address. Why does it matter? ARM came out with an ugly 64-bit extension. Their customers told them so, and ARM turned around and came out with version 8 of their instruction set. Not quite as slick as AMD64, but a huge improvement for ARM. Finally, RISC-V came along as one of many 64-bit RISC instruction sets. It does not require lots of instruction processing, but the flip side is that you have to pay that instruction-bandwidth penalty at execution time.

I am tempted to say on every instruction, but that is not true. On the other hand, AMD64 has some neat tricks to improve IPC (here, as above, instructions per clock). My favorite is executing any of an if clause, a for-loop step, or a while-loop step in one clock cycle. It is the sort of thing that only a compiler guru could love. (I am one. :wink:) Everyone else just waits for “decent” compilers to show up. I don’t think that is ever going to happen again.

Itanium/IA-64 was fully backward compatible with existing 32-bit Windows software as well.
Your implication that it was not is false.

Sigh! The overall Itanium architecture was intended to be capable of executing x86 code. The first Itanium CPU (Merced) ran x86 code slowly, even with a prototype compiler from Intel. It did not support many instructions expected to be used only by operating systems. But there was a lot of x86 code around that used some of those instructions. Hmmm. There are some instructions that are clearly operating-system-only, and instructions that anyone can use. However, there are also some instructions that are used by system application software, such as semaphores. The big problem (if you had one of these chips in your lab) was that a lot of applications back then just assumed they could access floppy drives. (And of course, some of that code was intended as copy protection, so there was legal jeopardy on offer if you even disassembled it.)

None of that really matters anyway, see: https://users.nik.uni-obuda.hu/sima/letoltes/Processor_famil… slides 48-50. The hardware support for x86 on Merced was much slower than most x86 machines you could buy at the time. (And/or Xeon machines.) I think it was McKinley where hardware x86 support was removed and only emulation was offered.

Oops! Well not really oops, too long a jump between thoughts:

I said: Everyone else just waits for “decent” compilers to show up. I don’t think that is ever going to happen again. I meant:

It is unlikely that any new “industrial strength” compilers for CPUs, GPUs, and so on, will be developed.

Many of those existing compilers have been targeted to multiple backends that generate code for different hardware. That does not, of course, mean that all those backends are of equal quality. That also doesn’t mean that any new backend is bad, or that it won’t eventually get sufficient loving care.

If any new hardware architectures are developed, lots of luck getting a decently supported compiler. (RISC-V does not count as a new hardware architecture, just as an instance of an existing hardware architecture.)

There have been several VLIW (Very Long Instruction Word) architectures developed in Russia. In addition, there has been some work done in the US on VLIW GPUs. So I consider that a potentially viable hardware architecture family.

I’ve got to stop leaping ahead like that. :wink:

The first Itanium CPU (Merced) ran x86 code slowly, even with a prototype compiler from Intel. It did not support many instructions expected to be used only by operating systems. But there was a lot of x86 code around that used some of those instructions.

Sigh!

You should note on slide 49 of the slide deck you linked to it says:
“IA-32 compatibility includes support for running a mix of IA-32 and IA-64
applications on an IA-64 OS, as well as IA-32 applications on an IA-32 OS.”(emphasis added)

So you could indeed run instructions that were expected to only be used by operating systems.

I recall there being a demonstration of booting an older OS on an Itanium machine - but I do not recall which one it was and was not able to find it in my searches.

Since you appear to disagree, please name one of the instructions that was not supported by the first Itanium CPU and point us to some evidence that was the case.
Here’s Intel’s guide to the IA-32 instruction set they supported on Itanium.
https://www.intel.com/content/dam/www/public/us/en/documents…
Warning for anyone who wants to look at it - it is a 600-page document.

Since you appear to disagree, please name one of the instructions that was not supported by the first Itanium CPU and point us to some evidence that was the case.

I have a vague recollection that the software emulator for x86 on later Itaniums may have missed some instructions.

I also have a vague recollection that there might have been an x86 core included in the early Itaniums? something like 1/4 of the chip was devoted to x86 support?

I also have a vague recollection that there might have been an x86 core included in the early Itaniums? something like 1/4 of the chip was devoted to x86 support?

It wasn’t an x86 core. The x86 instructions used many of the same resources: registers, execution units, etc. There was die area just for x86 support; for example, something has to decode the instructions, and IA-32 instruction-decode hardware isn’t something that can be used for other purposes. It wasn’t 1/4 of the chip, though; it was less than 10% from what I recall. If it had been 1/4 of the die, they probably could have had an entire core on there. I wasn’t able to find a die shot of Itanium with the areas labeled today, but I recall seeing one way back when.

foo1bar,

Itanium/IA-64 was fully backward compatible with existing 32-bit Windows software as well.

No, Itanium (IA-64) did NOT implement the full x86-32 instruction set. There was an x86-32 emulator (or “virtual machine”) for it, but emulators are notoriously slow because the processor must run code that translates each x86 instruction into equivalent native operations and then execute those operations. That does NOT count as backward compatibility. The x86-64 instruction set, on the other hand, was a direct extension of the x86-32 instruction set, implemented in hardware.
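
For the curious, here is a minimal sketch of the dispatch loop at the heart of any software emulator, using a made-up two-opcode “ISA” (nothing like real IA-32 decoding): every guest instruction costs a fetch, a decode, and a branch in host code before any real work happens, which is the overhead described above. It also shows what happens when an emulator simply hasn’t implemented an opcode.

```c
#include <stdint.h>
#include <stdio.h>

enum { OP_INC = 0x01, OP_HALT = 0xFF };

int main(void) {
    /* Tiny "guest program": three increments, then halt. */
    uint8_t program[] = { OP_INC, OP_INC, OP_INC, OP_HALT };
    uint32_t acc = 0;
    size_t pc = 0;

    for (;;) {
        uint8_t op = program[pc++];   /* fetch  */
        switch (op) {                 /* decode + dispatch */
        case OP_INC:                  /* the one real unit of work */
            acc += 1;
            break;
        case OP_HALT:
            printf("acc=%u\n", (unsigned)acc);
            return 0;
        default:                      /* unimplemented guest opcode */
            fprintf(stderr, "unsupported opcode 0x%02x\n", op);
            return 1;
        }
    }
}
```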

Norm.