Desktop Market shrinks, AMD share grows

https://www.tomshardware.com/news/amd-share-skyrockets-amid-…

The Mercury Research CPU market share results are in for the first quarter of 2022, and the results are somewhat dire — Dean McCarron from Mercury reports that aside from IoT/SoC, all segments of the x86 processor market declined during the quarter. Desktop PCs suffered the most as units declined by 30%, the largest quarterly drop in history. That’s an incredible reversal after two years of component shortages that kept many PC builders on the sidelines.

Surprisingly, AMD managed to carve out significant wins during the tumultuous quarter and has now, once again, set a new record high x86 market share of 27.7%, an incredible increase of seven percentage points over last year.

Both Intel and AMD suffered from the disturbingly fast decline in the desktop PC market, but AMD didn’t lose sales quite as quickly as Intel, resulting in a share gain for the quarter. Notably, much of the decline in desktop PCs came as vendors burned through excess CPU inventory, which McCarron says impacted Intel more severely than AMD. Even so, Intel still gained some unit share in the desktop PC market compared to a year ago.

AMD continued to take big strides in the mobile/laptop market as it set another record for unit share in that segment with 22.5%. AMD also gained in the server market for the 12th consecutive quarter, reaching 11.6% of the market.

The overall CPU market had a slew of impressive firsts, too, with McCarron saying, "In spite of the downturn, the market saw several records set, including record highs for server processor revenue, IoT/semi-custom units and revenue, and a new record high for combined client (desktop and notebook) CPU average selling prices.

“Lower shipments of low-priced entry-level CPUs and strong ramps of new mobile processors (Alder Lake CPUs for Intel and Barcelo and Rembrandt CPUs for AMD) resulted in much higher mobile CPU prices, which helped set the record client (combined desktop and notebook) average selling prices of $138, which were up more than 10 percent on quarter and more than 30 percent on year.”

The mobile market also receded during the quarter, so both Intel and AMD saw a declining number of units sold. But, again, AMD’s declines were smaller, resulting in another quarter of share growth. This gain also represents an impressive 4.4-percentage-point increase year-on-year.

McCarron says that AMD gained significantly in the commercial notebook market, likely helping to shore up its shipments. This quarter represents another record high for AMD’s notebook share.

Server CPU shipments fell during the quarter, but AMD continued its three-year streak of quarterly share gains. Intel says it is shipping its Sapphire Rapids Xeons to some customers, but we haven’t seen the general release yet. Likewise, we’re also waiting for AMD’s EPYC Genoa to come to market.

Mercury Research provided the following commentary: “For all-inclusive share, which counts not only PC client CPUs and servers but also IoT and semi-custom products used in items like gaming consoles, AMD gained share in the first quarter and set a new record high at 27.7 percent, beating the 25.6 percent record set last quarter. Recall last quarter AMD broke the record it had set more than 15 years ago of 25.3 percent.”

Given that the market had been growing due to Chromebook sales to primary and secondary schools, it is no surprise that 1Q2022 is down across the board. Expect the same next quarter, then the back-to-school season and Zen 4 (Genoa for servers and Raphael for desktops) will push AMD sales higher. Intel will have Raptor Lake for desktops and Sapphire Rapids for servers. Note that the order I listed the code names in is significant. Release dates are all “2H2022,” but Genoa is expected before Raphael, and Raptor Lake before Sapphire Rapids.

Flame-retardant: Teasers at trade shows may not match the above, but I am targeting the “bulk” shipping dates. I also won’t be surprised if both AMD and Intel’s latest and greatest server chips show up in the June Top 500 list. Intel may finally get Aurora listed and AMD should have at least one Exascale Milan system, but there may be some Genoa systems showing up.

Just to be clear…
Sapphire Rapids is shipping in volume, but somebody is buying them all.
Aurora is now being built, but it sounds like it uses Sapphire Rapids with HBM, which is due out in H2.
→ Intel does have a two-cabinet test system up and running in their labs. They claim it would be a top500 system.

Frontier is now installed and going through testing, so likely on the next top500 list.
Alan

Antonio,

The Mercury Research CPU market share results are in for the first quarter of 2022, and the results are somewhat dire — Dean McCarron from Mercury reports that aside from IoT/SoC, all segments of the x86 processor market declined during the quarter. Desktop PCs suffered the most as units declined by 30%, the largest quarterly drop in history. (boldface mine)

The fact that the desktop market is declining now is not really a surprise, at least to me. I shifted to a “desktop replacement notebook” for my home system about fifteen years ago, finding that such systems have plenty of computing power for my needs – and at the time, that included running computationally intensive simulations when a major snowstorm or some other situation forced me to work at home for a day or two. I also found the “desktop replacement notebook” to be considerably less expensive than a desktop machine with a decent monitor and all of the other required peripherals that come built into a notebook – but the company continued to buy desktop computers for use in the office, I think more out of habit than anything else. That, and perhaps there was some thinking that desktop systems don’t “walk out the door” and vanish as readily as notebooks…

But COVID has changed the situation considerably. With COVID forcing companies to adapt to employees working at home and many employees now resisting – or even refusing – a return to working full time in the office, notebook systems have become the practical solution for most businesses. With a hybrid work model (where full time employees come into the office one or two days per week), each employee can bring his or her system back and forth when going to and from the office, using the same system in both places.

Norm.

The fact that the desktop market is declining now is not really a surprise, at least to me.

Why am I upgrading my desktop system with a new R9 5900X and a 6800XT GPU? Gaming. The 5900X will allow me to run simulations and still play games, while the upgrade to the 6800XT is purely to get some RDNA2 features to play with.* I can easily afford it, but I did wait until I found a card I liked for $850.

Desktop computers have become only a competitor to the consoles for those who like gaming at 4K with lots of eye candy. (Guilty as charged. :wink:) Gaming laptops are actually better than consoles if the occasional drop below 55 fps is acceptable. (One of those things I want to test RDNA2 for. Hardware sites use either an nVidia GPU or an Intel CPU to do benchmarking. AMD with AMD allows the graphics driver to drop things coming from the SSD into the graphics RAM. It is possible to do this with AMD CPUs and nVidia GPUs, and it is automatic with AMD APUs in laptops.)

  • Normally I would say “Don’t try this at home!” but the few remaining voices here are mostly others who know how to debug GPU code without bricking their GPUs.

Why am I upgrading my desktop system with a new R9 5900X and a 6800XT GPU? Gaming.

That’s the last and only use case I can think of for a desktop that most of humanity will ever have.

Bob,

Why am I upgrading my desktop system with a new R9 5900X and a 6800XT GPU? Gaming. The 5900X will allow me to run simulations and still play games, while the upgrade to the 6800XT is purely to get some RDNA2 features to play with.

I have no doubt that you are in this category of user, but I suspect that this category of user represents a percentage of the user base that’s so small as to be in the noise in determining the market share of various types of systems.

Norm.

I have no doubt that you are in this category of user, but I suspect that this category of user represents a percentage of the user base that’s so small as to be in the noise in determining the market share of various types of systems.

Gee, I thought that it was my compiler construction, in particular, generating library code for things like matrix multiplication that put me in the vanishingly small use case community. There seems to be a large community of gamers who can afford high-end systems. (I basically hit the buy button because playing a necromancer at 4k in Diablo II Remastered was jerky compared to playing a character without all those skeletons. :wink:) And yes, that’s with eight threads running a VM.

Bob,

Gee, I thought that it was my compiler construction, in particular, generating library code for things like matrix multiplication that put me in the vanishingly small use case community. There seems to be a large community of gamers who can afford high-end systems. (I basically hit the buy button because playing a necromancer at 4k in Diablo II Remastered was jerky compared to playing a character without all those skeletons. :wink:) And yes, that’s with eight threads running a VM.

Yes, there are a significant number of gamers who want, and can afford, “bleeding edge” systems, but they probably represent about 1% to 2% of the traditional desktop market. There are a lot more office workers and other commercial users, and a lot more typical home users, for whom the cheapest desktop or “desktop replacement notebook” that’s still on the market is quite adequate.

Norm.

I was sort of pulling your leg–but not… When my brother and I were entering high school, my father basically bought a (letterpress) print shop from a widow and set it up in our basement as a source of spending money. He did it in part as a favor to the widow–she wanted to move out but couldn’t leave a twenty-ton printing press, compositor block, and 140 drawers of type in place. It turned out to be a boom-and-bust business for us: Christmas cards (including signatures) in November, and lots of graduation-related announcements in April. The print shop is still in use. My brother and I gave it to his college when they were looking for winter (four-week) semester stuff.

But 500 years after Gutenberg, movable type is literally being taught as history–and it still should be. It helps to know firsthand how much work printing a book was prior to lithography. (Well, it wasn’t really prior to lithography, just prior to using lithography in printing.)

Why bring this up? When I got started in the computer language and compiler industry, FORTRAN was new and COBOL was not yet real. I stayed mostly with Algol-family languages, including Ada, PL/I, and Pascal.* When I retired it was not quite possible to generate a compiler for a new language from the documentation–but on the other end, generating a new back-end (the part that generates machine code) is almost completely automated. Since front-ends interface with programmers, and programmers tend to stick to one language for their career, new front-ends really aren’t needed. So software engineers who know the details of using a hardware architecture are as rare as aardvarks. (Careful here: not how to implement a hardware architecture, but how to use it. Most code today is 100x slower than it needs to be.) The HPC stuff isn’t that way because a few of us work on BLAS (Basic Linear Algebra Subprograms). All that matrix arithmetic stuff calls trig libraries and BLAS. BLAS partitions the data between cores and machines.
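
As a rough sketch of that partitioning idea (Python with NumPy; the matrix size and block width are made up for the illustration, not taken from any real BLAS): the triple loop is what most application code looks like, the blocked version splits the multiply into cache-sized tiles the way a BLAS does before handing them to cores, and NumPy’s @ operator dispatches to whatever optimized BLAS is installed.

```python
import time
import numpy as np

def naive_matmul(A, B):
    """Textbook triple loop: poor cache behavior, no vectorization, no threads."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i, k] * B[k, j]
            C[i, j] = s
    return C

def blocked_matmul(A, B, block=64):
    """Partition the multiply into cache-sized tiles -- the same idea a BLAS
    uses when it splits the data between cores (and, in ScaLAPACK, machines)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, block):
        for k in range(0, n, block):
            for j in range(0, n, block):
                C[i:i+block, j:j+block] += A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
    return C

n = 128  # kept small so the pure-Python loop finishes in a reasonable time
A, B = np.random.rand(n, n), np.random.rand(n, n)
for name, fn in [("naive", naive_matmul), ("blocked", blocked_matmul), ("BLAS", np.matmul)]:
    t0 = time.perf_counter()
    C = fn(A, B)
    print(f"{name:8s} {time.perf_counter() - t0:8.4f} s")
    assert np.allclose(C, A @ B)
```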

There is another group going extinct. Relational databases took decades to work out. Today no one dares touch the human side of the interface, and on the computer side, computing the fastest way to execute a database command can be done in a few milliseconds. So why not put that optimizer in the SQL front-end so users don’t have to learn how to optimize a query. :wink:
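
A small, hedged illustration of that point (SQLite via Python’s sqlite3 module; the table and query are invented for the example): the optimizer already sits behind the SQL front-end and quietly changes plans when an index appears, without the user rewriting anything.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
con.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                [("alice", 10.0), ("bob", 25.0), ("alice", 7.5)])

query = "SELECT * FROM orders WHERE customer = 'alice'"

# With no index, the optimizer's only option is a full table scan.
print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# Add an index and the optimizer switches plans on its own.
con.execute("CREATE INDEX idx_customer ON orders(customer)")
print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```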

I also worked on object-oriented databases, and on real-time software. (RTS: It doesn’t have to be fast, but late answers are wrong. So proving there will be no late answers is the hard part.) In both cases, there are situations where you need an OODB or proof of a real-time system. But almost all software today just counts on Oracle or whoever to be fast enough–if they aren’t? Buy faster hardware. If you crashed a ship into the side of the Suez Canal? It’s just money. :frowning:
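
For the “proving there will be no late answers” part, the classic first check is the Liu–Layland utilization bound for rate-monotonic scheduling. A minimal sketch with made-up task numbers; it is only a sufficient test, so failing it just means you need a full response-time analysis:

```python
# Hypothetical periodic task set: (worst-case execution time, period), in ms.
tasks = [(1.0, 10.0), (2.0, 25.0), (5.0, 50.0)]

n = len(tasks)
utilization = sum(c / t for c, t in tasks)
bound = n * (2 ** (1 / n) - 1)  # Liu & Layland (1973) bound for rate-monotonic scheduling

print(f"U = {utilization:.3f}, bound = {bound:.3f}")
if utilization <= bound:
    print("Schedulable: under rate-monotonic priorities, no deadline is ever missed.")
else:
    print("Inconclusive: the bound is only sufficient; do an exact response-time analysis.")
```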

*APL was a lurch to the side, but I needed it to do the research for my Ph.D. Then I needed a language that I could use to describe what I was doing to my committee. I chose Green from the DoD-1 languages, but I had a few comments (about 50 pages) on the language. Jean Ichbiah liked my approach, so I wound up working for Honeywell. Maybe I’ll finish the thesis this year… (He liked it except for one criticism. When he first met me, his first words were, “Ten insults, ten! In one paragraph…” I said I didn’t know what he was talking about. He gave me section and paragraph, and I said, “Oh, I thought there were only nine, but they weren’t aimed at you.”) I had pointed out an error in the Softech compiler design during a presentation, and someone on the NYU GNAT team thought they would demonstrate that their compiler would handle it easily. They ran it during lunch–my example had been written on a whiteboard at a conference in a few lines. It crashed the compiler and crashed the VAX; NYU had a good maintenance contract with DEC, so the machine was running again late that afternoon. My code generated a write to a junk address. The quick and dirty coding left zero in the address. Oops, overwriting the boot sector is not a good idea.

Anyway, even though the Ada team quickly fixed the bug in the language, both Softech and NYU insisted on additional “safeguards” in the language. One of those safeguards is still there. (It is a rule that the name of a task entry is invisible until the end of the definition. Of course, I was pointing out in the paragraph Jean lambasted that renaming could get around the rule, and in fact, most task entries would be renamed as procedures anyway.) In other words, lots of extra work to fix an already fixed bug in a way that didn’t fix it anyway.

Jean said he would fight the DRs (Distinguished Reviewers) about it, if I could come up with an example that wasn’t contrived. Otherwise, he had too many fights to fight with them. :wink:

Bob,

(Careful here: not how to implement a hardware architecture, but how to use it. Most code today is 100x slower than it needs to be.)

Not to mention 100x more bloated than it needs to be. Compilers for “object-oriented” languages often generate separate executable code for every class that inherits a function from a base class. Years ago, I came to the conclusion that the acronym for “Object-Oriented Programming” – OOP – was fitting.

The quick and dirty coding left zero in the address. Oops, overwriting the boot sector is not a good idea.

Ya think???

Yes, definitely NAGI!

In other words, lots of extra work to fix an already fixed bug in a way that didn’t fix it anyway.

Which happens way too often when development gets mired in the politics of “work around it so we don’t cross somebody who screwed up”…

Norm.

Gee, I thought that it was my compiler construction, in particular, generating library code for things like matrix multiplication that put me in the vanishingly small use case community. There seems to be a large community of gamers who can afford high-end systems. (I basically hit the buy button because playing a necromancer at 4k in Diablo II Remastered was jerky compared to playing a character without all those skeletons. :wink:) And yes, that’s with eight threads running a VM.

I think even most gamers aren’t also trying to run some heavy simulation workload in the background on the same machine they use for gaming. Running multiple heavy workloads simultaneously is enough to make you a super-outlier.

4k in Diablo II Remastered

A few quick questions:

  • Does the jerkiness go away when you reduce the resolution?
  • Does AMD FSR work?
  • What about Google Stadia?

A few quick questions:
* Does the jerkiness go away when you reduce the resolution?

Are you asking about the old card (Vega 64) and R7 3800X, or the new card (Radeon 6800XT) and R9 5900X?

I’ll answer for both. First, it is a bit of work to get the game to “Full Screen” and 2560x1440. My display is happy doing the upscaling. If I choose “Windowed” Diablo II Remastered will allow me to choose 3840x2160, but the Apply button drops it down to 2560x1406.

With my old GPU and CPU (Vega 64 and R7 3800X), as a Necromancer some (crowd) fights were basically: let the skeletons, mercenary, etc. do the work. Against bosses and semi-bosses (creatures you have to kill for the first five quests in a round) that strategy didn’t work. So I either went down to Merc and Golem or, in some cases, three Skeletal Mages. (If there is a low wall or river you can “trap” the Mages on one side of, then all you need to do is stay alive. :wink:)

With the new stuff, I can play with lots of eye candy, since the game runs at 2560x1440. I haven’t noticed any slowdown. I should try

* Does AMD FSR work?

No idea. I could use it with a 1080P monitor, but I don’t have one around, and neither does my son. (I don’t know what it says about my situation, but even my laptop is 4K.)

* What about Google Stadia?

I guess I should try it out on some game I am thinking about downloading. But I’ve just had this setup since Saturday. Just opened Steam for the first time, so I can play some 4k 60 fps games, or set some higher resolution and use RSR (FSR for those with Navi 2 cards :wink:).

* Does AMD FSR work?

No idea. I could use it with a 1080P monitor, but I don’t have one around, and neither does my son. (I don’t know what it says about my situation, but even my laptop is 4K.)

I thought the whole idea of FSR (just like NVIDIA DLSS and INTC XESS) is to be able to run the game engine at faster FPS using 1080p mode, while getting 4k quality video output.
Am I missing something?
Alan

I thought the whole idea of FSR (just like NVIDIA DLSS and INTC XESS) is to be able to run the game engine at faster FPS using 1080p mode, while getting 4k quality video output.

Short summary: video protocols play tricks to compress the signal. As far as I know, the chroma data resolution displayed at 4k60p is the same as at 2k, but the luminance data is at a higher resolution. Magnify it enough and you can see what is going on.

Just that my monitor accepts 2560x1440p and does the rescaling itself. Running at 1080p and rescaling to 4k (3840x2160) throws away too much detail, and when the game is upscaled from 1920x1080 to 2560x1440 and then rescaled again by the monitor, it is the same issue as double rounding in floating-point arithmetic.
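
The double-rounding analogy fits in a few lines (the value is picked purely to show the effect, and decimal arithmetic is used so the result doesn’t depend on binary float representation):

```python
from decimal import Decimal, ROUND_HALF_UP

x = Decimal("0.746")
direct = x.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)      # 0.7
step1 = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)      # 0.75
double = step1.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)  # 0.8
print(direct, double)  # rounding once and rounding in two steps disagree
```

Upscaling 1080p to 1440p on the GPU and then letting the monitor rescale to 4k stacks two lossy resamples in the same way.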

I’ll check on the issue, but as I remember it, 4k displays do not run the pixels at full 32-bit color. They do dithering, which is in the data from the video card. Ah, found it: https://www.rtings.com/tv/learn/chroma-subsampling (probably more than anyone outside the video production field wants to know). Basically, 4-4-4 is full luminance and full chroma, 4-2-2 is full luminance and half-resolution chroma, and 4-2-0 is full luminance and 1/4 chroma. In other words, the chroma data is presented at half the luminance resolution in both height and width. The big issue here for video games, etc., is that some protocols cannot support 4-4-4. I think it is H.264 that supports only 4-2-2 or 4-2-0 at 60 fps; H.265 does support full 4k, but some video cable protocols don’t. Switch to 30 fps or a significantly lower resolution, and 4-4-4 is fine. You may have run into this with cabling. My monitor only supports 4k at 30 Hz over HDMI, but 60 Hz over DisplayPort. I think it is HDMI 2.0 and DisplayPort 1.4; your monitor will probably have different versions of these standards…

Obviously, the amount of data you can squeeze into a cable’s bandwidth is limited. So there are tradeoffs going on here that should go away when we shift to fiber-optic PC-to-monitor connections.
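
A back-of-the-envelope sketch of that tradeoff (uncompressed payload only, ignoring blanking intervals, protocol overhead, and what each spec version formally allows; the link data rates are the commonly quoted nominal figures):

```python
def payload_gbps(width, height, fps, bits=8, chroma="4:4:4"):
    """Raw video payload in Gbit/s, ignoring blanking and protocol overhead."""
    samples_per_pixel = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}[chroma]
    return width * height * fps * bits * samples_per_pixel / 1e9

# Approximate usable data rates after encoding overhead (nominal figures).
links = {"HDMI 1.4": 8.16, "HDMI 2.0": 14.4, "DisplayPort 1.4": 25.92}

modes = [(3840, 2160, 60, "4:4:4"), (3840, 2160, 60, "4:2:0"),
         (3840, 2160, 30, "4:4:4"), (2560, 1440, 60, "4:4:4")]

for w, h, fps, chroma in modes:
    need = payload_gbps(w, h, fps, chroma=chroma)
    fits = [name for name, cap in links.items() if need <= cap]
    print(f"{w}x{h}@{fps} {chroma}: {need:5.2f} Gb/s  fits: {', '.join(fits)}")
```

Which is consistent with a monitor that does 4k only at 30 Hz on its HDMI input but 4k60 over DisplayPort.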

Anyway, I think that my monitor and video card shift to 4-4-4 encoding for 2560x1440@60 Hz. I could be wrong, but I’d need a way to magnify the pixels. Hmmm. I can barely see the color dithering in a picture taken with my phone. YMMV. Turning your camera display at an angle may help.

I think we are talking past each other.
These technologies do a lot of offline preprocessing in order to speed up in-game rendering.
Perhaps the NVIDIA write-up is the clearest among the various vendors’:
https://www.nvidia.com/en-us/geforce/technologies/dlss/

DLSS uses the power of NVIDIA’s supercomputers to train and improve its AI model. The updated models are delivered to your GeForce RTX PC through Game Ready Drivers. Tensor Cores then use their teraflops of dedicated AI horsepower to run the DLSS AI network in real-time. This means you get the power of the DLSS supercomputer network to help you boost performance and resolution.

I think this is similar to AMD FSR. I am more sure it is the same as Intel XeSS.
Alan

I think we are talking past each other.

Yes, we are. I am saying that with my upgrade, I see no reason to do junk testing. I could choose 8x MSAA, but any of the 2x antialiasing settings do a reasonable job. I haven’t found anything where even 4x MSAA causes slowdowns below 60 fps. In effect with this monitor, choices are 60 fps locked, and slower choices with tearing. If I had a monitor that supported 144 fps at 4k, I might feel differently.

I think this is similar to AMD FSR. I am more sure it is the same as Intel XeSS.

Maybe there is a way to test Radeon supersampling with Diablo II Remastered, but I have a feeling that there are not that many games where it is needed on my system, if any.

I am saying that with my upgrade, I see no reason to do junk testing.

I was asking as not everyone has $1,500+ to spend for more FPS. One of the selling points of AMD FSR was improved gaming FPS even on older hardware. So I was curious if you had tried it.

That said, I understand if you are too busy to test. I’ve found that as I get older, I am more willing to trade money for more time.

That said, I understand if you are too busy to test. I’ve found that as I get older, I am more willing to trade money for more time.

No problem, and not my meaning. There may be games where RSR might make sense. (The version on offer to me is Radeon Super Resolution.) I don’t know if there are any significant differences. What I was trying to communicate was that 1440P runs at 4-4-4 and 4k at 4-2-2, so at least the chroma part of the images would be calculated the same. If I tried upscaling 1440P to 4k, I would be making more work for the GPU to get effectively the same resolution. That’s what I meant by “junk” testing. Similarly, right now frame rates over 144 are meaningless. There is no point in running uncapped so you can get frame rates the monitor can’t display. It does make sense for websites to run that way to get relative comparisons between GPUs. But it would be nice if they also printed a list of which games could run locked at the display’s maximum frame rate at different resolutions.