AMD marketing learned their lesson. There was controversy last generation with boost clocks being advertised as _up to_ figures that many chips couldn't reach at launch. This time, with an advertised boost of 4800MHz, AnandTech got up to 4950MHz and Gamers Nexus hit 5050MHz, though these are probably golden review samples.
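If you want to sanity-check boost behaviour on your own chip, a quick-and-dirty sketch like this works on Linux, assuming the cpufreq sysfs files are present (values are reported in kHz). Reviewers use much better tooling, but the idea is the same:

    #include <cstdio>
    #include <fstream>
    #include <string>

    int main() {
        // Adjust the core count for your CPU; cores whose sysfs file is
        // missing are simply skipped.
        for (int cpu = 0; cpu < 16; ++cpu) {
            std::string path = "/sys/devices/system/cpu/cpu" + std::to_string(cpu) +
                               "/cpufreq/scaling_cur_freq";
            std::ifstream f(path);
            long khz = 0;
            if (f >> khz)
                std::printf("cpu%-2d: %.0f MHz\n", cpu, khz / 1000.0);
        }
        return 0;
    }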
It's really impressive to me how big of an improvement AMD has made under the exact same constraints as last generation. Same process (even down to the same PDK) though generally higher yields probably let them choose higher bins. Same exact chiplet size to mount to the same substrate. Same power availability. Same IO die with same interfaces and memory controller. Yet they achieve +19% IPC and higher frequency with just design changes. I wish there were more detailed information available about what day-to-day engineering work goes into these design changes. Speculating:
- Designing better workload and electrical simulations to make better decisions when evaluating changes.
- Running lots of simulations to determine the optimal size and arrangement of each cache.
- Improving CPU binning processes and data, improving on-die sensors, and improving boost algorithms to reliably run closer to the edge.
- Tweaking parameters of automated layout algorithms to reduce area of specific functional units.
- Improving the algorithms implemented in logic for various processes.
Apart from the performance increase, one of the impressive feats is the TDP of around 142 W, whereas Intel chips use from 200 W up to 260 W.
Source:
https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...
And the Linux performance is looking great too:
https://www.phoronix.com/scan.php?page=article&item=ryzen-59...
Is it valid to compare TDP numbers between vendors?
Not really. As the AnandTech article states, TDP is not the same as peak power for modern Intel and AMD CPUs.
The difference between TDP and Peak Power for AMD is around 35W, while for Intel it is up to 140W.
When choosing a PSU that difference can have a real impact on system stability.
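A back-of-the-envelope sketch of why that matters for PSU sizing; the wattages below are illustrative placeholders loosely based on review numbers, not official specs:

    #include <cstdio>

    int main() {
        // Illustrative label-TDP vs. observed-peak figures (assumed, not specs).
        const int amd_tdp = 105,   amd_peak = 142;
        const int intel_tdp = 125, intel_peak = 250;
        const int gpu_peak = 320,  rest = 75;   // GPU + drives/fans/board (assumed)
        const double headroom = 1.2;            // 20% margin

        std::printf("AMD build needs   >= %.0f W\n", (amd_peak + gpu_peak + rest) * headroom);
        std::printf("Intel build needs >= %.0f W\n", (intel_peak + gpu_peak + rest) * headroom);
        std::printf("TDP understates peak by ~%d W (AMD) vs ~%d W (Intel)\n",
                    amd_peak - amd_tdp, intel_peak - intel_tdp);
        return 0;
    }

The takeaway: size against peak draw plus headroom, not the TDP printed on the box.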
No, but the linked data is actual power usage, not TDP.
Yeah, wow, and here I just bought a new setup with multiple home servers all on the 3600, thinking the improvement wouldn't be that significant and the prices would be much higher initially anyway.
The 5600X looks fantastic at a 65 W TDP. Add the unavoidable price drop of Zen 2(+) on top of that. Buyer's remorse is real.
_shrug_
I bought a Ryzen 7 2700X a few months before the release of the Zen 2 / 3700X. And yeah, it's faster for nearly the same price (though due to timing I paid ~$230 so it's not insignificant when compared to the launch day pricing.)
There's "always" more, more, more performance for the dollar in tech, and there is no perfect price/performance product. The Ryzen 5 3600 was and still is a great price/performance product and that doesn't change because a more expensive, higher performing chip is released.
You can make decisions at a point in time, or, if time is irrelevant, you can keep waiting for new milestones... and then make a decision at that point in time instead.
For me, I waited nearly 10 years to replace a 3+ GHz quad-core CPU, so the 4+ GHz octa-core was a nice upgrade (albeit still not earth-shattering) and I am very glad to have it. I love the recent increase in performance growth over time, but it will likely be quite a few years before I upgrade my desktop again. Of course, I'm just one guy, and I'm glad many enthusiasts and computer-using professionals will benefit from all this advancement!
This has always been the case, too. In school I once read somewhere that if you knew you had a computation to do that was 100 MFLOPS (or whatever seemed like a large number at the time), and you had to complete it sometime in the next 18 months, a solid strategy is to wait 9 months and buy more-modern hardware.
You'd end up with more 'residual' computing power (when your computation is done, you have the hardware lying around, and now it's more powerful than if you had bought on day 0), and it would probably cost less in energy as well.
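A toy version of that back-of-the-envelope, with made-up numbers (a job needing 12 months of compute on day-0 hardware, an 18-month deadline, and hardware speed assumed to double every 18 months):

    #include <cmath>
    #include <cstdio>

    int main() {
        // All numbers are assumptions for illustration only.
        const double work = 12.0, deadline = 18.0, doubling = 18.0;
        for (double wait = 0.0; wait <= deadline; wait += 3.0) {
            double speedup = std::pow(2.0, wait / doubling);   // hardware bought after waiting
            double finish  = wait + work / speedup;            // wait, then run the job faster
            std::printf("wait %4.1f mo -> finish %5.1f mo %s (%.2fx hardware left over)\n",
                        wait, finish,
                        finish <= deadline ? "(makes deadline)" : "(too late)",
                        speedup);
        }
        return 0;
    }

With those assumptions you can wait roughly 9 months, still make the deadline, and end up owning ~1.4x faster hardware afterwards.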
we probably used the same book haha, I also remember questions like this from my architecture class. I suspect the book was written at a time when inter-generational scaling was a bit more dramatic.
Same here, but my son was too excited about the 5950X and wanted it as a birthday/Xmas gift, so there you go. I'm geeking out for the first time since my own upgrades from Apple ][ -> PC 286 -> PC 386SX -> Cyrix 6x86. I wanted to skip Threadripper, as he normally works in a single app - whether it's a game (Fortnite, COD, Roblox, etc.) or video editing (Premiere, After Effects, others). The only issue is that we have an older B450 motherboard, but we were notified that a BIOS update would be coming!
> I wanted to skip Threadripper, as he normally works in a single app
I thought those types of applications would actually be able to take good advantage of all those cores.
We've hit a performance problem with multi-socket setups (NUMA) - basically one socket's cores accessing memory attached to the other socket, hence slowdowns. And it's amplified by the use of atomics (shared_ptr, etc.). What we did was simply disable one of the CPUs (it was a two-socket machine).
I guess for servers executing many (mostly) single-threaded jobs (a.k.a. "borg", "kubernetes", "docker") it's okay, but when you have an app using all threads (TBB-style) it becomes bottlenecked.
Sysinternals'
https://docs.microsoft.com/en-us/sysinternals/downloads/core...
actually captures this detail (there are probably better apps, but it was nice for inspecting machines quickly). The tool measures the relative slowdown.
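To make the atomics part concrete, here's a minimal sketch (arbitrary thread and iteration counts): many threads copying one globally shared shared_ptr all hammer the same atomic refcount, so its cache line bounces between cores - and between sockets on a NUMA box - while thread-private copies don't:

    #include <chrono>
    #include <cstdio>
    #include <memory>
    #include <thread>
    #include <vector>

    int main() {
        auto global = std::make_shared<int>(42);
        const int n_threads = 16;
        const long iters = 2'000'000;

        auto bench = [&](bool contended) {
            auto t0 = std::chrono::steady_clock::now();
            std::vector<std::thread> pool;
            for (int t = 0; t < n_threads; ++t) {
                pool.emplace_back([&, t] {
                    // Contended: all threads copy the same control block.
                    // Uncontended: each thread copies its own private pointer.
                    auto local = contended ? global : std::make_shared<int>(t);
                    for (long i = 0; i < iters; ++i) {
                        auto copy = local;            // atomic ref++ then ref--
                        (void)copy;
                    }
                });
            }
            for (auto& th : pool) th.join();
            double ms = std::chrono::duration<double, std::milli>(
                            std::chrono::steady_clock::now() - t0).count();
            std::printf("%-18s %.0f ms\n",
                        contended ? "shared refcount:" : "private refcount:", ms);
        };

        bench(true);
        bench(false);
        return 0;
    }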
16 cores ought to be plenty for the next few years.
in my experience the ideal time to buy is usually two or three months after the most recent launch. at this point any major kinks will be worked out in a new stepping, and you don't have to fight over the last few units in stock. plus you get to see if the competition responds with price cuts and/or new skus. of course at this point, the next launch is six months away, give or take. I find that having a deliberate strategy like this reduces buyer's remorse a lot.
The 3600 is an incredible chip that will be relevant for a long time. Buyer's remorse is real because of marketing. Do you really _need_ that extra 20% performance?
The 5600X is also as expensive as the 3700X is now. The 3600 is cheaper, a lot. It's still a good option and a good deal.
This is why I almost always wait for second gen of a new kind of hardware. Eg, my current CPU is a Sandy Bridge. When the i series came out I wanted one badly, but I waited and on the first week of launch bought second gen.
The same with the upcoming Macbooks set to be demoed on the 10th. If I want one, I'll wait for at least second gen.
Regarding AMD CPUs this is an unusual situation, because 3rd or 4th gen may be the optimal buy, which is highly uncommon. This is because the next gen will run on faster RAM. The bottleneck on the Zen 2 processors was cache delays; the bottleneck on the Zen 3 processors is RAM delay. This shows me there are more improvements to come. However, unfortunately, this also means cost will go up quite a bit in newer generations, with smaller gains making it not as worth it. I don't game much and my data science processing is done in the cloud, so I have virtually no reason to upgrade my nearly 10-year-old CPU, as odd as that may sound.
It looks like AMD actually has two teams working on Zen. So Zen1 was from the first team, Zen2 was a refinement of Zen1, and Zen3 is the first gen from the second team. So if you follow your rule, you might want to wait for Zen4, although it isn't clear to what degree the teams work together (if team 2 works pretty closely with team 1, then this could be thought of as a big refinement with an 'outside' point of view looking at it).
Zen was refined into Zen+, then later Zen2 was completely different.
Zen and Zen+ did NOT have an I/O die yet. Zen2 added the I/O die, and now Zen3 improves the compute die (without changing the I/O die).
It's arbitrary. If you count the Core i series from Nehalem, Westmere is 2nd, so Sandy Bridge is 3rd. If you count the *Bridge line, Sandy is 1st. Additionally, Zen+ is a 2nd gen, and it's the least-improved generation in this timeframe.
My general preference: buy the Tock generation (but Intel recently has had no Ticks for desktop, and AMD did Tick+Tock with Zen 2).
I know, right? That was the silver lining of the Intel stagnation era. You could buy a CPU and not feel like you're missing out for the next 3-4 generations.
There's buyer's remorse and there are generational shifts. We're going through a huge transformative moment right now. With the addition of Zen 3 and the new GPUs this is a huge bang/buck improvement across the board. We haven't seen such a shift in many years.
Most of the time I would advise against waiting but right now there's a clear advantage to being on the other side of this shift.
I had a twinge of that. But for the same price I paid for my i7 3770K, I've gotten a 3600 and double the memory, with even faster clocks. Oh, and more cores, higher clock speeds, cooler running, and less power hungry.
Just waiting on the radeon reviews before I decide on a new GPU to get into 1440p gaming.
I still use my 2012 desktop with an i7 3770K. Except for NVMe upgrades and an RTX 2060 Super (originally a 670, then a 980), it's still the original build.
I will be going for 3900X for my next machine. Probably waiting for the RTX 3080 Ti I now see being leaked, or a regular 3080 (not interested in the 3090).
Also still running my 3770k desktop. Also the original build with the exception of replacing my existing GTX680 with a 980 a year or two in.
Amusingly, my thoughts of upgrading were initially triggered by wanting new storage. I still have the 250GB SATA SSD I put in originally, along with a handful of hard drives from various older systems. The motherboard doesn't have an M.2 slot and I was thinking, hmmm... I bet there are a lot of things I can't upgrade at this point.
Waiting on these new Zen 3 CPUs to be available, but not in a huge rush since I also can't get my intended 3080, so we will see which comes into stock first. It will be a nice bump to a new gen CPU, DDR4, and a few gens of GPU once I can actually buy the dang parts. If they were around I would've built the thing a month ago.
I'm even willing to pay the premium for a prebuilt PC this time but it will be a while before I can find that Ryzen 5900X / RTX 3080 (Ti?) combo anywhere.
I have a 3770K at home too. How did you upgrade to an NVMe drive? There's no M.2 port on my motherboard and I can't boot the OS from the NVMe / PCI Express drive I bought.
My OS drive is still the original 100GB SSD from 2012. I replaced my 2TB HDD with a 2TB Samsung 970 Evo Plus in a M.2 PCIE 3.0 x4 riser card.
Unfortunately, when I switched from the GTX 980 to the RTX 2060 Super, all PCIe 3.0 lanes were being used (ASRock Z77 Extreme6), so I had to put the riser card into a PCIe 2.0 slot, which means I am not getting the full performance.
https://www.win-raid.com/t871f50-Guide-How-to-get-full-NVMe-...
Thanks, I saw this forum a while ago but there's no way I'm running a hacked bios on a machine where I value my data!
Its "hacked" by adding NVME bootrom module. You can also run Clover EFI from HDD/pendrive without modding bios
https://www.win-raid.com/t2375f50-Guide-NVMe-boot-without-mo...
I went from a 1600 to a 3700X and it was quite a jump, the 5800x looks great but I think I'm staying with the 3700X this gen especially since I would need a new mobo (still on a B350 that I bought with the 1600!)
> wouldn't be that significant and the prices would be much higher initially anyway
I mean, the 5600X is about double the price of the 3600, depending on when you got the latter.
My problem is that I'm building a new rig and trying really hard to not go all out and get a 5900X instead of a 3700X.
I keep my desktop for a long time, so it seems to be "okay" to do. For reference, upgrading from launch year 2500K. But still..
Anecdotally I upgraded from an (overclocked) 2500K to a mere 3600 and am perfectly happy with the performance I'm getting, even running virtual machines or compiling large code bases.
Would a 3700X have been even nicer? Probably, but when I bought my new CPU last year it was almost twice the price for two extra cores. So I do feel your temptation, but I don't think you would regret the 3700X.
I have a different problem. I can't find a rational justification for getting one. My work is not significantly impacted by CPU speed and 32 gigabytes of RAM still feel sufficient.
Oh well...
> It's really impressive to me how big of an improvement AMD has made under the exact same constraints as last generation.
It's 8-core chiplets now, but yeah same die size according to marketing.
It was 8-core chiplets last generation, too. It's now one 8-core CCX instead of two 4-core CCXs, but the same total number of cores per chiplet: last gen it was 2 CCXs per chiplet, and this gen it's 1 CCX per chiplet.
This image shows the change more clearly:
https://images.anandtech.com/doci/16214/Zen3_arch_19.jpg
You can see it's the same overall amount of "stuff" in the chiplet. It went from 8c / 32MB L3 per chiplet to... 8c / 32MB L3 per chiplet. But it's now not divided into 2 smaller chunks as it was.
Yield must be very good for AMD to not put out a cheap 5800 with two defective 4c CCXs
Defective 4c will go into the 3300x replacement (or an eventual 3100 replacement if things get really bad)
I think they put two 6c into 5900
It seems like evidence of incredible discipline, forethought and prioritization to achieve this. They must have really been thinking about how to build a solid foundation with a clear path for iteration. To achieve it without discovering some bottleneck they hadn't accounted for along the way is really impressive.
I still find the 4000 series for laptops impressive. I just ordered a 4800HS system with 8C/16T in 14” screen format. That’s astounding to me.
A Zephyrus G14?
Make sure to read the subreddit, if it happens to be that one. The CPU is fine, but dGPU integration is, uh, interesting.
Good guess and thanks for the tip. All I could turn up from searching Reddit is that I’d need to disable the AMD iGPU to use the dGPU for games. This doesn’t seem a problem for me since I will use the dGPU for CUDA work only, and I’d expect that should work ok or I’ll return it.
You absolutely don't need to do that. It'll automatically enable the dGPU when needed.
The biggest issue I've had is with making sure it turns _off_, and /r/zephyrusg14 is somewhat helpful there. On Linux it's more straightforward, but currently needs a few kernel patches.
Only issue I've seen with mine is a glitch that prevented the dGPU from going 100 percent to sleep when not playing games, which limited battery life to only 4 hours. Otherwise it works like a dream. Impressive CPU.
Sorry about this everyone, we're trying our hardest to boot our infrastructure back into action. I'm off to get a hammer. Updates via our twitter @anandtech
You should or should not start using AMD Zen 3 processors in your infrastructure next time to avoid this problem. I don't know.
A somewhat unrelated request, but please hear me out. Would it be possible to add X-Plane 11 to the CPU and GPU benchmarks? It is easily scriptable and seems to be underrepresented in benchmarking space, but maybe there's a good reason for that. Regardless, I'm trying to convince my dad he should be building an AMD/Nvidia flight-sim machine instead of going with Intel, but there really isn't any good data to back up my claims.
Hah, I’d love to see that, although I’m also a developer on X-Plane and therefore biased. If anyone at Anandtech wants some help with setting that up, they can reach me at [username] at x-plane.com.
X-Plane is indeed super scriptable, which is used by various IHV’s to auto test new drivers and hardware. We also use it extensively internally in our CI testing suite.
Edit: To the OP, I can provide performance benchmarks from an i7-6700K vs a Ryzen 3900X for your dad. It's unfortunately not super apples to apples. What I can say is that the massive L3 cache is doing wonders for X-Plane, and the greater core count is definitely going to be more relevant in the future as well.
How's office morale there after the launch of msfs 2020? Or does rising interest in flightsims have positive effects on xplane too?
It probably depends what the user wants - a game or a simulator.
FS2020 is pretty - lots of eye candy, but the flight model simulation isn't as realistic as X-Plane, which historically has put correctness above eye candy. FS2020 is also very buggy still (including crash to desktop after restarting a flight, etc), although getting better.
That, and there are not many plugins, add-ons and aircraft models available yet for FS2020, although it's growing. The default built-in aircraft have many missing features, buttons, autopilot modes, etc - which prevent folks from using these as "study level" simulations.
As of right now, serious simmers still use X-Plane/Prepar3d, but maybe that changes as updates roll out for FS2020.
Honestly quite good. Like Alupis mentioned, X-Plane might have a slightly different target audience than FS 2020. That being said, X-Plane 11 is also almost 4 years old now and I'd hope a sim released in 2020 beats it graphically.
We recently completed our move away from OpenGL, which served as the base graphics library for a good 20 or so years. It took forever, partially due to how deeply ingrained GL-isms were in the code base. But we are now all fancy and modern with Vulkan and Metal, which allows us to do some really cool and long overdue things with the sim that were previously almost, if not completely, impossible. So that's quite exciting for me, not to mention that it got quite a bit faster in the process. I can't speak for the other parts of the sim since I almost exclusively work on the graphics and game engine bits of X-Plane, but I think most everyone in the company is pretty excited for the future.
Also, and this is purely from a flight sim user perspective, competition is always good. Just like it's good if AMD and Intel compete with each other. The only winner is the end user.
He's looking to build a completely new system, so it's a toss-up between the 10900kf and whatever looks to be good from AMD at a similar price, but I do appreciate the data point :) From the few sparse google sheets I've seen, it looked like X-Plane was still preferring Intel cores and NVIDIA GPUs. It's a shame really, everyone's benchmarking the GPUs and nobody's looking at the impact of the CPU as far as I could tell. But now that we have an AMD core that's 20% faster than the previous one, the tables might've turned.
Best of luck developing X-Plane, I wish I could help you guys justify the Linux support, but unfortunately I've not got the enthusiasm or free time to commit to a flight sim just now.
The X-Plane benchmarks in the Phoronix review look outstanding. I just need to decide between 5800 and 5900.
Side question, will the large cache on RDNA 2 close the gap with Nvidia?
It pains me to say, but I would not recommend an AMD card at the moment, at least if you plan on running Vulkan on Windows. The problem is with add-ons: they still run under OpenGL (because that's what the SDK provides, and by now there is an insane amount of plugins relying on it), so X-Plane exports some of its render surfaces to OpenGL and lets plugins run normal GL code on them, while under the hood synchronizing the two worlds.
The problem is that the AMD driver has a bug with scheduling mixed workloads. Sometimes the GL side gets to execute first, sometimes follow-up Vulkan work executes first. This leads to incorrectly composited rendering and really annoying flickering. And despite trying to have this fixed for over a year and many, many emails with AMD engineers, we are nowhere near a fix for this. We are at the point where we are even considering integrating Zink just so we can ship an enjoyable add-on experience to users.
Not every user has that problem, it depends on the hardware and software that runs (ie a very classic race condition). But I can't in good conscience recommend AMD over Nvidia here. And I hate that, because the Big Navi cards look amazing.
I’ve had tons of issues with amd graphics drivers, and it seems fairly widespread. I think they said it had been a focus this generation.
Thanks. I'll try to wait for the 3080/Ti.
Another side question if you'll indulge me, any hopes for temporal AA? It would be a huge win for image quality and GPU performance.
LTT just did his Zen 3 review and had a few Flight Sim 2020 benchmarks.
In short, even a 5600X will exceed an Intel 10900k.
Phoronix has X-Plane benchmarks in their review. Massive improvement over Zen 2 and ahead of the 10900k which was a surprise.
https://www.phoronix.com/scan.php?page=article&item=ryzen-59...
I suspect the 6900 xt will help Flight Sim out quite a bit, but there are no benchmarks yet. It may be worth it to tell your father to wait a week or two before buying a gpu, and in the mean time just use an old gpu. (Maybe I'm saying the obvious here, though?)
Can MS Flight Simulator 2020 be used as a proxy ? Because it's a pretty common benchmark for the current generation.
No worries!
CDN issue?
Wow, AMD is killing Intel in the performance category. I wonder if Intel will have its Core 2 Duo moment coming up sometime soon. This is very similar to the Athlon 64 play over a decade ago. However, with TSMC, AMD will have the manufacturing advantage this time.
Whilst the TSMC factor has done wonders for AMD now that TSMC has overtaken Intel, one advantage Intel still has is cost/control of their own node, and it would be very interesting to see production costs compared, as it might be that Intel has more margin in production they can shave. Though they have stumbled keeping up with node progress, and that has clearly been a cornerstone of the hurt they are getting in the market; AMD has capitalised on it like champs.
Will Intel bring out a new architecture that rises above all this, as they did with the Core 2 Duo (I'm actually typing this on one of those cores)? Well, they need to.
But to me the big shift in CPUs does seem to be larger caches and faster cache access and latency. Say Intel released the same chips but added a very large SRAM layer - that for some applications would swing things for sure. Not cheap, but again, Intel controls and runs its own fabs, and whilst AMD could equally do that, would the cost of doing it through TSMC be comparable? There is money to be made in fabrication or TSMC would not be doing so well financially out of all this, and that is a cost/income stream Intel is tapped into with its own fabs.
However it pans out, competition has been good for the desktop x86 folks, and I'm so glad AMD is back in the game and going strong.
One AMD advantage is that it's manufacturing basically a single part, not the large matrix of parts Intel offers. One part binned and combined into chiplet designs: 8 cores to 64 cores, it's all the same part (1, 2, 4, or 8 of them). There are likely cost savings to this strategy as well.
I imagine they will do that on the GPU side as well at some point.
Also gives them a lot of flexibility with binning which should give them excellent yields.
The chiplets that don't have all 8 cores working well enough get used for the 5600X (6 cores) or the 5900X (12 cores), if there are 6 or 7 good cores.
The fully working chiplets can be binned for the 5800X or 5950X (or Threadripper/Epyc for the very best ones).
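Purely as an illustration of that mapping (real binning also weighs frequency, voltage and leakage; the rules below are just the rough idea described above, not AMD's actual process):

    #include <cstdio>
    #include <string>

    // Hypothetical bin-to-SKU mapping based on good cores per chiplet.
    std::string sku(int good_a, int good_b /* -1 if single-chiplet package */) {
        if (good_b < 0) {
            if (good_a >= 8) return "5800X (8C)";
            if (good_a >= 6) return "5600X (6C)";
            return "salvage / lower SKU";
        }
        if (good_a >= 8 && good_b >= 8) return "5950X (16C)";
        if (good_a >= 6 && good_b >= 6) return "5900X (12C)";
        return "salvage / lower SKU";
    }

    int main() {
        std::printf("%s\n", sku(8, -1).c_str());  // fully working chiplet
        std::printf("%s\n", sku(7, -1).c_str());  // 7 good cores -> fused down to 6
        std::printf("%s\n", sku(8,  8).c_str());  // two fully working chiplets
        std::printf("%s\n", sku(6,  7).c_str());  // two "defective" chiplets still sell
        return 0;
    }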
Additionally the I/O die on the Ryzen chiplets also gets reused as the X570 chipset, also I believe based on binning.
Bonus: AMD satisfies their contract with Global Foundries by including one I/O die manufactured by GF with every CPU sold.
Intel also uses chips with defective cores for lower-core-count SKUs.
Yes, but Intel can't take 2 (or 4 or 8, or whatever number you want) defective parts and sell them as a high-end part. AMD can make a 5900X or a Threadripper/Epyc out of "defective" 6-core chiplets. Since Intel chips are one big monolithic design, they can't make a 6-core consumer desktop CPU out of a broken 24- or 28-core Xeon chip.
AMD's yields are going to be better too, since it's not a monolithic die.
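For a rough feel of why, here's the simple Poisson yield model Y = exp(-D*A) with made-up numbers (the defect density and die areas are assumptions, not TSMC or AMD figures):

    #include <cmath>
    #include <cstdio>

    int main() {
        const double D = 0.001;                 // defects per mm^2 (assumed)
        const double chiplet = 80.0;            // ~8-core CCD ballpark, mm^2
        const double monolithic = 8 * chiplet;  // hypothetical 64-core single die

        auto yield = [&](double area) { return std::exp(-D * area); };

        std::printf("chiplet yield:    %.1f%%\n", 100 * yield(chiplet));
        std::printf("monolithic yield: %.1f%%\n", 100 * yield(monolithic));
        // And a partially defective chiplet can still be sold as a 6-core part,
        // while a monolithic die with a flaw in the wrong spot is scrap.
        return 0;
    }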
> Say Intel released the same chips but added a very large SRAM layer - that for some applications would swing things for sure, not cheap but again
Which they did for Broadwell but then immediately abandoned it. Anandtech recently did a retrospective on it:
https://www.anandtech.com/show/16195/a-broadwell-retrospecti...
(although at the time of writing this Anandtech's site is dead so pop that into your cached lookup of choice)
Intel did perform internal studies on the effect of 3d stacking on CPU layouts. For things like caches the reduction in delay caused by adding another layer is significant. AMD had to increase latency of the L3 cache in Zen 3 by 7 cycles as a trade-off for the increased size. If they were able to stack SRAM with another layer of transistors placed vertically, that latency penalty could be reduced.
That's talking about eDRAM which has different trade-offs due to increased latency vs L3.
Zen4 might bring on-chiplet SRAM cache using a cheaper node
I'm not sure what you're trying to say here. Zen2/3 already have 32MB L3 SRAM on each chiplet. And we know TSMC 5nm is more expensive than 7nm.
I mean on the IO die, not the processor one (or maybe a new one). 256MB L4 Infinity Cache for all the CPUs in the package. It doesn’t need to be on 5nm or 7nm. It can be on 14nm. But still on package.
(I should have said on package, not on-chiplet, too late to edit)
Can you really do an enormous SRAM on a bigger node without it occupying a huge surface? Would 14nm be cheap enough for that to be interesting? If so, why not also use it for other big structures?
IBM does that on mainframes with DRAM as L4. A cluster of four sockets has almost a gigabyte of memory in a separate chip with analogous function as AMD's IO die (and that attaches to the memory modules).
Of course, these are enormous and ridiculously expensive dies.
But eDRAM uses less power than SRAM. A large SRAM on an older process would have a substantial power penalty.
I was pretty sure SRAM uses less power than DRAM. Some research shows it is true for 6T cells, but questionable on smaller 4T cells, depending on implementation.
So, if you can afford ~6x the area of eDRAM, you can have SRAM with almost no power draw.
A completely different price point and TDP, but the Xeon Phi had up to 16GB of memory (on package, off the main die) that could act as a cache for main memory or be mapped to a physical address range. There's no space for this at this package size, but an EPYC package could fit it.
Is that just a rumour, or are you in the know?
Intel could also optimize their high end enthusiast dies by removing iGPUs, since virtually nobody uses or wants them.
And then they could call them kf, and release them last year!
https://ark.intel.com/content/www/us/en/ark/products/190887/...
AFAIK it still has an iGPU, just non-functional and wasting space.
It's not so clear-cut on wasting space. Maybe those chips had a defective GPU area, which is why it's disabled - it would be good to get clarity on that.
The other aspect is that it will act as a heat spreader at the die level, so whilst not the best use of space - given all the factors - it's not something you could easily dismiss, and better a waste of space than a waste of a chip whose only crime was to have a non-functioning GPU on it.
Intel's high-end desktop series, the Cascade Lake Xeon W, already lacks graphics.
Intel's out is to actually bring their 7nm node to market. Until then they are a sinking ship and they are certainly well aware of that.
10nm was / is a disastrous failure, and even if they could somehow finagle it into a usable state by next year, it will be trounced by TSMC 5nm Zen 4. They need to cut their losses and leapfrog, or they will stay technologically irrelevant going forward.
Intel does not have a Pentium M design as a backup plan like the last time when the P4 mobile and later desktop efforts reached a dead end.
On mobile they do have the new core architecture, used by Ice Lake and Tiger Lake, which at 4GHz was outperforming the previous cores (still used on desktop) at 5GHz.
That we know of, anyway. Jim Keller only just left a few months ago so who knows what’s in the architectural pipeline for the next decade.
You talking about his Intel stint which was cut short for "personal" reasons ???.
AFAIK he is not a chip architect anymore - he's more about organizing engineering teams to perform at their best and asking the right questions.
The AMD guy who designed Zen architecture was also responsible for the ill-fated Bulldozer design.
> You talking about his Intel stint which was cut short for "personal" reasons ???.
Those scare quotes are kinda rude, and whatever you're trying to insinuate with them is wrong.
Rude ???.
Clearly a lame PR explanation considering his high profile stature at Intel (he is going to save us) and the suddenness of it all.
https://www.extremetech.com/extreme/311664-intels-jim-keller...
I see they used single quotes .. my bad.
It couldn't possibly be that Jim Keller left of his own accord and that Intel is doing damage control, right? Murthy's head rolled for picking fights internally, particularly with Raja. Is it such a stretch that Keller wouldn't want to deal with Intel's political infighting bullshit and would rather live out his days with family? He's an accomplished guy and has earned every right to retire or work wherever he wants.
You still haven’t explained what was so suspicious about him leaving.
Family, health - there is lots that can happen in life where work is no longer a priority. Just because it's a common euphemism for something more doesn't mean it has to be so in this case. Given his great track record, something bad probably happened in his life.
> Given his great track record, something bad probably happened in his life.
I think the implication in the parent comment is that things were so bad at intel that he couldn't fix and chose to leave instead. Whether that's true or not I don't know. But the rumours I've read are not good.
When AMD64 came out it killed the P3 and P4 because of its IO enhancements. By moving the northbridge into the CPU, desktop CPUs no longer felt sluggish and no longer locked up on IO-bound tasks. It was a large jump in the desktop space. (Before AMD64, copying a file on your hard drive would run your CPU at 100%. You couldn't multitask and read the news while burning a CD, and so on.)
Intel responded later with the Core 2 Duo, which not only met AMD on its playing field but beat it. For anyone who is wondering how Intel did this, here is an in depth article on the topic:
https://www.anandtech.com/print/10525/ten-year-anniversary-o...
The northbridge absolutely would not cause your CPU to lock up on I/O based tasks, and contained an arbiter to balance CPU and I/O initiated memory requests.
The CD burner issues were because they required hard real time control to not screw the disc up.
To add: Core 2 (which was given as an example in the parent post) was still using the traditional northbridge design.
The process matters a whole lot. You can see this with 14nm vs 10nm Intel chips and their power characteristics. The Zen3 core is excellent but if they were forced to use 14nm it would be far less competitive in terms of core count, cache, power, and max clock rate.
Archived copy:
http://web.archive.org/web/20201105140949if_/https://www.ana...
Does someone have an ELI5 on what AMD has done since ~2015 to turn their whole operation around like this, and why these new processors are so good? What's the catch?
Dr. Lisa Su took over as CEO in 2014 and seems to have driven a re-focusing at AMD, devoting the company's relatively few R&D resources effectively. Starting with large growth in non-PC markets, actually, mostly the consoles.
Jim Keller was certainly key in getting Zen up & going, but he left in 2015 so his impact on Zen 3 would be rather minimal to say the least.
Combine that with the timing of Intel struggling hard with new process nodes and there you go. A bit of AMD got better, a bit of Intel got worse, and now you've got this landslide result.
Yeah, it's worth noting that Intel's planned microarchitecture iteration on the desktop was delayed by over three years - from 2017 to 2021 - and in 2021 will be released on a larger process than originally planned.
AMD's execution looks meticulous by comparison, and they deserve all the praise and profits resulting from it. But their win is inflated by Intel completely imploding. Until next year they're merely iterating on 2015 technology - their Skylake architecture - on the desktop and in servers at least. This is not to say that Intel's 2021 chips will be competitive with this (they won't be) but if Intel with better management would have been at this point in, say, 2018 instead of 2021 this would have played out very differently.
The 10nm delay is of course legendary at this point, but 14nm was itself also about a year behind schedule. In early 2013 Intel stated Broadwell 14nm would be out that year. It didn't come out until Q4 2014. And Intel's Fab 42, which was originally scheduled to open in 2013 as their first 14nm fab, only just opened this year. Intel ended up needing to re-fit existing fabs to be 14nm instead.
So before they faceplanted with 10nm Intel first stumbled with 14nm. 2 generations in a row of issues.
Proper leadership would have taken this risk seriously and would not have tied microarchitecture development so tightly to process improvements. Basing your strategy so fully on Moore's Law keeping up is a recipe for disaster. Clearly Sunny Cove works on 14nm (that's what they're releasing next year!), and they had all the technology available to release that in 2018 too - maybe with lower clocks and core counts.
Seems like a classic textbook case of a large corporation not understanding how to change its ways, ending up beaten by a more agile, smaller competitor. Oh well, hindsight is 20/20.
_> Jim Keller was certainly key in getting Zen up & going, but he left in 2015 so his impact on Zen 3 would be rather minimal to say the least._
I highly doubt he did so in a technical role - maybe in a PM-like role, but not as a design lead. The team lead was Suzanne Plummer, the architect Michael Clark.
As I've said many times, if you hire JK-level talent, it's best not to spend his very expensive time just doing technical work.
Apple also helped a lot, indirectly. They have been spending billions on R&D for their SoCs and outsourced the production of the wafers to TSMC, which used Apple's money to help develop its 7nm and 5nm nodes (though Zen 3 is "still" on 7nm).
Although AMD's and Dr. Lisa Su's achievements are not to be underestimated, I doubt the 7nm process would be as mature as it is right now without Apple.
I think that's vastly overstating Apple's importance. TSMC has had a significant, steady cashflow from far more than just Apple over the years. Nvidia would if anything be the more likely one pushing TSMC's top end, as Nvidia makes by far the largest, highest performance dies out of anyone on TSMC. Apple's latest & greatest A14 only just hits the transistor count on TSMC's 5nm that Nvidia was pushing 4 years ago on TSMC's 16nm.
Apple's extra cash certainly didn't _hurt_, but TSMC wasn't struggling before Apple came along, either.
And prior to 7nm the other major fab, GlobalFoundries, was perfectly competitive with TSMC despite not having any Apple money.
If Nvidia was such a key partner for TSMC then why did they get priced out of 7nm? Nvidia made some huge chips on 16nm and 12nm, no doubt. I’m not so sure the transistor count comparison is all that meaningful though. Maybe compare the wafer allocation instead.
Clearly TSMC is heavily diversified and gets to pick their clients. Apple has also been the primary (volume) launch partner on both 7nm and 5nm, which to me indicates how much TSMC values that partnership. Imagine the slam dunk Nvidia would have had if the RTX 3000 series was on 5nm.
> If Nvidia was such a key partner for TSMC then why did they get priced out of 7nm?
They wanted more margins? But note that Nvidia does still use TSMC's 7nm for their largest & most expensive dies. The A100 is TSMC 7nm at a staggering 54 billion transistors on 826mm² of silicon. Nvidia retains the largest die manufactured on TSMC's 7nm. By a lot. The next largest would I think be Navi 21 at 536 mm² and 26.8 billion transistors.
> Apple has also been the primary (volume) launch partner on both 7nm and 5nm
Apple's die sizes & transistor counts are comparatively tiny. The first use of a new fab is pretty much always a small, low-power die. That's what yields best when yields are the lowest.
See also why Snapdragons are also among the first to launch on a new TSMC node, despite those SoCs having probably the lowest margins of the high-volume stuff TSMC makes.
I’m not convinced that Nvidia having the biggest die size is a particularly important metric here. It’s obvious to me that TSMC would rather produce tiny Apple chips and AMD Zen chiplets for yield and wafer-space efficiency. We all know that smaller rectangles pack better into circular wafers than big rectangles.
Apple could have made the A14 much bigger and faster than the A13, but they chose to cram more chips on the wafer, which I speculate they did to have more wafers available for their bigger upcoming iPad and Mac chips.
I think you have it backwards on margins. Per square mm it’s clearly more expensive to produce an A100 die than an A13 or Snapdragon SoC due to the wasted die space and need for golden samples on the Nvidia side of things where they’re not selling any cut down dies in GeForce cards.
> TSMC would rather produce tiny Apple chips and AMD Zen chiplets for yield and wafer space efficiency.
TSMC sells wafers, not functioning dies. They don't care how you slice it up or how bad you yield as a result.
I was going to mention this. The point is that Apple and AMD buy more wafers.
It’s rumoured that Nvidia’s deal with Samsung was for working dies, not total wafers.
They did make a habit of badmouthing TSMC whenever they had problems. That probably played a role in TSMC allocating their scarce 7nm capacity away from NVidia when it was in short supply.
A CPU needs multiple years to go from idea to tape-out, so it's possible Zen 3 is still based on ideas from Keller. But only AMD employees would know...
>Does someone have an ELI5 on what AMD has done since ~2015
Let's just go back to 2014/15 and look at Intel's roadmap (or slides):
2016 - 10nm - CannonLake - First Gen 10nm Process. Industry leading Density.
2017 - 10nm+ - Icelake - New Architecture.
2018 - 10nm++ - TigerLake - Optimisation[1].
2019 - 7nm - Sapphire Rapids [2]
Most people would expect the actual year to be +1, as 14nm itself was delayed.
Tiger Lake has around a 15%+ IPC gain, which would put it ahead of Zen 3. Intel's 7nm is roughly between TSMC 5nm and 3nm. Had Intel shipped that in '19/'20, along with Sapphire Rapids, AMD wouldn't even lead; they could barely compete.
So to answer your question: it wasn't that AMD did some magic. They have been executing and shipping their products on schedule (although in tech, that in itself is pretty damn impressive). All while Intel was... um... I don't know what they were doing.
The 10nm fiasco has _lots_ of other knock-on effects as well. But I guess this is good enough for an ELI5 post :)
[1] This Process - uArch - Optimisation name came in later part. In 2014/15 people were still talking about Tick Tock.
[2] Sapphire Rapids was always a server-specific design on the roadmap, but things changed a bit, along with Intel's plans for AVX-512.
Look on the bright side, if Intel had continued as planned we would be looking at a 10700k as the $500 best mainstream socket cpu... with 4 cores.
There were rumors back in the day about 8C Ice Lake-S but who knows what Intel would have charged for it.
Probably the single biggest reason is the fact that they use TSMC to fab all their chips, and TSMC has had much more consistent node updates compared to intel.
Intel's next generation cpu node was supposed to come out this year, and it's been pushed back (possibly due to covid) until the end of 2021 or even into early 2022. Meanwhile, AMD says they have 5nm around the corner.
I don’t think there’s any catch to it. AMD started again, went with a clean-sheet design, re-hired Jim Keller, then started under-cutting Intel with more cores and decent performance. It went from there really.
Unlike Intel who have micro improvements, each generation has been a big leap in performance so it’s been exciting for consumers.
Intel did something similar with their core2duos when AMD was on top around 12-13 years ago.
I still use a Core 2 Quad in a desktop I built in 2008 in my office (mainly because it has a proper floppy drive on it).
Those processors were such a leap over the Athlon XP 3200+ I had before and it runs a Linux desktop perfectly fine.
> mainly because it has a proper floppy drive on it
What are you using it for? Floppy disks were already very dead by 2008, let alone 2020.
Not the OP, but some industrial equipment and musical instruments have floppy drives in them, and it may be cheaper and easier to keep an old desktop running than to retrofit the equipment. E.g. I have an old synthesizer I like that stores patches on floppy.
I have a few old retro computers, and a USB floppy drive doesn’t cut it for Amiga and Atari floppies (some say it works fine, but I’ve tried several USB floppy drives and IME it does not).
I know there are alternatives to using a floppy to transfer to these ancient systems but for me it works okay.
Maybe not ELI5, but I've been following this channel on YouTube for the last couple of years and it does a really good job of explaining AMD's architectural and manufacturing improvements over the last several years. I found the yield explanations particularly eye-opening.
https://www.youtube.com/c/adoredtv1/videos
Can you share the title of the video that explains the yields (if you remember)? I'm not an electrical or hardware engineer, but I'm very curious about it. Thanks in advance!
I apologize in advance, but I'm not 100% sure this is the video I alluded to; here's my best guess:
https://www.youtube.com/watch?v=ReYUJXHqESk
Both Apple and AMD now beat Intel CPUs. Intel made mistakes.
I’ve seen someone mention less than stellar memory latency and bandwidth on Zen?
The review goes over latencies and how they changed in Zen3, in fine detail.
For any practical purposes, Intel is now behind.
It's true that the Zen parts have higher main memory latency than their Intel competitors. If this is very important to your workload, you should take it into account. I think this is caused by the fact that Intel's memory controller is more tightly integrated, while the AMD memory controller sits on the other side of some wires from the cores. AMD compensates with huge cache sizes, but every time they increase the cache size they also make the cache slower, which isn't great.
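For anyone curious how numbers like that get measured, the usual trick is a pointer chase: a chain of dependent loads over a buffer much bigger than L3 mostly shows DRAM latency, and shrinking the buffer to fit a cache level shows that level instead. A minimal sketch (arbitrary buffer size and hop count; real tools are far more careful about TLBs, prefetchers and clocks):

    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <utility>
    #include <vector>

    int main() {
        const size_t n = (64u << 20) / sizeof(size_t);   // ~64 MiB, well past a 32 MB L3
        std::vector<size_t> next(n);
        std::iota(next.begin(), next.end(), size_t{0});

        // Sattolo's algorithm: build one big random cycle so the walk
        // touches the whole buffer instead of a short loop.
        std::mt19937_64 rng{42};
        for (size_t i = n - 1; i > 0; --i) {
            std::uniform_int_distribution<size_t> pick(0, i - 1);
            std::swap(next[i], next[pick(rng)]);
        }

        size_t idx = 0;
        const size_t hops = 20'000'000;
        auto t0 = std::chrono::steady_clock::now();
        for (size_t h = 0; h < hops; ++h) idx = next[idx];   // each load depends on the last
        double ns = std::chrono::duration<double, std::nano>(
                        std::chrono::steady_clock::now() - t0).count();
        std::printf("~%.1f ns per dependent load (final idx %zu)\n", ns / hops, idx);
        return 0;
    }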
Intel screwed up - that was a pretty major win for AMD.
AMD hired the genius CPU designer Jim Keller, that's it.
His previous stint at AMD was when he invented AMD64.
I don't know much about Jim Keller, but a CPU architecture is far too complex for a single person to possibly be responsible for such leaps. You can't just take an average team, add one exceptional person, and get results this good. I'm sure he's exceptionally good, but focusing on Keller like that seems rather unfair to all the other engineers who made this possible.
Jim has had such frequent success that I think he may have the rare talent of running large technical teams well. I suspect that may be what his talent is rather than CPU design. But maybe both, or maybe he has just gotten lucky.
I think the 2 key high level choices AMD made was betting on TSMC, and betting on the chiplet + memory controller design.
I don't know enough about CPU architecture to chime in on that aspect, but I can say from experience that a single person with enough weight/respect behind his/her name can certainly get a stuck team moving in the right direction. I don't know what AMD's situation was at the time, but it's possible that no one felt they had the pull to change directions, and his fame allowed him to come in and change the whole dynamic. Just a thought.
Keller left in 2015. It's quite unlikely he had any impact on Zen 3's substantial IPC gains over Zen 2.
Also worth noting that Keller was at Intel between 2018 and 2020.
and leaving Intel for "personal reasons"...
We can only guess what those might be.
His child has leukemia.
I suspect he would have left no matter what, though, because Intel is so arrogant they wouldn't listen to him in the way they needed to.
That was the great thing about AMD; when you're already at rock bottom, you're humbled enough to listen to people.
I think this is an important reason; being down and hungry can do wonders for focus and quality. On the other hand, doing well over many years means the wrong people got promoted and stuck around, growing a big middle-management layer that is invested mostly in itself.
AMD either did well or went broke, which forced management to make painful but important decisions.
> His child has leukemia.
source?
Pretty sure the stuff he worked on extends out to Zen 4.
How much do you think a one-of-a-kind designer like Jim Keller makes? I'm looking at his Wikipedia page, and he's worked for AMD, Apple, Intel, Tesla. They all want him. I can only assume there's a bidding war for his talent.
My question is: how do you get into this line of work? Does anyone know how Jim Keller got to be such a one-of-a-kind chipset designer?
Who gets to be a chipset designer, even back when JK started his career? Very few people have an opportunity to try their hand at this, and JK was _blessed_ with being able to go from one good team to another, mostly isolated from corporate BS.
A big part of his success story was having this great opportunity to develop his skills.
Microelectronics is career suicide in comparison to software today.
I'd say a 100-to-1 sieve rate from people entering the industry to those who actually get to do serious architecture work would be too small; the ratio is likely much larger.
My totally uneducated guess is: $2M base salary, and $20M in stock options per year.
How can Intel fail so spectacularly?
They have decades of x86 experience (they invented it) and literally more money than God (they probably spend more on office supplies than AMD does on R&D).
How do you have everything going for you and still lose?
> They have decades of X86 experience(they invented it)
Sure, but AMD has decades of x86 experience, too. They were, after all, the second-source supplier of the Intel 80286, the AMD version being called the Am286. And AMD did, after all, invent the x86_64 ISA that we're all now actually using, while Intel was off fucking around with the utter failure that was Itanium.
AFAIK AMD improved licensed Intel 286/386 designs as part of a second-source foundry agreement for IBM, and their first attempt at their own architecture, the K5, was really not that good.
They then bought a chip startup called NexGen, whose design became the K6; later came the K7, the Athlon.
The AMD 286 was a direct 1:1 copy. The AMD 386 had a direct 1:1 microcode copy.
I think it's a combo of Intel failing with 10nm paired with Apple bankrolling TSMC - Apple + TSMC having deeper pockets than Intel.
AMD benefited from TSMC already investing in smaller nodes for Apple, combined with new manufacturing approaches enabled by their innovative chiplet/Infinity Fabric designs. This let them glue more cores together at higher yields and therefore win massive benefits in multicore performance.
Intel had to worry about constraints caused by the architecture combined with manufacturing. I'm also not sure they were even looking at something like chiplets at the time.
There's a lot of truth to this, but the TSMC part of the story isn't _entirely_ true: first-gen Zen was pretty competitive already on GlobalFoundries' process, at least on multi-threaded workloads. Of course the real trouncing of Intel has only started after they moved to TSMC, but the design knowhow was clearly there before.
The same way Apple and Xerox screwed up? Here explained by Steve Jobs:
https://www.youtube.com/watch?v=P4VBqTViEx4
Would the fact that Intel's current x86 bread and butter (the Core 2 Duo line and its descendants) was essentially a side project of their Israeli labs, while their main design efforts (i.e. Itanium, etc.) floundered a decade ago, be an early harbinger of their demise?
The way I see it, the problem with Intel is that while they had a great head start, their total market share in the fab business was eclipsed by TSMC, because Intel for the most part ran their fabs only for Intel. TSMC has thousands of customers pouring money into the company and its R&D efforts, while Intel's fabs were only manufacturing Intel parts. They pigeonholed themselves, which is why Intel inevitably fell behind in R&D.
>How do you have everything going for you and still lose?
Complacency
Intel were top dog for several years, and were able to continue making huge profits while only making small, incremental improvements.
End users lamented the lack of _real_ improvements, but Intel continued to sit on their laurels because it was the easy thing to do - they simply didn't _need_ to do anything drastic. Which is pretty sad - with different leadership with technical vision, who knows where computing power would be today? But that's business...
Intel took it too far though, and didn't seem to see AMD catching up to them in the rearview mirror. IMO it serves them right, and I'm so happy there is an underdog challenger that is outpacing them in so many areas. The desktop I bought last year has an AMD processor - the first in my household in many years, but hopefully not the last.
Sounds like a textbook example of why market competition is essential.
Or not paranoid enough.
They grew fat, probably acquiring a lot of middle managers who weren't contributing. And they were so far ahead they were not thinking about customers and innovation, but about market segmentation and squeezing out more money. They thought they were invulnerable.
> They have decades of X86 experience(they invented it)
And AMD invented AMD64 ;)
The importance of AMD's use of chiplets should not be underestimated. Especially in the server space, it allows them to achieve far better yields on a monster L3 cache than Intel or any other competitor. Along with the modernized Zen uarch, this means they can achieve the same perf metrics at significantly lower cost than Intel.
Even more than yield issues I think that letting AMD better pool engineering effort between their desktop and server parts might have been really important. AMD is a smaller company than Intel and has to be clever about economizing.
And the new cpus are already sold out and being scalped on eBay.
This is starting to become a problem. It's happening to the new Nvidia GPUs, and the new AMD CPUs, and it will probably happen to the new Radeons later this month. At this point it will probably be impossible to purchase any of these products at retail before January.
I'm not sure what Nvidia, AMD, and their retailers can do about this situation, but it's pretty bad.
Scalpers are a symptom of an item having a higher value than the retailer chooses to price it at. That higher value right now is driven by a lack of supply, or possibly the unusually high demand during COVID. The fix is that we just need way more stock. In Canada we're entirely sold out up here, and we maybe had a hundred units total in the high-end SKUs (based on discussions on
http://www.reddit.com/r/bapcsalescanada
).
It was nowhere near as bad as Nvidia's launch. I was watching the AMD subreddit during launch and a lot of people got one. There were many shops which had them on sale; I even saw some on sale myself but didn't buy one. So I don't think it was a paper launch this time around.
It was very model specific. 5950Xs and 5900Xs were gone within a few seconds, 5800Xs were up for a few minutes, and 5600Xs were available for almost an hour in some cases.
This aligns with reports of retailer inventory, where availability of high end models was extremely limited, but there were many more of the mid-tier models.
They should be releasing these products at auction until demand falls to their MSRP. The amount of money Nvidia, AMD, etc leave on the table looks substantial.
I could easily have seen bidding wars on the first supply of 3000-series GPUs pushing prices to $1500+, with the 3070 selling for at least $1k.
Auctions feel unfair to consumers. The free marketing hype from the product selling out may be worth a lot more to them than selling a few consumer-grade units for high prices at auction. It’s possible AMD knowingly targeted stock levels and prices to cause the sellout.
For the real high-value users they have completely different products for sale to capture that margin at even higher prices (Epyc, Tesla).
The hardware manufacturers need to get their products into the hands of actual end users to win collaboration with software developers.
Also, most of the high bidding on the 3000 series were anti-scalper bots putting in bids they had no intention of honoring.
I don't think AMD and NVidia _want_ their products to only be available to the very wealthy.
If an item is very desirable it'll always sell out, just give it a couple months and stocks will eventually stabilize.
Just curious: why wasn't scalping for PC parts a huge thing until 2020?
I think one problem this year specifically is that the releases have gotten a lot closer to the end of the year where lots of people want to upgrade/buy gifts before the holidays. Add some supply issues due to COVID and this is what you get.
It's very frustrating; they said this would not happen, btw.
I'd like the manufacturers to disclose how many they built and sold. These launches feel very limited. For things to sell out in seconds is not normal.
Well, for the second time my 4670k is getting replaced by my 3900x. I replaced it in my workstation/gaming PC last year and demoted my 4670k to a home server, but now that I have a home server I've found an increasing number of things to do on it and it has trouble keeping up with transcoding video to multiple watchers, or simply writing to my LUKS raid 5. Been considering a second 3900x for it for a while, but with these gains I've bought a 5900x for my workstation and will put my current 3900x in the server.
Is CPU development moving back to a CACHE war fight?
Certainly AMD has done well with the CPU cache changes, and looking at their GPUs, it seems they've leveraged that approach again very well.
If Intel were to release the same chips and lob on a large SRAM buffer, things might flip again.
However it pans out, this real and credible competition is good for innovation. Many areas of x86 stagnated for so long that the whole Ryzen momentum has done wonders and totally flipped the positions of Intel and AMD.
Previously the cache and memory subsystem were identified as a very significant weak spot in the Zen 2 architecture. These are essentially completely gone now. The changes to the core also contribute a lot, but just rectifying the memory subsystem seemed to have had a huge impact.
The difference in performance between 3700X (32MB cache) and 4700G (8MB cache) is very small. I think AMD's advantage is the core; it's just better.
I was curious about the availability of a 5700X but it's just a typo in the headline. I'm waiting for a 65W 8C/16T variant.
Yeah, that would be the sweet spot. Now I'll probably have to get a last-gen 3700X, since the 5800X is way out of my budget.
Why isn't there an equivalent? The 3700X is a marvel.
So should I get the 5900X or the 5950X as a programmer who likes compilation-heavy languages like Scala and Haskell and who also plays games?
At least there is no reason not to. You get the very best gaming performance + the best multithreaded performance on a consumer platform. Only the price speaks against that choice. 5900X and 5800X are more reasonable alternatives from that perspective (still very strong).
You'll see little meaningful difference in games (the 5950x has better single core speed than the 5900x but the 5900x is already fast enough to not be the bottleneck in almost all situations). It might be more future proof though, there might be game engines in future which can properly saturate all 16 cores/32 threads. (Ashes of the Singularity is one that can _now_, but it's uncommon).
If you're compiling really large projects and want to cut those times down it could be worth it though.
I'm in a similar situation and I'm going with a 5900X, FWIW. The 5950X feels a little overkill; I don't expect to fully take advantage of the extra cores.
So far my thinking is that I'll go with the 5950x since it's more or less on par with the 5900x for gaming performance but has slightly faster single core performance. Also more cores are nice for running tests.
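For the compile-time question, a rough Amdahl's-law sketch may help frame it. The 90% parallel fraction below is an assumed number, not a measurement of any real Scala or Haskell build, so treat the output as illustrative only:

    #include <stdio.h>

    /* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
     * where p is the fraction of the build that parallelizes. */
    static double amdahl(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        double p = 0.90;  /* assumed: 90% of the build parallelizes cleanly */
        printf("12 cores: %.2fx over 1 core\n", amdahl(p, 12));
        printf("16 cores: %.2fx over 1 core\n", amdahl(p, 16));
        printf("16 vs 12: %.0f%% faster\n",
               100.0 * (amdahl(p, 16) / amdahl(p, 12) - 1.0));
        return 0;
    }

Under that assumption the 16-core is only around 10-12% faster than the 12-core for builds; a build that parallelizes better approaches the full 33% core-count advantage, and one that parallelizes worse gets almost nothing.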
Interesting that they have some Apple A13 comparisons in there and that they're that close in some of the metrics!
Apple Silicon is going to be a game changer.
Other reviews (big list):
https://videocardz.com/95980/amd-ryzen-5000-vermeer-zen3-rev...
Do foundries ever put multiple chip designs on a single wafer?
The idea is to increase yields by putting the bigger chips in the middle, and smaller chips around the edges—perhaps for different customers entirely…
You need a repeating pattern due to how steppers work. Theoretically you can have different chips, but they will be distributed evenly.
And "yield" in this case is not how many chips can be placed on a wafer, but how many of them will not have defects in critical areas.
Thanks!
There are foundries that do multi-project wafer runs for very small customers. It's not motivated by yields, though, but by sharing fixed costs with other customers if you don't really want to do mass production.
Plus, customers pay per wafer, so on a shared wafer you do end up putting different customers' designs together.
TSMC themselves don't care how many chips get made from a single wafer.
If only it were easier to buy these. All the major retailers were sold out before the chips showed up on the site. The only people who managed to get them were via direct links to the listings.
The cheapest processor that AMD is releasing today is the Ryzen 5 5600X, but it is also the only one that comes with a CPU cooler in box.
Did AMD advertise this change as "eco-friendly", like Apple does? It seems reasonable for the 8-core and higher SKUs.
I didn't hear that. This was mostly done because for the higher TDP parts (105W) the box cooler was not enough cooling.
They also save a couple of bucks by doing this.
URL is timing out. Anyone have a cached copy please?
http://web.archive.org/web/20201105140949if_/https://www.ana...
Thanks!
The article notes benchmarks which can use AVX512, but none of the CPUs tested support AVX512.
Way to go AMD. Let's hope this shock will make Intel wake up and force the giant to innovate. As I see it, it's good for us consumers since the competition means we'll be getting even better CPUs and GPUs in the near future.
Side-note: both anandtech.com and archive.org are overloaded at the moment!!!
With this significant leap in single-thread performance, it will be interesting to keep it in mind when Apple is announcing their laptop CPU next week.
Are there any plans for a Ryzen 7 5700X?
For anyone that read the article: for those of us who can't access it, can we get a summary?
1. About 19% IPC improvement
2. Higher clock rates (exceeding the advertised frequency this time)
3. Power consumption basically the same
4. Lower latency for most operations (notably 5 -> 4 cycle FMA)
5. Same process node, same die size
6. Slightly higher price
>"...not only performance per watt and performance per dollar leaders, but absolute performance leadership in every segment. We’ve gone into the new microarchitecture and tested the new processors. AMD is the new king, and we have the data to show it."
Hopefully AMD fills out the line-up some. There's only a single 8-core offering and it's $449, whereas the 1700 was available at launch for $329. I bought mine a couple of months later for $279. 8 cores today cost 60% more. Ouch, AMD. We really do need healthy competition, it seems.
You're getting almost double the IPC (i.e., single-threaded performance) today for 60% more money compared to the 1700X. Factor in inflation, and that's not too shabby.
There's some odd stuff:
<<<
PDEP/PEXT (Parallel Bits Deposit/Extract):
    Zen 2: 300 cycle latency, ~250 cycles per instruction
    Zen 3: 3 cycle latency, 1 per clock
It’s worth highlighting those last two commands. Software that helps the prefetchers, due to how AMD has arranged the branch predictors, can now process three prefetch commands per cycle. The other element is the introduction of a hardware accelerator with parallel bits: latency is reduced 99% and throughput is up 250x.
>>>
It talks about PDEP/PEXT and links them to prefetchers - what? Weirder yet is "the introduction of a hardware accelerator with parallel bits". All that's happened, and this is backed up by earlier info in the article, is that these were trapped+emulated before, hence their hideous cost, and now they're implemented in hardware[0]. It comes across as if someone didn't understand that.
[0] calling it a 'hardware accelerator' is just peculiar.
The paragraph you quoted is referring to the last two _rows_ of the preceding table, of which you've only quoted the last row.
The last two commands (their words) were PDEP/PEXT, but yeah, ISWYM. The description is still weird, and I notice in a preceding row "ANDN - Logical AND". I mean, would anyone technical have made that mistake?
They weren't "trapped" but just implemented using complex/looping microcode whereas now they have dedicated hardware. The purpose of that hardware is to accelerate those operations.
Why do AMD's marketing materials and this article use the word 'uplift' over and over? Is there a reason they don't just say increase?
I would assume their marketing writers have a style guide to conform to, and that "increase" is a word they actively choose not to use.
That seems bizarre, but it still doesn't explain why AnandTech takes the words from AMD's marketing and copies them over and over.
I’ve seen at least one other reviewer use “performance uplift” in their content too - perhaps there’s an agreement on terminology in place
Uplift is buzzwordy in that entire industry.
I can imagine some marketing genius saying that “up” and “lift” are both positive words while “in” and “crease” are neutral.
My worry is not being able to get a new CPU / GPU :/
I managed to get an RTX 3090 and also wanted to buy a 5950X, but I'm tired of these out-of-stock, hard-to-get new pieces of hardware; too much hassle. So unless I can order it "for the next day" I will not order them anymore.
There is no 5700X. Typo?
Warning: my experiments with GCC and -march=znver2 produced results that were not encouraging. I have not yet identified the build target that produces the best performance on Zen 2.
Zen 3, of course, will be different again.
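If anyone wants to repeat that kind of experiment, a minimal sketch: a toy vectorizable kernel built under different real GCC targets (-march=x86-64, -march=znver2, -march=native) and timed with the shell's time command. The kernel itself is an arbitrary illustrative loop, not a representative workload:

    /* kernel.c -- toy AXPY-style loop for comparing -march targets.
     * Build variants (all real GCC flags):
     *   gcc -O3 -march=x86-64 kernel.c -o axpy_generic
     *   gcc -O3 -march=znver2 kernel.c -o axpy_znver2
     *   gcc -O3 -march=native kernel.c -o axpy_native
     * then time each binary on the same machine. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 24)
    #define REPS 50

    int main(void) {
        float *x = malloc(N * sizeof *x);
        float *y = malloc(N * sizeof *y);
        if (!x || !y) return 1;

        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* The loop GCC should vectorize differently per -march target. */
        for (int r = 0; r < REPS; r++)
            for (int i = 0; i < N; i++)
                y[i] = 2.5f * x[i] + y[i];

        /* Print a checksum so the work isn't optimized away. */
        double sum = 0.0;
        for (int i = 0; i < N; i++) sum += y[i];
        printf("checksum: %f\n", sum);

        free(x); free(y);
        return 0;
    }

Results will vary a lot with the actual workload, which may be part of why -march=znver2 looked underwhelming in practice.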
In which we see once again that software architecture is far more important than hardware performance.
(meta) Not sure it'll help now, but maybe it was a bad plan to link the version of the article that loads as one large page; that's a lot of images to load. That might be related to the CloudFront errors being generated now...
Perhaps anandtech should have used some of those shiny review CPUs to host their website, which is currently down.
Site not accessible: "502 ERROR
The request could not be satisfied.
CloudFront wasn't able to connect to the origin. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation."
One of the biggest advantages of Intel over AMD is naming.
With every new AMD release I have absolutely no clue about the product line (whether it's high end or low end, desktop or mobile), and I have no idea which generation I'm looking at when comparing two AMD processors, whereas with Intel it's easy: the generation is in the product name and the category is easy to understand (i3/i5/i7).
To be fair, AMD literally copied the category scheme three generations ago and stuck with it (R3/R5/R7/R9).
This generation they've also moved to simplify things further. The mobile chips with the same number were always one gen behind the desktop (mobile 4xxx ~= desktop 3xxx), but now that they've skipped desktop 4000, both are back in sync.
Also, Intel's naming scheme is getting complicated too: base vs K vs KF vs F, and 10xxGy vs 10xxxH/U on mobile are two entirely separate tenth gens with lots of overlap.
Don't forget Intel's brand new Core i7-1185G7. A name so terrible that Intel only said it a single time during their announcement presentation (by comparison they said "4800U" 8 times in that same presentation)
What about AMD Embedded? If I look at AMD's mini-PC page they don't have the R3/R5/R7/R9 categories, whereas Intel does.
I don't know how an AMD Embedded part compares to, say, an R3.
Well, if we're going to throw embedded into the mix, Intel gets really complicated as well. What's this J1900? J3455? Quick: what features does the 3455 have over the 1900? Is Apollo Lake good? Or is Bay Trail newer? How does Cannon Lake compare to Apollo Lake?
Comparing NUCs, which is better: the BXNUC10I7FNK1, or BXNUC10I3FNK1?
I've got a i5 3350P. Quick, what does the P stand for?
Have you looked at Intel laptop SKUs recently?
Edit: it's certainly fair to criticize the manufacturers for having bad SKU names, but it really doesn't matter that much in the big picture. Most people who buy computers neither know nor care about the specific CPU. Especially with laptops, the decision is usually made for you by the OEM, and you decide based on what size SSD and display you want. If you do actually care what part you get and are in a position to choose, you should make your decision based on benchmarks, not a SKU name.
Yeah R5, R7, R9 vs. i3, i5, i7, i9...
10900K, 10900KS, 10900 vs. 5600X, 5800X, 5900X, 5950X
I see where confusion could creep in.
They both follow roughly the same scheme and even have the generation in the same spot of the name:
(i|Ryzen )[3579] (1 or 2 digits to indicate generation)(3 digits for model)(optional suffixes)
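As a rough executable rendering of that pattern (a sketch only: the separator differs between vendors, mobile parts break the scheme, and the regex below is an illustrative approximation, not an official format):

    #include <regex.h>
    #include <stdio.h>

    /* Rough sketch: pull tier, generation, model, and suffix out of a
     * desktop SKU like "i7-10700K" or "Ryzen 9 5950X". */
    int main(void) {
        const char *pattern =
            "(i|Ryzen )([3579])[ -]([0-9]{1,2})([0-9]{3})([A-Z]*)";
        const char *samples[] = { "i7-10700K", "Ryzen 9 5950X", "i5-3350P" };

        regex_t re;
        if (regcomp(&re, pattern, REG_EXTENDED) != 0) return 1;

        for (int i = 0; i < 3; i++) {
            regmatch_t m[6];
            if (regexec(&re, samples[i], 6, m, 0) == 0) {
                printf("%-14s tier=%.*s gen=%.*s model=%.*s suffix=%.*s\n",
                       samples[i],
                       (int)(m[2].rm_eo - m[2].rm_so), samples[i] + m[2].rm_so,
                       (int)(m[3].rm_eo - m[3].rm_so), samples[i] + m[3].rm_so,
                       (int)(m[4].rm_eo - m[4].rm_so), samples[i] + m[4].rm_so,
                       (int)(m[5].rm_eo - m[5].rm_so), samples[i] + m[5].rm_so);
            }
        }
        regfree(&re);
        return 0;
    }

For the i7-10700K sample, for instance, it prints tier=7 gen=10 model=700 suffix=K.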
The problem with Intel's naming is that comparing products across lines is hard. iX, Pentium, and Celeron are not on the same thousands, so it's hard to tell which are the same core design. Hiding Atom chips in the same lineup is extra confusing.
Of course, since Intel is pretty much stalled for the last few years, it's a little easier now.
I understand - but surely this can't be a downside to the product? How often does the average Joe need to learn their naming conventions?
I feel like I learn them during the purchasing/comparison period then never need to look again unless I'm repeating that cycle.
>the generation is in the product name and the category is easy to understand (i3/i5/i7)
Not really. Look at the core count of the old i7s vs. the new ones. If anything, Intel's naming is even worse.
Have you seen latest Intel chip names for laptops?