Graviton2 and Graviton3

Author: ingve

Score: 115

Comments: 70

Date: 2021-12-04 22:09:12

Web Link

________________________________________________________________________________

erulabs wrote at 2021-12-04 23:32:55:

> most workloads could actually run more efficiently if they had more memory bandwidth and lower latency access to memory

Turns out memory access speed is more or less the entire game for everything except scientific computing or insanely optimized code. In the real world, CPU frequency seems to matter much less than DRAM timings, for example, in everything but extremely well engineered games. It'll be interesting to learn (if we ever do) how much of the "real-world" 25% performance gain is solely due to DDR5.

I remember getting my AMD K8 Opteron around 2003 or 2004 with the first on-die memory controller. Absolutely demolished Intel chips at the time in non-synthetic benchmarks.
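
To put rough, illustrative numbers on the DRAM-latency point (the figures below are assumptions for the sake of arithmetic, not measurements from the talk):

```
~80 ns DRAM round trip on a 3.0 GHz core -> ~240 cycles stalled per cache miss
clock raised 10% to 3.3 GHz              -> the miss still takes ~80 ns, now ~264 (faster) cycles
latency cut 25% to ~60 ns                -> ~180 cycles, a win on every single miss
```

A higher clock just means more cycles spent waiting on the same miss; shaving latency helps every miss.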

shoo wrote at 2021-12-04 23:44:20:

> everything except scientific computing or insanely optimized code

For insanely unoptimized code, such as accidentally ending up writing something compute-intensive in pure Python, it's very plausible for it to be compute-constrained -- but less because of the hardware and more because 99% or 99.9% of the operations you're asking the CPU to perform are effectively waste.

throwaway894345 wrote at 2021-12-04 23:49:40:

I'm skeptical of this. I would expect many of those wasted instructions to be expensive memory instructions. I'm pretty sure regular Python code allocates a lot and the interpreter spends lots of time chasing after memory references.

erulabs wrote at 2021-12-04 23:47:19:

Certainly, but many "wasteful" operations are reading/writing main memory unnecessarily - and since a single trip out to memory and back can take many many CPU cycles, typically optimizing memory access time does more for "bad code" than optimizing the number of CPU cycles per second. But obviously, you're right too - faster is faster after all :)

klelatti wrote at 2021-12-04 23:51:01:

The team that designed the original Arm CPU in 1985 came to the conclusion that bandwidth was the most important factor influencing performance - they even approached Intel for a 286 with more memory bandwidth!

hinkley wrote at 2021-12-04 23:48:21:

In the 90’s there were people trying to solve this problem by putting a small CPU on chip with the memory and running some operations there. I routinely wonder why memory hasn’t gotten smarter over time.

eropple wrote at 2021-12-04 23:54:24:

Distributing work to leverage faster memory locality is _hard_. It's not quite what you're talking about, but consider the Cell processor used in the PS3 - its compute capability in the HPC space was supposed to be prodigious, but even the faster (streaming) access to RAM came with the tradeoff of dealing with code dispatch. (It's not a perfect example - the SPE model was also just kind of a pain - but you have to think about how to get your code local to the memory you want, how to keep allocations non-rivalrous, etc. It's a lot!)

erulabs wrote at 2021-12-04 23:59:06:

Heh, funny you mention this, because I was _so excited_ about the CELL architecture when it was announced - exactly because of memory read/write speed. Then I tried to actually write some code for it, got depressed, and moved back to x86 for the next decade :(

hinkley wrote at 2021-12-05 22:29:16:

I suspect some future improvement on borrow checkers will facilitate doing this to a degree. But it's likely to be one of those things that only comes into being when someone needs it very badly.

y4mi wrote at 2021-12-05 05:21:19:

DDR5 basically has small processors on the RAM modules for improved performance. That's the main reason they're so expensive and sometimes need active cooling.

Terry_Roll wrote at 2021-12-05 09:24:14:

So, like a (micro)SD card that has an ARM CPU for wear levelling and other performance enhancements.

It sounds interesting.

https://www.bunniestudios.com/blog/?p=3554

GordonS wrote at 2021-12-05 09:54:36:

Interesting - do you know what kind of performance gains DDR5 will provide for real-life workloads?

Having active cooling, making the system noisier and potentially hotter, seems like a pretty big downside.

vlangber wrote at 2021-12-05 13:40:05:

Anandtech compared DDR4 and DDR5 performance in their Alder Lake review:

https://www.anandtech.com/show/17047/the-intel-12th-gen-core...

spenczar5 wrote at 2021-12-05 06:12:13:

Man that sounds _so_ hard to program. I wonder if that's the reason.

baybal2 wrote at 2021-12-05 00:25:24:

Memory bandwidth is not a problem. Even puny single-channel-memory desktops don't usually saturate it.

It's memory latency.
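
The latency-vs-bandwidth distinction is easy to see with a dependent-load ("pointer chasing") loop, where every load must wait for the previous one to finish. A minimal sketch, with arbitrary sizes and no attempt at rigorous benchmarking:

```
/* latency_chase.c - rough sketch of a dependent-load ("pointer chasing") loop:
 * every load depends on the previous one, so the CPU cannot overlap requests
 * and each step costs roughly one DRAM round trip once the working set falls
 * out of cache. Illustrative only; array size and iteration count are arbitrary.
 * Build: gcc -O2 latency_chase.c -o latency_chase
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (64UL * 1024 * 1024)   /* 64M entries * 8 B = 512 MiB, far past L3 */
#define STEPS (1L << 24)

int main(void) {
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    /* Random cyclic permutation (Sattolo's algorithm): following next[i]
     * repeatedly visits every slot in a cache-hostile order. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;               /* j in [0, i) */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (long s = 0; s < STEPS; s++)
        p = next[p];                                 /* each load waits for the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg %.1f ns per dependent load (end %zu)\n", ns / STEPS, p);
    free(next);
    return 0;
}
```

On typical desktop hardware the per-load figure lands in the tens of nanoseconds - roughly DRAM latency - while the same machine can stream sequential data at tens of GB/s.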

tromp wrote at 2021-12-05 08:52:46:

For some problems, there is a choice of how to organize their data structures: one layout requires random access, while another is mostly sequential.

The latter might take an order of magnitude more space, while still being faster.

An example of such a problem is the Cuckoo Cycle Proof-of-Work [1].

[1]

https://github.com/tromp/cuckoo
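
A toy illustration of that trade-off (not the Cuckoo Cycle code itself - just a sketch with assumed sizes): store one value per 64-byte cache line, so the data takes 8x more space but can be scanned sequentially, versus a tightly packed array visited in random order. On most machines the fat-but-sequential layout wins:

```
/* layout_demo.c - sketch of the space-vs-access-pattern trade-off: the "fat"
 * layout stores one value per 64-byte cache line (8x more memory) but is
 * scanned sequentially; the compact layout is packed tightly but visited in
 * random order. Sizes are arbitrary. Build: gcc -O2 layout_demo.c -o layout_demo
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (8UL * 1024 * 1024)                   /* 8M values */

struct padded { long v; char pad[56]; };        /* one value per 64-byte line */

static double now_ns(void) {
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return t.tv_sec * 1e9 + t.tv_nsec;
}

int main(void) {
    struct padded *fat = malloc(N * sizeof *fat);     /* ~512 MiB */
    long *compact = malloc(N * sizeof *compact);      /* ~64 MiB */
    size_t *order = malloc(N * sizeof *order);
    if (!fat || !compact || !order) return 1;

    for (size_t i = 0; i < N; i++) { fat[i].v = (long)i; compact[i] = (long)i; order[i] = i; }
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {              /* shuffle the visit order */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }

    double t0 = now_ns();
    long sum1 = 0;
    for (size_t i = 0; i < N; i++) sum1 += fat[i].v;          /* sequential, 8x space */
    double t1 = now_ns();
    long sum2 = 0;
    for (size_t i = 0; i < N; i++) sum2 += compact[order[i]]; /* compact, random order */
    double t2 = now_ns();

    printf("sequential scan of padded data: %.0f ms (sum %ld)\n", (t1 - t0) / 1e6, sum1);
    printf("random access to compact data:  %.0f ms (sum %ld)\n", (t2 - t1) / 1e6, sum2);
    return 0;
}
```

The exact numbers don't matter; the point is that trading space for sequential access frequently pays for itself.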

mrcode007 wrote at 2021-12-05 02:08:41:

Certain instruction sequences prevent you from fully utilizing memory bandwidth and can decrease it by as much as 40-60%, so this statement is not true. For example, not using the store and load buffers in the ways they were designed to be used will lead to subpar performance for no apparent reason.

ahartmetz wrote at 2021-12-05 02:45:35:

Can you elaborate? I have a side project(1) where all profilers I've used give a very muddled picture, so I'm very interested in the question of what slows down code on a modern "big" CPU with wide dispatch, a few kB of decoded ops buffer and a lot of OoO hardware.

(1) It encodes and decodes a protocol from a potentially untrusted source, so there is obviously a lot of waiting for previous results. That much is clear; however, I expected profilers to show me some causal link between the serial nature and the slow execution, but they don't. I have tried perf, Valgrind/Callgrind and AMD μProf (because I have a Ryzen CPU in both of my main private computers). I'm not sure if the tools suck, my test cases suck, or I just don't know how to interpret the tools' results - my main problem is that the assignment of cost to lines of code seems highly unreliable. Maybe the stupid things these profilers are designed to catch (most of optimization is about not doing stupid things; after that it gets properly hard) aren't the stupid or unavoidable things my code is doing.

namibj wrote at 2021-12-05 03:50:02:

KDAB's hotspot is quite nice for analyzing perf recordings, and I suggest looking at stall cycle and "cycles with less than X uops dispatched" events to sample on.

Yes, attributing to lines in code is hard for optimized compiler output, but it can (in the continuous release/`hotspot-git` AUR package) attribute to the disassembly of a function.

adgjlsfhk1 wrote at 2021-12-05 01:50:29:

For well optimized code it is.

Proven wrote at 2021-12-05 00:46:31:

That's the same thing

mda wrote at 2021-12-05 00:02:02:

I find this claim hard to believe, honestly. Could you point to examples where performance is limited by DRAM speed and not by the CPU / caches? They must be applications with extremely bad design causing super low cache hits.

erulabs wrote at 2021-12-05 00:06:01:

> They must be applications with extremely bad design causing super low cache hits

Yep, this is exactly the case - and also on systems that are busy, context-switching often, and thus flushing their CPU caches more frequently. Combine the two - busy systems running loads of un-optimized code - and boom, you have described how most computers run in the real world. This is why "synthetic" benchmarks, which are well-designed code running on quiet machines, more or less track CPU frequency exclusively.

I don't really have any good charts to show you, but you might check out an old review of the processor I mentioned as having one of the first on-die memory controllers:

https://techreport.com/review/5655/amds-opteron-146-processo...

The AMD Opteron 240 at 1.4GHz keeps up with chips at close to 2x its frequency - and its memory access times are close to half as costly (i.e. almost all of the performance gain from 2x the frequency is made up for by halving the memory access time). This makes sense, but remember these are well-optimized applications (POV-Ray and Lightwave were _extremely_ synthetic). In the real world, opening 10 misc Windows applications from 2003, the K8 (particularly when overclocked) was a _beast_.

mda wrote at 2021-12-05 04:20:04:

Well, the Opteron is an ancient processor; I don't think we can draw any conclusions based on it. Today's server processors have enormous caches compared to the Opteron.

Honestly, in the cases you mention - badly designed processes killing the CPU - I fail to see how faster RAM makes a huge difference.

erulabs wrote at 2021-12-05 05:00:15:

I mean - the people who designed Graviton3 seem to agree with my premise, so at least that's some validation. Alternatively, do some CPU profiling on your workstation - a massive amount of time is simply spent waiting on memory.

sroussey wrote at 2021-12-05 01:30:45:

This is why you dedicate entire machines to the same kind of load - all application code or all database. There was a moment when people tried to integrate the two on one box, which was indeed faster, but only for very limited use cases.

hexxagone wrote at 2021-12-05 02:44:07:

In data compression, inverting a BWT with large blocks or using Context Mixing to compress large blocks (which requires huge context maps). These 2 cases require a lot of random memory accesses.

therealcamino wrote at 2021-12-05 01:48:34:

Anything where the working set is larger than L3 cache.
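
A rough way to see that cliff for yourself - a sketch that sweeps the working set from 32 KiB to 256 MiB while keeping total work constant, and reports effective read bandwidth (sizes and the simple sum loop are arbitrary choices):

```
/* cache_sweep.c - sketch: sum over working sets from 32 KiB to 256 MiB, doing
 * the same total amount of work each time, and report effective read bandwidth.
 * Throughput typically drops in steps as the data spills out of L1, L2 and L3
 * into DRAM. Sizes are arbitrary. Build: gcc -O2 cache_sweep.c -o cache_sweep
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t max_bytes = 256UL << 20;            /* 256 MiB */
    long *buf = malloc(max_bytes);
    if (!buf) return 1;
    for (size_t i = 0; i < max_bytes / sizeof *buf; i++) buf[i] = (long)i;

    long sum = 0;                                    /* keeps the loops from being elided */
    for (size_t bytes = 32UL << 10; bytes <= max_bytes; bytes *= 4) {
        size_t n = bytes / sizeof *buf;
        size_t passes = max_bytes / bytes;           /* same total traffic per row */
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t p = 0; p < passes; p++)
            for (size_t i = 0; i < n; i++) sum += buf[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        printf("%8zu KiB working set: %6.2f GB/s\n",
               bytes >> 10, (double)max_bytes / s / 1e9);
    }
    printf("(checksum %ld)\n", sum);
    free(buf);
    return 0;
}
```

On a machine with, say, 32 MiB of L3, the last couple of rows usually show exactly the drop being described here.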

staticassertion wrote at 2021-12-05 01:07:28:

> They must be applications with extremely bad design causing super low cache hits.

So, basically any program written in a language where everything is a pointer/reference type.

throwaway894345 wrote at 2021-12-05 02:47:13:

I thought I'd heard that Java VMs go to great lengths to maintain cache locality? I'd be curious to hear from the Lisp folks, because I always hear that Lisps can be surprisingly performant.

staticassertion wrote at 2021-12-05 18:51:45:

There are some optimizations in the JVM that will improve cache locality: bump allocation helps, inlining and escape analysis help, etc.

In theory the GC can also rearrange memory to 'compact' it. I'm not aware of this optimization in practice.

throwaway894345 wrote at 2021-12-05 19:30:37:

As far as I know, all mainstream Java GCs are compacting; however, I don't have a good sense of how much this improves cache locality.

mda wrote at 2021-12-05 04:22:18:

Even then, today's CPUs have enormous caches, and not every part of a program is chasing pointers. You can't make a crappy application much faster just because you have faster RAM.

veselin wrote at 2021-12-05 05:28:15:

Well, I disagree with pretty much everything in the claims.

First, most real unoptimised code runs into many other issues before memory bandwidth. During my PhD, the optimisation guys doing spiral.net sat next door, and they produced beautiful plots of what limits performance for a bunch of tasks and how each optimisation they did removed an upper-bound line, until at the very end they hit some bandwidth limitation. Real code will likely have false dependencies limiting IPC, memory latency problems due to pointer chasing, or branch mispredictions well before memory bandwidth.

Then the database workload is something I would consider insanely optimized. Most engines are in fierce performance competition. And normally they hit the memory bandwidth in the end. This probably answers why the author is not comparing to EPYC instances that have the memory bandwidth to compete with Graviton.

Then, the choices they tout - not implementing SMT, and moving to DDR5 - both really come from their upstream providers.
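
Those bound-by-bound plots are essentially a roofline model, which makes the bandwidth point concrete. With made-up round numbers (assumed purely for illustration, not taken from the talk):

```
attainable throughput ~ min( peak compute , memory bandwidth x arithmetic intensity )

peak compute   : 2000 GFLOP/s  (assumed)
DRAM bandwidth :  200 GB/s     (assumed)

code doing  1 FLOP per byte moved -> capped at ~200 GFLOP/s (bandwidth-bound)
code doing 10+ FLOP per byte      -> can approach the 2000 GFLOP/s compute roof
```

Most unoptimised, pointer-heavy code sits far below both roofs - it stalls on latency, dependencies, and mispredictions long before bandwidth becomes the limit, which is the parent's point.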

wyldfire wrote at 2021-12-05 15:10:11:

Wouldn't SMT be a feature that you are free to add when designing your own cores? I'm assuming Amazon has an architectural license (the Annapurna acquisition probably came with one, and that team is likely the Graviton design team at AWS). So who is the upstream provider? ARM?

And if they designed the CPU wouldn't they decide which memory controller is appropriate? Seems like AWS should get as much credit for their CPUs as Apple gets for theirs.

Bottom line for Graviton is that a lot of AWS customers rely on open source software that already works well on ARM. And the AWS customers themselves often write their code in a language that will work just as well on ARM. So AWS can offer its customers tremendous value with minimal transition pain. But sure, if you have a CPU-bound workload, it'll do better on EPYC or Xeon than on Graviton.

wmf wrote at 2021-12-05 00:21:02:

I can't escape the feeling that AWS is taking credit for industry trends (DDR5) and Arm's decisions (Neoverse).

hodgesrm wrote at 2021-12-05 04:07:09:

> I can't escape the feeling that AWS is taking credit for industry trends (DDR5) and Arm's decisions (Neoverse).

ARM is just a design; AWS brought it to market. ARM-based server processors are still thin on the ground. IIRC Equinix Metal and Oracle Cloud offer them (Ampere chips), but not GCP or Azure.

We've tested Graviton2 for data warehouse workloads, and it came out about 25% cheaper and 25% faster than comparable Intel-based VMs - a solid price/performance win. We're still crunching the numbers, but that's the approximate shape of the results.

magila wrote at 2021-12-05 01:07:08:

Yeah, the tone of these talks is kind of weird. They talk about how "we decided to do foo" when the reality is "we updated to the latest tech from our upstream providers which got us foo".

aledalgrande wrote at 2021-12-05 02:11:30:

Isn't making the CPU wider one of the things Apple also did with M1? Doesn't feel like they are the first.

wmf wrote at 2021-12-05 02:44:56:

Apple designed the M1. AWS is (probably) using off-the-shelf Neoverse V1 cores that they did not design.

[Imagine "you made this, I made this" meme here]

vineyardmike wrote at 2021-12-05 03:59:48:

They have a huge design team making custom silicon. They deserve a bit more credit even if they’re leaning on ARM IP.

MobiusHorizons wrote at 2021-12-05 17:40:50:

There is an important distinction between designing an SoC with ARM-provided cores and designing the cores from scratch. People in this thread are comparing AWS's achievement to the M1, but that's in a totally different ballpark. It's obviously still hard to design custom silicon and custom servers around it, but it's fair to say that's a far cry from "optimizing the cores for the workloads that run on EC2", as has been suggested in this thread.

ashtonkem wrote at 2021-12-05 07:16:18:

The rest of the server has to be designed too, since they can’t just buy from Dell or some other OEM and put Graviton into it. At their scale this means management software and hardware too, which is a right old pain in the butt to design and deploy.

StreamBright wrote at 2021-12-05 10:51:04:

They don't "just buy from x". They have their own motherboard designs, and they even got Amazon-flavored Intel CPUs. It was only a question of time before they started producing their own CPUs. A vertically integrated stack pays off in the long run.

wyldfire wrote at 2021-12-05 15:10:55:

Maybe but M1 doesn't really compete in this market.

pm90 wrote at 2021-12-05 00:48:30:

Like how Apple takes credit for packaging new technology in an easy to use product? What’s wrong with that? They’re not exactly hiding it.

phamilton wrote at 2021-12-05 00:04:09:

A recurring theme is "build a processor that performs well on real workloads".

It occurs to me that AWS might have far more insight into "real workloads" than any CPU designer out there. Do they track things like L1 cache misses across all of EC2?

uplifter wrote at 2021-12-05 00:36:28:

Reality varies. It's a truism in optimization that the only valid benchmark is the task you are trying to accomplish. These chips have been optimized for an average of the tasks run on AWS (which is entirely sensible for them), but that doesn't mean they'll be the best for your specific job.

w1nk wrote at 2021-12-05 08:59:42:

They'll definitely have information that traditional CPU designers won't. Check out this talk from Brendan Gregg (he's probably lurking), where he specifically calls this out:

https://www.brendangregg.com/blog/2021-07-05/computing-perfo...

See slide 26 (and the rest ofc :)).

virtuallynathan wrote at 2021-12-05 00:30:42:

Hard to track for other people's VMs, but they probably have (or can sample) that data for every AWS-operated service (Dynamo, S3, Redshift, etc.).

vineyardmike wrote at 2021-12-05 04:01:15:

There is a strong internal mandate for internal services to switch over to Graviton. So they likely either have this data, or are just trying to free up more x64 cores for external customers.

sroussey wrote at 2021-12-05 01:33:51:

Yeah, this is key. They definitely optimize for their own services. And they don’t run S3 and redshift on the same cpu/server at the same time.

vineyardmike wrote at 2021-12-05 04:02:11:

They may not run those particular services on the same hosts but they heavily use Lambda (and docker) which can share hosts and be tossed around the data centers to saturate cores.

trhway wrote at 2021-12-05 00:37:17:

AWS can also build slightly different CPUs under the same name for different workloads and not tell anybody.

pm90 wrote at 2021-12-05 00:50:41:

Arm seems poised to replace x86 in servers. If I were Intel this would make me really nervous.

betaby wrote at 2021-12-05 02:26:46:

Very unlikely. See, for example, Linus' reasoning:

https://www.realworldtech.com/forum/?threadid=183440&curpost...

vlovich123 wrote at 2021-12-05 05:55:14:

I think the flaw in Linus’ argument is that this happened in the 90s-2010s for x86. A foundational time, especially for his worldview, but I don’t know that the pattern repeats (some of his viewpoint is colored by his time at Transmeta).

The development world today looks very different. Back then, language support for other architectures was more bespoke and CPU vendors had to add support for their chips. Today, there are plenty of very rich, platform-agnostic (both CPU and OS) libraries. Additionally, mobile development has matured ARM development enough that I don't think the argument holds; if it did, developers wouldn't be able to develop on their x86 MacBooks and deploy to their mobile Apple devices (yes, the Mac is ARM now, but it wasn't for most of that period). I think the plain x86-box -> server story was pretty solid for a long time, but the cloud has changed that. Everyone is now starting out in the cloud with CPU-agnostic languages, where switching architectures is usually as simple as changing one line in a config. In some cases it still matters, but the vast majority of software shops don't feel this the way you did in the 90s and 00s. Plus, M1s now provide developers with local ARM development.

faeriechangling wrote at 2021-12-05 02:49:04:

Linus's reasoning is sound, but the issue is that ARM development platforms are becoming a thing. To be honest, I see x86 as being in the early stages of a death spiral, and so does Intel, judging by the way they're focusing on the fabrication side of their business.

If anything programmers are adopting ARM based computers faster than the rest of the market. As pretty much every developer tool gets ported for Apple silicon every company is going to shrug and go "May as well release an ARM Windows/ARM Linux build as well".

vineyardmike wrote at 2021-12-05 04:04:51:

I totally agree with everything you said, except that devs are switching faster. I think the first to switch were low-end Chromebooks and Surface Go-type devices. The M1 is pulling devs and professionals in, and gaming will be the last holdout (due to optimized IP that may be abandonware and never updated).

ta988 wrote at 2021-12-05 04:19:04:

The good thing I see at work is that we make everything work on both x86 and ARM, so we can deploy on any kind of cloud CPU and not worry about it anymore.

solatic wrote at 2021-12-05 05:18:25:

We've been migrating our production to Graviton2 (now Graviton3). Our developers run x86 Macs. Everything runs on the JVM, Python, Node, Go, so nobody feels like there's a difference. The ARM transition has been transparent for us.

Linus' reasoning makes sense, but the real world disagrees with him (at least in our case).

jeffreyrogers wrote at 2021-12-05 04:03:24:

Linus's argument is that devs will use the same processors in production that they develop on. But everyone already has to develop for ARM because mobile runs ARM. And now the M1 Macs do too (and these AWS servers). So if you're forced to use ARM because of mobile and now there are good options for desktop and servers to use ARM as well, I don't see why people wouldn't switch to them. Basically Linus's own logic seems to contradict his claim.

dilyevsky wrote at 2021-12-05 03:41:09:

1. Fewer and fewer people run their stack on their laptop. There is tooling today, like Bazel (and the like) and Docker, to run even unit tests remotely pretty painlessly.

2. With languages like Java, Go, Python and Node it doesn't even matter.

3. Devs are migrating to ARM en masse (M1s).

ta988 wrote at 2021-12-05 04:17:51:

Agree with 1. I'm part of 3. Regarding 2, it does matter for anything that has bits of optimized C code that were only written for x86. I have a lot of Node and Python things that don't run natively on my M1 (they even crash in QEMU x86 VMs, whatever CPU features I emulate).

dilyevsky wrote at 2021-12-05 22:38:50:

Right, which highlights why C is an outdated language that didn't fully live up to its original promise. Linus's entire argument seems to hinge on the premise that most userland devs still use it as their primary language. Eventually those Python and Node bugs will get fixed and people will largely not care.

_ph_ wrote at 2021-12-05 10:03:38:

That is, as far as the reasoning applies, why I consider the M1 Macs so pivotal. The MacBook Pro was already a very popular machine for developers. Now it has not only gotten much faster and better, it also offers access to great ARM development machines - be it for the largest market, smartphones, or for cloud solutions based on ARM machines such as Graviton.

nichch wrote at 2021-12-05 02:42:40:

Unlikely for _now_. The ball has just started rolling with the M1.

freemint wrote at 2021-12-05 11:12:29:

Don't forget Ampere's A1. I found them really, really impressive for SAT solving, and the fact that you can get them for 1 ct/core/hour at Oracle makes them financially very attractive.

jeffreyrogers wrote at 2021-12-05 03:57:02:

5 or 6 years ago Marc Andreessen was saying this would happen eventually. I was skeptical when I first heard the claim, but it's seeming more and more likely.

taf2 wrote at 2021-12-05 01:04:47:

Possibly on the desktop too... I imagine we'll see many M1-like Windows PC options in the near future...

ta988 wrote at 2021-12-05 04:20:07:

They are coming. Clearly not as impressive as the latest M1 Max and Pro, but getting there.

adfgdtyhaet wrote at 2021-12-05 05:42:34:

I don't understand this at all.

The first and second transcripts seemingly contradict each other. The first one says:

> Cores got so big and complex that it was hard to keep everything utilized.

But then the second one is about how they improved performance by making the cores bigger and more complex. Why is it possible to feed _their_ wide core but not the competition's? Why is it that idle transistors are bad in the competition but Graviton benefits from specialized vector instructions that are only useful in some workloads?

> With Graviton2, one of the things we prioritized was large core local caches. In fact, the core local L1 caches on Graviton2 are twice as large as the current generation x86 processors.

This doesn't make sense. All modern x86 machines have both an L1 and an L2 cache that are local to each core, with only the L3 cache being shared. My laptop has a total of 288 KiB of cache dedicated to each core.

The fact that the competition uses 32 KiB L1 caches has nothing to do with a difference in philosophy. _Everyone_ realizes that caching is important and _everyone_ uses the biggest caches they can get away with. The reason x86 designers chose a smaller cache is because, in their designs, increasing the cache size would reduce performance. Large caches are slower than small ones. Increasing the hit rate is not necessarily worth it if it makes every cache access more expensive.

> most workloads could actually run more efficiently if they had more memory bandwidth and lower latency access to memory

In other news, water is wet. Memory has been the bottleneck for as long as I've been alive.