💾 Archived View for dioskouroi.xyz › thread › 29396198 captured on 2021-12-04 at 18:04:22. Gemini links have been rewritten to link to archived content
________________________________________________________________________________
_Today I would like to tell you about the AWS Nitro SSD._
A bit light on technical details, but very fun and very exciting. It's kind of sad that such amazing work is no longer quite so public, no longer something that, say, Intel would talk up in endless detail at a product launch. A huge amount of the work & innovation here is extremely specific and extremely private; all this Elastic Fabric Adapter related stuff is advanced systems engineering and close integration of systems that is Amazon's & Amazon's alone.
Anyhow. This article pairs very well with "Scaling Kafka at Honeycomb"[1], which I found to be a delightful read on adapting & evolving a huge workload to ever-improving AWS hardware.
[1]
https://www.honeycomb.io/blog/scaling-kafka-observability-pi...
https://news.ycombinator.com/item?id=29396319
I wrote the AWS post and did my best to share lots of technical details; are there any specific things that you want to know more about?
Generally a fan of your posts, but this one was very heavy on marketing buzzology ("cloud scale"). I can't tell if there was a genuine use case for designing a proprietary SSD, or if it were some pet project. Is "75% lower latency variability" because the first gen SSD was a CS101 project, or because AWS have developed some material edge over what others (with much wider scope) in the industry have been doing for years? I can't tell.
I can't see a reason to buy or use this product.
I doubt that other companies' supposedly "wider scope" actually exists or gives them advantages. Both Amazon and Google make their own SSDs and have the largest computer installations in the known universe. The fact that Samsung makes a lot of SSDs for laptops may not give them wider scope at all.
It is faster than the old one because they used their experience to improve it. I'm not sure why that is hard to understand.
I actually think that these posts have gotten much better over the past 2-3 years, at least for my taste; the level of technical detail is just right. On specific topics, I wouldn't mind James Hamilton-level specifics, but you can't go deep on everything all the time.
(hi Jeff! Hope you're well :D)
Hi Simone, doing well and we are trying to add more info while still being frugal with words and with the time of our readers.
Oh I would love some more deep dives or presentations by James Hamilton into various aspects of AWS. They combine the high level overview and the deep technical details in a very informative and entertaining way.
Why doesn't James Hamilton ever give presentations anymore?
Hi Jeff! Eeeeeek! I'd love to know so much more about the Nitro acceleration. All these accelerated fabrics are so interesting.
* What does the Nitro accelerator look like to the host? Does it present as NVMe devices to the host OS, or as something more custom? Does it use SR-IOV or something else to present itself as many different PCIe adapters, as per-drive PCIe devices, as a single PCIe device, as no PCIe device at all, or as something else entirely (and if so, what)? Are there custom virtio drivers powering the VMs? How much have these interfaces changed in the newest iterations, or have they remained stable? (See the sketch just after this list for what the guest-visible side looks like.)
* What is the over-the-wire communication? Related to the above: ultimately the VMs see NVMe, and how far down the stack/across the network does that go? Is what's on the wire NVMe-based, or something else; is it custom? What trade-offs were there, and which protocols inspired the teams? At launch it seemed like there was a custom remote protocol[1]; has that stayed? What drove the protocol evolution/change over time? What's new & changed?
* What do the storage arrays look like; are they also PC-based? Or do the flash arrays connect via accelerators too? Are these FPGA-based or hard silicon? Are there standard flash controllers in use, or is this custom? How many channels of flash will one accelerator have connected to it? How much has the storage array architecture changed since Nitro was first introduced? Do the latest-gen Nitro and older EBS storage share the same implementation, or is newer EBS storage evolving more freely now?
* On a PC, an SSD is really an abstraction hiding dozens of flash channels. There have been efforts like Open Channel SSDs and now zoned namespaces to give hosts more direct access to the individual channels. Does the Nitro accelerator connect to a single "endpoint" per EBS volume, or is the accelerator fanning out, connecting to multiple endpoints or multiple channels and doing some interleaving itself?
* What are some of the flash-translation optimizations & wins that the team/teams have found?
And simply: * How on earth can hosts have so much networking/nitro throughput available to them?! It feels like there's got to be multiple 400Gbit connections going to hosts today. And all connected via Nitro accelerators?
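For that first question, the guest-visible side at least is easy to check: on current Nitro instances both EBS volumes and local instance store appear as plain NVMe controllers, and the model string hints at what sits behind each one. A minimal sketch, assuming a Linux guest with sysfs mounted; the exact model strings may vary by instance generation:

```python
# Minimal sketch: list the NVMe controllers a guest sees and print their
# model strings. On Nitro-based instances, EBS volumes typically report
# "Amazon Elastic Block Store" and local instance storage reports
# "Amazon EC2 NVMe Instance Storage" (exact strings may vary).
import glob
import os

def list_nvme_controllers(sysfs_root="/sys/class/nvme"):
    controllers = []
    for path in sorted(glob.glob(os.path.join(sysfs_root, "nvme*"))):
        def read(name):
            try:
                with open(os.path.join(path, name)) as f:
                    return f.read().strip()
            except OSError:
                return "unknown"
        controllers.append({
            "controller": os.path.basename(path),
            "model": read("model"),
            "serial": read("serial"),
            # PCI address of the (virtual) function backing this controller
            "pci": os.path.basename(os.path.realpath(os.path.join(path, "device"))),
        })
    return controllers

if __name__ == "__main__":
    for ctrl in list_nvme_controllers():
        print(f"{ctrl['controller']}: model={ctrl['model']!r} "
              f"serial={ctrl['serial']!r} pci={ctrl['pci']}")
```

What can't be seen from there is everything beneath that NVMe facade, which is what the questions above are really about.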
It's just incredibly exciting stuff; there's so much super interesting work going on, & I am so full of questions! I was a huge fan of the SeaMicro accelerators of yore, an early integrated network-attached device accelerator. Getting to work at such scale and build such high-performance, well-integrated systems seems like it has so many fascinating subproblems to it.
[1]
https://www.youtube.com/watch?v=e8DVmwj3OEs#t=11m58s
> * How on earth can hosts have so much networking/nitro throughput available to them?!
I feel this is something overlooked when people complain about the egress fees
If you have an existing EC2 instance with EBS storage and want to convert it to the new Nitro SSD, what will be the process for migration? E.g. a live swapping of attached storage devices, a quick reboot, or spinning up a new instance?
The Nitro SSDs are currently used as instance storage, directly attached to particular EC2 instances.
Thanks for the response. To clarify, does this mean that only some EC2 instances will be eligible (i.e. if I have an older EC2 instance I will have to re-create it)?
Nitro SSDs appear to only be available on specific new instance types, like the just-announced Im4gn and Is4gen.
Are there plans to provide Metal instances with these new SSDs?
I don't know one way or the other, but great question. I prefer launching stuff to hinting about it :-)
Fair enough, and good luck with the rest of re:Invent
I'd like to see P99.9 and MAX latency for certain read and write patterns. More concretely, a before-and-after comparison for a specific workload would be even better.
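If anyone wants to collect those numbers themselves, one low-effort way is to run fio with JSON output against the device and pull the completion-latency tail out of the report. A rough sketch, assuming a reasonably recent fio whose JSON report exposes nanosecond "clat_ns" percentiles (older versions report microsecond "clat" instead):

```python
# Rough sketch: feed this a `fio --output-format=json ...` report and it
# prints p99, p99.9, and max completion latency for reads and writes.
import json
import sys

def report(job):
    for op in ("read", "write"):
        clat = job.get(op, {}).get("clat_ns", {})
        if not clat:
            continue
        pct = clat.get("percentile", {})
        to_us = lambda ns: ns / 1000.0  # nanoseconds -> microseconds
        print(f"{job.get('jobname', '?')} {op}: "
              f"p99={to_us(pct.get('99.000000', 0)):.1f}us "
              f"p99.9={to_us(pct.get('99.900000', 0)):.1f}us "
              f"max={to_us(clat.get('max', 0)):.1f}us")

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        data = json.load(f)
    for job in data.get("jobs", []):
        report(job)
```

Run the same job file against an i3 and an Im4gn/Is4gen instance-store volume and the tails should be directly comparable.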
> _A huge amount of the work & innovation here is extremely specific and extremely private; all this Elastic Fabric Adapter related stuff is advanced systems engineering and close integration of systems that is Amazon's & Amazon's alone._
You speak my mind:
https://news.ycombinator.com/item?id=19162376
(from 3yrs ago)
Intel has stopped disclosing a lot of details on their newer products, probably because they're no longer far and away the market leader. I think if AWS ever develops a 4-5 year lead over everyone else we'll see similar disclosures out of them. Facebook publishes a lot of info about Oculus asynchronous reprojection techniques and computer vision because they have a 2/3 marketshare in VR.
One question I have is, I thought the cloud was supposed to abstract this kind of stuff away? Shouldn't cloud services be sold in the "solution domain" rather than by picking the backing technology behind your tool?
For example, why not have a file/object/whatever storage service; and a price matrix that lets you select key metrics like latency, throughput, and variability of either?
I don't particularly care if my ultra-fast, ultra-low-latency storage is derived from SSDs, spinning rust, RAM, L2 cache, or acoustic ripples. But I'm not super in tune with cloud services to begin with.
There are effectively three tiers: managed storage services like S3 (object storage), EFS (NFS NAS), or FSX (clustered filesystem) where most of the decisions are made for you; the mid-level EBS (SAN) service; and storage-optimized instance types with local disks which you manage.
This custom SSD hardware family is what powers the EBS (cloud SAN) service, which allows you to pay for the performance level you need, where they give it both higher absolute performance and [now] better worst-case latency.
This announcement is saying that you can now get your own instances with the same performance characteristics for situations where you need better performance than a SAN can deliver and/or the robustness benefits of using per-node storage rather than a separate networked service.
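To make the "pay for the performance level you need" part concrete: on the EBS tier you choose the performance knobs explicitly when creating the volume, while on the instance-storage tier the local Nitro SSDs simply come with the instance type. A hedged boto3 sketch; the size, IOPS, throughput, and AMI values below are placeholders, not recommendations:

```python
# Sketch of the two lower tiers: a gp3 EBS volume with explicitly provisioned
# performance vs. a storage-optimized instance whose local NVMe comes built in.
# All concrete values here are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Mid tier: network-attached gp3 volume, IOPS/throughput chosen independently of size.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,           # GiB
    VolumeType="gp3",
    Iops=6000,
    Throughput=500,     # MiB/s
)
print("created", volume["VolumeId"])

# Bottom tier: a storage-optimized instance (e.g. Im4gn); the local NVMe
# instance store is part of the instance type, nothing to provision separately.
launched = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="im4gn.4xlarge",
    MinCount=1,
    MaxCount=1,
)
print("launched", launched["Instances"][0]["InstanceId"])
```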
The other part of this announcement is the implicit message it sends about the competition: they're telling everyone that their storage performance is more consistent than their competitors and increasing the number of areas where they can say they have an option which a competitor does not. Noting that this was driven by the EBS storage team is also a reminder that they have more people working on lower-level infrastructure problems than you likely do.
I think it comes down to the fact that at the end of the day, your software runs on real hardware, which isn't perfect. So rather than hide these imperfections behind an opaque surface, AWS lets you peek behind the scenes to optimize your software, debug issues, etc. It's really useful if you're working at a large scale.
They also have things like Lightsail if you don't care about the details and just want the packaged solution.
They already do. But some customers want to more finely control the various trade offs with different technology implementations, and services like this allow them to do so. Everyone else can keep using what they already have.
But it is abstracted away: you just need to define your IOPS requirements, then pick the cheapest volume type that can satisfy them. You wouldn't have to choose if SSD were as cheap as HDD, but alas we're not there yet.
Compare that to life before the cloud, when you would have to choose the vendor, the size of your drives, and how many drives, then go through the procurement process months in advance.
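As a toy illustration of "define your IOPS requirement, then pick the cheapest volume type that satisfies it" (the prices and IOPS ceilings below are made-up placeholders, not real AWS pricing; the point is only the selection logic):

```python
# Toy selection logic: cheapest volume type that can meet a per-volume IOPS
# requirement. Prices and limits are illustrative placeholders only.
VOLUME_TYPES = [
    # (name, max_iops_per_volume, usd_per_gb_month)
    ("sc1", 250,   0.015),
    ("st1", 500,   0.045),
    ("gp3", 16000, 0.080),
    ("io2", 64000, 0.125),
]

def cheapest_volume_type(required_iops, size_gb):
    candidates = [
        (name, size_gb * price_per_gb)
        for name, max_iops, price_per_gb in VOLUME_TYPES
        if max_iops >= required_iops
    ]
    if not candidates:
        return None  # nothing networked satisfies it: consider instance store
    return min(candidates, key=lambda c: c[1])

if __name__ == "__main__":
    # e.g. a 1 TB volume that needs 8k IOPS
    print(cheapest_volume_type(required_iops=8000, size_gb=1000))
```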
The cloud is 90% renting other people's servers and 10% "I just want to write an app". Based on the feedback I've seen on HN, that 10% quickly finds out that the inconsistent performance of the PaaS will eventually become an issue.
If I'm not mistaken, EBS (as in, Elastic Block Store) already allows this, but it often won't beat the latency of a local SSD.
Related: I just updated
so that it now includes the new instance types and you can compare them on a price and resource basis.
That site has become so slow to load and makes my CPU ramp up for several seconds; I have personally switched to better alternatives.
It's open source and we're looking for contributors. Let us know if you'd like to help out.
Can u name a better alternative?
_The second generation of AWS Nitro SSDs were designed to avoid latency spikes and deliver great I/O performance on real-world workloads. Our benchmarks show instances that use the AWS Nitro SSDs, such as the new Im4gn and Is4gen, deliver 75% lower latency variability than I3 instances, giving you more consistent performance._
Tl;dr: They now have custom SSD firmware that avoids latency spikes.
Directly between Armenian and Azerbaijani, Google translate should add AWS.
I was racking my brain trying to figure out what the parent comment had to do with Georgia. Seems like a dead brain day for me...
>Select a language to translate from:
>Armenian
>AWS
>Azerbaijani
The whole comment quoted by ksec sounded like some techno babble. I assume this was david's point.
>techno babble
Really? I am genuinely curious. I reread my quoted part and don't see what is techno babble.
That'd give them the opportunity to put an ad for google cloud right above it!
No
The issue with all AWS storage is that storage bandwidth eats your network bandwidth. And there are poorly documented, multi-level throttles and bottlenecks involved in that.
Especially in the "Up to X per second" networking instances, which is basically all of them except the huge ones.
The activation of throttles is NOT well exposed in metrics, nor is the remaining burst capacity or whether bursting is currently occurring.
It is all somewhat shady IMO, with AWS trying to hide problems with their platform, or hide that you're getting charged in lots of sneaky ways.
Most instances are EBS-optimized and have dedicated bandwidth for EBS, an optimized stack, etc. [1]
[1]
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-opti...
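Some of the burst state is also queryable from CloudWatch, even if it isn't surfaced very prominently: gp2/st1/sc1 volumes publish a BurstBalance metric, and many Nitro instances publish EBSIOBalance% / EBSByteBalance%. A rough boto3 sketch (the volume ID is a placeholder):

```python
# Rough sketch: pull the last hour of EBS BurstBalance from CloudWatch to see
# how close a gp2/st1/sc1 volume is to being throttled. Volume ID is a placeholder.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="BurstBalance",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                 # 5-minute datapoints
    Statistics=["Minimum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Minimum']:.1f}%")
```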
Hi Jeff, is this AWS custom SSD based on their own SSD controller and firmware rather than a commercial SSD?
_We also took advantage of our database expertise and built a very sophisticated, power-fail-safe journal-based database into the SSD firmware._
Assuming this means something similar to QLDB, did they put a centralized blockchain in the firmware? Pretty cool.
Err… what? Is any database now a centralized blockchain? I don't even get from the sentence that this is using any kind of cryptographic verification (though it might).
I think we can probably think more along the lines of the postgres wal (write ahead log) and _journaling_ file systems here.
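To make the distinction concrete, a journal in this sense is just an append-only, checksummed log that gets replayed after a crash; there's no hash-chaining involved. A toy sketch of the idea (purely illustrative, and of course nothing to do with what AWS actually put in the Nitro SSD firmware):

```python
# Toy write-ahead journal: append length+CRC-framed records, fsync for
# durability, and on recovery replay intact records, stopping at the first
# torn or corrupt one. Purely illustrative.
import os
import struct
import zlib

HEADER = struct.Struct("<II")  # (payload_length, crc32)

def append(journal_path, payload: bytes):
    with open(journal_path, "ab") as f:
        f.write(HEADER.pack(len(payload), zlib.crc32(payload)))
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # durability point: survives power loss after this

def replay(journal_path):
    """Yield every intact record; stop at the first torn or corrupt one."""
    try:
        f = open(journal_path, "rb")
    except FileNotFoundError:
        return
    with f:
        while True:
            header = f.read(HEADER.size)
            if len(header) < HEADER.size:
                return
            length, crc = HEADER.unpack(header)
            payload = f.read(length)
            if len(payload) < length or zlib.crc32(payload) != crc:
                return  # torn write at the tail: ignore it
            yield payload

if __name__ == "__main__":
    append("demo.journal", b"set key=value")
    print([record.decode() for record in replay("demo.journal")])
```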
I think a lot of younger people and crypto-influenced people today think of "journaling" as "like a blockchain" based on the overlaps (not the cryptographic portion, the "blocks" half).
I dislike crypto and have argued against it many times just on this site. But other than financial audit DBs, I haven't seen many use cases for a cryptographic ledger DB. AWS's own ledger DB, QLDB, was discussed in the context of blockchains on HN when it was announced several years ago.