AMD Reveals the Radeon RX 6000 Series, Coming November 18th

Author: jsheard

Score: 455

Comments: 406

Date: 2020-10-28 16:30:12

Web Link

________________________________________________________________________________

mastax wrote at 2020-10-28 20:35:36:

Rumors were that AMD wanted to push the performance and price so they weren't stuck being the discount option forever. This looks like it'll do that. Performance and power efficiency look good, and they implemented most of the Nvidia early adopter features. If they want to really compete with Nvidia as a premium option they'll need to match the non-performance features that Nvidia has:

- Driver stability. Nvidia is not perfect but the 5700 series was awful from AMD (had to return mine). They need to at least match Nvidia.

- DLSS. It started out as a gimmick but with DLSS 2.0 it's just 70%+ free performance improvement with the only downside being somewhat limited game support.

- Video Encoder. Nvidia's encoder has very high quality and is well supported by streaming and recording software. I wonder what sort of improvements the RX 6000 series has?

- CUDA, Tensor Cores, Ansel, etc. I don't really use these things but if I'm paying the same amount I want similar capabilities.

It's kinda crazy that they're using the same silicon in all three cards; the 6800 has 1/4 of the CUs disabled! It is a big chip for 7nm, though. AMD has said that availability won't be an issue (which could give them a leg up on Nvidia in the near term), but I do have to wonder about it.

The 80CU 6900XT was rumored months ago and there was a lot of speculation and incredulity about the memory system needed to feed all those cores. GDDR6X is Nvidia-exclusive, so some said a 512-bit bus, or HBM2, or even a combination of HBM and GDDR6. The big cache is an interesting solution, and I'm curious how it'll perform. It makes sense in a gaming context where a lot of buffers and textures get reused over the course of a frame. I'm a bit worried about tail latency affecting the 99th-percentile frame times, which have a huge impact on the subjective experience of smoothness. It's also bad for massively parallel compute work.

PaulKeeble wrote at 2020-10-28 20:50:12:

All of that bothers me, certainly, but I have two further concerns based on the types of games I play. The first is Minecraft: AMD has atrocious, abandoned OpenGL drivers, and Minecraft regularly fails to reach 60 fps on AMD cards where it can get to 500+ on Nvidia; it's one of the most played games on the planet. The second is virtual reality: AMD has lagged behind in VR technology, and doing things smarter with support for foveated rendering and multi-pass optimisation is really important for making hard-to-run games playable.

Raw performance looks fine; it's all the lack of news on the features and the zero acknowledgement that people play games other than the latest AAA titles that get benchmarked, and those need to run well too. Not to mention all the bugs: if I buy an AMD card I can guarantee, based on history, that something weird will happen over the life of the card. It always has for the past 20+ years; I haven't yet had a clean experience with an ATI/AMD card, and I have owned a bunch of them.

What I really need AMD to say and do is acknowledge how important a good customer experience is with their driver stack, that every game and API matters, and that they understand customer experience is reality and are now committing the bad old days to history for good. Then they can sell me a high-priced card; until they do, it's got to be a lot cheaper to be worth the risk.

snvzz wrote at 2020-10-28 20:57:06:

>The first is Minecraft, AMD has atrocious abandoned OpenGL drivers

Try that under Linux sometime. It's a different song there thanks to Mesa.

>opengl

It is not the most popular API, and AMD had, until recently, limited economic resources, which they mostly directed towards their CPU team and towards hardware. For their proprietary Windows efforts, they needed laser focus on what's most important: the newest and most popular APIs.

There's money these days, and they have multiplied the size of their driver team, but money spent on OpenGL would indeed be wasted, as there's a pretty compelling solution for this API from the Mesa guys, in the form of an OpenGL implementation that sits on top of Vulkan: Zink.

I expect future GPUs and drivers (for all platforms) to simply not bother with OpenGL and ship Zink instead.

my123 wrote at 2020-10-28 23:49:41:

Microsoft will ship GLon12, which is being developed as a part of Mesa.

mastax wrote at 2020-10-29 06:15:49:

Wow, how did I miss this news‽

https://devblogs.microsoft.com/directx/in-the-works-opencl-a...

kllrnohj wrote at 2020-10-28 22:06:13:

> The first is Minecraft, AMD has atrocious abandoned OpenGL drivers and Minecraft regularly fails to reach 60 fps on AMD cards where it can get to 500+ on Nvidia, its one of the most played games on the planet.

I'm really not finding anything that supports this claim, can you substantiate it? I see scattered sub-60fps reports in Minecraft for both Nvidia & AMD. It seems to just be a "minecraft's engine is shit" thing, not an AMD thing?

But I'm not seeing any actual comparisons or head-to-heads or anything showing AMD in particular is struggling with Minecraft in particular. This benchmark of unknown quality is the only thing I'm finding for Minecraft:

https://www.gpucheck.com/game-gpu/minecraft/amd-radeon-rx-57...

and it's definitely not struggling to hit 500 fps numbers at 1080p, either.

And other OpenGL games & apps are not showing any severe AMD issues, although the only significant sample of those is on Linux, which is a different OS. So there certainly doesn't appear to be anything _systemically_ wrong with AMD's OpenGL as you're claiming, unless for some reason they just never brought the much better Linux OpenGL driver over to Windows, which would be highly unexpected & definitely something that needs evidence to support.

> zero acknowledgement that people play games other than the latest AAA games

This is just completely false and seems to just be because your pet peeve is Minecraft in particular? Both Nvidia & AMD discuss popular games not just the latest AAA games. Usually in their latency section, since that's where it comes up as more relevant (esports games), but it's definitely not an _ignored_ category entirely.

oatmealsnap wrote at 2020-10-29 04:12:12:

Yea, I get 60fps @1440p on an 8 year old AMD GPU, no issues. I haven't checked the actual frame rate, but I've not noticed any dropped frames.

gpderetta wrote at 2020-10-29 00:42:57:

The significantly better Linux OpenGL drivers are from Mesa, not AMD.

floatboth wrote at 2020-10-29 16:19:49:

Guess who maintains RadeonSI in Mesa — it's all @amd.com people :)

(Which is not the case for RADV, the Vulkan implementation)

kllrnohj wrote at 2020-10-29 02:06:45:

No. Mesa's "drivers" are either a software fallback or support infrastructure for some open-source drivers.

When you're using AMD's or Nvidia's proprietary drivers, which are way faster, you're not using anything interesting from Mesa.

EDIT: Also Mesa's open source Radeon driver came from AMD.

rhn_mk1 wrote at 2020-10-29 07:17:20:

In the case of AMD, the open-source driver comes from AMD itself. The proprietary driver replaces Mesa, but builds on top of the same kernel portion.

zamadatix wrote at 2020-10-28 22:53:07:

I'm not really worried about Windows OpenGL performance; even the new Minecraft edition (which performs enormously better all around) uses DirectX. Browsers use ANGLE to convert GL to DirectX for a reason: it's a crap API all around on Windows (not that it had to be, but that's how it is).

The driver bugs are what I'm most worried about though. I remember when I got a 4670 in 2008 there was a cursor corruption bug (google it, it's common) and if you didn't shake the mouse enough for it to fix itself it'd crash. Then I built a new PC with a 5770 and I still had it. Later a new PC with a 6950 and still had it. Then 2x280X and still had it. It even occurred on Linux once. Then I went Nvidia for a few generations and it seems AMD finally fixed it. It seems fixed now (maybe because of new hardware) based on internet searches but from 2008-2015 the bug was there. And this is just one example of things.

Not that Nvidia drivers have ever been perfect but I never ran into this kind of shit so often for so long on their cards. Hopefully I don't end up regretting the 6900 XT because of it.

Aerroon wrote at 2020-10-29 00:31:33:

>_Later a new PC with a 6950 and still had it._

This cursor corruption bug was ridiculous. I ended up not playing some games (I think Dota) because I would get this bug. I went with an Nvidia card as my next one because of that as well.

It's interesting how something that seems relatively minor could have such an impact on purchasing decisions.

numpad0 wrote at 2020-10-29 04:34:30:

Is that an actual bug? Sounds like potentially a VRAM thermal issue.

csande17 wrote at 2020-10-29 04:57:20:

This used to be a pretty common bug, I think. The cursor is often drawn using a special "hardware mouse cursor" feature specifically designed to make a tiny bitmap move around the screen as efficiently as possible. It'd be easy for a driver to mess that up and cause the cursor -- and ONLY the cursor -- to look garbled.

You can see problems today like the mouse cursor not respecting Windows 10's night-shift color filter for the same reason.
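
To make the failure mode concrete, here's a toy Python sketch (purely illustrative, not real driver code; all names are made up) of why a tint applied to the framebuffer can miss a hardware cursor plane that gets overlaid later at scanout:

    import numpy as np

    def scanout(framebuffer, cursor, cursor_pos, tint):
        """Toy model of scanout with a hardware cursor plane.

        The desktop image gets tinted (think a night-shift warm filter), but the
        cursor lives on a separate hardware plane composited afterwards, so it
        stays untinted unless the driver remembers to tint it separately."""
        frame = framebuffer * tint              # tint applied to the desktop only
        y, x = cursor_pos
        h, w, _ = cursor.shape
        out = frame.copy()
        out[y:y + h, x:x + w] = cursor          # hardware overlay bypasses the tint
        return out

    desktop = np.ones((1080, 1920, 3))          # plain white desktop
    pointer = np.ones((32, 32, 3))              # plain white cursor bitmap
    warm = np.array([1.0, 0.85, 0.6])           # night-shift style tint
    result = scanout(desktop, pointer, (500, 900), warm)
    # The desktop comes out warm-tinted; the 32x32 cursor block stays pure white.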

throw_away wrote at 2020-10-30 15:32:08:

On a thread yesterday I said that Windows' settings and lack of native keyboard remapping (without registry editing) were the most ridiculous things about it.

The pointer not respecting night-shift completes the trifecta. There's a registry fix, of course, but the first time I saw it, I thought "huh, that's weird". How did this get out without somebody on the release train thinking the exact same thing?

Insanity wrote at 2020-10-29 11:56:52:

Cool, I'd never heard of that hardware mouse cursor feature!

It makes sense to optimize that, though, since the mouse position is usually updated very frequently.

officeplant wrote at 2020-10-29 13:38:17:

Obviously with the range of settings and mods out there I'm not sure of everyone's Minecraft experience on AMD.

But my simple 3400G-based rig using the integrated Vega 11 plays stock Minecraft just fine @ 1440p and a 20-chunk viewing distance with everything else set to max.

dmayle wrote at 2020-10-28 21:28:23:

I understand that Mesa (with the Zink OpenGL on top of Vulkan implementation) can be compiled for Windows. If you care that much, maybe give it a shot.

TwoBit wrote at 2020-10-28 21:48:45:

I'm sorry, but OpenGL is dead on Windows. There are exceedingly few apps written against OpenGL, and if you write such an app then don't be surprised by the result. Vulkan is the portable solution, of course.

eloop wrote at 2020-10-28 23:58:09:

Really? I think the professional 3d software community would be surprised to hear this. Huge software suites with decades of opengl development.

pjmlp wrote at 2020-10-29 12:45:48:

Many of them have added DirectX backends in the meantime, and they will eventually have to either rewrite or settle for the OpenGL-on-DirectX 12 work Microsoft is doing.

The old ICD interfaces that OpenGL drivers rely on might not be around forever, and so far Windows doesn't support them in sandboxed processes.

pojntfx wrote at 2020-10-29 09:57:06:

> Driver Stability

Ironically, the exact opposite is the reason I use AMD on Linux. It "just works" without installing anything and everything is super stable, while Nvidia is a huge proprietary mess.

peatmoss wrote at 2020-10-29 14:07:44:

I replaced an NVidia 1060 with a 5700XT and also feel it’s been a major improvement on Linux. Ubuntu mostly handles the NVidia driver installation and upgrades, but only mostly. Having the card supported in the kernel is excellent.

That, and Proton in Steam have made it possible to play some old favorite games as well as new titles with impressive performance.

I will say that real-time ray tracing has me at least considering one of those new cards...

Teknoman117 wrote at 2020-10-29 18:40:42:

My big thing was that the AMD drivers support all of the Linux kernel graphics features, things like DRM (the Direct Rendering Manager) and KMS. One of the neat things about the former is that it allows interprocess graphics buffer sharing. One example of where they're super useful is in all of the browsers that run their rendering process separately from their window process, Chrome and Firefox for example. You can just send a handle to the graphics buffer across a unix socket rather than having to use something like ANGLE (Chrome) and actually send the graphics commands across some IPC mechanism.

This is also one of the mechanisms that Wayland is built on top of.
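
For anyone curious what "send a handle to the graphics buffer across a unix socket" looks like in practice, here's a minimal sketch of SCM_RIGHTS file-descriptor passing, the general mechanism dma-buf sharing relies on (the temp file is just a stand-in for a real buffer fd, purely for illustration):

    import array
    import socket

    def send_fd(sock, fd):
        # One byte of payload plus the fd as SCM_RIGHTS ancillary data.
        fds = array.array("i", [fd])
        sock.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

    def recv_fd(sock):
        # Receive the payload and pull the duplicated fd out of the ancillary data.
        fds = array.array("i")
        msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(fds.itemsize))
        for level, ctype, data in ancdata:
            if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
                fds.frombytes(data[:fds.itemsize])
                return fds[0]
        raise RuntimeError("no file descriptor received")

    parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    with open("/tmp/fake-dma-buf", "w+b") as f:  # stand-in for a real dma-buf fd
        send_fd(parent, f.fileno())
        print("receiver got fd", recv_fd(child))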

m-p-3 wrote at 2020-10-28 20:44:02:

> CUDA, Tensor Cores, Ansel, etc. I don't really use these things but if I'm paying the same amount I want similar capabilities.

CUDA might be a bit tough. It's hard for an equivalent to catch on if everyone is doing CUDA. It's the chicken-and-egg problem. There's ROCm* if that matters.

*:

https://github.com/RadeonOpenCompute/ROCm

ZekeSulastin wrote at 2020-10-28 21:07:28:

I thought ROCm didn't support RDNA1 as of yet, let alone RDNA2.

my123 wrote at 2020-10-28 23:51:14:

Yes, it still doesn't... (officially) There is some support that was contributed by the community but it's not complete.

black_puppydog wrote at 2020-10-29 00:53:37:

That's too bad. I'm gonna start a workstation build, and even though it's only gonna do deep learning for prototyping, fiddling with low-level stuff is just a non-starter compared to spinning up an Anaconda env.

Any chance AMD would hire some engineers to upstream rocm support into tensorflow/pytorch?

dathinab wrote at 2020-10-28 21:02:23:

Honestly, at least for now, for the desktop market it doesn't matter that much.

Software devs might consider it as a factor because they might want to play around with it, but for everyone else it's an irrelevant feature.

RavlaAlvar wrote at 2020-10-29 01:32:31:

They are 10 years behind; I doubt they will ever be able to catch up.

Tom4hawk wrote at 2020-10-29 17:05:28:

People claimed similar things about AMD CPUs and GPUs ;)

M277 wrote at 2020-10-28 20:38:22:

Am I the only one who notices a clear difference between DLSS 2 and the native image? I get that "native" is also flawed due to TAA, but DLSS still has some work to do with regard to image clarity, IMHO.

PaulKeeble wrote at 2020-10-28 21:26:40:

It is one of those technologies whose effect is very well hidden behind YouTube's compression. Given how aggressive compression is on YouTube, the gap between what it looks like in front of you versus on the tube has widened quite a bit. DLSS seems to be indistinguishable on YouTube, but in front of you on a monitor it clearly introduces some odd artifacts and does not look native. The gain in performance is usually worth it, but it's definitely not just magic free performance with no impact on image quality.

dragontamer wrote at 2020-10-28 20:47:57:

I can see the visual downgrade.

IMO, VRS (variable rate shading), which is supported by all major GPU vendors (Nvidia, AMD, and even Intel iGPUs / Xe), provides the "upscaling-ish" performance gains that we want for the next generation.

It's much harder to see the difference between native and VRS.

https://devblogs.microsoft.com/directx/variable-rate-shading...

-----

It's not a "competitive" feature, because everyone is doing it. But it basically accomplishes the same thing: carefully shading at a coarser rate (one result per 2x2 block instead of per pixel) in locations where the gamer probably won't notice.
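
As a rough illustration of what "2x2 instead of 1x1" means (a toy model, not any real VRS API), coarse shading runs the expensive per-pixel work once per 2x2 block and broadcasts the result:

    import numpy as np

    def shade_full_rate(shade_fn, h, w):
        """Run the shading function once per pixel (1x1 rate)."""
        ys, xs = np.mgrid[0:h, 0:w]
        return shade_fn(ys, xs)

    def shade_coarse_2x2(shade_fn, h, w):
        """Run the shading function once per 2x2 block and broadcast the result,
        cutting the shading work to roughly a quarter."""
        ys, xs = np.mgrid[0:h:2, 0:w:2]
        coarse = shade_fn(ys, xs)
        return np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)[:h, :w]

    # Example "shader": a smooth gradient, where the coarse rate is hard to notice.
    gradient = lambda y, x: (x + y) / 2.0
    full = shade_full_rate(gradient, 8, 8)
    coarse = shade_coarse_2x2(gradient, 8, 8)
    print(np.max(np.abs(full - coarse)))   # small worst-case error on smooth content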

izacus wrote at 2020-10-28 21:12:35:

Well of course you'll notice, but do you notice it more than if you need to drop to a non-native resolution because your card keeps dipping below the screen's refresh rate?

Because that's where the difference gets important. If I compare to my PS4 Pro, many games "look" better because they drive the UI at full native resolution while using checkerboard rendering to keep the framerate up. The same game on my 970GTX will chug, drop frames or have all the text and UI look blurry because I need to drop the resolution.

If DLSS 2.0 fixes this issue, it's a massive improvement.

kllrnohj wrote at 2020-10-28 21:54:41:

DLSS 2.0 is "just" an upscaler. There are a ton of techniques for upscaling, and a lot are way better than a shitty bilinear filter. Temporal upscaling, for example, is also a thing, and is very good (https://youtu.be/A2A-rhCQCuY?t=605). Upscaling in general has been a huge area of technique improvements & innovations thanks to the Xbox One & PS4 struggling so hard with 4K ambitions.

DLSS 2.0 is a great addition to that field, but it's _far_ from being the only good option here. AMD has also done things in this area, like Radeon Boost, which dynamically reduces resolution in response to player input: if you're sitting still-ish you're OK sacrificing FPS for better quality, but as soon as you start whipping about you want the FPS more. Game engines also do similar things, and combined with temporal upscaling & antialiasing they are quite compelling.

gambiting wrote at 2020-10-28 23:40:09:

For me personally, in Control, 720p upscaled with DLSS to 1080p looks better than the native 1080p presentation. I say that with a hand on my heart: the DLSS image is cleaner, the lines are sharper, and the texture filtering is just vastly superior. I have no idea how they are doing it, because it seems to be pure black magic; the traditional logic of "upscaled can never look as good as native" has been completely thrown out of the window. It doesn't just look as good - it looks better, in my opinion.

kevingadd wrote at 2020-10-29 00:37:56:

The magic essentially works because they're feeding in lots of hidden information for the upscale - the 1080p image is being constructed from a bunch of 720p frames plus depth/motion vectors, etc. So it's natural that it is able to look better, and they wisely tap into the (mostly/entirely unused in games) tensor units on the GPU to do it without eating too much into your shader performance.
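
A heavily simplified sketch of that general idea, temporal accumulation with motion vectors (this is not DLSS or any NVIDIA code; the function and the 2x scale factor are made up for illustration): warp last frame's high-resolution history along the motion vectors, then blend in an upsample of the new low-resolution frame, so sub-pixel detail accumulates over several frames.

    import numpy as np

    def reproject_accumulate(history_hi, cur_lo, motion_hi, alpha=0.1):
        """Toy temporal upscaler: warp the previous high-res result using the
        per-pixel motion vectors, then blend in a nearest-neighbour upsample of
        the new low-res frame. Over many frames the history gathers detail that
        no single low-res frame contains."""
        H, W = history_hi.shape
        ys, xs = np.mgrid[0:H, 0:W]
        # Where each output pixel was located in the previous frame.
        src_y = np.clip(ys - motion_hi[..., 0], 0, H - 1).astype(int)
        src_x = np.clip(xs - motion_hi[..., 1], 0, W - 1).astype(int)
        warped_history = history_hi[src_y, src_x]
        upsampled = np.repeat(np.repeat(cur_lo, 2, axis=0), 2, axis=1)
        return (1 - alpha) * warped_history + alpha * upsampled

    # 2x upscale for simplicity (DLSS itself uses other ratios, e.g. 720p->1080p).
    history = np.zeros((1080, 1920))
    current = np.random.rand(540, 960)
    motion = np.zeros((1080, 1920, 2))   # static camera: zero motion everywhere
    history = reproject_accumulate(history, current, motion)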

Insanity wrote at 2020-10-29 12:01:17:

I'm sorry, but I don't understand how it's natural that this could make it look better than the native version.

Care to explain more or have some good resources where I can learn about it by myself?

dathinab wrote at 2020-10-28 21:03:31:

What I had been wondering about was: what if you put DLSS-like technology into the screen itself?

ulber wrote at 2020-10-28 21:41:23:

DLSS 2.0 requires motion vectors as an input to re-project the previous frame, so an equivalent of that at least can't be implemented in the display.

DLSS is also a rather heavy computation, taking 1-2 ms per frame on high-end GPUs. That's a serious amount of processing power to put into a display, and doing so makes it exclusive to that use. On the GPU, those tensor cores can be used for other tasks when they're not doing DLSS.

Dahoon wrote at 2020-10-29 02:58:25:

You'd need to add Nvidia's supercomputer to the screen too, then.

Kaze404 wrote at 2020-10-29 01:18:26:

> - Driver stability. Nvidia is not perfect but the 5700 series was awful from AMD (had to return mine). They need to at least match Nvidia.

Seriously. I have the RX 5700 XT and at least once a week the driver crashes with the same error on `dmesg`, with the only solution being to reboot the machine because the 5700 XT for some ungodly reason doesn't support software reset. I love the card and I feel like I got what I paid for in terms of performance, but the driver instability is absurd.

tim-- wrote at 2020-10-29 06:05:30:

You might want to see this:

https://github.com/rogeriomm/proxmox-pve-kernel-amd-reset/bl...

Kaze404 wrote at 2020-10-29 16:00:58:

I'm aware of the patch and have used it previously on my VFIO endeavors. It worked surprisingly well for me, but unfortunately for a lot of cases it simply doesn't, according to the author on the VFIO Discord. It's disappointing to say the least.

alasdair_ wrote at 2020-10-28 21:42:30:

>- DLSS. It started out as a gimmick but with DLSS 2.0 it's just 70%+ free performance improvement with the only downside being somewhat limited game support.

Watch Dogs Legion comes out tomorrow and I've been benchmarking it today, as have many others in the subreddit. DLSS is sometimes improving things but is also quite often leading to worse performance, especially if RTX is set to ultra. I have no idea if the issue is game specific or not but I'd be curious to know which games use it well.

ulber wrote at 2020-10-28 21:52:05:

4K DLSS takes something like 1.5ms to compute on a 2080 Ti IIRC, so the drop in render time has to be at least that to give any improvement. So it's quite situational and doesn't help when, for example, the frame rate is already high. Ray tracing at high settings would be expected to be one of those situations where it helps, though, so something might just be broken.
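
The break-even reasoning can be written down directly; the numbers below are just the rough figures from this thread, not measurements:

    def dlss_helps(native_ms, internal_ms, upscale_ms=1.5):
        """DLSS only wins if the time saved by rendering at the lower internal
        resolution exceeds the fixed cost of the upscale pass."""
        return internal_ms + upscale_ms < native_ms

    # Heavy ray-traced scene: 25 ms native vs 14 ms internal -> clear win.
    print(dlss_helps(native_ms=25.0, internal_ms=14.0))   # True
    # Already at ~300 fps: 3.3 ms native vs 2.4 ms internal -> the upscale cost
    # eats the saving.
    print(dlss_helps(native_ms=3.3, internal_ms=2.4))     # False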

zionic wrote at 2020-10-28 21:53:44:

That _has_ to be game specific. Every other game sees major wins with DLSS2.0+

alasdair_ wrote at 2020-10-31 04:37:15:

Apparently the issue is that the game is poorly optimized.

In addition, my CPU seemed to be overheating and was likely throttled. I tracked the issue down to my new RTX 3090 which was dumping heat inside the case in a noticeable way.

I re-applied thermal paste to my CPU and replaced my watercooler fans (due to rattling) and I’ve improved fps by 15%.

The latest nvidia driver also increased fps by 12%.

p1necone wrote at 2020-10-29 00:28:06:

Given the framerates they're showing games running at in 4K without a DLSS equivalent, I'm not really sure it's necessary _yet_; maybe when stuff comes out that really pushes the hardware, though (and when they announce lower-end cards).

rocky1138 wrote at 2020-10-29 03:33:21:

Virtual reality is pushing the boundaries of performance.

p1necone wrote at 2020-10-30 02:46:07:

Are they though? Most VR games just lower their graphical fidelity enough to get similar or better performance than non-VR games, even on relatively mid-range hardware.

snvzz wrote at 2020-10-29 00:31:41:

And even then, higher performance would mean they can source their upscale from a higher resolution.

sundvor wrote at 2020-10-28 23:26:35:

Speaking of features: Do the new AMD GPUs feature SMP, Simultaneous Multi-Projection?

Use case: iRacing on triple screens, where it offloads the CPU in a CPU-limited title.

TheFlyingFish wrote at 2020-10-28 20:58:13:

The announcement briefly mentioned that they are working on a "super resolution" feature, but specifically what that is has so far been left as an exercise for the viewer. It sounds like it might be a competitor for DLSS, but only time will tell.

PolCPP wrote at 2020-10-28 21:43:46:

Which is a terrible name considering they already have a Virtual Super Resolution feature (https://www.amd.com/es/support/kb/faq/dh-010).

taurath wrote at 2020-10-28 16:43:55:

Big shot across the bow of Nvidia - the whole 30-series stock problem seems like a big miscalculation right now. Granted, it's still early as this was just announced, so no real-world benchmarks, but their 3090 competitor coming in at $500 under Nvidia makes the 3090 a really tough sell - not that a lot of people have even gotten one at this point. Rumors of Nvidia making Ti versions literal months after the 30-series launch are probably true.

tomerico wrote at 2020-10-28 16:57:06:

Addressing the price difference between RTX 3090 and 6900 XT:

The 3090 is priced at $1500 for its 24GB of RAM, which enables ML & rendering use cases (Nvidia is segmenting the market by RAM capacity).

AMD's 6900 XT has the same 16GB of RAM as the 6800 XT, with less support on the ML side. Their target market is gaming enthusiasts who want the absolute fastest GPU.

__alexs wrote at 2020-10-28 17:14:44:

The 3090 is priced at $1500 because it is designed for whales who want the best regardless of price.

snvzz wrote at 2020-10-28 20:05:37:

Except it isn't the best.

AMD performance is on par or slightly higher. Power efficiency is much higher.

And costs $500 less.

A slower card that uses way more power and costs $500 more is really hard to sell, even with NVIDIA's marketing team being as strong as it is. At those prices, few people are going to automatically buy the product without exploring their options.

zamadatix wrote at 2020-10-28 22:36:50:

Don't make the mistake of even thinking about perf/dollar for top-tier cards; they cost an additional $350-$800 for ~10% more performance. Like the parent comment said, being the fastest is the reason these cards get bought, not how good a deal they are. That being said, I think the 6900 XT is far better than Nvidia was hoping it'd be. The 1st-party benchmarks shown today had it on par with the 3090 (with "rage mode" and "smart access memory" enabled in the comparison), which is certainly something unexpected from AMD.

Still, it's more of a Zen 2 moment than a Zen 3 moment. From the lack of comments on RT performance compared to the competition (just that it was added to the hardware) it seems extremely unlikely the RT performance is at the same level. The cards also lack the dedicated inference hardware for features like DLSS or voice/video filtering. And the card still has less VRAM than the 3090. These are all minor, but if you put them together it seems really unlikely we'll call the 6900 XT the absolute best performing GPU of the generation, just like Zen 2 didn't topple Intel's claim of "best gaming CPU". We'll have to see 3rd-party reviews and benchmarks to find out for sure though. What it does represent is a huge upset in the 3070/3080 area where most cards are sold, and a hint that there may be a Zen 3 moment coming for GPUs in the next generation, where AMD really drives top-tier performance to a new level after long stagnation instead of "just" pulling dead even and coming close to taking the crown.

Personally (and this part isn't going to be reflective of the average person) I was going to be a 3090 whale, and I probably would still be if it weren't for Nvidia's shit stance on open drivers in Linux (one of my biggest gripes with my 2080 Ti). However, with AMD being so close this round and me not having liked DLSS or RT on the 2080 Ti, I'm willing to trade off for the 6900 XT. The $500 is a nice bonus but not really what's coming into play; like I said, if I were trying to get perf/dollar the 6800 XT makes WAY more sense. This is similar to what happened with Zen 2: I was planning on getting the better Intel CPU for the couple extra FPS, but I was fed up with Meltdown-type issues and Zen 2 was really damn close. Now I'm really excited for Zen 3 though :).

csdreamer7 wrote at 2020-10-29 21:33:53:

I definitely agree with you on the drivers. I will not even consider Nvidia. I have the same Electron-breaking bug on two different Nvidia cards (my now-dead 970 and the still-alive 650 Ti). It makes both VSCode and Atom almost unusable for me.

jl6 wrote at 2020-10-28 20:21:14:

> AMD performance is on par or slightly higher. Power efficiency is much higher.

Are there benchmarks showing this?

NikolaeVarius wrote at 2020-10-28 20:23:35:

Their benchmarks don't show settings. I want to know if they were using ray tracing and other high-end effects. The FPS by itself is meaningless.

snvzz wrote at 2020-10-28 21:00:38:

I've watched their presentation, and they do show which Presets (ultra, extreme, whatever) they used.

And they've obviously used the same preset for both their cards and the competitors'.

There's some more details in the press kit, but I do agree with the principle that decisions should be withheld until NDAs expire and third party benchmarks are available.

What's clear is that, with the information in hand, buying NVIDIA Ampere cards is simply not sensible. Waiting for RDNA2 reviews is.

NikolaeVarius wrote at 2020-10-28 21:19:22:

My bad on the specs, I was following along in a live thread, not watching the stream.

colejohnson66 wrote at 2020-10-28 17:29:08:

But is 8 GB of GDDR whatever worth almost $1000? If AMD can put 16 in at about $500, paying another $1000 for an extra 8 is IMHO outrageous. Nvidia is charging that much because _many_ gamers hold to the “Intel & Nvidia are the best” even if benchmarks say otherwise.

atty wrote at 2020-10-28 17:45:17:

I do machine learning work and research, and when I upgrade I will pay the Nvidia “premium” without hesitation for that extra RAM and CUDA. I really wish it wasn’t so one-sided, I’d love to have a choice.

Edit: should clarify that I’d really love to get a quadro or one of their data center cards, which aren’t gimped in certain non-gaming workloads... but I’m not made of money :)

shock wrote at 2020-10-28 17:57:13:

> I do machine learning work and research, and when I upgrade I will pay the Nvidia “premium” without hesitation

Would you still do it _without hesitation_ if the money was coming out of your own pocket?

Voloskaya wrote at 2020-10-28 18:32:11:

(Speaking for me) Yes. It's not 500$ extra for 8GB more, it's 500$ extra for being able to do ML at all basically.

AMD GPUs have near-zero support in major ML frameworks. There are some things coming out with ROCm and other niche projects, but most people in ML already have enough work dealing with model and framework problems that using experimental AMD support is probably a no-go.

Hell, if AMD had a card with 8GB ram more than nvidia, and for 500$ cheaper, I would still go with nvidia.

Everyone wishes AMD would step their game up w.r.t. ML workloads, but it's just not happening (yet); Nvidia has a complete monopoly there.

t-vi wrote at 2020-10-28 19:27:33:

I think PyTorch support is decent, I use it on my Radeon VII a lot.

- You can compile it yourself following https://lernapparat.de/pytorch-rocm/ (disclaimer: my own link)

- Arch has support out of the box:

https://aur.archlinux.org/packages/python-pytorch-rocm/

- AMD also has a ROCm docker with PyTorch (https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learni...)

- If you know the secret URL (which you can infer from the other URLs on the front page), there are also test wheels for rocm3.8.

So AMD got a lot of heat for not supporting Navi (RDNA) with ROCm, but it seems that they are weeding out the things keeping people from running it (https://github.com/ROCmSoftwarePlatform/pytorch/issues/718 and the links in that issue suggest gfx10 is almost there for rocBLAS and MIOpen). We'll see what ROCm 3.9 will bring and what the state of Big Navi is.
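
If you do try one of those ROCm builds, a quick sanity check helps; the ROCm wheels of PyTorch expose the GPU through the regular torch.cuda API, so a generic check like this (not tied to any particular wheel) is enough to confirm the card is visible:

    import torch

    print("PyTorch:", torch.__version__)
    print("HIP/ROCm build:", torch.version.hip)      # None on CUDA builds
    print("GPU visible:", torch.cuda.is_available())

    if torch.cuda.is_available():
        x = torch.randn(1024, 1024, device="cuda")   # lands on the Radeon card
        print((x @ x).sum().item())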

fluffything wrote at 2020-10-28 20:04:00:

I have a 5800 XT and I've just given up on ROCm support for it at this point.

Why would I get a Radeon VII when used nvidia cards for machine learning are extremely cheap, and then I don't have to worry about experimental stuff breaking one day before deadline lol

philjohn wrote at 2020-10-28 22:18:21:

Part of it is also AMD segmenting their gaming cards (RDNA) from their compute-focussed cards (CDNA).

Twirrim wrote at 2020-10-28 19:26:40:

> AMD GPUs have near zero support in major ML frameworks. There are some things coming out with Rocm and other niche things, but most people in ML already have enough work dealing with model and framework problems that using experimental AMD support is probably a no go.

_If_ AMD has made a sufficiently powerful GPU, that will add a lot of incentive to ML frameworks to support it. But it's going to have to be a big difference, I imagine.

Given how active AMD is in open source work, I'm a little surprised they haven't been throwing developers at ML frameworks.

NikolaeVarius wrote at 2020-10-28 19:43:56:

Takes time and money. NVIDIA has been tossing money and PhDs at this for over a decade

grp000 wrote at 2020-10-29 05:05:11:

With ML workloads being so ram intensive, more than a few times I wished AMD properly supported the right ML packages. Their cost/GB is such a good deal vs Nvidia.

sudosysgen wrote at 2020-10-28 22:22:22:

AMD had more powerful GPUs for compute than Nvidia for all but the last 2 cycles, and yet the framework support never materialized.

reilly3000 wrote at 2020-10-28 18:55:54:

Would PlaidML be a viable option for you?

https://github.com/plaidml/plaidml

Voloskaya wrote at 2020-10-28 19:41:36:

I don't think so:

* It seems there was not a single commit in the past ~6 months which by itself is already a deal breaker.

* Documentation is lackluster

* You need to use Keras, I use PyTorch. This is not a deal breaker, but a significant annoyance.

* Major features are still lacking. E.g. No support for quantization (afaik), which for me is fundamental.

* Most importantly there seem to be no major community around it.

It feels a bit bad to say that, because clearly a lot of work went into this project, and some people have to start adopting it to drive the momentum, but from an egoistical point of view, I just don't have the courage to deal with all the mess that comes with introducing an experimental layer in my workflow. Especially in ML, where stuff can still appear to "work" (as in, not crashing) despite major bugs in the underlying code, leading to days or weeks of lost work before realizing where the issue is.

my123 wrote at 2020-10-28 23:53:28:

PlaidML is getting a lot of work, just see the other branches. :)

vkaku wrote at 2020-10-28 20:32:38:

I've been on plaidml w/ keras for a bit, got over most bottlenecks for now.

Check out keras-helper. I made it to switch between various backend implementations which are not Nvidia-specific.

Pytorch may eventually need porting, but for now I don't need it. I've been trying out Coriander and DeepCL now but I decided to stick to Keras, which seems to be a decent compromise. Not using 2.4.x though, do not need it.

OpenCL based backends are cutting it for me, running production workloads without needing to install CUDA/ROCm is the best way to go.

fluffy87 wrote at 2020-10-29 07:21:05:

Why would a data scientist making >$200k/year (~$1k/work day) spend a single second of their time trying to work around something whose solution only costs $1.5k?

Spending a week/year working around ROCm would already cost you 5k$ plus the opportunity cost. For a whole team that’s a money sink.

exged wrote at 2020-10-29 08:58:00:

To be fair, it's not uncommon for an ML researcher / engineer to use tens (~$10/hr on cloud, $100k from Nvidia) or even hundreds (~$100/hr on cloud, $1M from Nvidia) of GPUs to speed up their iteration time. If there were a way to spend half as much on hundreds of AMD GPUs instead, that would be a huge win, well worth even months of the researcher's time.

The catch is that ML software stacks have had hundreds if not thousands of man-years of effort put into things like cuDNN, CUDA operator implementations, and Nvidia-specific system code (eg. for distributed training). Many formidable competitors like Google TPU have emerged, but Nvidia is currently holding onto its leadership position for now because the wide support and polish is just not there for any of the competitors yet.

ebalit wrote at 2020-10-29 12:27:49:

There are data scientists outside of Silicon Valley. In France, salaries for data scientists are mostly around €30k after tax and social contributions, especially outside of Paris. €1k is still largely manageable on this salary, but it isn't insignificant.

mamon wrote at 2020-10-29 13:22:13:

You've got to be kidding me. In Poland you can make 1.5x that. France must have very high taxes.

ebalit wrote at 2020-10-29 20:30:34:

Sadly I'm not ^^ This corresponds to a full cost for the company a bit below €60k [0]. We do get many advantages that are paid for through those contributions, but the available income is what's useful when comparing against prices.

0:

https://mycompanyinfrance.fr/calculators/salary

delusional wrote at 2020-10-28 18:50:37:

This is more believable to me than the RAM being the decision maker. As an ML amateur, it looks like most of it is built upon proprietary Nvidia tech, which makes AMD a nonstarter, even if they did have a technically stronger card.

sangnoir wrote at 2020-10-29 15:24:26:

> AMD GPUs have near zero support in major ML frameworks.

That is incorrect - I have been running TensorFlow on RX5*0 cards for close to 2 years now. I even transitioned to TF2 with no problem. Granted, I have to be extra careful about kernel versions, and upgrading kernels is a delicate dance involving AMD driver modules and ROCm & rocm-tensorflow. My setup is certainly finicky, but to say AMD GPUs have near-zero support is false.
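
For reference, the rocm-tensorflow builds register the card as an ordinary GPU device, so the usual visibility check applies (a generic snippet, not tied to a specific ROCm or TF version):

    import tensorflow as tf

    print("TensorFlow:", tf.__version__)
    gpus = tf.config.list_physical_devices("GPU")
    print("GPUs:", gpus)

    if gpus:
        with tf.device("/GPU:0"):
            a = tf.random.normal((1024, 1024))
            print(tf.reduce_sum(tf.matmul(a, a)).numpy())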

vokep wrote at 2020-10-28 18:34:59:

I do machine learning research on my own "just for the fun of it", and I'm also looking to buy a 3090 ASAP without really even thinking about the impact on my wallet. I know I can _survive_ buying one, and that's good enough. I need it.

gimme 48 GB gimme 128

ChuckNorris89 wrote at 2020-10-28 18:04:44:

If one makes bay area salaries, I can't imagine the price of a 3090 would be a problem. One probably pays more in rent than that even with scalping prices.

Now on the other side of the world, where a 3090 is several times your rent, you really need to think thrice about buying one.

causalmodels wrote at 2020-10-28 18:47:04:

Yes. GPU RAM is a major bottleneck for my personal work and buying cards is still cheaper in the long run than renting.

edit: Also CUDA is just too important to switch to AMD.

meragrin_ wrote at 2020-10-28 20:19:30:

The 6900 XT is the direct competitor to the RTX 3090. It is not $1000 cheaper: the 6900 XT was announced at $999 and the RTX 3090 is priced at $1499. The RTX 3090 also has raytracing hardware, support for machine learning, and other things. We have yet to see how the 6900 XT behaves in less-than-ideal conditions and how it performs with raytracing on. In the end, there might be other things which justify the extra $500.

nightski wrote at 2020-10-28 17:43:08:

To be fair to Nvidia, it's not the same. The AMD card uses GDDR6 and the 3090 uses GDDR6X (which may be causing Nvidia's supply issues).

snvzz wrote at 2020-10-29 00:37:07:

Doesn't matter much when AMD is getting dramatically better performance out of their GDDR6 with their new 128MB "Infinity Cache".

In the end, performance is what matters.

sagarm wrote at 2020-10-28 20:59:15:

It's product segmentation. NVidia dominates the ML space, so they don't have to worry much about competitive pressures. Increasing the efficiency of ML engineers is definitely worth at least $1500 to them and their employers, so why leave money on the table?

snvzz wrote at 2020-10-28 21:02:12:

>Increasing the efficiency of ML engineers is definitely worth at least $1500 to them and their employers, so why leave money on the table?

This would make sense if these were Titan cards with ML drivers. Instead, they are not: only the regular drivers are available, and FP32/FP64 performance is artificially capped.

These aren't cards for ML.

nl wrote at 2020-10-29 04:35:05:

As an ML engineer I (and many others I know) would love them.

The ML driver and FP32/64 performance capping aren't really issues since in reality we rarely hit those limits.

malkia wrote at 2020-10-28 17:40:18:

It's worth it. There are really serious cases where extra GPU RAM is worth every penny. Not games, but say game development (or similar).

ST2084 wrote at 2020-10-28 18:04:47:

Resolve and other creative apps

pkulak wrote at 2020-10-28 17:43:01:

Nvidia is also using much more expensive (and power-hungry) ram.

AnthonyMouse wrote at 2020-10-28 18:23:37:

Which makes it more _expensive_, but it only makes it _better_ if the expense actually translates into performance.

snvzz wrote at 2020-10-28 20:10:37:

>if it translates into performance.

Which it doesn't, thanks to AMD having a new, large "Infinity Cache" feature which they adapted from their Zen CPU architecture.

athorax wrote at 2020-10-28 17:02:56:

Agreed, the 3090 is definitely meant for ML, but my guess is people are largely buying it for gaming purposes.

jmt_ wrote at 2020-10-28 18:27:47:

Steve from GamersNexus has a good video about this. His argument is similar to yours -- Nvidia is marketing this card as a god-tier gaming card (and it is of course very powerful for this purpose), but practically it's better suited for people who need high VRAM + strong compute on ML/productivity tasks. His argument is that, in terms of the price-to-value of the 3080 vs the 3090 for regular ol' gaming, the 3080 is a better choice, and you should really assess whether you actually need something as intense as a 3090.

kllrnohj wrote at 2020-10-28 19:42:47:

Except the 3090 doesn't get the Titan driver, so the 3090 is still artificially slow at a variety of productivity workloads (see Linus Tech Tips review where they checked a few of these).

Aka, it's a more expensive 2080 Ti, _not_ a cheaper RTX Titan.

nightski wrote at 2020-10-28 17:37:56:

How can the 3090 be meant for ML when it has intentionally gimped ML functionality?

easde wrote at 2020-10-28 19:32:29:

In the end, the RTX 3090 is still better at ML workloads than every card other than the RTX A6000 or the A100. Although the 3090 theoretically falls behind the V100, Titan V and Titan RTX in terms of pure mixed-precision matrix math, in practice the 3090 still performs better on almost all workloads.

So maybe not 100% designed for ML, but better than anything else out there unless you want to sell a kidney.

theYipster wrote at 2020-10-29 05:33:18:

Because in reality, it’s only gimped for certain scenarios, and compared to the 2080TI, it’s a monster upgrade for ML!

Here’s a dirty secret: there is a ton of ML prototyping done on GeForce level cards, and not just by enthusiasts or at scrappy startups. You’ll find GeForce level cards used for ML development in workstations at Fortune 50 companies. NVIDIA would love everyone to be using A100s to do their ML work (and V100s before that,) but the market isn’t in sync with that wish. The 2080TI remains an incredibly popular card for ML even with only 11GB. Upping to 24GB, even with the artificial performance limitations for certain use-cases, enables new development opportunities and use-cases to explore.

When it comes to product stratification, the hard rule according to driver EULAs is that GeForce cards can’t be used in data centers. For serious ML development at scale, NVIDIA has their DGX lineup. In the middle are the Quadro cards, but they tend to be a poor value for ML. The cost differential with Quadro is largely due to optimizations and driver certification for use with tools like Catia or Creo (CAD/CAM use,) which don’t intersect with ML.

The Titan RTX may not have the gimped drivers, but the 3090 beats the Titan in many benchmarks nonetheless. Is the 3090 the best NVIDIA PCIe form factor card for ML? No. The A100 is still king of the crop and is the only Ampere card with HBM memory, and even the A6000 will outperform for many use-cases with 48GB of RAM. Still, the 3090 will be the optimal card for many.

I'm one of the lucky few to have a 3090 in my rig. I lead a team of volunteers doing critical AI prototyping and POC work in an industry give-back initiative, and price was not a leading factor in my decision to procure a 3090 over a Quadro. I chose the 3090 principally because I didn't want a loud blower card in my computer (and I don't need 48GB). If someone donated an A100 to our efforts, I'd gladly take it, but it wouldn't replace the 3090. It's not a graphics card and it won't play games, which indeed is an important value-added benefit of the 3090 :)

riku_iki wrote at 2020-10-28 17:54:17:

> gimped ML functionality

curious what is that?

paulgerhardt wrote at 2020-10-28 18:09:35:

The 3090 supports, but doesn't let you enable, the Titan driver paths, SR-IOV, and NGX, and, critically, it is capped to half-rate tensor operations with FP32 accumulate.

An official firmware release could change this but they're likely saving it for the Titan cards down the road.

moyix wrote at 2020-10-28 18:35:34:

Where do you see that this is artificial (i.e. in software/firmware) rather than just missing some hardware that's present on the Titan/Quadro cards?

paulgerhardt wrote at 2020-10-28 18:55:02:

The same thing happened with the 2080 / Titan and looks to be confirmed for the 3000 series by AnandTech.

https://twitter.com/RyanSmithAT/status/1301996479448457216

The Reddit AMA implies, in the FP32 section, that the Ampere architecture supports this configuration but is software-limited:

https://www.reddit.com/r/nvidia/comments/ilhao8/nvidia_rtx_3...

To be epistemologically correct, I don't believe one will ever see a statement from NVIDIA confirming that this is a software limitation. I believe it's just inferred.

edit: This is close to confirmation:

https://imgur.com/a/RH8vyz9

- though I suspect there may be non-reversible hardware fuses at play too.

M277 wrote at 2020-10-28 20:43:03:

They also did this with the TITAN in the past, I think, so it's not entirely unprecedented.

TITAN X got access to pro drivers shortly after AMD announced the Vega Frontier Edition.

meragrin_ wrote at 2020-10-28 20:53:57:

Maybe it is a type of binning. Maybe the machine learning specific hardware is more prone to errors so they just disable that on subpar chips and sell them for use in graphics cards?

moyix wrote at 2020-10-28 20:32:36:

The AMA section on FP32 seems to imply it's in hardware though, unless I'm missing something – datapath is a hardware term, at least.

I mean, I really _hope_ that it's just in the driver so that an enterprising reverse engineer can hack the driver and re-enable full FP16/FP32 accumulate :)

llampx wrote at 2020-10-28 17:29:07:

Wouldn't the Titan series be more meant for ML?

whynotminot wrote at 2020-10-28 18:12:50:

Let's be real, the only reason the 3090 isn't a Titan card is because Nvidia saw this coming and didn't want AMD to be able to top their "untouchable" Titan brand.

snvzz wrote at 2020-10-28 20:13:31:

>3090 is actually a Titan without the Titan name.

AdoredTV made an entire video[0] demonstrating that this is baseless conjecture which NVIDIA marketing tricked people into inferring.

It absolutely destroys the Titan idea, leaving no room for doubt.

[0]:

https://www.youtube.com/watch?v=s23GvbQfyLA

whynotminot wrote at 2020-10-28 20:15:25:

I've gotta watch 30 minutes from some tech tuber to get your point? C'mon dude.

paulgerhardt wrote at 2020-10-28 21:24:49:

I was curious so I watched. It's a good video. I ended up going with the 3090 because of the extra vram and I'm familiar with CUDA but I'm not under the impression that this thing is a "Titan" even if it is all arbitrary corporate market segmentation. Relevant bit is at 15 minutes:

https://www.youtube.com/watch?v=s23GvbQfyLA&15m1s

numpad0 wrote at 2020-10-29 05:18:56:

People who hate video can process a 4 page article in 15 seconds rather than minutes

sudosysgen wrote at 2020-10-28 19:53:25:

There is no reason AMD couldn't do their own "Titan"-class GPU. A 6900 XT with 32GB of HBM would destroy anything Nvidia could muster.

whynotminot wrote at 2020-10-28 20:14:09:

I mean they kinda already did put out a Titan class card. My point was that the 3090 _is_ a Titan in all but name, and AMD unveiled their very capable response today.

I'm saying the only reason Nvidia didn't call it a Titan was because they knew it wasn't going to have an unassailable advantage over AMD.

llampx wrote at 2020-10-28 22:34:45:

There's no such thing as Titan-"class".

An Nvidia card is a Titan if it has the name and the drivers, and not if it doesn't.

whynotminot wrote at 2020-10-28 23:35:51:

You're no fun.

sudosysgen wrote at 2020-10-28 20:35:25:

AMD Titan-class cards tend to have HBM. I'm saying that AMD can very probably still do even better.

snvzz wrote at 2020-10-29 00:39:24:

AMD has CDNA in their pipeline, which they haven't talked about in a while. That's their compute-oriented architecture, whereas RDNA is graphics-oriented.

sudosysgen wrote at 2020-10-29 01:31:26:

Indeed. I still think it might be possible to see an RDNA2 HBM card.

colejohnson66 wrote at 2020-10-28 17:29:55:

I thought Quadros were meant for non-gaming workloads and the Titans were just more powerful than the top-tier mainstream/gaming card? Or have things changed?

andoriyu wrote at 2020-10-28 19:50:13:

Yes, Quadro was for non-gaming workloads. Titan meant a different thing with every new iteration: some were explicitly a "non-gaming card that you can play games on", others were a "god-tier gaming card".

The RTX Titan, for example, was not as good at FP64 calculation compared to the RTX Quadro, while the first Titan was good at both. Nvidia kept changing what Titan means in order to extract the most money without hurting Quadro sales.

Even at its peak, certain things were software-limited for the Titan but not for the Quadro. Nvidia has tons of artificial limitations on each of their graphics cards once you go beyond gaming.

Macha wrote at 2020-10-28 18:08:17:

There's no Titan this time around, and Nvidia have been talking about simplifying their lineup and implying the 3090 is there to fill the Titan-shaped hole in the lineup, while also leaving them room to release an RTX Titan Ti or whatever if they change their mind. Also, I imagine, to head off consumers who would otherwise be like "Maybe I'll wait for a 3080 Ti".

I also expect there has been a kind of struggle between their consumer side, who need some other reason to sell their top-end GPUs beyond just the whales when game developers aren't really that interested in pushing out features that will require them given the niche ownership, and the datacenter people, who want to protect their Quadro margins. This to me seems the most likely reason for the on-again/off-again relationship with pushing their high-end cards for compute usage vs gaming usage.

freeflight wrote at 2020-10-29 02:17:09:

_> Their target market is gamers enthusiasts who wanted the absolute fastest GPU._

That's also the target market for the RTX 3090: all the Nvidia marketing material describes it as a gaming card, it's GeForce-branded, and Ampere-based Quadros will be a thing.

There was/is also this whole thing:

https://www.reddit.com/r/MachineLearning/comments/iz7lu2/d_r...

alfalfasprout wrote at 2020-10-28 18:18:33:

Nope. Half-rate accumulate ops on the 3090. Definitely not an ML card.

tgtweak wrote at 2020-10-28 20:13:44:

A100 is nvidia's ML card in this lineup - and it has a much saltier premium.

The 3090 will probably stay at a premium due to a few factors - I don't think ML performance plays into this at all though:

DLSS - There's a reason AMD cites "raster performance" since with DLSS enabled the 3090 has a major advantage.

AMD-specific optimizations - near the end of the presentation AMD disclosed that with all the optimizations on (including their proprietary CPU->GPU optimizations, only available with the latest-gen CPUs), they could pass the 3090's _raster_ performance in some games.

I think for these two reasons, and the fact Nvidia can't seem to get cards to vendors (and customers), there won't be a price drop on this SKU. They may, however, release a watered-down version as a 3080 Ti and compete there.

meragrin_ wrote at 2020-10-28 20:56:17:

Aren't the optimizations also restricted to the 500 series motherboard chipsets?

carlhjerpe wrote at 2020-10-28 23:54:36:

Yes, which is one of the two supported chipsets for the zen3 arch.

Godel_unicode wrote at 2020-10-28 18:59:45:

Not according to the TensorFlow benchmarks so far; it smokes the Titan RTX. Whether it could be faster is irrelevant; it's currently the fastest by a lot for the money.

nightski wrote at 2020-10-28 19:57:45:

FP16 multiply with FP32 accumulate performance is on par with, if not slower than, the Titan RTX due to the half rate, which is what the parent was likely referring to.

[1]

https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/a...

(Appendix A)

BTW - Whether it "could" be faster is indeed relevant because some of us are holding out for a Titan GPU next year with this unlocked. If you have unlimited budget or are under time constraints then by all means get the 3090, it is a beast. But if one has a 2080 TI then it's an important consideration.

Godel_unicode wrote at 2020-10-29 00:28:35:

I'm obviously aware of the theoretical limitations they referred to. My point that it smokes the Titan RTX in real world benchmarks stands, there's more to machine learning performance than just that one stat. Performance should be measured against real-world use cases, not whether there are lines in the whitepaper you object to.

If you can wait until next year you should always wait until next year, because there will (almost) always be something better than what is currently out. That's unrelated to whether or not the 3090 is good for doing ML research; it objectively is.

taurath wrote at 2020-10-28 17:10:40:

Good point - I wonder what the market split is there. Even so, wouldn’t it be far more economical to stack a bunch of 6800xts for ML, or is the ML support that far behind on AMD?

lumost wrote at 2020-10-28 17:37:03:

ML on AMD is currently impractical; the mainstream tools have hard deps on CUDA, and the AMD projects are so far half-baked.

alfalfasprout wrote at 2020-10-28 18:19:28:

I used to think so but recent testing with ROCM has shown the tables are starting to turn.

chrisjc wrote at 2020-10-28 17:41:23:

> which enables ML

Are _individuals_ buying graphics cards for ML? I would think that it makes more sense to provision scalable compute on a cloud platform on an on-demand basis than to buy a graphics card with ML capabilities?

wongarsu wrote at 2020-10-28 17:52:19:

As a company we operate a server with consumer cards. Running your own consumer Nvidia cards is very cheap compared to buying time on servers with enterprise Nvidia cards. On the cloud you are paying a premium for being on the cloud, and on top of that the premium for having datacenter-licenced cards.

Of course this calculation depends on how bursty your compute requirements are and how much you pay for electricity (datacenter cards are more power efficient)

chrisjc wrote at 2020-10-28 17:56:03:

Isn't the proximity of your compute to your data a huge consideration too? I imagine that the egress data costs (cloud storage to your local GPU) could grow quite significantly if you're crunching enough data.

I guess if you're rolling your own compute clusters, you've probably rolled your own storage solution too?

bart_spoon wrote at 2020-10-28 17:49:06:

I work in machine learning and bought an Nvidia card for my PC with the intent of doing ML on some personal projects. But I bought a 2070S, and would never drop that much on a 3090 for personal use.

So yeah, the huge gap in ML capabilities between AMD and Nvidia is a selling point, but probably for a small enough group that it doesn't make a difference.

zimpenfish wrote at 2020-10-28 21:06:07:

I went for the 3080 for CUDA which isn't entirely for ML but also things like Blender, video rendering (plus 4K RTX Minecraft, of course), etc. If I hadn't had to upgrade my PC the month before, I'd probably have gone for a 3090 for extra future proofing.

FridgeSeal wrote at 2020-10-28 22:38:23:

I am doing this right now.

Using cloud services for work would mean using the exorbitantly expensive services from Azure, to say nothing of the painful and annoying experience that using Azure is at the best of times.

Instead, I can spend a moderate amount, build a machine that is more than adequate for work requirements and will last ages, write it off on tax, and still end up spending way less than I would have renting a cloud machine for a couple of months.

Godel_unicode wrote at 2020-10-28 18:57:55:

Yes, many people do independent research into ML and prefer to develop with local resources.

bitL wrote at 2020-10-28 21:01:56:

AMD could release a 6950 with 32GB of RAM for the price of a 3090 at some point.

llampx wrote at 2020-10-28 17:28:14:

I am a huge AMD supporter and recently bought their stock as well, but in terms of competition with NVIDIA for non-gaming purposes they have a lot of ground to cover. NVENC, for example, is great for video encoding and is supported in a lot of video editing programs. I use AMF in ffmpeg, but from my understanding NVENC is faster and, with the 3000 series, better quality.

Same with CUDA vs ROCm.

subtypefiddler wrote at 2020-10-28 17:34:35:

It's not just CUDA vs ROCm; ROCm has come a long way and is pretty compelling right now. What they lack is the proper hardware acceleration (e.g. tensor cores).

porphyra wrote at 2020-10-28 18:31:54:

ROCm has come a long way but still has a long way to go. It still doesn't support the 5700 XT (or at least, not very well) --- only the Radeon Instinct and Vega are supported. You can find salty GitHub threads about this dating to the very start of when Navi was just released:

https://github.com/RadeonOpenCompute/ROCm/issues/887#issueco...

And getting ROCm set up is still a buggy experience with tons of fiddling in the deep inner workings of Linux, so it is nearly impossible for the average machine learning engineer to use.

It is compelling for certain bespoke projects like Europe's shiny new supercomputer, but for the vast majority of machine learning, it is totally unusable. By now, in the ML world, the word "gpu" is synonymous with "nvidia".

subtypefiddler wrote at 2020-10-28 18:43:46:

Full disclosure, European here, and in our team everyone is fond of Linux, so not representative of your average ML engineer.

We actually had more issues with nvidia drivers messing up newcomers' machines during updates than with setting up AMD GPUs, but then again n is small (and AMD GPUs were for playing around rather than real work).

Still, a Titan Xp has CUDA support and plenty of memory, but it's better, IME, to upgrade to a model with less memory but higher CUDA compute capability and access to tensor cores.

ris wrote at 2020-10-28 22:14:15:

Does ROCm even support Navi yet?

For the amount invested in the hardware development, the amount AMD have been investing in the software side has been shockingly low.

floatboth wrote at 2020-10-29 16:33:25:

H.265 quality on AMD's encoder is excellent (it has been since it appeared back in Polaris), and since Navi it can do 4K60. H.264 is not as good but totally usable.

quantummkv wrote at 2020-10-28 16:57:42:

But would even Ti versions work here? A 3090 Ti or 3080 Ti, for example, would have to launch at most at the same price as their base models to stay competitive. That would result in a massive PR headache.

Also, would the Ti versions actually help? Even a $100 markup on a 3080 Ti would bring it close to the 6900 XT pricing. And the 3080 Ti cannot come close to the 3090/6900 XT performance for that markup, or it would risk cannibalizing Nvidia's own products.

Nvidia's only hope at this point is that either AMD fudged the benchmarks by a large margin or AMD gets hit with the same inventory issues.

dathinab wrote at 2020-10-28 21:08:10:

From what I heard, the 3090 has some driver features disabled which some of their GPU-compute-focused cards have normally enabled; they could probably sell a 3090 Ti for +$200 by just enabling that feature :=/

dannyw wrote at 2020-10-28 18:47:07:

NVIDIA can resort to paying developers to only optimise for NVIDIA performance and/or cripple AMD performance.

lini wrote at 2020-10-28 22:38:07:

But will game studios do it now that both next gen consoles use AMD's RDNA 2 chips?

rasz wrote at 2020-10-29 04:34:29:

They already implemented this plan 20 years ago; "Nvidia, the way it's meant to be played!" means things like:

https://techreport.com/news/14707/ubisoft-comments-on-assass...

"Radeon HD .. gains of up to 20%.... Currently, only Radeon HD 3000-series GPUs are DX10.1-capable, and given AMD’s struggles of late, the positive news about DX10.1

Ubisoft’s announcement about a forthcoming patch for the game. The announcement included a rather cryptic explanation of why the DX10.1 code improved performance, but strangely, it also said Ubisoft would be stripping out DX10.1 in the upcoming patch

Ubisoft decided to nix DX10.1 support in response to pressure from Nvidia after the GPU maker sponsored Assassin’s Creed via its The Way It’s Meant To Be Played program."

https://techreport.com/review/21404/crysis-2-tessellation-to...

"Unnecessary geometric detail slows down all GPUs, of course, but it just so happens to have a much larger effect on DX11-capable AMD Radeons than it does on DX11-capable Nvidia GeForces. The Fermi architecture underlying all DX11-class GeForce GPUs dedicates more attention (and transistors) to achieving high geometry processing throughput than the competing Radeon GPU architectures."

GameWorks slowing down ATI/AMD users by up to 50%

https://arstechnica.com/gaming/2015/05/amd-says-nvidias-game...

https://blogs.nvidia.com/blog/2015/03/10/the-witcher-3/

dathinab wrote at 2020-10-28 21:10:57:

They kind of do that anyway, in a slightly more roundabout way than you described.

(In the way they use their devs to help AAA games fix some issues under some circumstances; there have been cases where optimizations sped up Nvidia but hindered AMD due to architectural differences, but surely it was all accidental.)

snvzz wrote at 2020-10-28 20:16:46:

I wouldn't put it past NVIDIA considering their scummy history[0].

[0]:

https://www.youtube.com/watch?v=H0L3OTZ13Os

overcast wrote at 2020-10-28 16:49:10:

When has there ever been enough stock on highend GPUs?

Tuna-Fish wrote at 2020-10-28 16:56:52:

It's not just that there is not enough to meet the usual demand; the supply is definitely much lower than typical. I see numbers for a major European distributor network, and there is maybe a fifth of the supply of what there was for the 2000 series at the same time after launch.

Something is very definitely wrong.

quantummkv wrote at 2020-10-28 17:00:29:

Nvidia's issues might have to do with yield issues on the new and untested Samsung 8nm process. But AMD is on the tried and tested 7nm TSMC process. I doubt they will have the fab issues, given that Apple's move to 5nm would have freed up a lot of capacity for AMD to use.

Miraste wrote at 2020-10-28 17:27:08:

> new and untested Samsung 8nm process.

It is not new and untested, it's been in use since 2018.

kllrnohj wrote at 2020-10-28 18:31:54:

It's been used since 2018 for smaller, power-optimized designs. This would be the first significant high-performance design, and at comparatively huge die sizes. It's also claimed the process was semi-custom for Nvidia, not the "standard" 8nm. It's unclear what exactly was different, but depending on the changes (if any) it may actually not be yielding as well as a 2-year-old process should.

zimpenfish wrote at 2020-10-28 21:08:19:

Currently at day 41 of waiting for my 3080 card in the UK.

mdoms wrote at 2020-10-28 17:26:19:

Before Bitcoin, stock was rarely a problem.

bart_spoon wrote at 2020-10-28 17:51:19:

Deep learning, cryptocurrency, and the resurgence of PC gaming have all taken place over the last decade, so it's hard to pin production issues on any single factor.

Personally, I think it's less due to these and more due to people using bots to buy up stock for scalping. The same stocking issues have happened for the new Xbox and PlayStation, and even for non-tech hardware, like the newest Warhammer 40k box set. The bot scalping issue is just becoming more pervasive.

saberdancer wrote at 2020-10-28 19:37:59:

Add Covid-19 to the list; a lot of people are at home. Another aspect is VR, which needs as much hardware as you can give it.

thrwyoilarticle wrote at 2020-10-29 11:35:59:

Also 1440p was largely ignored, so in a jump from 1080p to 4K there's a massive increase in demand on the hardware. And at the same time, we want better refresh rates than ever. 4 times the pixels and 3 times the frames isn't easy!

saberdancer wrote at 2020-10-30 23:03:33:

I recently moved from 1080p 16:9 at 60Hz to 1440p 21:9 at 160Hz - about 6 times the pixels per second. That's one of the reasons why performance isn't stellar even though I bought a 3080 at the same time. Most games can't run anywhere near 160 fps.
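
The arithmetic behind that, roughly (assuming a 3440x1440 ultrawide panel):

    # Rough pixel-throughput comparison; 3440x1440 is an assumption about the panel.
    old = 1920 * 1080 * 60    # 1080p 16:9 at 60 Hz
    new = 3440 * 1440 * 160   # 1440p 21:9 at 160 Hz
    print(f"{new / old:.1f}x the pixels per second")  # prints ~6.4x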

freeflight wrote at 2020-10-29 02:28:27:

Prior to the big cryptomining/ML hypes.

Back then these high-end GPUs were prestige projects that mostly existed for the marketing of "We have the fastest GPU!"; the real money in the consumer market was made on the bulk of the volume in the mid-range.

bcrosby95 wrote at 2020-10-28 16:55:52:

Yes. The exception was when bitcoin prices went crazy and miners were buying them all up.

read_if_gay_ wrote at 2020-10-28 16:56:50:

That was mostly limited to AMD cards though.

diab0lic wrote at 2020-10-28 16:59:52:

Right, but if you're a gamer who wanted an AMD card and couldn't get one because of crypto demand, you'd buy an Nvidia card, thus causing demand to spike. Obviously it didn't spike as high as AMD, but consumers will feel the shortage if demand spikes over supply, which happened in this case.

Macha wrote at 2020-10-28 17:53:44:

I seem to remember the 1070/1070 Ti/1080 being affected too. The lower and higher end (1060/1080 Ti) were the main ones to escape.

colejohnson66 wrote at 2020-10-28 17:32:26:

Nvidia also made “crypto mining” cards that were just a mainline card without video output. I recall a video by LinusTechTips who attempted to get video output from one of them.

Edit: Here’s the video:

https://youtube.com/watch?v=TY4s35uULg4

onli wrote at 2020-10-28 17:38:46:

There was a time period where pretty much the only gpu you could get was a GTX 1050 Ti. At first it was only AMD cards, right, but things changed after a while.

deeeeplearning wrote at 2020-10-28 17:09:53:

There appears to be some evidence that most of the 3xxx cards were bought out by bots. Most sites were sold out more or less instantly and there's now a large supply available on ebay/amazon/etc for way over retail prices.

snvzz wrote at 2020-10-28 21:07:46:

Considering how bad the Ampere lineup is against AMD, particularly when price is taken into consideration, these scalpers are going to lose money. They need to sell the cards at higher than the official price to make money, which is going to be hard.

And I do not feel sorry for them.

vkou wrote at 2020-10-28 18:18:12:

As a shareholder and a capitalist, I am deeply disappointed that NVidia is leaving hundreds of dollars of profit per card on the table, by allowing scalpers to pick it up.

But on a serious note - why are they doing so? Surely, they have good predictions of supply and demand - why are EBay resellers reaping the profits from the gap between the two? Why aren't these cards retailing for $1000, with price reductions happening as demand drops below supply?

potiuper wrote at 2020-10-28 18:59:39:

NVidia does not sell the cards directly; the retailer does. The retailer does not want to deal with that much volatility in price, as customers would demand refunds, so it is left to the scalpers to take on the supply risk. It is surprising that the retail versions of the cards are available upon release as is and not dedicated for a period of time to OEMs - Apple, Alienware, etc. (volume customers) who negotiate for guaranteed supply at premium rates.

read_if_gay_ wrote at 2020-10-28 16:56:23:

Obviously YMMV but as an example, I don't remember it being that hard to get my 980Ti back when it was somewhat new.

m463 wrote at 2020-10-29 23:28:08:

Right before a new generation comes out?

Koliakis wrote at 2020-10-28 22:42:19:

Add to that, it's likely they won't have as many supply issues as nVidia because they're using TSMC and TSMC is totally on the ball when it comes to yields (unlike Samsung).

koffiezet wrote at 2020-10-29 00:47:02:

If you read between the lines, it's AMD's allocation of TSMC capacity that pushed Nvidia out to Samsung...

TSMC has to spit out Zen3, RX6000, plus the custom versions for XBX and PS5, all around the same time...

jonplackett wrote at 2020-10-28 22:56:42:

I was thinking this too - is this really just another TSMC victory, like it is with AMD on TSMC vs Intel still not making it to 10nm, and Apple making gains on the A14 (soon to be compared vs Intel, no doubt)? It's the process that's driving efficiency, for sure.

How much is this one really about TSMC 7nm vs Samsung 8nm?

lhl wrote at 2020-10-29 08:04:44:

This is on the same 7nm process as RDNA1 was, but with over 50% (65% for the top-end 6900 XT) performance-per-watt increases, so this absolutely isn't just about process (although having high yields is key to overall product success).

I'd say the crowning achievement for this architecture is the "Infinity Cache":

https://twitter.com/Underfox3/status/1313206699445059584

"This dynamic scheme boosts performance by 22% (up to 52%) and energy efficiency by 49% for the applications that exhibithigh data replication and cache sensitivity without degrading the performance of the other applications. This is achieved at a area overhead of 0.09mm²/core."

See also this presentation:

https://www.youtube.com/watch?v=CGIhOnt7F6s

carlhjerpe wrote at 2020-10-29 00:01:17:

I find it interesting that this isn't brought up more often. And it's also rather impressive that Intel isn't further behind.

Then again, I'm rooting for AMD all the way until they become the new evil.

redisman wrote at 2020-10-29 02:20:45:

> A14 (soon to be compared VS intel no doubt)

Isn’t A14 a “5nm” process chip? Why would it be compared to intel and their 14nm++++?

jonplackett wrote at 2020-10-29 17:44:35:

I mean it will be compared by Apple, when they launch their new apple silicon macs and want to show off how much faster their chips are than previous Intel - likely taking all the credit, when a large part is down to TSMC's 5nm process.

Jenya_ wrote at 2020-10-30 09:40:52:

The 5nm process from TSMC could be less of a success, as an Anandtech editor says on Twitter (his A14 review is incoming):

https://www.reddit.com/r/hardware/comments/jjwtjy/andrei_fan...

tmaly wrote at 2020-10-28 17:00:25:

26.8 Billion transistors, that boggles my mind.

How do you even begin to think about testing hardware like that for correctness?

I remember my days as an intern at Intel. I remember someone using genetic algorithms to try to construct a test suite for some chip. But it was nowhere near that transistor count.

devonkim wrote at 2020-10-28 17:39:08:

A long time ago when I was writing Verilog and VHDL, what people were doing beyond the 1Bn-gate scale (I still keep using gates rather than transistors as a unit, funny) was statistical sampling methods to try to uncover certain stuck/flakey gates, so certain riskier or more critical paths were tested a bit better. Then there are also error-correction mechanisms, more common nowadays, that can compensate to some extent. We don't test for every single possible combination like was possible so long ago, full stop.

Furthermore, GPUs aren't necessarily tested in the same ways as CPUs. GPUs don't really branch and hold state with exotic addressing modes the way CPUs do, so testing compute shaders is in many respects closer to testing a big, wide bit bus than a state machine. Heck, one doesn't even need to bother hooking up any output devices like HDMI and HDCP decoders to a GPU, as long as the PCB is hooked up fine during assembly and the peripheral chips are tested to spec, too.

And for an extra measure of automation there are test suites that are programmed into hardware that can drive chips at line speed on a test bench.

dragontamer wrote at 2020-10-28 17:15:12:

> How do you even begin to think about testing hardware like that for correctness?

From my understanding: Binary decision diagrams on supercomputers. (

https://en.wikipedia.org/wiki/Binary_decision_diagram

)

Hardware verification is a major business for all the major CPU players today. Fortunately, solvers for this NP-complete problem have benefited not only from faster computers, but also from huge algorithmic improvements over the past 20 years.
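
To make the idea concrete, here's a toy equivalence check in Python: a behavioural spec of a 2-bit adder compared exhaustively against a gate-level implementation. Exhaustive enumeration only works for circuits this tiny, which is exactly why BDD- and SAT-based solvers matter at real transistor counts; the example is purely illustrative.

    # Toy formal-equivalence check: compare a behavioural "spec" of a 2-bit adder
    # against a gate-level rewrite by exhausting every input combination.
    # Exhaustive checking explodes exponentially, which is why real verification
    # relies on BDDs / SAT solvers instead of plain enumeration.
    from itertools import product

    def adder_spec(a, b):
        # Behavioural spec: plain integer addition, truncated to a 3-bit result.
        return (a + b) & 0b111

    def adder_gates(a, b):
        # Gate-level implementation: two full adders built from XOR/AND/OR.
        a0, a1 = a & 1, (a >> 1) & 1
        b0, b1 = b & 1, (b >> 1) & 1
        s0 = a0 ^ b0
        c0 = a0 & b0
        s1 = a1 ^ b1 ^ c0
        c1 = (a1 & b1) | (c0 & (a1 ^ b1))
        return s0 | (s1 << 1) | (c1 << 2)

    mismatches = [(a, b) for a, b in product(range(4), repeat=2)
                  if adder_spec(a, b) != adder_gates(a, b)]
    print("equivalent" if not mismatches else f"mismatches: {mismatches}")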

thebruce87m wrote at 2020-10-28 19:11:30:

You mean in production? Digital logic is easy, use scan/ATPG:

https://en.m.wikipedia.org/wiki/Scan_chain

Basically chain all the logic together in a special test mode.

Analog blocks get tests written specifically for them and either tested via external test hardware or linked internally.

For example, if you have a DAC and an ADC, provide a link internally and drive the ADC from the DAC to test both. You can also test comparators etc. using a combination of the DAC and ADC, trim bandgaps, etc.

If you’re real smart, you do this at probe (wafer), massively in parallel.
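
For illustration only, here's a toy software model of the scan idea (not a real DFT/ATPG flow): in test mode the flip-flops form one long shift register, a pattern is shifted in, the functional logic is pulsed once, and the captured state is shifted back out to compare against a known-good simulation.

    # Toy scan-chain model (illustrative only, not a real DFT/ATPG flow).
    def combinational_logic(state):
        # Stand-in for the device's logic cloud: rotate the bits and invert one.
        rotated = state[1:] + state[:1]
        rotated[0] ^= 1
        return rotated

    def scan_test(pattern, logic=combinational_logic):
        flops = [0] * len(pattern)
        # Shift the test pattern in through the scan input, one bit per clock.
        for bit in pattern:
            flops = [bit] + flops[:-1]
        # One functional ("capture") clock.
        flops = logic(flops)
        # Shift the captured response back out through the scan output.
        response = []
        for _ in range(len(flops)):
            response.append(flops[-1])
            flops = [0] + flops[:-1]
        return response

    print("captured response:", scan_test([1, 0, 1, 1, 0, 0, 1, 0]))
    # A fabrication defect (say, a stuck-at-0 flop) would change this response
    # relative to the golden simulation, flagging the die at wafer probe.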

marcosdumay wrote at 2020-10-28 18:53:27:

You mean testing the hardware after it's produced against fabrication defects or testing the design before production against the specification?

Both are very different and very interesting on their own. For the design, verification can be done by computers and it's much more powerful than testing anyway. For the hardware, I imagine there are cores inside the chip with the sole task of testing sub-circuits (fab defects are much less varied than design flaws), but I stopped following that stuff back in the single-billion-transistor era.

andy_ppp wrote at 2020-10-28 17:06:21:

You break it down into smaller parts and test those as discrete units.

Tomte wrote at 2020-10-28 17:09:02:

Not just that: you have dozens, hundreds or thousands of unit A, and of unit B, and of unit C. You test each of these units once, and then some measure of interplay.

colejohnson66 wrote at 2020-10-28 17:33:40:

How long would formal verification on something this big even take?

KSS42 wrote at 2020-10-28 18:53:28:

Do you mean RTL to gates formal verification?

colejohnson66 wrote at 2020-10-29 14:47:54:

Yes. Also, I'm assuming[a] that a GPU would take about the same time as a CPU because, despite GPUs having billions more transistors, they have a lot of duplicated modules compared to a CPU, which has many more different modules but probably only one or two of each (per core).

[a]: My knowledge of creating hardware is very limited. So I could be completely wrong.

blawson wrote at 2020-10-28 17:55:51:

Technologies like Chisel can make this a lot easier I think:

https://www.chisel-lang.org

I'm not super familiar with it, but perhaps hardware verification becomes as easy as software testing someday?

read_if_gay_ wrote at 2020-10-28 17:07:22:

> I remember someone using genetic algorithms to try to construct a test suite for some chip.

Could you expand a bit on how that worked? Seems like an interesting application.

tmaly wrote at 2020-10-29 03:26:05:

The idea was to try to find a smaller set of tests that could provide equivalent test coverage to that of a larger number of brute force generated tests.
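
A minimal sketch of that idea in Python, with entirely made-up coverage data: a tiny genetic algorithm looking for a small subset of tests that preserves the coverage of the full brute-force suite.

    # Minimal GA sketch for test-suite reduction; the coverage data is made up.
    import random

    random.seed(0)
    # Hypothetical coverage map: test index -> set of covered "blocks".
    coverage = {i: {random.randrange(40) for _ in range(6)} for i in range(30)}
    full_coverage = set().union(*coverage.values())

    def covered(selection):
        cov = set()
        for i, on in enumerate(selection):
            if on:
                cov |= coverage[i]
        return cov

    def fitness(selection):
        cov = covered(selection)
        # Heavily reward full coverage, lightly penalize suite size.
        return len(cov) * 10 - sum(selection) - (0 if cov == full_coverage else 100)

    def mutate(selection, rate=0.05):
        return [bit ^ (random.random() < rate) for bit in selection]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in coverage] for _ in range(40)]
    for _ in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                                for _ in range(30)]

    best = max(population, key=fitness)
    print(f"kept {sum(best)} of {len(coverage)} tests, full coverage: {covered(best) == full_coverage}")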

Animats wrote at 2020-10-28 18:42:17:

It's sort of funny. During the period when GPUs were useful for cryptocurrency mining, demand pushed the price of NVidia GPU boards up above $1000, about 2x the usual price. This was a previously unknown price point for gamer-level GPU boards. Once mining demand declined, NVidia kept the price of their new high-end products above $1000, and sales continued. Now $1000+ is the new normal.

jbay808 wrote at 2020-10-28 18:49:12:

I think demand from machine learning applications has also helped keep prices high.

deafcalculus wrote at 2020-10-28 20:36:41:

Die size is up too. At over 600 mm2, it’s a huge chip.

globular-toast wrote at 2020-10-29 07:54:32:

It's actually insane. I haven't been running a high end gpu for several years now. The last one I had was an 8800GT. When I got that gpu I felt like I had "made it" in terms of being able to afford the hardware I always wanted as a child. Now with all this news about new gpus I decided to look on ebay for a mid-range AMD chip from the previous generation. Still going for £200-300! I can't justify spending that much on games. Honestly are gamers just expected to have no other hobbies any more?

arc-in-space wrote at 2020-10-29 08:59:34:

I mean, it's a one-time purchase that lasts you 2-4 years minimum. If anything, gaming is on the reasonably cheap side of hobbies, especially if you still otherwise have uses for a desktop, like most people here do.

Compare getting that mid-range AMD GPU to a Netflix subscription for two years (which I am guessing costs about as much, or at least is in the same range), and I think it starts to look more favourable.

You do still need to buy actual games, but their cost can be amortized thanks to sales and the fact that people typically buy few games and play each one for a long time.

lhl wrote at 2020-10-29 08:45:31:

A quick search shows the 8800GT was released in Oct 2007 and was 180 GBP. [0] Using an inflation calculator this comes out to 256 GBP today, [1] so about the same price as the cards you're looking at. That being said, it looks like even in 2019 average weekly wages (in GBP) have declined since 2008, so accounting for inflation, most Britons are maybe just simply poorer now [2] (2020 of course has been even worse. [3])

Anyway, ignoring all that, IMO gaming is still more affordable than it's ever been - you can get a used Xbox One S for about $100 on eBay, or for PC gaming, you can get a used RX 470 or GTX 1060 for a bit less than that, which are fine GPUs for 1080p gaming performance. Also, even relatively modern AAA games can now be played at low-medium settings at 720 or 1080p30 without a dedicated GPU on a late-model Ryzen APU.

[0]

https://www.theregister.com/Print/2007/11/12/review_nvidia_g...

[1]

https://www.hl.co.uk/tools/calculators/inflation-calculator

[2]

https://www.bbc.com/news/business-49328855

[3]

https://tradingeconomics.com/united-kingdom/wage-growth

globular-toast wrote at 2020-10-29 08:58:09:

But that was a brand-new, current-gen, mid-high-end Nvidia card back then. I'm looking at previous-gen, used AMD cards.

If you look at gaming builds from the last few years it's _always_ a 1080/2080. These cards are essentially the equivalent of the 8800GT back then, i.e. the one with high performance that you got if you didn't literally have money to burn. But how people can afford this stuff unless they have no other hobbies I don't know. Maybe the answer is they can't afford it? I'm not a single penny in debt.

lhl wrote at 2020-10-29 09:54:56:

OK, just for fun, let's do a brand-new, dollar-for-dollar comparison of what you get for the latest new GPUs in the same price bracket. ~250 GBP on Amazon.co.uk will get you a 6GB GTX 1660 Ti or Radeon RX 5600 XT. This gives you a GPU that has 6X the memory, and roughly 25X the performance of the card you would get 13 years ago:

https://www.videocardbenchmark.net/compare/GeForce-8800-GT-v...

(see also:

https://www.techpowerup.com/gpu-specs/radeon-rx-5600-xt.c347...

). The 5600XT plays almost every single game at over 60fps @ 1080p (often over 100fps):

https://www.notebookcheck.net/AMD-Radeon-RX-5600-XT-Desktop-...

By every relative and absolute metric, you are getting way more for your money than you did a decade ago, even vs other technology products (e.g. CPUs have made significantly smaller performance gains over the same period [0]) or experience (you can play the latest titles at much better fidelity and frame rates than the 8800 GT could for games of that era).

Have high-end card prices gone up? Sure, due both to increased demand volatility (multiple waves of crypto booms and busts) as well as several generations where Nvidia has simply not had viable competition in the highest performance categories, but even with that in mind, performance/$ has still kept climbing in just about every single category. Note: new-generation low-mid range cards still haven't come out yet. Expect another bump in price/performance Q1/Q2 next year (GA106 & GA107 for Nvidia, Navi 22 and 23 for AMD, also potentially Intel's Xe DG2).

[0]

https://www.cpubenchmark.net/compare/Intel-Core2-Extreme-Q68...

lhl wrote at 2020-10-29 10:09:54:

> If you look at gaming builds from the last few years it's always a 1080/2080.

Since you added this after I started writing, I think your idea of what hardware most gamers have is just plain wrong. One only has to look at the Steam Hardware Survey results to see just how few people buy the top-end cards:

https://wccftech.com/nvidia-geforce-amd-radeon-discrete-gpu-...

You can corroborate this w/ JPR's market reports which consistently show that mainstream and midrange cards (<$250) account for almost 80% of AIB sales:

https://www.jonpeddie.com/press-releases/the-add-in-board-ma...

This also doesn't account for the fact that, as you've seen from the used market, most gamers on a budget can simply sell their previous card and effectively pay only a fraction of the price every time they upgrade.

globular-toast wrote at 2020-10-29 10:37:05:

Sorry for editing my comment.

I see that I was mistaken, or perhaps misled, about what kind of card is actually required to be able to have a decent gaming experience today.

You've convinced me that the mid to upper-mid tier is not really that much different to how it was back then. I also should consider that while I was earning a pittance back then compared to what I earn now, I lived with my parents and didn't own a car or anything else really apart from my PC.

lhl wrote at 2020-10-29 11:12:49:

Yeah, to be fair, I think that perception that everyone is gaming on (or even needs) very high-end GPUs has of course been purposefully cultivated by GPU vendors (while most sales are from mainstream cards, most profits are from enthusiast and high-end cards). Gamers in particular also seem to get overly hyped for the halo products, and treat whichever brand has better performance there very aspirationally. Nvidia's marketing has also been especially effective at jacking up prices on the highest-end the past couple years.

Now that consoles are basically gaming PCs, good gaming performance for AAA titles is increasingly anchored by each console generation, so there's a bit of a shift every 4-5 years - one happening now. There's also an insanely large backlog of games and a huge amount of F2P and competitive games that aim to be playable on iGPUs, which have also advanced considerably over the past few years.

phatfish wrote at 2020-10-30 00:55:01:

I got that same card (8800GT) and used it for many years, mainly because it was under £200. I think it was just a great deal at the time.

Very likely prices pushed back up after Nvidia grabbed back whatever market share they had lost to ATI, which was what caused them to give us a good deal in the first place.

jahabrewer wrote at 2020-10-28 22:54:43:

Who cares? The RTX 3090 is not for gamers.

snvzz wrote at 2020-10-29 00:42:30:

Who's it for, then?

They've got no ML drivers and capped FP32/64 performance.

detaro wrote at 2020-10-28 17:02:07:

What's the story with ML stuff on AMD consumer GPUs nowadays? Gotten better, or still "buy NVidia for that"?

dragontamer wrote at 2020-10-28 17:06:24:

ROCm is spotty for AMD GPUs. Definitely don't buy these GPUs until AFTER ROCm has declared support.

The NAVI line was never officially supported by ROCm (though ROCm 3.7 in August seemed to get some compiles working for the 5700 XT, a year after its release).

-------

Generally speaking, ROCm support only comes to cards that have a "Machine Intelligence" analog. MI50 is similar to Radeon VII, so both cards were supported in ROCm. The Rx 580 was similar to MI8, and therefore both were supported.

The Rx 550 had no similar MI card, and had issues that are still unresolved today. The Rx 570 had no MI card either, but apparently it's similar enough to the 580 that I'm not hearing many issues.

In effect: AMD only focuses their software development / support on Machine Intelligence cards. The cards that happen to line up to the MI-line happen to work... but the cards outside of the MI line are spotty and inconsistent.

philjohn wrote at 2020-10-28 22:25:15:

AMD is segmenting to Gaming (RDNA) and Compute (CDNA) - CDNA is essentially a continuation of Vega, which was a beast for compute.

dragontamer wrote at 2020-10-28 22:31:18:

RDNA's ISA is impressive. Load/store asymmetry, Subvector execution, 1024 vGPRs (!!!!), unlimited sGPRs (not quite, but no longer shared at least), etc. etc.

Based on the ISA alone, I'd prefer compute applications to move to RDNA, frankly. It's clearly a better-designed architecture.

------------

I can't find any public information on CDNA. I've even emailed ORNL for information, but they responded back saying that NDAs prevent them from saying anything.

Whether CDNA is based on Navi or Vega will be a big question as Frontier launches next year. I hope it's Navi-based, I really do, because it's just better. But I'd "understand" if ROCm can't be ported easily to Wave32, or if there are other issues (RDNA is a huge architectural change).

bitL wrote at 2020-10-28 17:08:08:

Both PyTorch and TensorFlow support ROCm, but there are still bugs that prevent running all CUDA code on AMD. Now that AMD has money, they can ramp up SW development there.
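
For example, a ROCm build of PyTorch exposes the GPU through the same torch.cuda API that the CUDA builds use, so a quick sanity check looks roughly like this (assuming a ROCm-enabled PyTorch install; as I understand it, torch.version.hip is None on non-ROCm builds):

    # Quick sanity check for a ROCm-enabled PyTorch install.
    # ROCm builds reuse the torch.cuda namespace, so the same code path works
    # on both vendors; torch.version.hip distinguishes the ROCm build.
    import torch

    print("PyTorch:", torch.__version__)
    print("HIP/ROCm build:", torch.version.hip)        # None on CUDA builds
    print("GPU available:", torch.cuda.is_available())

    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
        # Tiny matmul to confirm kernels actually launch.
        x = torch.randn(1024, 1024, device="cuda")
        print("Matmul OK, result norm:", (x @ x).norm().item())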

dogma1138 wrote at 2020-10-28 19:43:18:

The issue is that AMD doesn't support ROCm on its newer consumer GPUs.

It also doesn't want to release a ROCm compatible driver for Windows, and you can't run ROCm in WSL/WSL2.

DirectML is still in its infancy and is quite slow right now. AMD really needs to step up their game if they want to compete.

bart_spoon wrote at 2020-10-28 17:55:36:

The gap is closing, but Nvidia is still a clear leader.

snvzz wrote at 2020-10-28 20:18:29:

Just be careful: the 3070/3080/3090 lineup are gamer cards and do not have ML drivers.

FP32/64 performance is going to be capped, as with prior generations.

oatmealsnap wrote at 2020-10-30 19:29:11:

I know several people looking at the 3090 for ML use, due to its extra RAM. Hobbyists, not companies.

Do you mean "enterprise ML"? Like, if I ran an ML company I wouldn't be looking at these cards?

martinesko36 wrote at 2020-10-29 01:50:33:

Can you elaborate?

deeeeplearning wrote at 2020-10-28 17:08:09:

Miles away basically, they're still taking baby steps.

xvf22 wrote at 2020-10-28 16:46:22:

Performance is better than I expected, especially for 300W, and while not verified externally, it looks like AMD is finally competitive at the top end. If they can get the required share of TSMC volume, then Nvidia may feel a bit of pain, since Samsung seems unable to meet demand.

Tuna-Fish wrote at 2020-10-28 16:59:01:

Note that the 6900xt benchmark was running in "rage mode", and therefore was not inside 300W. Then again, the card they are comparing to is ~380W, so there is some room to play with while still being more power-efficient than the competition.

filmor wrote at 2020-10-28 17:52:28:

"Rage" mode, what a nice name choice coming from ex-ATi :)

013a wrote at 2020-10-28 18:18:11:

Also important to note that it had Smart Memory Access on, which requires a Ryzen CPU (and furthermore, did they say specifically a Ryzen 5xxx CPU?)

zlynx wrote at 2020-10-28 19:50:03:

I wonder if Smart Memory is even using PCIe 4? Remember AMD uses the PCIe lanes for interconnect between two EPYC CPUs. It's supposedly very similar to PCIe but a bit higher speed.

Smart Memory could then be as efficient as cross-CPU EPYC memory access. Which is pretty good.

floatboth wrote at 2020-10-29 17:09:23:

Smart Memory Access is just marketing speak for resizable BARs (

https://docs.microsoft.com/en-us/windows-hardware/drivers/di...

)
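
One way to see what that means in practice on Linux is to look at the GPU's BAR apertures in sysfs. A rough sketch (the PCI address is a placeholder for your own card): without resizable BARs you'll typically see a small 256 MiB aperture even on a 16 GB card, while with them the whole VRAM can be mapped.

    # Rough sketch: print the BAR (base address register) sizes of a PCI device
    # on Linux by parsing /sys/bus/pci/devices/<address>/resource.
    # Each line is "0x<start> 0x<end> 0x<flags>"; size = end - start + 1.
    PCI_ADDR = "0000:03:00.0"  # placeholder -- substitute your GPU's address

    with open(f"/sys/bus/pci/devices/{PCI_ADDR}/resource") as f:
        for i, line in enumerate(f):
            start, end, flags = (int(x, 16) for x in line.split())
            if start == 0 and end == 0:
                continue  # unused region
            size_mib = (end - start + 1) / (1 << 20)
            print(f"region {i}: {size_mib:.0f} MiB (flags {flags:#x})")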

kllrnohj wrote at 2020-10-28 19:47:13:

They said Ryzen 5xxx _and_ x570 chipset.

noir_lord wrote at 2020-10-28 20:12:18:

500 series, that would cover B550.

rasz wrote at 2020-10-29 04:42:43:

Which is an Intel level of dubious fake market segmentation, considering those are no longer chipsets, but rather southbridges sitting away from the main PCIe lanes and memory buses.

kllrnohj wrote at 2020-10-29 12:22:40:

A possible plausible explanation is that AMD's SAM is "just" that they fixed the BAR size limits so the entire vram can be mapped at once, instead of the still weirdly common & low 256MB limit. If so, that would actually involve the chipset (even if it's just the supporting BIOS & microcode) and not just the CPU.

rasz wrote at 2020-10-29 13:46:14:

But there is no chipset. PCIE controller is on CPU die.

kllrnohj wrote at 2020-10-29 13:55:45:

The software that runs the PCIe controller isn't on the CPU, though. And at least at launch, only the 500 series chipsets support the Ryzen 5000 CPUs. Maybe this will come later to the 400 series chipsets that get Ryzen 5000 support, or maybe it's something that was cut to squeeze in Ryzen 5000 support for the 400 series chipsets.

freeflight wrote at 2020-10-29 02:38:00:

At this point SAM seems to be Ryzen 5xxx "exclusive".

Makes me wonder if that's related to the improved L3 cache access of Zen 3?

jsheard wrote at 2020-10-28 17:03:37:

The performance numbers shown were cherry picked though, and conspicuously didn't include anything with raytracing enabled.

Leaked 3DMark raytracing benchmarks showed Big Navi lagging far behind Ampere so I wonder how that's going to bear out in real games.

freeflight wrote at 2020-10-29 02:41:36:

The cherry picking happens pretty much with all of these kinds of presentations.

Nvidia did that by enabling DLSS and RTX in everything; that's how they ended up with "Up to 2x 2080 performance", which in practice only seems to be the case with Minecraft RTX running at 30 fps instead of 15 fps.

fluffy87 wrote at 2020-10-29 07:24:44:

Why wouldn’t you enable RTX and DLSS in a 3070 or 3080?

I mean, I get that you can’t enable these on AMD 6000 series because the card doesn’t support these well.

freeflight wrote at 2020-10-30 22:28:31:

_> Why wouldn’t you enable RTX and DLSS in a 3070 or 3080?_

Not every game supports RTX, and those that fully rely on it still perform so badly that a 15 fps increase can be marketed as "double 2080 performance", while saying absolutely nothing about the much more relevant rasterization performance.

Using DLSS can be misleading, as picture quality is very difficult to compare objectively. But common sense makes the idea of "More performance with more details!" sound too good to be true, even with ML magic involved. It reminds me more of something like temporal filtering: more performance, but at a slight cost in picture quality that might not be too noticeable given the extra raw pixels through upscaling.

throwaway2048 wrote at 2020-10-28 17:16:51:

Very few games have implemented raytracing.

mcraiha wrote at 2020-10-28 17:42:07:

But since both the PS5 and Xbox Series X support ray tracing, there will be many upcoming PC games that support it too. You want a future-proof GPU when you spend over 500 dollars on it.

doublepg23 wrote at 2020-10-28 18:17:29:

Aren't both of those going to be using the same AMD ray tracing tech?

oulu2006 wrote at 2020-10-28 19:53:40:

That's what I was thinking, seems a little odd that ray tracing can be significantly behind when both those consoles use AMD tech to implement their ray tracing capabilities.

zamadatix wrote at 2020-10-28 23:10:39:

I wouldn't expect ray tracing on the consoles to be better than Nvidia 2xxx ray tracing performance, and that was only useful as a small gimmick, enough to put the "ray tracing" label on a game or add a couple of decent effects at 30 FPS. I'm not even sure the 3xxx does ray tracing well enough to be worth it, and it's ~2x the performance level.

I.e. just because games on the consoles can support ray tracing now doesn't mean they actually support it well. I still think it's a generation or two away from running well (as a current 2080 Ti owner).

leetcrew wrote at 2020-10-28 18:09:31:

imo "future proofing" doesn't really make from a price/perf perspective, especially for something so easily replaced as a desktop GPU. by the time the cool new feature goes mainstream, that high end GPU you bought probably needs to run everything on medium anyway. I expect the RTX 2000 series will age very poorly. a better approach is to aim for the price/perf sweetspot for a target resolution and upgrade often. I bought a 1080 TI more because I could than because it made sense.

snvzz wrote at 2020-10-28 20:19:24:

As these consoles use AMD hardware, future-proof means AMD hardware. The hardware these console games are made for.

nodonut wrote at 2020-10-28 17:40:01:

A low price and energy draw with ultra 4K gaming meets the wants/needs of >90% of hobbyists. Assuming no issues with future testing/drivers, the 6000 line is forcing Nvidia into a more niche market: namely ML, ray tracing, and "flex" buyers.

I wouldn't be that worried if I were Nvidia -- catering to the whales is good business, but I think we're looking at AMD winning the lion's share of the market.

redisman wrote at 2020-10-29 02:27:33:

AMD's 3080 competitor is $50 less and its 3070 competitor is $70 more. The flagship is cheaper. In the end, most PC gamers balk at >$200-300 GPUs. These are all niche cards so far.

singhkays wrote at 2020-10-28 16:46:59:

Surprised they were able to catch up to Nvidia's latest cards after forgoing the high end for a while. "Catch-up" seems to be AMD's forte lately.

But we'll need to wait for real world testing to see how accurate these claims are.

mrweasel wrote at 2020-10-28 17:28:07:

Yeah, I want to see actual tests before I'm convinced that they're on par with Nvidia. If AMD turns out to have caught up with Nvidia I will be very impressed; that cannot be easy to do while managing the same on the CPU side at the same time.

devonkim wrote at 2020-10-28 17:46:43:

There are enough breakdowns I'm seeing where these new cards haven't really caught up with the newest RTX cards, because DLSS 2.0 is the big differentiator and things like Nvidia Broadcast just aren't there for AMD. The other suspicious comparisons are AMD benchmarks against the RTX 2080 Ti when the newest RTX cards _all_ blow away the RTX 2080 Ti in almost every metric.

However, what I'm seeing out of this mess is that AMD is absolutely competitive on a performance-per-watt basis now. The other problem is that AMD is so far behind Nvidia in software (read: AI research mindshare) that it's not clear whether we'll see many future titles take on raytracing or adopt the work necessary to do ML-based upscaling with AMD as the baseline software stack rather than DLSS.

CivBase wrote at 2020-10-28 18:08:35:

I suspect the RTX 2080 Ti benchmarks were used considering the RTX 3070 _just_ launched and only the Founders Edition is available for purchase. Based on what I've heard, the RTX 3070 is basically on-par with the RTX 2080 Ti in terms of real-world performance in games, so it's still probably a useful comparison.

meragrin_ wrote at 2020-10-28 20:30:19:

The RTX 3070 does not officially go on sale until tomorrow. You might be able to pre-order somewhere though.

cma wrote at 2020-10-28 17:56:48:

DLSS 2.0, even though it doesn't require individual game training like 1.0, still isn't a generic feature available to any game that puts in the effort to support it, because Nvidia locks it down with watermarks and a non-redistributable DLL until you get approval and are added to their whitelist. Only a handful of games are whitelisted, giving a big disadvantage to indie games vs AAA.

NikolaeVarius wrote at 2020-10-28 18:55:11:

How many indie games are pushing 4K with ray tracing/eye candy, which is really what DLSS is for?

dogma1138 wrote at 2020-10-28 19:51:40:

How many people spend $500-800 on a GPU just to play indie games?

And in any case since Unreal and Unity are integrating DLSS and other "GameWorks RTX" features more and more games will be able to implement them with essentially just a toggle.

The graphical fidelity in indie games has increased dramatically mostly due to the fact that Unreal Engine became very indie friendly and Unity has really stepped up their game with what they offer out of the box.

Eye candy is now easy because the game engines have a lot of these effects and the materials required for them built-in and you also can quite easily get really high quality assets, particles and materials in the engine marketplace for very low cost.

5 years ago, developing a water shader for physically accurate water rendering would probably have taken an indie dev months to complete and probably could've gotten them a speaking spot at GDC; today it's a toggle switch in the UE4 level editor.

cma wrote at 2020-10-28 22:20:41:

> And in any case since Unreal and Unity are integrating DLSS and other "GameWorks RTX" features more and more games will be able to implement them with essentially just a toggle.

That integration (UE4) uses the watermarked DLL. It's unusable until you get explicitly whitelisted by Nvidia. Only a few tens of games have been whitelisted.

dogma1138 wrote at 2020-10-28 23:19:24:

It's still in its development stages and NVIDIA essentially wants to guarantee that all games that launch with DLSS have a good enough implementation not to draw criticism.

The current rumour mill hints at 3.0; when it goes GA is also when it should become part of Unity.

However, you can still grab the DLSS branch that NVIDIA maintains and work with it; if the result is good enough, I haven't seen any evidence that getting the watermark removed is particularly difficult.

cma wrote at 2020-10-28 18:56:05:

It's a basic engine feature now with a toggle in the major engines.

NikolaeVarius wrote at 2020-10-28 19:15:00:

The point is that, looking at DLSS-supported games, the majority are AAA titles, which is what DLSS is for.

Who cares if your average indie game doesn't support it? It probably isn't useful in those cases since you're already running at 900 FPS.

cma wrote at 2020-10-28 21:06:39:

4K and raytracing are just feature toggles with next to no added development time. Need assets with the texel density necessary to really take advantage of 4K? There are huge photogrammetry libraries accessible to indies, like Quixel Megascans. You probably have an outdated notion of where things stand, IMO.

> The point is that looking at DLSS supported games, majority are AAA titles, which is what DLSS is for.

No, that's my point, and it is in part due to lock out with Nvidia's explicit whitelisting system.

thrwyoilarticle wrote at 2020-10-29 11:43:04:

Unless it's Dwarf Fortress

Koliakis wrote at 2020-10-28 22:46:10:

> it's not clear if we'll be able to see that many titles take on raytracing in future titles

Considering AMD's core role in next-gen consoles, it's likely that there'll be broad support for raytracing in games (especially cross-platform games). I'd say the question is more whether the 6000 series is anywhere close to RTX in performance for RT (which afaik wasn't shown today).

redisman wrote at 2020-10-29 02:31:43:

I have serious doubts about RT being a thing with the next gen consoles. Possibly a few cheap effects. Maybe the Series X+1 or whatever refresh they do in a year or two. Even powerful PCs can’t really pull off major RT without tanking performance.

floatboth wrote at 2020-10-29 17:17:40:

For all the AAA games it's mostly for enhancing the rasterized picture with accurate (_rather than cheap_) lighting indeed. But that's kind of the optimal solution; rasterization is really good at producing the "base" picture.

nicoburns wrote at 2020-10-28 18:33:23:

Seems like this might be a Zen 1 kind of situation: not quite caught up with the competition but close enough that they're competitive.

singhkays wrote at 2020-10-28 18:31:51:

On the software side AI is a definite miss. The other is video editing acceleration support in Adobe Premiere and DaVinci Resolve. I haven't looked lately but I think Nvidia completely dominates acceleration of post-processing effects and such.

kllrnohj wrote at 2020-10-28 19:52:04:

> because DLSS 2.0 is the big differentiator

Is it really, though? Consoles have been doing upscaling without it for years, and one has to assume they're still going to be innovating on that front on RDNA 2.0 with the new generation, too.

The DLSS 2.0 mode where it's used as a super-sampling replacement is kinda interesting, but realistically TXAA in most engines is also pretty solid. It seems like a fairly minor extra feature at the end of the day as a result... Cool, but not game changing at all.

EDIT: although AMD did make mention of something called "Super Resolution" as part of their FidelityFX suite, which sounds like a DLSS competitor, but there are no real details. And of course the actual image results here are far more important.

CivBase wrote at 2020-10-28 17:31:36:

I don't think I'd characterize what they've been doing in the CPU market as "catching up" anymore. They caught up to Intel a few years ago. Now they are just asserting dominance for the immediate future.

They've definitely been far behind Nvidia for a while, though. I haven't seriously considered an AMD GPU in almost a decade outside of budget gaming rigs, and even then I ended up going with a used Nvidia card. Hopefully this is enough to give them a serious foothold in the high-end GPU market so they can give Nvidia competition for years to come.

p1necone wrote at 2020-10-28 22:41:57:

They caught up/surpassed Intel in multithreaded performance with the first gen of Ryzen (of course depending on price point), but until Zen 3 Intel still had the advantage in single threaded perf. And because a _lot_ of games either heavily peg one thread only, or only scale to a small number of threads this meant Intel still had the advantage on most games at a lot of price points. It's only with Zen 3 that Ryzen is beating Intel in single threaded perf as well.

singhkays wrote at 2020-10-28 18:33:57:

The reason I say "catch-up" is because before Zen 3, Intel still had an IPC lead. With Zen 3, it's the first time in a decade that AMD is able to claim IPC lead.

kllrnohj wrote at 2020-10-28 19:57:02:

Intel had a single-threaded performance lead prior to Zen 3 but it's not entirely accurate to say they had an IPC lead. Zen 2 was fairly evenly matched vs. Intel's latest in IPC, but Intel's clock advantage made overall single core performance clearly in their favor.

So Zen 2 would be when AMD "caught up." Zen 3 would be where they surpass (assuming reviews match the claims etc etc...)

fomine3 wrote at 2020-10-29 04:29:11:

Intel's latest top IPC CPU is Ice/Tiger Lake, not Skylake derivatives. So maybe Zen3 is the catch up to Intel.

floatboth wrote at 2020-10-29 17:11:48:

WhateverNewLake was not (still is not?) present in the big desktop market; the big power-hungry 5.3GHz desktop chips are all Skylake+++++

01100011 wrote at 2020-10-28 17:03:27:

Good showing from AMD. It will be interesting to see how Nvidia responds. I'm curious if AMD will also suffer from supply issues.

Does AMD allow running in "rage mode" without voiding the warranty? Is that something that a 3rd party mfg will offer to cover?

zamalek wrote at 2020-10-28 17:12:17:

I assume so, given that Ryzen runs in "rage mode" all the time (and RDNA2 is the first architecture where people from the Zen team have been involved).

I also assume that you are going to need a high-end case (not to mention the PSU) to provide adequate cooling because, if it is anything like Ryzen, it will react strongly to the cooling available to it.

It will likely be noisy and, as always with OC, not all cards will have the same headroom: it allows them to avoid promising headroom that may not exist on your individual card.

kllrnohj wrote at 2020-10-28 19:59:30:

> given that Ryzen runs in "rage mode" all the time

It doesn't. "Rage mode" would be equivalent to Ryzen's PBO. Which is definitely not on by default.

It's likely equivalent to just dragging the power limit slider to the highest it goes on MSI afterburner. Letting boost go out of power spec limits, but nothing more.

Which can still give decent gains (look at how power-starved Ampere is, for example). But the only noteworthy thing here is just that it's in the official control panel instead of a 3rd party app.

zamadatix wrote at 2020-10-28 23:39:52:

Small correction - Rage mode increases the power budget but AMD confirmed it does not max out the power budget the same as sliding the slider all the way to the right.

Speculation point: because of this I almost wonder if it's equivalent to selecting a higher TDP rather than a warranty-violating OC like PBO was. One of the things people railed on Nvidia for was that, for ever so slightly less perf, you could get a much lower-wattage card that was quieter; maybe this was the response to that?

kllrnohj wrote at 2020-10-29 00:01:24:

PBO basically just selects a higher TDP, too.

The warranty-violating part of all of these would be that you're technically driving the power delivery system beyond what the spec strictly requires. For PBO that'd be exceeding the spec that AM4 requires from motherboards, which could put you out of spec for a motherboard's VRM solution. I don't know if there's a handshake there between the CPU & BIOS to ensure PBO doesn't exceed a given motherboard's specs, though. If there is, it'd be hard to claim this would be an actual warranty-violating usage.

Rage mode almost certainly respects the vbios limits, so you'll likely see lower-margin cards have basically no improvement from Rage mode, depending on how much they skimped on the power delivery. You'll likely not see an actual push to void warranties as a result, too.

Although in both cases (PBO & Rage mode) it's going to be almost impossible for a vendor to actually reject a warranty issue unless you _tell them_ you did this. Afaik nobody is doing something like a blown fuse to keep a permanent record of these feature(s) being used or enabled.

snvzz wrote at 2020-10-28 20:24:05:

NVIDIA has simply made an expensive mistake[0] hoping TSMC would offer them lower prices or that Samsung's new process would be ready (it was not), resulting in their worst lineup in a hell of a long time[1].

[0]:

https://www.youtube.com/watch?v=tXb-8feWoOE

[1]:

https://www.youtube.com/watch?v=VjOnWT9U96g

arvinsim wrote at 2020-10-28 17:31:43:

That "rage mode" nomenclature might potentially come back to bite them.

redisman wrote at 2020-10-29 02:38:45:

It's also a really weird "feature" to highlight. Oh wow, you named moving the power slider slightly up - something users have been able to do for years, and it takes a few seconds.

zanny wrote at 2020-10-28 17:45:07:

AMD's been buying 7nm wafers from TSMC for two years now. They should be better equipped to meet demand, or at least more knowledgeable from experience about what kind of supply they can field.

Not sure how in the world they plan on supplying both their new cpu and gpu series with a holiday season launch, though.

01100011 wrote at 2020-10-28 18:03:57:

Exactly. On the one hand they are well established with TSMC, on the other hand, they are trying to meet demand for both CPUs and GPUs simultaneously.

Who knows what they'll decide from a business perspective? I wonder how the margins compare between CPUs and GPUs? They could, say, plan on limiting their higher end GPU SKUs which gives them temporary bragging rights in the GPU space but reserves capacity for CPUs.

redisman wrote at 2020-10-29 02:39:36:

And Xbox and PS5!

CivBase wrote at 2020-10-28 17:17:53:

This is great! I was ready to upgrade to an RTX 30-series card once they were finally available, but now I'm strongly considering an RX 6000-series card.

Of course, even if AMD's benchmarks are representative of how the card will actually perform and even with the lower price, I still have to consider Nvidia for their software features. Shadow Play, G-Sync, and RTX Voice are some nice features. Not to mention DLSS...

With the $1000 price tag on the RX 6900 XT, though, I think Nvidia would be crazy to not lower the price of the RTX 3090.

djsumdog wrote at 2020-10-28 18:20:44:

We'll have to wait for the benchmarks and YouTube videos. What if the new AMD cards look amazing without DLSS and have comparable framerates?

I currently have a small-form-factor case with a 2080 Ti in it. As it stands, the biggest Nvidia card I can put in would be the 3080. I'm curious if the 6900 XT is a two-slot design, or if it's massive like the 3090 with its three-slot requirement.

moonbas3 wrote at 2020-10-29 10:20:24:

It's 2.5 slots and shorter than the 3090.

https://twitter.com/Radeon/status/1321590889396002816/photo/...

snvzz wrote at 2020-10-28 20:31:38:

>Shadow Play, G-Sync, and RTX Voice are some nice features. Not to mention DLSS...

AIUI AMD has their own alternatives to all of these (minus RTX Voice, which I hadn't even heard about until your mention). I do not know how good they are, but I expect there to not be much difference.

As for DLSS 2.0, they're supposedly releasing a competitor in next month's drivers.

CivBase wrote at 2020-10-28 21:16:32:

I had not heard of ReLive until now, but it's nice to know that AMD has an alternative to ShadowPlay.

FreeSync is an adaptive-sync solution, but it isn't a complete alternative to G-Sync. G-Sync displays require special hardware from Nvidia, must adhere to a series of certifications from Nvidia, and don't always support adaptive-sync at low framerates.

RTX Voice is probably not super important for a lot of people. However, as someone who prefers to not wear headphones for long periods of time, it sounds like an extremely useful technology.

And DLSS 2.0 is a pretty big deal for anyone with a 4K screen. I don't have a 4K monitor, but I regularly connect my computer to my 4K TV so I would probably leverage DLSS often. I hope AMD's upcoming solution is comparable, but I need to see it before I give them credit for it.

snvzz wrote at 2020-10-28 21:19:29:

>However, as someone who prefers to not wear headphones for long periods of time, it sounds like an extremely useful technology.

Very random but: Have you considered full-sized open-back headphones? They're at least an order of magnitude better experience, in sound and comfort.

I use/favor/recommend Sennheiser's HD600.

CivBase wrote at 2020-10-28 21:29:31:

I got some Sennheiser HD 380 Pros for the office a few years ago. They're comfortable and sound good for the price, but they get a bit too warm for me after a couple hours. I was avoiding open-backed headphones at the time because I've heard they leak audio and didn't want to annoy my coworkers. Now I work remote, so maybe I should reconsider open-backed headphones if they breathe better. Thanks for the suggestion!

snvzz wrote at 2020-10-28 21:46:52:

>but they get a bit too warm for me after a couple hours.

I use HD380 Pro at the office, too. They're comfortable for closed, but they still do get ears/head warm over time. This is in contrast to HD600. The sound is also like night and day difference.

>I've heard they leak audio and didn't want to annoy my coworkers.

At healthy (as in low and plenty sufficient) volumes this is not an issue. The reason I use closed headphones in the office is the other way around: I can hear my coworkers and they're louder than my music is.

>so maybe I should re-consider open-backed headphones if they breath better. Thanks for the suggestion!

You're welcome. I'm confident you can't go wrong with HD600, they're legendary all-rounder, neutral-tuned, uncolored headphones, and at their price they're a steal, but do watch/read some reviews before making the call, for your own sanity.

Do note that HD650/660S are less neutral and near-universally considered worse by reviewers, on both subjective and objective (measurements) metrics, even when more expensive. And the lower end HD599/etc models aren't even comparable on a level field, in neither sound nor build. (HD600 are modular and built like tanks. The parts are compatible with 580/650/660S and sold individually, for the worst case and unlikely scenario of breaking anything)

They're 300 to 600Ω impedance across the frequency range, so they do benefit greatly from a headphone amp, but will still sound great even from an anemic source.

If you end up feeling like buying a dac/amp combo to get that extra, perceivable improvement, do look into "Audio Science Review (ASR)" community for no-bullshit hard measurement driven reviews. There's excellent solutions that do not break the bank, and a lot of hocus-pocus that measures like shit and yet has outrageous asking prices.

hpfr wrote at 2020-10-29 16:33:35:

A good number of people seem to prefer the HD650 sound even if they’re less neutral, and they can be had for $200 via the HD6XX rather than $300 for the HD600, so I wouldn’t count them out immediately.

dannyw wrote at 2020-10-28 22:32:57:

What do you think about the HD58X and HD6XX?

snvzz wrote at 2020-10-29 00:02:06:

>the HD58x

They use tiny drivers with higher distortion, and the frequency response is v-shaped relative to the HD600. They're better than e.g. the HD579/HD599, but that's a low bar to meet.

>HD6xx

They are just a Massdrop/Drop rebrand of the HD650, and thus everything regarding the HD650 applies to them.

All in all, the price difference of 58x/6xx/650/600 does not make it sensible to settle for a worse headphone than HD600. Particularly since it'll last you a lifetime. This is unlike, say, your computer's screen.

And the HD660S is, besides worse, dramatically more expensive.

sundvor wrote at 2020-10-28 23:22:07:

Agree; in the home office I have some ATH-ADG1Xs which are open and they're very light and super comfortable. I also use ATH-MSR7NCs - closed - in the home office for very light noise reduction (desktop PC fans) and better music.

sundvor wrote at 2020-10-28 23:18:32:

RTX Voice is _INSANELY GOOD_, even with headphones.

Crying children, cats (e.g. Burmese outside the closed home office door when your meeting has started), lawnmowers outside, hammering away at the mechanical keyboard - it all just goes away like magic.

I'm in the market for a new high end GPU, but I truly need AMD to come up with their equivalent of RTX Voice. Already got my Ryzen. :-)

thrwyoilarticle wrote at 2020-10-29 11:47:28:

In case you're not aware, it's been superseded by NVIDIA Broadcast. The RTX Voice beta just crashes on my hardware.

holoduke wrote at 2020-10-28 20:22:31:

26.8 billion transistors. Pretty impressive. Roughly 2 years ago the Vega 20 and TU10x series had around 13 billion.

4 years ago the most common models were around 7 billion, and if you look further back it roughly doubles every 2 years.

In 2031 we will have mainstream GPUs with 1 trillion transistors. Compare that with the 1996 Voodoo 1 with just 1 million transistors.
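
For reference, the doubling arithmetic behind that figure, as a quick sketch:

    # Project transistor counts forward from ~26.8e9 in 2020, doubling every 2 years.
    def projected_transistors(base_year=2020, base_count=26.8e9,
                              target_year=2031, doubling_period=2):
        doublings = (target_year - base_year) / doubling_period
        return base_count * 2 ** doublings

    print(f"{projected_transistors() / 1e12:.2f} trillion")  # ~1.21 trillion by 2031

    # Going backwards: 1e6 transistors (Voodoo 1, 1996) doubled every 2 years for
    # 24 years gives 1e6 * 2**12 ~= 4.1 billion, so real GPUs have actually beaten
    # a strict 2-year doubling over that span.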

sushshshsh wrote at 2020-10-28 20:24:28:

And still the Voodoo 1 games were the most captivating. We went from "3D in arcades only" to "3D anytime you want" :)

PaulKeeble wrote at 2020-10-28 20:58:15:

For IIRC about £80. They weren't all that expensive. I remember fondly that click as the relay switched over the connection to the 3dfx card.

Even by the time Nvidia got on the scene with the GeForce 256 we were still talking £180 for the SDR model and about £220 for the DDR high-end model. The market has gotten massive, as have the silicon dies, and the price per generation just keeps climbing.

josalhor wrote at 2020-10-28 16:47:33:

They claim to have a competitor for the 3090 at a price 30% lower ($500 less).

Impressive claim.

pixelpoet wrote at 2020-10-28 19:48:03:

I think Nvidia's claim that the 3090 costs anywhere near $1500 is even more bold. They might as well claim it costs $15, for all the good it does anyone actually trying to buy one.

snvzz wrote at 2020-10-28 20:22:28:

If there's no availability for the NVIDIA cards, and the AMD cards are actually available, what it means in terms of pricing is that the gap is even bigger than $500.

Considering these NVIDIA cards have capped FP32/64 performance and don't have ML drivers like the Titan series does, the one sensible case for buying them is gone.

zamadatix wrote at 2020-10-28 23:32:42:

3090 is still great for rendering workloads where you want the VRAM and ray tracing hardware but don't want to pay the Quadro tax. Other than that though yeah it's just the "don't care about price" performer.

snvzz wrote at 2020-10-29 00:24:14:

That's only if the drivers allow for leveraging that.

AIUI FP32 and FP64 are gimped.

zamadatix wrote at 2020-10-29 02:16:25:

They have some gimped capabilities (I don't think FP32, as that's needed for games), but as I said you don't need any of those features for most 3D rendering tools, so it's not worth worrying about the driver. You can see the 3090 wipe the floor with the Titan RTX in rendering workloads here:

https://techgage.com/article/nvidia-geforce-rtx-3090-proviz-...

"it has half <category> performance" doesn't immediately make it worth 4x the money on an uncapped card, especially if your task doesn't involve a capped feature.

ZuLuuuuuu wrote at 2020-10-28 20:28:35:

I think some of the price difference is expected because of the memory difference, 24GB vs 16GB. But for gaming up to 4K that memory difference doesn't matter much. I guess the RTX 3090 will perform better on workloads other than gaming.

zamadatix wrote at 2020-10-28 23:31:36:

A marginal amount but remember the only difference between the 6800 XT (16 GB, 72 compute units, $649) and the 6900 XT (16 GB, 80 compute units, $999) is "flagship cost" for the last 8 compute units to be enabled.

It's the same story on the 3090: you get the 7th compute cluster enabled, and they happened to double the memory. Nothing about that more than doubles the price of the card relative to the 3080; it's the same "flagship cost" driving the price.
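
A quick sketch of that point, using the prices and compute-unit counts quoted above:

    # Average vs. marginal cost per compute unit for the two RDNA2 SKUs above.
    cards = {
        "6800 XT": {"price_usd": 649, "compute_units": 72},
        "6900 XT": {"price_usd": 999, "compute_units": 80},
    }

    for name, c in cards.items():
        print(f"{name}: ${c['price_usd'] / c['compute_units']:.2f} per CU on average")

    delta_price = cards["6900 XT"]["price_usd"] - cards["6800 XT"]["price_usd"]
    delta_cus = cards["6900 XT"]["compute_units"] - cards["6800 XT"]["compute_units"]
    print(f"Marginal cost of the last {delta_cus} CUs: ${delta_price / delta_cus:.2f} per CU")
    # ~$9.01 vs ~$12.49 per CU on average, but ~$43.75 per CU at the margin --
    # that gap is the "flagship cost".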

sroussey wrote at 2020-10-28 18:24:56:

I half expected them to move their GPU to a chiplet design this year.

Also, curious if their APUs next year will be Zen3+RDNA2. They have tended to be a year behind when they integrate the two, but I suspect the extra interaction between teams will fix it this time around.

bob1029 wrote at 2020-10-28 19:34:35:

I think chiplet design is coming with RDNA3. AMD has been putting out material regarding CDNA & InfinityFabric 3 for the datacenter market.

See:

https://www.extremetech.com/wp-content/uploads/2020/03/AMD-I...

shantara wrote at 2020-10-28 19:53:26:

RDNA2 seems to be based on the R&D AMD has already done for Sony and Microsoft when designing the new consoles. It brings largely the same architecture and feature set to the PC platform, with the exception of the proprietary storage technologies.

I think it's going to be the next year when we'll see AMD making some improvements beyond the current console status quo.

zamadatix wrote at 2020-10-28 23:26:14:

It wasn't in the video announcement but AMD has confirmed Microsoft's DirectStorage API will be supported on RDNA2 cards in Windows 10.

Worth noting Nvidia also got on board this train with the 3xxx GPUs as well, so it's not some console-only proprietary feature.

llampx wrote at 2020-10-28 22:50:26:

Being realistic, I believe they will hamstring their desktop APUs to stay entry-level performance-wise, firstly because APUs are generally bought by the budget-conscious, and secondly because why cannibalize their own product line by not selling a separate CPU and GPU to the discerning gamer? Laptops are a different story.

I started thinking this way after seeing basically no improvement from the 2200G and 2400G to the 3400G.

0-_-0 wrote at 2020-10-28 18:34:26:

At this point a big advantage of Nvidia is DLSS and machine learning, and a big advantage of AMD is that future console games will be optimised for its CPU+GPU architecture. I'm personally hoping to buy a TSMC card from Nvidia with a bump in memory capacity in the (hopefully near) future.

Aaronstotle wrote at 2020-10-28 17:55:12:

Glad to see competition returning to this space, consumers are in a great place right now.

djsumdog wrote at 2020-10-28 18:22:47:

I find it interesting they're going with the 6xxx naming scheme. So the Radeon VII was pretty much a market failure, right?

KSS42 wrote at 2020-10-28 18:56:18:

It wasn't really meant for gaming. Just an opportunistic card in between Vega and RDNA.

mchusma wrote at 2020-10-28 18:17:56:

Does AMD have anything like DLSS?

snvzz wrote at 2020-10-28 20:34:20:

Shortly after DLSS 1.0, AMD released a scaler, under an open license, that's better than DLSS 1.0. NVIDIA proceeded to immediately implement it in their drivers, except with a double-digit performance penalty, whereas AMD took a negligible <2% hit, even on their older cards.

As for DLSS 2.0, there's supposedly going to be a new scaler to compete with it on the new drivers next month. I don't know which technology will actually look or perform better, but I do not expect the difference to be dramatic either way.

zionic wrote at 2020-10-28 22:20:06:

This is false; none of AMD's scalers have beaten DLSS 1.0's quality, let alone the substantial improvement brought by DLSS 2.0.

aoeusnth1 wrote at 2020-10-29 03:43:15:

Can either of you cite sources instead of making assertions?

fomine3 wrote at 2020-10-29 02:49:30:

As usual (and the same goes for the competitor), their benchmarks are hype-ish. They overclocked their product in some benchmarks, and they show "up to" FPS numbers that I rarely see in reviewers' benchmarks. We need to wait for the review embargo to lift, as usual.

piinbinary wrote at 2020-10-28 16:52:36:

While RTX is still a selling point for Nvidia over AMD right now (it has more development and broader support than AMD's ray tracing), I think that a lot of gamers aren't going to be willing to pay the premium to get it (to say nothing of availability).

silentwanderer wrote at 2020-10-28 16:58:32:

The only sticking point for me is that Nvidia has features like DLSS that could extend the longevity of my card

mhh__ wrote at 2020-10-28 17:01:43:

AMD has technology to compete with DLSS (or so I've read), but talk is cheap and Nvidia is already walking the walk.

DLSS could be an absolute godsend for VR, if you imagine the next gen of HMDs at ridiculous pixel densities but the game can cheat and basically fake your peripheral vision.

sudosysgen wrote at 2020-10-28 19:59:22:

AMD has come out with a DLSS equivalent, though more details need to be given.

That being said, AMD cards tend to age better than NVidia cards. My R9 290X went from being inferior to a 780Ti to 12-15% faster.

p1necone wrote at 2020-10-28 23:04:37:

AMD hasn't come out with anything like DLSS _yet_, but they've said they're "working on it".

sudosysgen wrote at 2020-10-29 02:41:57:

Technically they did already come out with a scaler that competes with DLSS 1, but for DLSS 2 they did say they are working on it. Of course, the first one isn't publicly available, and IIRC was focused on consoles.

throwaway2048 wrote at 2020-10-28 17:18:09:

Only if games support them, which relatively few do atm.

Generally speaking, manufacturer-proprietary features like that see poor support long term.

PhysX, HairWorks and SLI all come to mind.

snvzz wrote at 2020-10-28 20:26:30:

>(it has more development and broader support than AMD's ray tracing),

We don't know for how long AMD has been developing their raytracing, or how much effort they've put into it.

What we know, however, is that the consoles use AMD tech this time around, and that's what games that run on both PCs and Consoles are going to be designed for.

tigen wrote at 2020-10-29 00:48:04:

Ray tracing also uses standard APIs, the effects are not going to be limited to one brand of GPU (in general).

zamadatix wrote at 2020-10-29 02:23:29:

Most of the time. E.g. Crysis Remastered used a Vulkan shim with Nvidia proprietary APIs.

This should clean up quickly though.

mkaic wrote at 2020-10-28 17:37:24:

Yeah, as someone who works in 3D graphics, I was really hoping against hope that RDNA 2 would include better RT support, but alas, guess I'll keep waiting to try and get an elusive 3080.

snvzz wrote at 2020-10-28 20:28:23:

>I was really hoping against hope that RDNA 2 would include better RT support, but alas

I wouldn't jump to conclusions. I'd instead wait until third party benchmarks.

AMD has the advantage of being in both Microsoft and Sony's new consoles, too, which in practice means games are going to be designed for AMD's RT.

boardwaalk wrote at 2020-10-28 17:15:05:

Though, AMD is in the new consoles. Including the Xbox which likely has the same DX raytracing APIs as Windows. I would be surprised if RTX is the standard bearer in a few years.

unsigner wrote at 2020-10-28 17:21:49:

"RTX" is NVIDIA's marketing speak for "raytracing".

The APIs through which you access raytracing are either DXR (DirectX Raytracing), or Vulkan extensions - presumably both available on AMD RX 6000 series.

ksec wrote at 2020-10-28 19:07:18:

I keep wondering how Apple is going to compete in the GPU space, especially on the Mac Pro with its ultra-low volume.

Would have been happy to see the RX6000 on a Mac.

lifty wrote at 2020-10-28 19:10:05:

Why not continue to use AMD GPUs like they have been doing until now? They need to figure out whether the driver has any issues with the new architecture, but it shouldn't be a problem.

ksec wrote at 2020-10-28 19:19:22:

They could, but I remember reading some WWDC slides that suggest ARM Macs will only be using Apple GPUs. Had Apple stated the Mac Pro would stay on x86 I would not have such concerns, but they intend to go all in.

Which means Mac Pro will get an ARM chip _and_ an Apple GPU.

lifty wrote at 2020-10-28 19:38:30:

Well, I expect them to ship an ARM GPU with every Mac Pro, but in addition to that, also a beefy external GPU. The OS would switch between them based on the workload, like they've done before.

ksec wrote at 2020-10-29 08:29:39:

External GPUs are not supported on ARM Macs (at least initially), as per the WWDC slides/documents.

pram wrote at 2020-10-28 23:43:14:

Big Sur has RDNA2 drivers already, so I wouldn't doubt a 6900 XT MPX eventually.

kilo_bravo_3 wrote at 2020-10-28 19:13:30:

I imagine Apple will keep buying GPUs for their high-end devices.

Being on ARM doesn't mean a GPU won't work.

fomine3 wrote at 2020-10-29 04:27:37:

Did they say anything about HDMI 2.1 support? It's important for a 4K+HFR+HDR monitor setup.

Strom wrote at 2020-10-29 12:03:20:

Yes HDMI 2.1 with VRR is confirmed at the bottom of the official product page.

https://www.amd.com/en/graphics/amd-radeon-rx-6000-series

lclc98 wrote at 2020-10-29 05:10:26:

The specs[1] don't say which version of HDMI, but under "Supported Rendering Format" they do mention HDMI 4K support.

[1]

https://www.amd.com/en/products/graphics/amd-radeon-rx-6800

fomine3 wrote at 2020-10-29 05:34:58:

HDMI 2.0 is enough to handle "HDMI 4K", so sadly I have to assume it only supports HDMI 2.0.
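
For a rough sense of why HDMI 2.1 matters for 4K+HFR+HDR, here's a back-of-the-envelope bandwidth sketch; the blanking overhead and effective-throughput figures are approximations:

    # Approximate uncompressed video bandwidth vs. HDMI link capacity.
    def video_gbps(width, height, refresh_hz, bits_per_pixel, blanking_overhead=1.1):
        return width * height * refresh_hz * bits_per_pixel * blanking_overhead / 1e9

    hdmi20_gbps = 14.4  # 18 Gbps link minus 8b/10b encoding overhead (approx.)
    hdmi21_gbps = 42.6  # 48 Gbps link minus 16b/18b encoding overhead (approx.)

    for refresh in (60, 120):
        need = video_gbps(3840, 2160, refresh, bits_per_pixel=30)  # 10-bit RGB/4:4:4
        print(f"4K{refresh} 10-bit: ~{need:.1f} Gbps "
              f"(2.0: {'ok' if need <= hdmi20_gbps else 'no'}, "
              f"2.1: {'ok' if need <= hdmi21_gbps else 'no'})")
    # 4K60 10-bit already overflows HDMI 2.0 at full chroma (hence 4:2:0 tricks),
    # and 4K120 10-bit needs HDMI 2.1 (or DSC) outright.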

Kye wrote at 2020-10-28 19:32:58:

It's surreal having an integrated GPU in a small non-gaming laptop that isn't completely useless (Vega 6 on a Ryzen 5 4500U). AMD is on fire. I hope they keep it up until I get ready to build a real gaming computer.

finikytou wrote at 2020-10-28 18:26:24:

For people in deep learning: I want a small, performant test server and was thinking of running it on a GeForce 3070 (8 GB). Considering AMD doesn't support CUDA, would I be killing myself by getting a 6800 XT instead?

singhrac wrote at 2020-10-28 19:14:33:

I think you will still likely encounter a lot of upgrade headaches. If you can wait, see if PyTorch/TF add the 6800 XT to their test servers (it seems they currently test using a Vega 20, but I'm not certain).

easde wrote at 2020-10-28 19:41:23:

You'll want tensor cores, so I would stick to the 2000 or 3000-series Nvidia cards. Not to mention the headaches of using AMD cards with any ML framework.

dhagz wrote at 2020-10-28 18:37:55:

Honestly? I'd go with a Jetson dev kit. A couple hundred bucks and a whole lot of performance. Plus you get CUDA.

easde wrote at 2020-10-28 19:42:38:

An old $100 PC with an RTX 2060 6GB would be a _lot_ faster than a Jetson NX and only about $400 total.

finikytou wrote at 2020-10-28 19:19:04:

I was looking at the 4 GB version. Would I be able to do most of the Kaggle competitions with it as practice?

snvzz wrote at 2020-10-29 00:45:38:

Careful, there are no ML drivers for Ampere and FP32/64 is capped.

These are gaming cards. They'll charge you more for the compute cards, when/if they're available.

zamadatix wrote at 2020-10-29 02:32:38:

Not to reply to you twice in two different threads, but for this thread's sake: you can absolutely run things like TF on them, and they absolutely do kick ass in terms of performance per dollar when you do. That is, look into what you need out of "ML" and CUDA, and the perf/dollar, before assuming you need to buy a Quadro simply because you're not playing games on it.

https://www.evolution.ai/post/benchmarking-deep-learning-wor...
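
As a minimal sketch of what that looks like in practice -- standard public TensorFlow 2.x APIs, with a purely illustrative toy model -- checking that the gaming card is visible and enabling mixed precision so its tensor cores get used:

    # Verify TF sees the GPU and route most math through float16 tensor cores.
    import tensorflow as tf

    print("Visible GPUs:", tf.config.list_physical_devices("GPU"))

    # TF 2.4+; older 2.x uses tf.keras.mixed_precision.experimental.set_policy.
    tf.keras.mixed_precision.set_global_policy("mixed_float16")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, dtype="float32"),  # keep outputs in float32
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
    model.summary()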

fomine3 wrote at 2020-10-29 04:09:17:

Still useful for ML.

cwhiz wrote at 2020-10-29 11:27:40:

In order to achieve peak performance, you need a Ryzen CPU. I’m not sure how I feel about that.

vernie wrote at 2020-10-28 17:01:19:

Does it come with an improvement in driver quality?

aejnsn wrote at 2020-10-28 18:00:06:

Literally, I Ctrl+F'ed to look for a comment to this effect. Preach on. I would buy one if I could run it reliably in my Linux setup.

zohairshaikh72 wrote at 2020-10-29 13:02:06:

I'm glad I waited before investing in 3k series

pjmlp wrote at 2020-10-29 12:42:49:

DirectX 12 Ultimate support, nice catching up.

shmerl wrote at 2020-10-29 02:35:12:

Good announcement. Looking forward to Linux gaming benchmarks.

new_realist wrote at 2020-10-29 01:11:55:

Will this finally support the UP3218K on Linux?

abledon wrote at 2020-10-28 16:47:34:

hmm. 16 GB VRAM vs 3070 8 GB... looks better for fitting TF models in memory
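
As a rough sketch of why the extra VRAM matters for training -- the parameter counts and the four-copies-of-weights assumption are illustrative, not from the thread:

    # Back-of-the-envelope training footprint: weights + gradients + Adam's two
    # moment buffers, all in float32, ignoring activations (which add more on top).
    def training_vram_gb(num_params, bytes_per_value=4, copies=4):
        return num_params * bytes_per_value * copies / 1e9

    for params in (110e6, 340e6, 1.5e9):  # roughly BERT-base, BERT-large, GPT-2 sized
        print(f"{params / 1e6:.0f}M params -> ~{training_vram_gb(params):.1f} GB before activations")
    # ~1.8 GB, ~5.4 GB, ~24 GB respectively: an 8 GB card fills up fast once
    # activations and batch size are added, while 16 GB buys real headroom.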

trynumber9 wrote at 2020-10-28 18:05:26:

These cards are RDNA2, which is focused on gaming. If it happens to be good at any other workload that is incidental. Do not expect much support for this type of work on RDNA2. If you want compute, AMD will be announcing CDNA cards (perhaps Arcturus, which is allegedly 42TFLOPS in the same power envelope) sometime soon.

minimaxir wrote at 2020-10-28 16:49:48:

You won't be fitting a TensorFlow model onto an AMD GPU anytime soon (despite many efforts to address that).

abledon wrote at 2020-10-28 16:54:09:

https://rocmdocs.amd.com/en/latest/

Isn't this a solution?

diab0lic wrote at 2020-10-28 17:04:41:

I have not tried this in a while, but no. It requires a very involved setup compared to its CUDA counterpart, certain functionalities don't work, and performance characteristics for specific operations can vary wildly. If you look at the amount of time necessary to deal with it, the calculus quickly swings in favor of just purchasing an Nvidia card.

Edit: And some sibling comments have pointed out that it doesn't function on current gen cards.

Nvidia still firmly leads the software game in this regard. I'm really hopeful this will change in the future but I had the same criticisms four years ago when I bought my last AMD card and nothing has changed yet.

FredFS456 wrote at 2020-10-28 17:02:51:

https://github.com/RadeonOpenCompute/ROCm#Hardware-and-Softw...

Note that ROCm does not have support for even the current-generation of gaming GPUs (Navi).

dogma1138 wrote at 2020-10-28 17:00:36:

ROCm is only available on Linux and doesn't support Navi GPUs.

Overall, if you need a solution that just works with any ML framework out there, Radeon GPUs aren't fit for purpose.
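
If you do want to sanity-check a given setup before committing, here's a minimal sketch assuming a PyTorch install; ROCm builds of PyTorch reuse the torch.cuda API and report the HIP version via torch.version.hip. On Navi/RDNA cards, at the time of this thread, the expected outcome is simply that no GPU is visible:

    # Check whether this PyTorch build is a ROCm (HIP) build and can see a GPU.
    import torch

    is_rocm_build = getattr(torch.version, "hip", None) is not None
    print("ROCm (HIP) build:", is_rocm_build)
    print("GPU visible to PyTorch:", torch.cuda.is_available())

    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
        # A tiny matmul on the device proves the kernels actually run.
        x = torch.randn(1024, 1024, device="cuda")
        print((x @ x).sum().item())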

deeeeplearning wrote at 2020-10-28 17:06:09:

Let us know how that works out.

FatDrunknStupid wrote at 2020-10-28 18:45:47:

ZX=Spectrum

randompwd wrote at 2020-10-28 16:55:07:

Having owned an R9 380, I can safely say I will never buy an AMD GPU again. Buggiest drivers (Windows), even for day-to-day work. And lackluster support to boot.

pimeys wrote at 2020-10-28 16:59:12:

I have an RX 580 at work and it's way nicer on Linux compared to whatever NVIDIA offers. The 6900 XT will definitely replace my 2080 Ti.

eloff wrote at 2020-10-28 17:07:23:

Yeah, the Nvidia driver support on Linux was so bad that I rage-quit Linux and bought a Spanish Windows 8 (I was in Latin America at the time) at a ridiculous price after days of troubleshooting a black screen on boot.

AMD has been rock solid for me with Linux ever since I realized that was the issue and bought an AMD card. I'll never go back to Nvidia until they make Linux drivers a priority.

I still have the Spanish windows that I use for games with my very old nvidia graphics card. It's time to retire that system soon.

philliphaydon wrote at 2020-10-28 17:18:46:

I moved from Nvidia to AMD because of driver issues in Linux. Definitely had fewer issues with AMD. I'll be replacing the 5700 XT with a 6800 XT.

heelix wrote at 2020-10-28 17:32:23:

Ended up with the last few weeks of December off last year and sat down with the mission of getting my CentOS 8 box to work with Steam. The RX 580 worked lovely for Vim, but... I went on a grand adventure to get their drivers working. Oh, what a pain. Took me, with my limited knowledge, most of that time to sort out the potion miscibility rules between Steam, CentOS, and AMD.

Around January, they released a set of updates where all the planets aligned. It went from a grand adventure to a handful of commands. The rest of this year has continued to be a non-event for updates. I'll be looking at one of the ATI cards, as lord knows I've failed to find any 3080s, of any make, in stock since launch.

pimeys wrote at 2020-10-28 17:40:53:

Did you use their closed source drivers or the ones provided in the kernel? I've been just using Arch Linux and the mainline kernel with the OSS AMD drivers, and everything's super good and solid since last year when I got the workstation.

heelix wrote at 2020-10-30 14:17:27:

Closed source. At that point in time I was trying to get Steam up and running on my Linux box to play some games, which needed something more than stock. I'd only used the open source drivers for coding/work, which were lovely. A buddy had gifted me a stack of RX 580s when he exited bitcoin mining, so I had a way better video card than I needed for IntelliJ/GoLand/Vim. Not saying it was the right rabbit hole, but my god, that was an adventure getting the 19.30 closed source drivers to work on a CentOS 8 Stream box. Ended up spending much of December compiling kernels and banging a dead chicken on my monitor... rather than the original mission of playing Oxygen Not Included.

https://www.reddit.com/r/CentOS/comments/eplicu/steam_on_cen...

zanny wrote at 2020-10-28 17:47:42:

Kernel 5.9 seems to have finally made the 5700XT stable for me. Since 5.6 the crashes have gotten less frequent per release, but I haven't had one in almost two weeks since this latest kernel.

snvzz wrote at 2020-10-28 20:37:24:

Absolutely.

And NVIDIA's blob driver doesn't work AT ALL on current kernels ATM.

Apparently, it's going to take months this time around.

Zardoz84 wrote at 2020-10-28 17:08:09:

Having owned a few NVIDIA cards (and now using an RX 580), I can say that I will avoid NVIDIA. Buggiest drivers (Linux), with strange artifacts and tearing, even for day-to-day work. And the drivers are closed source.

shrimp_emoji wrote at 2020-10-28 20:43:01:

The Nvidia open source drivers (Nouveau) are awful (which is Nvidia's fault), but the proprietary drivers are decent IME.

G-SYNC works out of the box (unlike FreeSync) in Linux, and Nvidia even has a nifty control panel app.

But it is a hassle having to worry about having the proprietary drivers in place during installation or else your G-SYNC monitor won't work, or having your G-SYNC monitor stop working if you switch to an AMD GPU. ;p (Yes, the ones with the dedicated G-SYNC hardware, which aren't _all_ G-SYNC monitors, brick when not plugged into an Nvidia card.)

snvzz wrote at 2020-10-28 21:14:44:

>or else your G-SYNC monitor won't work, or having your G-SYNC monitor stop working if you switch to an AMD GPU. ;p (Yes, the ones with the dedicated G-SYNC hardware, which aren't all G-SYNC monitors, brick when not plugged into an Nvidia card.)

This is intentional, by design. It's called vendor lock-in. NVIDIA abused their position at the time to plant g-sync screens everywhere.

Of course, later they had to adopt FreeSync like everybody else (TVs and Monitors across the industry, with HDMI and DisplayPort, Intel and AMD, on consoles and computers), but there's a significant base of sold screens with gsync modules that will tilt many GPU purchases to the NVIDIA side.

Even when they adopted FreeSync, they pressured screen vendors to use their own "gsync compatible" name for the same technology, by leveraging their market position.

snvzz wrote at 2020-10-28 20:35:36:

FUD.

And on the topic, the early experiences with the 3xxx Ampere lineup have been horrible, with reliability issues, ridiculous PSU requirements, and high return rates.

This is despite few cards having actually been sold, with availability being extremely low.

taurath wrote at 2020-10-28 17:14:40:

Their Linux support is very good compared to Nvidia, but I will agree that their drivers are quite buggy on Windows relative to Nvidia.

cududa wrote at 2020-10-28 21:32:55:

Interesting that the only comments below supporting the R9 series drivers are from Linux folks.

gitweb wrote at 2020-10-28 17:10:41:

Getting AMD graphics drivers is easy now and they are great. NVIDIA has been trying to force users to log in just to download their latest graphics drivers. I think you should try again and you'll likely be pleasantly surprised.

redisman wrote at 2020-10-28 17:38:49:

That's not the experience on Linux at all. It's very barebones and doesn't have a UI at all like on Win10. I will say Nvidia's Windows experience is pretty good. You can upgrade the driver without a reboot!

zanny wrote at 2020-10-28 17:53:44:

There's a third-party open source clone of the AMD settings app available on Linux:

https://gitlab.com/corectrl/corectrl

It supports a lot of what the Windows version does, per application settings, frequency / temp / fan monitoring and curve adjustment, etc.

unethical_ban wrote at 2020-10-28 18:01:19:

Having owned an X600, X800, X850XT, X1900XT, 3850, 4870, 7850, RX580, and 5700XT, I can say I have rarely if ever had driver issues with ATI/AMD.

Historically, I did use nvidia when I wanted to do native linux on desktop/media center, because their closed source drivers were supported much better (even FreeBSD had hardware media decoding support). This was before nouveau.

bradlys wrote at 2020-10-28 19:32:59:

And yet - here I am - having owned probably close to as many AMD/ATI graphics cards as you and I want to never go back due to their drivers. The 4850/4870 launch left me with trauma. I've mainly used my cards on Windows. Since I've been using my 980 Ti and 2070 Super - I've not really had any issues with Nvidia. I'll get a 3080 instead of a 6800 XT - presuming supply is available. I'm super hesitant to go back to AMD. Been burned too many times!

snvzz wrote at 2020-10-28 20:39:23:

>The 4850/4870 launch left me with trauma.

I bought HD4850 on release and had no trouble on Windows.

Linux support wasn't there on release (unlike these days) but came quite fast, a matter of weeks IIRC, with open drivers. Support has been there on release for all my newer AMD cards. All cards still work, and are still well-supported by the open drivers.

In contrast, all my NVIDIA cards from before that ended up as dead hardware, and were a nightmare while they worked, with the moody NVIDIA blob drivers.

StillBored wrote at 2020-10-28 18:54:41:

Re ML: AMD seems to play fewer games with product segmentation, and that lets them drive a truck through the pricing models of their competitors. Presumably, if the price/perf is good enough here, it will do to Nvidia's ML market lock what is currently happening to Intel's midrange Xeon line. The smart money is buying AMD and porting their stuff.