Given Intel's reputation for stable Linux drivers, I'm excited for their discrete desktop GPUs.
Yeah! People say AMD has the best Linux drivers, but they're wrong! Intel's are feature-rich and stable, and I think they're better than AMD's [1]. They're just kind of ignored since nobody "chooses" to use an Intel GPU in their computer. You just get one bundled in your CPU, until now that is.
Intel Xe will likely be very poorly received because it'll be compared to AMD's and Nvidia's best offerings, and "Intel is failing at everything" is the big story now. But AMD and Nvidia are treating the low-end GPU market as some sort of backwater, and there's a proper niche for people who just want a reliable graphics card with modern video outputs and modern video codecs. If it can run desktop 3D effects smoothly on multiple high-resolution screens - like a better or less expensive GT 1030 or RX 550 - you'd be golden. Not everything GPU-related is about gaming or high-end compute.
[1]: Okay, AMD's Linux drivers are also really good, but they can still have hairier edge cases and less development attention than Intel's.
You're absolutely correct. Especially in the low-power segment, which AMD and Nvidia have really ignored lately.
If you want a <=75 watt GPU, which happens to be the limit for a PCIe slot without an external power connector, you have very limited options. For Nvidia, the 1650 is the only modern part that fits the bill. For AMD, you have to reach back to the RX 500 series, but good luck finding one. They're extremely uncommon even among the lower-end 500 series, as many of them require an external connector anyway.
AMD do have lower-powered "workstation" offerings, but they're so expensive that it's hard to imagine putting one in a workstation voluntarily. They're also extremely poor performers, to the extent that you might not really see much of a benefit over a Vega iGPU.
The Xe Max is only 25W TDP, which is very attractive if you're hoping to build a passively cooled Linux machine with a low-wattage power supply.
The Vega 11 cores in the Ryzen 3400G are pretty good. I'm using a Ryzen with Vega 8 in my laptop (Lenovo T495s) with Fedora which is about as nice as it gets TBH. I'm happy and I'm a difficult and hypercritical individual when it comes to being happy.
Yeah, I feel like AMD are pushing the G-series CPUs at the "low end decent" GPU market. They sip power and are still pretty rad.
Yeah, got to be honest: I built a 3400G-based system for my daughter so she could play Sims, GTA V, and Apex Legends, and it's not that much worse to use than my GTX 1660 and 3700X. I'm starting to think I didn't need to pay as much for the extra juice.
> AMD do have lower-powered "workstation" offerings, but they're so expensive that it's hard to imagine putting one in a workstation voluntarily.
But goodness are they pretty.
I recently started using a GT 710 to drive three of my 2K monitors at 85Hz via HDMI. It works quite well, even with Nouveau on Linux. That's not exactly modern, but it has 4 HDMI ports and cost around $50 too. It also fits into a PCIe x1 slot and uses under 25W.
Agreed. It's really surprisingly hard to find a quiet card that can do 2x 4K displays @60Hz.
The old ones don't support HDMI 2.0 so can only do 30Hz.
I've got a Gigabyte GTX 1660 in my desktop connected to one 4K display at 60Hz, and that doesn't even spin the fans up on the desktop. Even flat out it's barely audible. I don't know about two monitors, but I suspect the story will be similar.
But why would any of those people want this part? The only way I see it is if you want an Intel GPU for the compatibility in an otherwise AMD system. If not, the Intel CPU you'd build with already includes a more than capable GPU this generation.
The GPU is usually an important part to upgrade to bring old desktop hardware up with the times. Not because of games, but because having modern or many DisplayPort/HDMI connections is nice, and modern hardware video codecs can offload a lot of heavy-duty number crunching from your CPU [1].
Any Sandy Bridge to 1st-gen Skylake desktop does not have these connections or codecs and could often really benefit from a decent $50-$80 add-on GPU. A desktop Sandy Bridge can still make for a very decent computer if you can keep upgrade costs in check. An Intel Xe would also be a decent upgrade if you currently depend on some crappy older Nvidia or AMD GPU.
[1]: Besides your basic h264/hevc/vp9 decoding, it seems Intel Xe has AV1 hardware video decoding, which you can otherwise only get in the very expensive/hard to get/power hungry AMD and Nvidia discrete GPUs (to be) released this month.
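If you want to check whether a given machine already has AV1 decode in hardware, one rough approach is to ask VA-API what the driver advertises. A minimal sketch, assuming the `vainfo` utility from libva-utils is installed (the output format varies by driver, so treat it as a heuristic):

```python
# Minimal sketch: ask the VA-API stack whether the installed driver advertises
# AV1 decode. Assumes the `vainfo` tool (libva-utils) is present; the exact
# output format differs between drivers, so this is only a heuristic.
import subprocess

def has_av1_decode() -> bool:
    proc = subprocess.run(["vainfo"], capture_output=True, text=True)
    output = proc.stdout + proc.stderr  # some builds log the profile list to stderr
    return any("VAProfileAV1" in line and "VAEntrypointVLD" in line
               for line in output.splitlines())

if __name__ == "__main__":
    print("AV1 hardware decode:", "advertised" if has_av1_decode() else "not advertised")
```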
This would be a nice upgrade for an older server. Older Dell and HP servers can host piles of storage and ram; they can make great workstation bargains, but you need to add a GPU. The Intel Xe could be a way to do that without much power draw.
But the existing HP or Dell server has huge power draw, so why try to save a few watts on a low-power GPU?
Makes sense. Most consumer video cards are for gaming, but the latest cards also help with other workloads. A dGPU with the latest QSV would be a blessing for an NVR server.
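For the NVR case, the win is keeping both decode and encode on the fixed-function blocks. A minimal sketch of that kind of offload, assuming an ffmpeg build with QSV enabled; the file names and bitrate are placeholders:

```python
# Minimal sketch: re-encode an NVR camera recording with Intel Quick Sync via
# ffmpeg, keeping both decode and encode on the GPU's fixed-function blocks.
# Assumes an ffmpeg build with QSV enabled; the file names are placeholders.
import subprocess

cmd = [
    "ffmpeg",
    "-hwaccel", "qsv",          # hardware decode
    "-c:v", "h264_qsv",
    "-i", "camera_feed.mp4",    # placeholder input
    "-c:v", "hevc_qsv",         # hardware encode to HEVC
    "-b:v", "4M",
    "recording_hevc.mp4",       # placeholder output
]
subprocess.run(cmd, check=True)
```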
This part is laptop only.
Sure, I should spell out that I'm commenting on the obvious and much-leaked next step, which is PCIe cards based on this line of chips.
Maybe I'm missing something important, but it seems like it could only be a super-niche part for a laptop. Why spend extra board area and BOM on a dGPU that barely outperforms the iGPU?
It seems like it would make more sense as an absolute bottom-of-the-barrel desktop part. Maybe you have some old desktop that you want to turn into an HTPC, but it doesn't have the right ports or the ability to decode high-bitrate video in realtime. Plop one of these guys in for (ideally) $xx and you're good to go. Of course, this would be the opposite of the high-margin stuff Intel likes to make.
More recently, Intel's GPUs have started to require proprietary firmware, which they did not need before, although it was probably embedded in the GPU before.
Every vendor requires proprietary firmware now, afaik.
> Yeah! People say AMD has the best Linux drivers, but they're wrong! Intel's are feature rich and stable, and I think they're better than AMD's [1].
I'd like to counter this. Intel's driver quality has been going down for a year or so. My old EliteBook 850 G2 runs on Intel graphics, but it started to show flickering and corruption for no reason. Nobody has bothered to fix it yet.
Intel's e1000e driver started to misbehave on some older cards, and Intel didn't fix it. On the contrary, they continued to pile patches onto this buggy driver. Kernel folks and Debian developers had to revert some patches to stabilize the driver again. It still loses connectivity sometimes.
By older cards, I mean some of them are barely two years old.
IF, and it's a big IF, Intel gets its act together and restores the quality, that's great. Otherwise, I won't put them on a pedestal for driver quality.
On the Windows side, I have an Intel wireless card which is _officially_ supported by the current drivers, but the new drivers bork the card after standby, and the system has to be powered off to restore connectivity. The computer is an HP Spectre X2 convertible, and only the older, out-of-the-box Windows drivers (which are stock Intel drivers) can use the card correctly.
Intel also has a history of bad OpenGL implementations, where their drivers would claim to implement features that were actually done in software, making the feature query meaningless for deciding what to call.
Is there any vendor who _doesn't_ do that? I remember conversations with Nvidia driver peeps on the OpenGL boards 20+ years ago, and it was very very striking that they would always, _always_, flatly refuse to implement any kind of "is it in hardware?" query. Waste of time even asking.
It wasn't spelled out, but I think the implied rationale was that doing so would just be used against them by competitors' marketing in a bullet-point feature comparison. And maybe a bit of "you don't care if it's hardware, you care if it's fast", which isn't unreasonable.
Sure, but Intel was the worst of the bunch.
There are some obscure features in OpenGL, like line stipple or selection mode. Most of them have been deprecated for over a decade, but there are still some CAD applications that rely on these functionalities and on ancient OpenGL 1.3 semantics. It makes no sense to spend silicon area implementing such features if you can emulate them in software. Also, doing things in software doesn't always mean slow, because many old OpenGL features can be reimplemented with modern OpenGL using shaders, so it's still done by hardware; there's just no dedicated HW support for it.
Back when they were doing this, shaders weren't a thing yet, so you would get a slow CPU emulation instead.
> But AMD and Nvidia are treating the low end GPU market as some sort of a backwater
Are they though? The GPU embedded in my Ryzen 4900 seems to be just as good as this Xe, if not better. The desktop version I would expect to show even better.
The iGPUs are great, but low-end discrete GPUs are largely ignored.
The use case is for people who want HEDT CPUs but don't need high-end GPU power. Getting a modern low-power GPU allows you to still support modern codecs, modern display adapters, and high resolution displays.
So you're willing to pay $1000 for a HEDT CPU but not $150 for a modern GPU?
The price range for HEDT really starts at about $500. That includes low-end Core-X, 2nd gen Threadripper, and Ryzen 9 CPUs (which is essentially HEDT for many use cases).
As it stands, there is no GPU that:
- consumes 75W or less of power (i.e. running only on PCIe power)
- has a modern architecture (with the latest codecs, display adapters, good perf-per-watt, etc.)
- has solid support from open source drivers.
Understandably, it's a niche use case. But AMD hasn't launched a new GPU in that range for almost 4 years.
But this isn't a discrete GPU for desktops... this announcement literally says it's for thin and light laptop applications.
The desktop application is going to be OEM only and there's no indication at this point that it'll even be used to power external monitors vs. just a GPU offload for AI/ML workloads.
https://www.anandtech.com/show/16211/intels-dg1-gpu-coming-t...
> But this isn't a discrete GPU for desktops... this announcement literally says it's for thin and light laptop applications.
Yes, I know. This particular thread of conversation was started by my comment about my excitement for what this means when the desktop counterpart is released.
AMD is barely selling the desktop 4000-series (Zen 2) APUs. OEM sales only, in limited countries. I obtained two through AliExpress, but that's not for the faint of heart.
With US retailers, you're limited to Zen+ 4-core chips, while Zen 3 CPUs (with no GPU) are arriving shortly. Just because I don't want a high-power GPU doesn't mean I want a CPU that old and limited.
AMD sells these older designs on generation-behind nodes (e.g. 14nm), as they need to be on time and under budget to get OEM design wins.
If they miss the timeline to get laptop chips out the door for the back-to-school season, Dell, HP, Asus, etc. will just go with Intel, as the vast majority of laptop sales happen in one short period of the year.
TSMC 7nm production is at capacity; Apple has the bulk of the wafers, and AMD has to share the remainder with the other customers on that 7nm process. Nvidia literally got no wafers allocated by TSMC, which is why they have been forced to use Samsung's half-baked 8nm process, which performs much worse.
>They're just kind of ignored since nobody "chooses" to use an Intel GPU in their computer.
That is not entirely accurate though.
From a total GPU market perspective, Low End / Low Power _equals_ integrated GPU, and both AMD and Nvidia simply can't compete with that. The market for a low-end GPU is tiny and unprofitable when most consumers and customers are happy with their Intel iGPU.
I don't blame them for the lack of SKUs in this segment. And AMD is competing with their own APUs as well.
> But AMD and Nvidia are treating the low end GPU market as some sort of a backwater
I have to guess that's because the low end GPU market is not very profitable.
Exactly. The market for sub-75W GPUs should get interesting soon.
> They're just kind of ignored since nobody "chooses" to use an Intel GPU in their computer. You just get one bundled in your CPU, until now that is.
People looking for Linux laptops certainly do. I've tried the AMD route twice the last ten years and I've been disappointed by crappy driver support and battery life in both cases.
Yeah, AMD drivers on Linux are not very good. I fell for the Nvidia hate over its proprietary drivers, so I upgraded to an AMD Ryzen APU instead of just getting a new Nvidia graphics card. It's not even a recent APU; it's over a year old. Now I am still using my Nvidia card, as the Ryzen APU can't output 4K, and I need to upgrade my graphics card anyway.
What apu? I have a laptop with a Ryzen 5 3500U that can drive a 4K monitor just fine.
Ryzen 5 3400G. I have the same APU in my work PC running Windows 10, so I know it's powerful enough to output 4K, but I don't even get the option to switch to 4K in the display manager. I've tried the latest 5.9 kernel, AMD's proprietary drivers, and many other things. Now I've given up and am just waiting for the new generation of Nvidia cards to start arriving in quantity so I can get an older-generation card cheaply.
I don't know what's going on. The 3500U and 3400G should be essentially the same silicon. I'm running Solus, which should have the latest kernel. Maybe try a different distro using a USB live distro just to see?
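One more thing worth checking before blaming the kernel or the distro is whether the display server even exposes a 4K mode on that output; if it doesn't, the cable or port (HDMI 1.4 vs 2.0, for example) is often the culprit. A rough sketch, assuming an X11 session with `xrandr` available:

```python
# Minimal sketch: check whether the display server actually exposes a 4K mode
# on any connected output. Assumes an X11 session with `xrandr` installed;
# under Wayland this check does not apply.
import subprocess

out = subprocess.run(["xrandr"], capture_output=True, text=True).stdout
current = None
found = False
for line in out.splitlines():
    if " connected" in line:
        current = line.split()[0]          # output name, e.g. HDMI-1
    elif line.startswith("   ") and "3840x2160" in line:
        print(f"{current}: 4K mode advertised -> {line.strip()}")
        found = True
if not found:
    print("No 3840x2160 mode advertised; suspect the cable, port, or EDID.")
```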
I got a top-of-the-line AMD card for compute over a year ago. It still does not have ROCm support, so it's been sitting on a shelf for more than a year.
Some people here value "open source" drivers over "working drivers". I value "working drivers" over "open source" drivers over "proprietary" drivers.
An open source driver that does not work, is worth zero to me.
If it were true that "open source" means somebody can go and fix the driver, then somebody could have added ROCm support for the 5700 XT a long time ago. The fact that this has not happened means, to me at least, that open source is not as valuable as people here seem to make it out to be.
Sure it is better than closed source, but if your driver doesn't work, and AMD doesn't want to fix it, it definitely does not mean that anybody will fix it within the lifetime of the card. This card will be surpassed by the 6000 series next week, no chance I'm going to buy one of those.
Navi cards are _not_ compute cards. They are gamer cards.
For compute, you need Polaris or Vega; the documentation is very clear about which chips are supported.
> nobody "chooses" to use an Intel GPU in their computer.
I did. My laptop has an i3-6157U CPU; I chose it for the Intel Iris GPU there.
If I were looking for a new one today, I would pick an AMD APU for similar reasons.
But for your scenario, an integrated GPU is just fine, and Intel already covers that market well.
To justify its existence, a discrete GPU needs to be better than that.
Ryzen normally doesn't have an iGPU though, and the APUs (which do) typically come out much later and don't scale as well as their CPU counterparts.
I think that this is because Intel's drivers are just Mesa drivers, and part of why that's possible in the first place is because Intel's not really competing with AMD or Nvidia and don't make efforts to keep their designs secret... which means that Intel isn't duplicating work already done in Mesa, but building on top of it and making Mesa better.
AMD uses Mesa as well.
I'd pay a pretty penny for a decent, stable, well-supported open GPU.
It'd need to cover the corner cases well. Multimonitor setups. High res. Etc.
Same here. My laptops have lots of Intel hardware and they all work out of the box on Linux. Hopefully Intel GPUs will also be an option for desktops.
This is a really confusing product for me. Do they really expect laptop manufacturers to spend money on a niche GPU compute workload accelerator? I'd say it's because Intel wants to show something for their GPU-building efforts, but they already did that with Iris Xe IGPUs.
The linked Ars Technica article says it well, "who will benefit enough from an Xe Max-equipped laptop"? I don't see anyone.
>This is a really confusing product for me.
1. Discrete GPU sells; discrete = better than integrated. ( Forget about the actual benchmarks for the time being. ) Especially true in markets like China.
2. I would not be surprised if this was given out for free or for very little cost compared to just buying Tiger Lake. ( Or, you know, normal price but with an Intel Xe Max marketing budget rebate. )
3. The sole purpose is likely a marketing exercise to address the world / market: we ( Intel ) now have a discrete GPU too, and they are good. ( You will always need to solve the chicken-and-egg problem with continued investment in the GPU, revenue generation, and demand for a product new to the market segment. )
4. Nearly 80% of PCs sold are now laptops. You need something to generate volume, views, and exposure; Xe Max seems well positioned for that.
5. You also need product volume to push developers to develop for their GPU / OneAPI. Just like Apple tells you that, e.g., 60% of active devices, or 600M, are already on iOS 14, so developers should plan their app upgrades with new API usage accordingly. Same with OneAPI and the Intel Xe Max GPU. It is easier if you have a total addressable market.
Q: So this is more of a marketing move than anything?
A: Possibly. Intel are _exceptionally_ good at sales and marketing. Despite their technological incompetence in the past years, I have to give them credit for that.
> I'd say it's because Intel wants to show something for their GPU-building efforts
I think this hits the nail on the head.
In the same sense that Intel enabled/pushed a low-volume Cannon Lake laptop as a China-only SKU so they could tell the analysts "see, we've shipped 10nm" - even though the onboard GPU didn't work and it had to ship with an AMD RX 540 instead.
I hate to be cynical, but I suspect there's an Intel exec who will "benefit enough" from Intel having shipped this. I share your bafflement.
It feels like, at best, a 'pipe cleaner' (i.e. a product that's put through the whole process to debug it and ensure that a later product goes smoothly).
That's pretty much what they did with the i740 back in the day, isn't it? It ain't stupid if it works.
A pipe cleaner seems reason enough not to be baffled.
I would use one. The ISPC compiler recently got support for this new generation of GPUs and in principle it’s exactly the niche they want to hit. The code I write is then used by my research group colleagues.
The same goes for AVX-512. It seems niche, but then you use it and it's great; only a weird subset of laptops have it for now (the MBA 2020 and MBP 13 2020 do, the MBP 16 doesn't).
How is ISPC's support for AMD MI supercomputing accelerators, or Nvidia's A100s?
I feel like Hip, OneAPI, ISPC, etc. haven't improved anything over CUDA.
With CUDA we had one proprietary API. Now we have 3-4.
ISPC used to support NVPTX (since it's based on LLVM), but for obvious reasons they dropped it.
I agree that OpenCL should have won but OTOH most of us have AVX2 or NEON to work with at least and things like ISPC make that trivial to write for, for the workloads where it matters.
Also, I think if you have access to an A100 (which is 6k+ EUR before tax at academic prices), you aren't really worried about using a proprietary API.
Maybe a cheaper discrete GPU option compared to Nvidia MX GPUs, and paired with cheaper comet lake processors?
What poor souls use a low-power laptop GPU for content creation? This product falls squarely in the gap between inexpensive mainstream laptops and high-performance "workstation" or gaming laptops.
IMHO you're looking at it through a software engineer's lens. From a product lens, as I understand it, this is huge for OEMs as it's a way to upsell while gaining the ability to target a significantly larger chunk of the market. It provides a way to get 4K dual-monitor support for pros and marketable video-game support for prosumers, both in ~$600 fanless ultrabooks.
Is the discrete accelerator required for dual 4K? The article says it only really helps (vs. the integrated Intel GPU) with video encoding and machine learning workloads.
A discrete accelerator is needed even for single 4K.
Yes, Intel's GPUs can handle that on paper, but they can't really do so while running any software which needs GPU acceleration, such as a browser. It's laggy at best.
4K and browser composition are a non-issue for any PC GPU. It is purely a matter of memory bandwidth, and even Broadwell-era iGPUs are fine.
Perhaps it is on Windows. Neither ChromeOS nor my X1 Carbon running Linux has been able to run at 4K without noticeable lag, even in the _terminal_.
Also on macOS. A 2015 13" MBP (which is Broadwell) can drive two 4K displays; the only limitation is that each of them must be connected to a separate TB port.
For Linux, I don't know; the only older machine with Linux that I have is an Ivy Bridge one, and that is capable only of 4K@30, which is uncomfortable. With Kaby Lake, it's nice.
Also, lag is latency. That would suggest your problem is somewhere else, not in raw GPU performance. The GPU itself doesn't have anything to do with the output resolution anyway; that is handled by a separate, dedicated block called the output encoder.
Most of these modern OSes are doing compositing on the GPU. At 4k you're blitting some pretty enormous surfaces.
IMO, if you're just going to run a VT and scale everything back up, both compositing and 4K cause way more problems than they're worth. Give me tearing, pixels, and un-updated danged regions any day over input latency.
At 4K, we are still in the roughly 32 MB framebuffer range. At 60 Hz, that's nothing PC-class hardware would have a problem with. Mobile GPUs of a few years ago, yes, that could be beyond their capabilities, but not Core-series Intel GPUs with DDR3 or DDR4 RAM.
If you are that sensitive to input latency, run your terminal full screen. In full-screen mode most compositors skip the compositing step, as there would be nothing to compose with. So in this case, your single full-screen window does direct scanout.
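For concreteness, the back-of-the-envelope arithmetic behind the ~32 MB claim; the memory-bandwidth figure assumes dual-channel DDR4-2400 as a representative configuration, which is an assumption rather than a number from this thread:

```latex
% Framebuffer size and compositing bandwidth at 4K, versus system RAM bandwidth.
\begin{align*}
\text{framebuffer}            &= 3840 \times 2160 \times 4\,\text{B} \approx 33.2\,\text{MB}\\
\text{scanout at 60 Hz}       &\approx 33.2\,\text{MB} \times 60 \approx 2.0\,\text{GB/s}\\
\text{composite (read+write)} &\approx 2 \times 2.0\,\text{GB/s} = 4.0\,\text{GB/s}\\
\text{dual-channel DDR4-2400} &= 2 \times 8\,\text{B} \times 2400\,\text{MT/s} \approx 38.4\,\text{GB/s}
\end{align*}
```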
Does it do those things better or cheaper than current offerings from Nvidia or AMD?
> Do they really expect laptop manufacturers to spend money on a niche GPU compute workload accelerator
MacBook Pros, if they keep their Intel-based CPUs. If you use an external monitor it will always use the discrete GPU; besides that, it might not be necessary. The discrete GPUs inside the MacBook Pros heat up really fast.
It really depends on how good these are and how much heat they produce. If they are slightly worse but run much cooler, then it's a win-win.
MacBook Pro isn't keeping their Intel CPUs. Apple officially announced a change to 'Apple Silicon' for Mac a few months ago.
The current rumor is that the MacBook is going to Apple Silicon on 10 November - less than two weeks from today.
It's true - less than two weeks. Also true: less than one week ;)
This week has been ten years long.
Well, this is just outright confusing. The scale that Intel operates at means this product is going to be a drop in the bucket in terms of revenue, which makes their enormous investments in dGPUs look a little silly. It fits in this really weird place where you want a little bit of extra GPU power for co-processing, but really _very_ little extra power. I literally can't understand how this product justified the engineering effort. Which leaves us with the fact that this must be a first release of something much more significant, which means they've been forced to rush it out the door because their main effort is behind schedule (well, "technically" we did ship the first series of chips in this project in Q4 2020). However, watch what happens a lot at Intel: they release a compromised first version of something, that ties them to a concrete failure (this isn't going to make any money), which then allows the executive team to can the project.
How often are you correct about chains of reasoning like this? If it's non-negligible, would you mind sharing where I could read more of your ponderings? Thanks.
Well, on this particular topic, I used to work for Intel so a lot of it comes from knowledge of how the company works institutionally. It's difficult to explain to an outsider but Intel has this weird combination of making enormous bets but with an incredibly ossified corporate culture. So some project will get kicked off - they'll buy a company like Nervana or they'll hire in someone like Jim Keller and give them a mandate to go off and do something huge.
The problem is how this intersects with the dynamics of the Intel organisation. Firstly, there are already five teams that do whatever you want to do, so you go up the chain saying "I need 5 engineers to do X" and the answer comes back down the chain: "Team Y already has 20 people, they do X, you should get them to do it". So now, rather than doing X, you're paying Team Y to do X. Oh, and Team Y doesn't give a shit about X, they care about Z, so about half your budget for X is now being subtly shifted to work on Z. So the cost of your project massively inflates.
Oh, and because when your project was approved they canned something else, some senior executive is saying that half your headcount should be "internal transfers" so suddenly your project has to find work for 50 software engineers at Folsom who just finished working on Intel's Modem team (before that they worked on Intel's previous GPU team too before that got canned).
Now come the real issues: because you've got all these different teams that you rely on for your project and they don't answer to you, they answer to random different org structures, it's practically impossible to hold them accountable. Which leads to hilarious dynamics where team A will fall behind on their tasks, but they already know team B has fallen behind, so team A will lie about being able to deliver, because they know they'll never have to. So the critical path on your project isn't the team that's fallen behind, it's the team that's fallen behind _and_ all the other teams that have made the calculated judgement that your work can be deprioritized because team B will sink your project anyway.
Finally comes the release. All of these internal fights don't change the fact that you're responsible for delivering a product to market, and almost certainly, if it's an important project, it'll be part of some group's goals (like the Datacentre group or Networking group). Now, the General Manager of the group isn't going to miss her/his goals. So you'll "ship" a product, by which we mean you'll send a handful of sample products to a partner. They probably won't work, and they'll have a tiny subset of the functionality you originally promised. So anyway, that's the point at which someone realises we've just spent 10,000 years of engineering effort to deliver a discrete graphics chip that is marginally less powerful than the integrated one.
But the product is out there, so you've got sales data now and revenue. So you need to agree your roadmap for the future! But the first product was shit, and getting it into any sort of state to be competitive would be an enormous amount of work because it was compromised at every point, so no one will sign off on throwing good money after bad and so what happens? Nervana? Fuck you! We're betting on Habana Labs now!
Double-precision FP, cl_khr_fp64, could not be supported by Intel's GPU silicon until Xeon Phi / Larrabee was inevitably cancelled. That finally did happen. I would love to add more double-precision compute capacity without being robbed; it's getting tough to find first-gen Nvidia Titan cards. Here's hoping this discrete unit will have strong fp64 performance.
Edit: It had been a while since I checked into current-model GPU fp64 throughput. The Titan V has outstanding DP performance!!! Order placed.
LPDDR4X is a very odd choice for graphics.
LPDDR4X soundly loses to HBM2E on energy per bit transferred: 6-8 picojoules per bit versus 1.5-2.5.
Laptops generally can't afford HBM.
I have a hunch that DG1 is Tiger Lake with the cores cut out so that's where the memory controller comes from.
HBM memory is not that expensive; what is expensive is the (co-)packaging.
But Intel has already committed to using an active silicon interposer instead of ABF for the substrate, which is way more expensive.
I have no idea what that means, but DG1 does not have any kind of interposer. In the context of DG1, LPDDR is cheaper than HBM.
During an extended product briefing, Intel stressed to us that the Xe Max beats Nvidia's entry-level MX 350 chipset in just about every conceivable metric. In another year, this would have been exciting—but the Xe Max is only slated to appear in systems that feature Tiger Lake processors, whose Iris Xe integrated GPUs already handily outperform the Nvidia MX 350 in both Intel's tests and our own.
Two GPUs are more powerful than one, so I don't see the problem.
The problem is that you can’t use both at the same time:[1]
_To cut right to the chase on an important question for our more technical readers, Intel has not developed any kind of multi-GPU rendering technology that allows for multiple GPUs to be used together for a single graphics task (ala NVIDIA’s SLI or AMD’s CrossFire). So there is no way to combine a Tiger Lake-U iGPU with Xe MAX and double your DOTA framerate, for example. Functionally, Xe MAX is closer to a graphics co-processor – literally a second GPU in the system._
This basically seems to make it only useful as some sort of encoding/decoding co-processor, with potential but currently undescribed / undefined ML benefits.
[1] https://www.anandtech.com/show/16210/intels-discrete-gpu-era...
SLI/CrossFire are dead technologies, the latest graphics APIs offer ways to use independent GPUs simultaneously without these.
The problem is that neither the old nor the new method of multi-GPU has ever been very successful at actually working in the app you want it to.
> the latest graphics APIs offer ways to use independent GPUs simultaneously without these.
The problem is that DX12 or Vulkan require the _application_ developers to implement multi-GPU capability.
SLI and Crossfire made multi-GPU support the responsibility of the _drivers_, and had at least the theoretical possibility of working for unmodified applications and games. That didn't work out too well in practice, but it was pretty obviously the only approach that had much chance of delivering widespread support.
DX12 and Vulkan are, realistically, too low-level for ordinary applications to use, but engines like Unity and UE4 get to implement native multi-GPU without having to adversarially hack around driver multi-GPU behavior. Sure, it has to be done in the application, but overall it will result in better outcomes for AAA titles.
For other apps using higher-level OpenGL/DX11, the existing multi-GPU drivers are still available - for now - but NVIDIA just wants you to buy a single faster 3xxx-series card, which will almost certainly outperform implicit multi-GPU on an older card. They aren't updating SLI profiles after January, and aren't supporting them at all on the 3xxx-series cards.
You can use the dGPU for rendering and mirror the framebuffer content over to the iGPU mostly fine these days, as NVIDIA Optimus laptops do.
That you can't do multi-GPU means you can't combine GPUs into a NUMA GPU cluster that performs better than any of its nodes.
They expect users to pay twice for their GPUs - once for the integrated one and once for the dedicated one.
They are probably attempting this because for a long time OEMs would include both Intel's iGPU (not like they had a choice) and an Nvidia GPU, even if it had the same or lower performance than the iGPU, just so Intel doesn't fully monopolize their devices.
Then again, by this logic not sure why any OEM would pay twice for Intel GPUs, either.
I don't think there was ever a case where the iGPU outperformed the discrete NVIDIA GPU for games. Maybe for encoding or something.
You are shadowbanned.
What makes you think that I can see them?
The reason you can see his comment is that I brought it back from dead status to make it possible to reply.
It's also very odd that they did not consider that MX niche knowing that casual laptop gaming has been crushed by mobile Ryzens, whose embedded GPUs blow them out of the water as well, for "free".
They tried this (!?) a decade ago too.
https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)
Larrabee was a weird attempt to force x86 into the GPU space. It was more hubris than silicon. Xe is Intel actually making a serious attempt to build decent discrete GPUs, without bizarre political constraints on the technological designs the team can pursue.
I view it as an attempt to explore the question "can a competitive GPU architecture be built as an extension of a mainstream CPU architecture?". And they tried it and it didn't work out, which suggests the answer is "No". But I don't fault them for trying – a willingness to try out different areas of the design space, even test them in the market, is a positive thing. Of course, you can see why a vendor whose major product is a mainstream CPU architecture might have a special interest in exploring that particular design question, but it still was a valid question to explore, even if they didn't get the result they were hoping for. We can learn as much from failures as from successes.
I think it was really obvious up front that Larrabee wasn't going to have enough graphics-specific hardware on the chip to have a chance at being a competitive GPU for consumer gaming. At best, they were aiming for a heavily compute-oriented chip that could also do graphics, but for graphics tasks it was going to be heavily reliant on brute-forcing the problem with a large, expensive chip. So it was a bit further toward the compute-oriented end of the spectrum than any of the GPU microarchitectures AMD or NVIDIA have shipped. And it ended up that it was only viable for compute tasks if they stripped out the rest of the graphics hardware, and x86 compatibility wasn't enough to overcome NVIDIA's lead in GPGPU software support.
If Intel could have brought Larrabee to market with fabrication two nodes ahead of NVIDIA/AMD, it probably could have been successful. But Intel has a pretty solid track record of being unable to maintain two competitive microarchitectures at the same time, and I don't think they've ever quite been two full nodes ahead.
IIRC Larrabee actually worked but it failed largely due to internal politics, not because of technical issues.
Larrabee kind of lived on in the Xeon Phi line, although the Xeon Phis were purely targeted at compute with no graphics capability like Larrabee was.
Intel Xe is closer to taking the existing integrated GPUs and sticking them on a discrete card instead of the same socket as the main CPU.
As meh as Xe is, it's a working GPU which is dramatically more mature than Larrabee ever was.
And the i740 two decades ago. Intel does not have a great track record with graphics.
https://en.wikipedia.org/wiki/Intel740
It's funny to consider RISC-V in the light of Larrabee. RISC-V was conceived as a multi-purpose extensible ISA to unify the toolspace for MCUs, DSPs, CPUs, GPUs, etc.
Larrabee wanted to leverage the toolspace of the x86 ISA, but that same ISA complicated their architecture and made it nonviable.
Xe HP and Xe HPG seem more interesting but I wonder if they'll be too late to be competitive.
https://www.anandtech.com/show/16018/intel-xe-hp-graphics-ea...
If the standalone GPU (I am sure they will produce one soon) comes out cheap and power efficient, ironically AMD could benefit from it more than Intel. Most of the top AMD processors have no integrated GPU. I would certainly prefer a 3600 + Intel GPU over a more expensive i7-9700.
If it's a content creation tool, shouldn't it be compared to the Quadro series?
Sorry for the aside, but is it possible to create a GPU cluster for, say, machine learning or rendering that uses both Nvidia and AMD GPUs together? I have 4 AMD GPUs and 2 Nvidia 1080 Tis and was curious if it's possible to wire up a computer that uses all 6 simultaneously.
Sort of.
Mixed-brand setups have a lot of bugs, and the different ways you'd hook up more than 4 cards have a lot of drawbacks, so you would have to plan this carefully. It would be easier and more flexible to get a 4-slot board for the AMD GPUs and use the Nvidia ones in a different machine.
EDIT: note also that the 4-slot board does not necessarily need to populate all the PCIe lanes, depending on your workload.
Are there issues with using GPU drivers from several different GPU vendors on the same machine? I heard that from some people who were into the crypto-mining scene, and was curious whether that carries over to all potential GPU cluster applications... Thanks for answering!
For ML maybe?
You can get the AMD builds of TF and PyTorch and then build a cluster out.
Realistically, it's likely to be more trouble than it's worth. The AMD versions are always behind the mainline versions.
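If you do go down that road, it's cheap to sanity-check which backend a given PyTorch build targets before planning the cluster. A minimal sketch, assuming a reasonably recent wheel; ROCm builds expose the HIP backend through the torch.cuda namespace:

```python
# Minimal sketch: report which accelerator backend this PyTorch build targets
# and which devices it sees. ROCm builds expose the HIP backend through the
# torch.cuda namespace, so the same calls work on AMD and Nvidia cards.
import torch

if getattr(torch.version, "hip", None):
    backend = "ROCm/HIP"
elif torch.version.cuda:
    backend = "CUDA"
else:
    backend = "CPU-only"
print("build backend:", backend)

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"device {i}: {torch.cuda.get_device_name(i)}")
```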
Of note here: I believe TSMC is fabbing the bigger version of this chip for Intel on their 7/6 nm node.
Would be nice to have a PCIe version of this.