Intel’s palpable desperation on display with Rocket Lake

Author: rwmj

Score: 178

Comments: 200

Date: 2020-11-06 15:16:13

Web Link

________________________________________________________________________________

nxc18 wrote at 2020-11-06 16:50:18:

Is there good coverage of how Intel became so uncompetitive? My instinct is to say this is just the natural result of the presence of MBAs who are trained to focus _exclusively_ on this quarter's results, so they ignore R&D investment and also shit on employees with penny-pinching gimmicks like hotdesking.

I'm willing to bet my intuition is wrong, especially given my extremely deep bias against MBAs and 'this quarter' thinking. Any great sources on the full story?

graton wrote at 2020-11-06 18:59:25:

I think they have lost a lot of their best employees because their pay is not that great. They aimed to pay at the 50th percentile in wages, and the top companies can come in and offer double or more of their current pay.

Doesn't help that they switched from large cubes with five-foot-high cubicle walls, which did a good job of minimizing noise and visual distractions, to much smaller cubes with walls only about 3-4 feet high. Plus they installed sit-stand desks, so you have people standing and making phone calls that can be heard 30 feet away. Doesn't help for concentrating on problems.

dweekly wrote at 2020-11-06 23:40:19:

Pay at Intel is about half of FAANG, can confirm. They seem weirdly proud of this and then weirdly confused and disappointed when great people leave.

BostonEnginerd wrote at 2020-11-07 00:39:25:

This is a routine issue in the semiconductor industry. Payscales haven't kept up with software across that whole industry. The trade organizations are aware of this, but I haven't seen a lot of push to level the playing field. It's unfortunate, as there are lots of really fascinating problems to solve on the materials and integration side.

The industry trade organization did make a snazzy movie to try to attract young graduates into the industry.

https://yourewelcome.org/en/

sebmellen wrote at 2020-11-07 02:24:25:

The pandemic advertising was prescient.

Spooky23 wrote at 2020-11-07 04:16:31:

I worked for a tenant in a complex that was mostly populated by semiconductor manufacturing and research people.

You could use the parking lots as a proxy for employee prosperity. The chip guys drove better cars than the students, but the government and other corporate people had better cars on average. The chip bigshots made bank though. The chip people would also have crunch periods where they worked crazy hours.

The workers making bank were the tradesmen building the facilities that housed tools.

hinkley wrote at 2020-11-07 00:18:31:

So did Apple just sashay in and poach half of the best people?

seanmclo wrote at 2020-11-07 00:21:44:

Yes. In PDX, Azure recently had a big hiring frenzy and there was a lot of attrition as well.

ekianjo wrote at 2020-11-07 01:44:54:

Is that different at AMD?

cma wrote at 2020-11-07 00:48:51:

Only thing I can add to this is I had an Uber driver in SF area who was a computer engineer for Intel and wasn't doing Uber as a fun aside to meet people...

UncleOxidant wrote at 2020-11-06 21:55:21:

Yes on both counts. I was contacted a couple years back about a machine learning contracting gig. When I asked what the hourly rate was the recruiter said $45/hour. I laughed and said that the going rate for that kind of skill set is like $120+/hour. That has not been an isolated incident.

icelancer wrote at 2020-11-06 21:57:23:

>> $120+/hour

Even this would be a substantial discount for most qualified people.

PragmaticPulp wrote at 2020-11-06 23:20:32:

Given the company, it's usually long-term contracts that more closely resemble full-time employment. The location is also usually outside of the biggest, most expensive cities.

$120/hr * 40 hrs/wk * 50 wks/yr = $240K before self-employment taxes and paying for insurance. Ends up being comparable to typical compensation in cities that aren't SF, Seattle, or other top-tier markets.
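To make that back-of-the-envelope comparison concrete, here is a minimal sketch in Python; the self-employment tax rate and the insurance figure below are illustrative assumptions, not numbers from this thread:

    # Rough contractor-comp math. The 15.3% SE tax and $15K insurance
    # figures are assumptions for illustration only.
    def contract_gross(rate_per_hr, hrs_per_wk=40, wks_per_yr=50):
        return rate_per_hr * hrs_per_wk * wks_per_yr

    gross = contract_gross(120)            # $240,000, as above
    overhead = 0.153 * gross + 15_000      # assumed SE tax + assumed insurance
    print(f"gross ${gross:,}, rough net before income tax ${gross - overhead:,.0f}")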

Now if someone was doing more typical ad-hoc freelance work on a project basis, $120/hr might be too low for typical work. However, $90-120/hr as a contractor with a long-term contract and stable pay over 12-18 months is not uncommon.

Unless someone has a lot of new, small contracts knocking at their door nonstop, it can make more financial sense to take a $120/hr long-term contract over sporadic $200+/hr work that changes month to month.

RandomBK wrote at 2020-11-07 02:38:17:

Very true, though it'll depend on the seniority of the position.

$240K/yr, minus self-employment taxes, insurance, and other overhead, not to mention the lack of other non-monetary benefits and comp, is not a lot for someone doing serious machine learning.

I'd imagine that the type of specialized talent Intel needs to catch up to AMD has plenty of options open to it. A comp strategy based on median industry comp might work to maintain their position, but probably won't churn out the new industry-leading tech needed to jump ahead.

Spooky23 wrote at 2020-11-07 04:21:31:

For a contractor? You can do better than that making dashboards.

edoceo wrote at 2020-11-06 22:57:07:

I'm not qualified and get that rate

bigmattystyles wrote at 2020-11-07 02:11:41:

You would be surprised what you can get just by asking.

amelius wrote at 2020-11-07 01:55:57:

$120 is about a third of what my accountant charges per hour.

scrooched_moose wrote at 2020-11-06 23:28:17:

Man I hate that attitude. My company pulls that on absolutely every benefit asked about.

"We've done a survey and our pay and benefits are on par for the industry" = we are proud to be a profoundly mediocre place to work in every possible way.

Can't wait for this pandemic to end and the market for my industry/region to pick up again.

jp42 wrote at 2020-11-06 22:41:35:

This is so true. I have friends whose managers told them - "the only way to get more pay is to leave Intel", "our hands are tied, we can't give more pay or promo", "to get more pay and promotion, one option is to leave Intel and then come back"...all of these folks are in FAANG now.

seanmclo wrote at 2020-11-07 00:21:02:

Intel employee. Can confirm all of the above.

toomuchtodo wrote at 2020-11-07 00:50:42:

You should be a FAANG employee!

baskire wrote at 2020-11-07 01:17:28:

They also focused on diversity hiring vs focusing on hiring top talent.

mattlondon wrote at 2020-11-06 22:40:58:

Maybe just hubris and greed?

Seems like there were years when Intel did not have much competition from AMD. Each year the "big news" from Intel was that their Celeron/Pentium/iX/whatever was now up to 100MHz faster than last year's (during limited situations where temperatures allow, blah blah blah). Or they'd add some new strange trademarked exotic-sounding technology brand name each year like "The new Intel Celeron/Pentium/iX with..." "Intel MaxT"/"Intel TruEvo"/"Intel Cartara"/"Intel Artuga Plus"/"Intel Versuvia"/"Intel Centurior Max"/"Intel T-Xe" that no one really understands but is basically some sort of mundane enterprise feature no one actually cares about that does something for remote wifi management.

Quick - rush to the store to get a new laptop!!! I totally need an extra 100MHz single-core maximum boost on a 4-core 2GHz CPU (i.e. 5% max...) with Intel TruStep MagrittePro (TM) technology that does something to the colour gamut of my monitor in certain content creation scenarios.</sarcasm>

It strikes me that Intel have been caught napping and resting on their laurels. AMD appear to have come out with a great competitive product and Intel don't seem to have anything to compete with it, because they've been milking the market for the past 5+ years with tiny incremental clock increases and nothing actually "new".

They've allowed AMD to eat their lunch.

Maybe they've got a "real" new product they've been keeping in reserve that they'll bring to market now and surprise everyone with. Maybe they'll bring out something amazing next year. Who knows. Maybe. Maybe not. Seems like they've blown their lead for now regardless.

usrusr wrote at 2020-11-07 00:43:49:

Don't look for a change: their position was worse in the days of the Netburst dead-end and back then the company was just lucky that a second, independent path had been followed in the Haifa office to target a niche market (yes, laptops were niche in that age).

So has there been organizational dysfunction for decades? I think not: there are things Intel has always been doing very well and post-Netburst improvements after Core 2 have also been significant.

I believe that a big element is basically luck: you invest in a certain progression path and that investment _will_ yield results so you keep going. Another path might be sufficiently better to write off the inferior path investment but _you don't know that_. Perhaps the unsatisfying path taken is the least bad of all. Even Netburst improved over time, a bit, and so did whatever forgettable AMD had been building between the Athlon glory days and those of Ryzen. As long as you see progress, it's very hard to just give it up for a fresh start (that may or may not be better). We can count ourselves lucky that a luck/lock-in imbalance has never persisted long enough to make the lucky one the only survivor, because then they would never leave the dead end they'll inevitably run into some day.

ndesaulniers wrote at 2020-11-07 01:40:50:

> whatever forgettable AMD had been building between the Athlon glory days and those of Ryzen

Phenom II! I had six cores, 7 years ago! They were dog slow, but there were six of them (and that's all I could afford).

mjevans wrote at 2020-11-07 05:14:15:

They weren't even that slow. Seven years ago they didn't win in __single threaded__ applications, but for any multi-threaded workload they were great.

klelatti wrote at 2020-11-06 21:41:18:

I think we've all taken the ability to reliably introduce process node improvements for granted to some extent. Intel has clearly been caught out by its inability to get 10nm - which I understand was overambitious - to work and pretty much everything else follows on from there.

At the same time AMD has been fortunate that TSMC has been able to continue with its node shrinks - supported by demand from and cash generated by high end mobile devices.

None of which is to say that Intel has been well run (cough McAfee cough) or that your criticisms aren't valid to some extent.

hinkley wrote at 2020-11-07 00:22:06:

Are there any industries where high-end manufacturing doesn't subsidize low-end and low-middle products substantially?

Are we saying this all started to unravel when most of the money moved to mobile and Intel still, after 20 years of warning shots, didn't have a compelling strategy there?

wmf wrote at 2020-11-07 01:17:54:

For Intel I think ~90% of the PC market plus ~90% of the server market provides enough volume to easily cover their R&D. The rest of the industry may have needed to pivot to mobile to fill their fabs but that's not Intel's problem.

totalZero wrote at 2020-11-06 18:52:41:

My perspective is that every process node is so critical that missing the mark on 10nm had far-reaching ramifications. This kind of technology has gargantuan inertia. The entire hardware ecosystem is strongly interconnected between different firms, and each firm's future technology depends strongly on its past execution. Failure to deliver on 10nm not only jeopardized the smooth rollout of the subsequent process node, it also hurt Intel's ability to deliver large quantities of full-featured chips to customers.

I don't personally believe that Apple would be going to in-house silicon instead of Intel for its flagship laptops if there were a viable way to avoid doing so. Intel is so hurt right now that I surmise it'd be willing to negotiate a fat discount for a flag-carrying customer, and the loss of Windows functionality is kind of disappointing at the upper end of the market (where some tools are clearly better supported by Windows or at least x86).

simonh wrote at 2020-11-06 22:44:51:

Apple would be doing their own thing anyway. It’s not just about performance, there’s all the myriad other tweaks and customisations you can do on your own SoC to differentiate yourself. We see this with the mobile SoCs. Secure Enclave, sensors, specialist machine learning accelerators for image processing, exactly the core count and cache you want, optimised big.LITTLE. The T2 chip is actually a modified iPhone SoC design. With Apple Silicon that can just be a sub-unit in the main SoC.

Yes, moving to a new architecture is challenging, but Apple has done it before and this time it can be for keeps. Never again will they be beholden to another company’s priorities, or stuck with me-too processors their competitors have equal access to. A better Intel road map might have resulted in putting the transition off, but I think it was inevitable ever since Apple bought PA Semi.

toast0 wrote at 2020-11-06 23:19:18:

> My perspective is that every process node is so critical that missing the mark on 10nm had far-reaching ramifications.

The problem with Intel missing on 10 nm is not so much that 10 nm is critical, but that it is critical to their roadmap. Large CPU design is heavily pipelined (like large CPUs), so if you miss on the process node, you've still got a team building the refined next release for the year after, and a third team working on the more refined release for the following year.

Then you have decision making; it's hard to get a sense of whether you need to go back and make a good new design on 14 nm, or whether 10 nm is going to be ready soon enough (but it's been several years now of not ready enough), which splits design resources.

mark-r wrote at 2020-11-07 01:32:55:

Moore's Law was literally created at Intel. It's painful to see them stumble like this.

hinkley wrote at 2020-11-07 00:30:19:

> My perspective is that every process node is so critical that missing the mark on 10nm had far-reaching ramifications.

I wonder if the migration from "tick-tock" to every third iteration is a case of believing your own PR. "Everything is fine" is what they should have been telling us, while internally it was flashing red lights and klaxons.

Or maybe this started even earlier, with the generations of hardware that gave us Spectre. The target became unreachable, they used smoke and mirrors, and when that blew up they just sort of gave up. Maybe the intervention should have come back then but didn't (cite the "MBAs took over" comment elsewhere in the thread)

wmf wrote at 2020-11-07 01:19:36:

AFAIK Spectre/Meltdown come from the 1990s.

agumonkey wrote at 2020-11-06 21:47:09:

Not to be pro-Intel, but I think they suffered from being too far ahead. They picked a path for 10nm when nobody else even cared. They got stuck in it for too long... they're so invested, and now there's a flock of competitors that can leverage faux-7nm processes that actually sell. If they can get back on track it will be a massive business success.

selectodude wrote at 2020-11-07 00:40:04:

Definitely seemed to be a first-mover's disadvantage, where their competitors could see where they went wrong and leapfrog them.

solarkraft wrote at 2020-11-06 19:20:23:

AMD seems to be doing well on x86 - Apple could've gone with them and saved themselves a somewhat painful ecosystem breakage.

It's not just Intel's failure. Apple must think its chips are competitive against AMD's as well (or they're all in on iOS apps on Macs).

klelatti wrote at 2020-11-06 21:45:54:

I think it's partly that their desire to add their own IP (the use of "Apple Silicon" as the name is probably revealing of how they think of the new chips) was decisive in making the move from x86.

Plus probably still cheaper than any x86 alternative.

toast0 wrote at 2020-11-06 22:48:46:

iOS apps on Macs would be easy enough for x86; the SDK iPhone simulator is more or less an x86 iOS VM.

sudosysgen wrote at 2020-11-07 01:19:02:

From what I've seen, they aren't. At the same node, and same wattage per core, a Ryzen low-power core has better performance per watt and leagues more I/O. That was back in the iPhone X days; I don't think it's gotten any better since.

Personally, I'm very skeptical on Apple beating AMD.

baloney1 wrote at 2020-11-07 00:31:01:

The CEO is, literally, one of these MBA types and he has recursively applied this thinking throughout the whole organization.

k__ wrote at 2020-11-07 01:02:31:

_"received a bachelor's degree in business administration from the University at Buffalo School of Management in 1983 and his MBA from Binghamton University in 1985."_

_"January 31, 2019, Swan transitioned from his role as CFO and interim CEO"_

They got what they paid for, I guess...

gotfork wrote at 2020-11-07 01:27:11:

To be fair, most of the 10 nm problems were already in place when BK was still the CEO.

justicezyx wrote at 2020-11-07 04:26:44:

He is the manifestation of the problem then.

ekianjo wrote at 2020-11-07 01:45:55:

Most CEOs are MBAs.

dwheeler wrote at 2020-11-07 04:39:16:

A CEO who has _only_ a non-technical education (such as an MBA) is VERY unusual for hardware or software companies that are _successful_. Often the CEO of this kind of company has at least some technical education, and usually the CEO has lots of it. After all, _most_ of the decisions in such firms will have a technology component to them.

A few examples:

* Lisa Su (CEO of Advanced Micro Devices (AMD)): BS, MS, and PhD in Electrical Engineering from MIT, and is a fellow of IEEE

* Jensen Huang (CEO and founder of NVIDIA): BS in electrical engineering from Oregon State University, master's degree in electrical engineering from Stanford University

* C.C. Wei (CEO of TSMC): Ph.D. in Electrical Engineering from Yale University

* Sundar Pichai (Alphabet/Google CEO): Has an MBA, but also has an M.S. from Stanford University in materials science and engineering

* Eric Schmidt (former Google CEO): BS in Engineering, M.S. degree for designing and implementing a network, and PhD degree in EECS, with a dissertation about the problems of managing distributed software development and tools for solving these problems

* Jeff Bezos (Amazon CEO): Bachelor of Science in Engineering (BSE) in electrical engineering and computer science from Princeton

* Mark Zuckerberg (Facebook CEO): Studied psychology and computer science at Harvard (did not earn a degree, but did study for a few years and implemented the first version of Facebook there).

* Tim Cook (Apple CEO): MBA from Duke University and a Bachelor of Science degree in Industrial Engineering from Auburn University.

* Satya Narayana Nadella (Microsoft CEO): Bachelor's in electrical engineering from the Manipal Institute of Technology in Karnataka; M.S. in computer science at the University of Wisconsin–Milwaukee; MBA from the University of Chicago Booth School of Business

* Reed Hastings (Netflix CEO): Bachelor of Arts degree in Mathematics (Bowdoin College), MS Computer Science (Stanford University)

I'm sure there are more examples, but I think that amply demonstrates my point.

Now let's compare this to Bob Swan (Intel CEO), who received a bachelor's degree in business administration from the University at Buffalo School of Management in 1983 and his MBA from Binghamton University in 1985. Maybe Mr. Swan can do well anyway, but his lack of technical education is extremely unusual when compared to most other tech companies.

lazzlazzlazz wrote at 2020-11-07 05:11:29:

I don't think it's a coincidence that the two least inspired, lowest quality CEOs on this list (Sundar Pichai and Tim Cook) are exactly the two who have MBAs.

I think you've made the point extremely clearly.

somethoughts wrote at 2020-11-06 22:55:51:

The interesting one I think about is whether the current CEO is basically their version of Steve Ballmer at Microsoft. Not highly loved by tech, media or finance, but basically held the ship together long enough to enable Microsoft to figure out how to transition away from the sinking ship that was Microsoft Windows and into new markets.

Basically he just needs to keep it afloat long enough for Intel to be able to find its version of Satya Nadella and Azure to unlock the next leg of growth.

m_mueller wrote at 2020-11-06 23:16:26:

The examples you cite are people with operational excellence that was built up internally over many years. As an outsider I have no trust that Intel still has such talent; it seems they are fully run by bean counters now. Who says it's a Microsoft and not a Kodak? Microsoft had the immeasurable advantage of two big markets cornered for themselves: enterprise software and home computer software. Intel meanwhile is a market leader that is getting outplayed in every market; the only thing in their favor is their competitors' lack of volume - and that's not an advantage that is going to last long.

mark-r wrote at 2020-11-07 01:39:18:

Kodak gets unfairly picked on. They built their business by taking a cut from every single picture taken. There was nothing in the digital camera model that could replicate that revenue stream.

wmf wrote at 2020-11-07 04:49:13:

Let's mash this up with the evil HP business model that's on the front page. Kodak could have sold digital cameras that required Kodak DRMed flash cards that you would have to pay to erase and reuse. And maybe the photos could be further DRMed so you'd have to take the flash card to a certified Kodak lab.

microtherion wrote at 2020-11-07 02:04:25:

Except that the digital camera was literally invented at Kodak! What if they had pivoted to building cameras themselves?

mark-r wrote at 2020-11-07 02:14:32:

They did manufacture cameras themselves, but it wasn't enough. Remember that Kodak was making money in film manufacturing, film processing, and printing. They were literally making money from every click of the shutter. Even if they built the best cameras in the world, it wouldn't have saved them. And the ability of any electronics company to make cameras from standardized components made it impossible for them to keep a lead in cameras too.

flower-giraffe wrote at 2020-11-07 02:45:25:

> They were literally making money from every click of the shutter.

and that’s the pivot the software players made in tech: they monetised every click of the mouse (and every tap), while Intel is stuck with the burden of the platform costs

somethoughts wrote at 2020-11-06 23:53:40:

Yep - no doubt - I'm not placing any actual bets. Mostly the thought is that similar to the Windows franchise, the existing Intel x86 server franchise [1], while for sure not growing, seems boringly steady enough for a good 3-5 year runway to have a decent shot at developing or finding tech oriented leadership talent. It will be interesting to watch.

As an outsider to MS, it's unclear that anyone particularly saw Nadella as the obvious successor to Ballmer until they announced it. He somewhat came out of the woodwork, IIRC. Same with Lisa Su.

[1]

https://www.macrotrends.net/stocks/charts/INTC/intel/income-...

jbay808 wrote at 2020-11-06 23:06:59:

Maybe? Intel is in a high-technology field, where research can take a decade to bear fruit. I hope they're already planting the seeds for that growth today, or else that they have plans that will keep them afloat for a very long time.

Filligree wrote at 2020-11-07 03:13:03:

Do you think they'd be able to hang on for a decade without catching up?

tw04 wrote at 2020-11-07 02:16:21:

Really? Because Ballmer was there from basically the start (employee #30). He held positions of leadership all over the company and quite frankly knew it inside and out. Agree or disagree with how he ran the ship, he knew the ship and it was his life.

The current Intel CEO has been there for 4 years as a CFO prior to taking over as CEO. We know he can spell Intel... we know he isn't even a little bit technical (his degrees are business administration and MBA and just about all of his previous positions of note are as CFO). So basically he's going to run the company like a beancounter... which always works out well at companies that need heavy R&D to stay competitive.

deepnotderp wrote at 2020-11-07 00:16:50:

I grew up in a neighborhood where probably 90% of households had at least one parent working at Intel, including mine.

Very simple: Cultural Decay

Nokinside wrote at 2020-11-06 22:13:07:

Let's not forget that four companies entered this latest race in process node tech and two failed. Intel's 10nm and GlobalFoundries 7nm.

This is a high-risk business. Unseen problems that come after pathfinding, when the big choices have already been made, may kill the success.

skavi wrote at 2020-11-06 18:28:12:

Intel never stopped investing a huge amount into R&D.

DylanBohlender wrote at 2020-11-06 16:57:46:

Here are a couple good articles explaining it.

https://www.extremetech.com/extreme/227720-how-intel-lost-10...

https://semiwiki.com/semiconductor-manufacturers/intel/28919...

The TL;DR is that Intel has always been a vertically integrated shop (meaning that they usually fab and design their own chips), and that is starting to bite them because pure-play foundries are improving their tech at a faster rate.

Intel has been unable to keep up with process advancements in their foundries, and that has led to pure-play foundries like TSMC taking massive market share. As chips get smaller and smaller, Intel has failed to keep up. They can only do 10nm for their mobile stuff and 14nm for their desktop stuff, whereas fabs like TSMC have been in 7nm territory for a while now and are moving into 5nm territory.

ckemere wrote at 2020-11-06 17:41:45:

Just skimming the first article, the TL;DR that I took away is that TSMC can get revenue from their old foundry nodes for much longer than Intel can. So the issue is not how fast TSMC improves, but more that Intel has to invest proportionally much more to keep up.

dragontamer wrote at 2020-11-06 18:21:51:

Note: the I/O chip on AMD Zen 2 / Zen 3 is a 14nm GloFo chip. Only the CPU cores ("Zeppelin" maybe, or whatever they call them now) are 7nm TSMC.

So AMD's strategy also leads to lower fabrication costs, because they can make a far cheaper 14nm chip to handle the slower I/O portions (talking to RAM, or PCIe), while the expensive 7nm TSMC capacity is used only for the cores / L1 / L2 / L3 caches.
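As a rough illustration of why that split can be cheaper, here is a small sketch; the wafer prices, die sizes, and defect densities are made-up assumptions (not AMD's or TSMC's numbers), and the yield curve is a simple Poisson approximation:

    import math

    # Cost per good die for a given wafer, assuming a Poisson yield model.
    # All inputs below are hypothetical, for illustration only.
    def cost_per_good_die(wafer_cost, die_mm2, defects_per_mm2, wafer_mm2=70_000):
        dies_per_wafer = wafer_mm2 / die_mm2
        yield_rate = math.exp(-defects_per_mm2 * die_mm2)
        return wafer_cost / (dies_per_wafer * yield_rate)

    monolithic_7nm = cost_per_good_die(10_000, die_mm2=200, defects_per_mm2=0.002)
    core_chiplet_7nm = cost_per_good_die(10_000, die_mm2=80, defects_per_mm2=0.002)
    io_die_14nm = cost_per_good_die(4_000, die_mm2=120, defects_per_mm2=0.001)

    print(f"one big 7nm die:             ${monolithic_7nm:,.0f}")
    print(f"7nm core chiplet + 14nm I/O: ${core_chiplet_7nm + io_die_14nm:,.0f}")

The small 7nm die yields much better, and the I/O logic rides on the cheaper, mature node, so the two-die package comes out well ahead in this toy model.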

Teever wrote at 2020-11-06 22:05:37:

Is there any indication that Intel will move to this chiplet design? It seems like it's a no brainer and it's been proven to work by AMD.

What's keeping Intel from adopting this same practice? is it IP?

dragontamer wrote at 2020-11-06 22:12:17:

Intel has a competitor to chiplets, called EMIB, based on its Altera purchase. EMIB is pretty cool, but has only been deployed in a small number of situations so far (there was a Xeon + FPGA chip Intel made, there was the hybrid Intel+AMD chip, and finally the new big.LITTLE-style chip where Intel merged Ice Lake and Atom cores together). I don't know why Intel hasn't invested more heavily into EMIB, Foveros, and other advanced-packaging technologies... but Intel is clearly working on it.

Intel can do it, they just haven't decided to do so yet. They have the tech for sure.

It's simply a matter of priorities. It's not so much that Intel "isn't" investing in it; it's arguable that Intel just hasn't invested "enough" in it.

AMD went all in: they literally bet their entire company on advanced-packaging, with AMD GPUs using an active interposer with HBM2, and now Zen-chips taking a chiplet strategy. And to be fair: AMD had to do something drastic to turn the tide.

We're just at a point where AMD is finally reaping the benefits of a decision they made years ago.

--------

If I were to take a guess: Intel was too confident that they could solve 10nm / 7nm (or more specifically: EUV and/or quad patterning), which would have negated the need for advanced packaging.

AMD, on the other hand, is fabless. They based their designs off of what TSMC was already delivering to Apple. Since TSMC leapfrogged Intel in technology, AMD can now benefit from TSMC (indirectly benefiting from Apple's investments).

Intel's failure bubbled up from the fab level: Without 10nm chips, Intel was unable to keep up with TSMC's performance, and now AMD is advancing.

----

AMD's strategy just works really well for AMD. AMD is forced to keep buying chips from GloFo (which are limited to 14nm or 12nm designs). All of AMD's decisions just lined up marvelously: they fixed a lot of issues with their company by just properly making the right decisions in a lot of little, detailed ways. A happy marriage of tech + business decisions. I dunno if they can keep doing that, it almost feels lucky to me. But they're benefiting for sure.

AMD took something that seemed like a downside (the forced purchase of 12nm or 14nm chips, even when 7nm was available), and turned it into a benefit.

AnthonyMouse wrote at 2020-11-06 23:00:20:

> They based their designs off of what TSMC was already delivering to Apple. Since TSMC leapfrogged Intel in technology, AMD can now benefit from TSMC (indirectly benefiting from from Apple's investments).

Did Apple actually buy a significant stake in TSMC or are you just referring to the fact that Apple is one of their large customers along with Qualcomm, Nvidia, Huawei (until recently) etc.?

dragontamer wrote at 2020-11-06 23:30:29:

> Did Apple actually buy a significant stake in TSMC or are you just referring to the fact that Apple is one of their large customers along with Qualcomm, Nvidia, Huawei (until recently) etc.?

I'm talking more like the latter: all of these companies (Apple, Qualcomm, NVidia, etc., and of course AMD) are effectively pooling their money together to fund TSMC.

I don't mean to single Apple out as if they're the "only" ones funding TSMC's research. (And I can see how my wording earlier could mistakenly be interpreted in that manner; I was careless with my wording.) It's more of a team effort (although Apple does seem to spend significant amounts of money trying to get first dibs on the process).

zinekeller wrote at 2020-11-06 18:28:40:

Small nitpick: if I remember correctly, AMD now uses the 12nm process from GloFo instead of 14nm (due to PCIe 4 requirements).

dragontamer wrote at 2020-11-06 21:46:49:

If so, it hasn't changed since Zen 2. IIRC, AMD is on record saying that Zen2 / Zen3 have the same I/O chip, and that only the 7nm stuff is changing.

Zen 2 has PCIe 4, so you could very well be correct.

avmich wrote at 2020-11-06 17:41:03:

> The TL;DR is that Intel has always been a vertically integrated shop (meaning that they usually fab and design their own chips), and that is starting to bite them because pure-play foundries are improving their tech at a faster rate.

Are you saying that vertically integrated companies are inherently disadvantaged because pure-play companies have a bigger list of orders and can spend more on evolving technology?

There is an advantage to being vertically integrated, namely better global optimization across domains, like between design and manufacturing. It would be interesting to know why that's not enough - if it's always not enough. If it isn't always, then this particular case needs more specific reasons.

ethbr0 wrote at 2020-11-06 18:33:21:

The balance likely comes down to volume. Specifically, total global chip sales.

If Intel builds Intel (I believe their contract fabbing is a rounding error?), then they're directly tied to Intel chip sales.

This sets up a potential death spiral. Intel misses a process node deadline, Intel's products are uncompetitive, Intel sales decrease, less demand for Intel fab, less money for Intel fab improvement.

Intel can temporarily paper over this by shifting money from other areas of the company, but it's not a good path to be on.

Conversely, as you might expect, if Intel sales are _increasing_ then the opposite, virtuous cycle holds.

So essentially, Intel's fortune is tied to the Intel_sales : (global_sales / number_of_leading_edge_non-Intel_fabs) ratio.

And with regards to that, two huge things happened in the marketplace recently: (1) mobile chip sales explosion, (2) GlobalFoundries exiting leading process race.

If Intel hadn't been screwed by a process engineering miss, longer term trends would still have hit them hard.

CountSessine wrote at 2020-11-06 20:06:05:

This is the case with every foundry, though. It’s the reason GloFo isn’t competitive anymore, for example. What’s more interesting is the physics reason that they tripped up on the last generation - what did TSMC do right that Intel did wrong? What bets were made? Which ones paid off?

ethbr0 wrote at 2020-11-07 04:53:35:

I feel like the business models are slightly different when you're a contract foundry vs integrated though.

The latter depends on your market share. The former only depends on the total market.

Pet_Ant wrote at 2020-11-06 21:47:38:

I believe the issue was that Intel was leaning harder on EUV, trying to make it work and bury the competition, instead of the more cautious approach taken by TSMC. Zen 3 is finally using some EUV layers, whereas I believe Intel already wanted to use EUV heavily in their "10"nm process.

wmf wrote at 2020-11-07 04:53:07:

Nope, Intel 10nm does not use EUV. Zen 3 is made on TSMC N7 (not N7+ as rumored) which also does not use EUV.

jandrese wrote at 2020-11-06 18:42:12:

I think the idea is that the open fab shops just have a hell of a lot more work than Intel and greater economies of scale. Making zillions of mobile chips means big money even if the profit margin per chip is smaller. This in turn means more R&D and eventually they overtake the company that only fabs their own desktop processors and chipsets.

In the long run Intel screwed themselves over by not leasing out fab time to other companies. They put themselves in a niche in an industry that is naturally dominated by the largest player. And it's extra embarrassing that they did so because they knew very well how important it was to be the biggest--they were for a few decades!

Maybe Intel could have held on longer if they had a successful mobile chip to stuff into billions of smartphones, but their mobile efforts were short-lived and seemed to be treated with disdain by management. The first product kind of sucked, and instead of sticking with it and improving they just threw in the towel, both on the mobile processor and the baseband chip. An embarrassing misstep for a company as big as Intel.

trhway wrote at 2020-11-07 02:18:00:

>In the long run Intel screwed themselves over by not leasing out fab time to other companies.

Sounds like Google, which also treated their cloud/fabric computing as a competitive advantage to be kept to themselves, and as a result they missed the cloud business.

>their mobile efforts were short lived and seemed to be treated with disdain by the management.

Classic. A low-margin, high-volume future usually can't survive in the shadow of yesterday's high-margin cash cow that is still being milked.

simonh wrote at 2020-11-06 18:30:00:

I think the point is if TSMC develops a new node they will find customers for it, and they compete with other foundries on node.

With Intel, their foundries only have internal customers, who can only go to their internal foundries. There’s no competitive pressure on the foundries, and if they are ready early that’s wasted capacity. So capacity and technology planning is based on a common roadmap to meet the needs of their slowest (er, I mean only) customer.

noch wrote at 2020-11-06 18:27:21:

> Is there good coverage of how Intel became so uncompetitive?

François Piednoël, a performance architect at Intel for 20 years, recently gave a presentation, "How to Fix Intel", that covers a lot of the reasons behind Intel's decline:

https://www.youtube.com/watch?v=fiKjzeLco6c

temac wrote at 2020-11-07 02:05:23:

I think he is a little overrated, and the only reason he is that well known is that it is actually rare to see former Intel engineers opening a YouTube channel with Intel content. His take about how to fix Spectre made no sense at all to me, so either I missed something, or he is actually lacking on some technical subjects...

Anyway, there is still some interesting insight about what he saw happen there.

The situation can be explained in a very boring way in any case. You had the tick-tock cycle, then Intel 10nm was broken, but they thought it was nothing they couldn't fix with one more year of process debugging, maybe 2 in the worst case. And they had so much of a lead that they could even have tolerated 3. Except 10nm basically never worked. And the new microarchitectures were designed for it... They switched to thinking about what to do with 14+++++++++ far too late, and even then I admit the result is suboptimal.

SemiAccurate is... let's say very, very opinionated (as usual), to be polite, and I don't think Rocket Lake will be that much of a disaster. But also, yes, Zen 3 is solid, although not _that_ magical, and a bit overpriced for now (but I'm sure AMD will be able to adjust if really needed).

The final thing is: Intel really had an insane lead, and medium/high core counts are not that much in demand right now in volume for consumers. I'm not sure about the situation in datacenters; it could be more of a problem for Intel. But Intel has an ability to address the market that is insane and far above AMD's. So it is not _that_ big a deal for Intel, at least if they manage to get back on track in the coming years.

So did Intel become _that_ uncompetitive? I don't really think so. Enthusiasts are just overexcited a bit now that AMD _really_ is competitive. Choosing Zen 2 was still an (often excellent) compromise. Choosing Zen 3 over Intel would be a no-brainer if it were cheaper (and when you can get one!), and depending on what you do it can often be the good choice even at current prices, or at least for some specific models.

jjtheblunt wrote at 2020-11-06 17:17:11:

hotdesking?

nxc18 wrote at 2020-11-06 17:25:42:

Several of the engineers I worked with a few years ago left Intel (for the company I work at) after they switched to hotdesking to save money on facilities.

Basically the concept was instead of engineers having an assigned office or desk, the office would have a large field of unassigned desks and generic equipment. Every day you would come in, reserve a desk, take any belongings from a locker, and set up shop. At the end of the day, you would tear down, lock up, and leave.

It is beautifully efficient from a top-down perspective, but it turns out employees like the consistency of a known work location each day. It's also nice to be able to put up a picture of family, from what I gather. I suspect the cost of personnel replacement dwarfs the savings from an n% reduction in desk need.

nurspouse wrote at 2020-11-06 22:01:54:

> Several of the engineers I worked with a few years ago left Intel (for the company I work at) after they switched to hotdesking to save money on facilities.

Throwaway account as I work there, but I'd like to make some minor corrections to this: I don't think most of Intel ever switched to hotdesking. It was a trial program. In my building it was on one or two floors. I can't speak for all the locations, but most of the other buildings at our site did not have this at all. Unfortunately I was on that floor and hotdesked for 2 years.

I'm pretty sure none of the process/fab folks were hotdesked (I used to work in that area and stayed in touch with those folks). Nor did any of the circuit designers/architecture folks get hotdesked. After a few years, the trial was over and everyone reverted back to having their own permanent desk.

It's incredibly unlikely that this is a reason for Intel's decline.

edoceo wrote at 2020-11-06 23:04:36:

It's a symptom of the sickness.

nurspouse wrote at 2020-11-06 23:20:40:

If using cubicles were a symptom of sickness, that would mean Google and FB are sicker. Intel has better workstations.

ndesaulniers wrote at 2020-11-07 02:08:12:

I have a Lenovo P920 at Google. Dual Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz for 72 virtual cores. And if I want more cores in the cloud, that's also an option. What does yours look like?

Google234 wrote at 2020-11-07 04:10:03:

Also, this is very ungoogly on your part. Flaunting some superficial thing like this is dumb. I hope you aren’t leaking other, more important pieces of info just to win internet arguments.

nurspouse wrote at 2020-11-07 02:53:29:

I was careless with my wording. By workstation, I was emphasizing the _station_ portion.

As an example, my current cube at Intel has walls all around (albeit short, but some buildings have walls taller than most people), 2 file storage cabinets with a fair number of drawers, a decent sized desk (enough for 2 people, not that we'd do that). Correspondingly, the cube is enough for 2 people. Also, I have a whiteboard.

Google234 wrote at 2020-11-07 04:06:55:

That’s not the normal workstation at Google. Only the Chrome, Linux, and maybe Android teams get it, since they need more CPU and memory to compile their projects.

ndesaulniers wrote at 2020-11-07 05:48:07:

Oh, I just have to move A LOT of protobufs.

creato wrote at 2020-11-07 02:35:45:

From the context, I think the point was about the whole "work"station, not just the computer. It sounds like intel has cubicles, while google has smaller desks with no dividers.

rwmj wrote at 2020-11-06 17:30:30:

The worst part of hot-desking in my experience is you always seem to end up next to some guy from marketing who is on their phone all day, and as a result you get absolutely no thinking time.

lostlogin wrote at 2020-11-06 20:05:34:

This sounds like hell.

Was there a reason it wasn’t split by department - eg sales, accounting, engineering etc?

rwmj wrote at 2020-11-06 21:18:11:

That's the point of hot-desking, there's a single big room and you pick a free desk.

JoeAltmaier wrote at 2020-11-06 21:22:17:

Got to wonder - why are you there at all? How is this one particle better than working from home? Clearly you're not there because of the environment (proximity to folks on your team etc.)

newen wrote at 2020-11-06 17:39:56:

Same problem comes up with open offices. Some ass from another department constantly talking on the phone 9 to 5. I developed a Pavlovian twitch every time his phone rang.

djrogers wrote at 2020-11-06 20:04:46:

> Some ass from another department constantly talking on the phone 9 to 5.

Dude, that guy's job probably relied on him being on the phone all day - he's just doing it. It's not his fault that you can hear him, it's your office manager/designer.

Some of us really do have to spend a huge chunk of our day talking to other people on the phone, zoom, etc. There's no other way to get our jobs done, and if they don't give us private office space to do it in, then we have to do it in the open space they do give us.

derivagral wrote at 2020-11-06 20:23:43:

The counter-anecdote is my first job out of college where a VP would conduct personal calls in the open space office... loudly and emotionally.

Open offices usually at least have quieter spots if you get in early enough, in my experience.

ethbr0 wrote at 2020-11-06 18:37:27:

"Corporate Accounts Payable, Nina speaking. Just a moment!"

https://m.youtube.com/watch?v=oFwkL1WgD_0

spaetzleesser wrote at 2020-11-06 17:59:18:

That’s the ultimate sign of disrespect for workers as human beings. Cubes and open office are bad enough but hotdesking is the next step.

I would be miserable in such an environment. I need my keyboard and screens in a certain layout. Books need to be in a certain place. If that was taken away from me I don’t think I could live with that.

nurspouse wrote at 2020-11-06 22:03:38:

> Cubes and open office are bad enough but hotdesking is the next step.

It's not that simple. I worked with hotdesking at Intel. Those workstations, although still small, were bigger than the (assigned) ones I saw at Google and Facebook when I interviewed there, and Intel's one offered more privacy. Visiting those companies for an interview was depressing - finding out your crappy cube at Intel was still better than the ones at FAANG. Surreal.

HandstandMick wrote at 2020-11-07 00:26:49:

There are a lot of negative comments in this thread relating to hot desking. The perspective is interesting.

From my own experience, at first I felt the same, but once you add noise-cancelling headphones (almost everyone has them), fewer cords (e.g. Bluetooth, USB-C monitors), and the ability to shift ad hoc into smaller teams and groups when required, I find it becomes such a winner.

A lot of people seem to set up or mark a desk as theirs, with books or a custom Sun Station or whatever, and most people seem to respect this and get others' needs.

Hot desking obviously works differently for different types of work and/or personalities.

userbinator wrote at 2020-11-07 01:16:22:

_At the end of the day, you would tear down, lock up, and leave._

...so you would never be able to have a computer on for more than the length of your workday? That sounds insane --- I could never get any real work done that way, because so much time would be spent on "restoring state" to the way it was at the end of the previous day. I wouldn't call that efficient at all.

mark-r wrote at 2020-11-07 01:51:40:

Efficiency is your problem, not that of the office designer. If they can show that they've squeezed 20% more people into the same number of square feet, they're golden.

Laptops can be suspended rather than shut down, so maybe they're counting on that as the solution.

Gibbon1 wrote at 2020-11-06 18:20:04:

Trying to save money that way always makes me shake my head. Seriously, what does 100 sqft of office space cost? $200/month? Maybe $300 when you include common space? So $3600/year. Or 3% of your engineer's salary? Doesn't take a very big hit in productivity to erase any savings many times over.
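A minimal sketch of that break-even argument, using the desk cost above and an assumed $120K salary (both figures are illustrative, not actual Intel numbers):

    # How much productivity loss erases the real-estate savings?
    desk_cost_per_year = 300 * 12        # ~$3,600/yr including common space
    engineer_cost_per_year = 120_000     # assumed salary; fully loaded cost is higher
    break_even_loss = desk_cost_per_year / engineer_cost_per_year
    print(f"productivity loss that erases the savings: {break_even_loss:.1%}")  # 3.0%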

mark-r wrote at 2020-11-07 01:54:33:

The savings in office cost are measurable, the lost productivity is not.

goatinaboat wrote at 2020-11-06 22:01:31:

_It is beautifully efficient from a top-down perspective, but it turns out employees like the consistency of a known work location each day_

I worked somewhere once that tried this open-plan hot-desking nonsense. The HR dept kept their nice private office with fixed desks however. They said it was because they had to deal with confidentiality. Whereas us dumb engineers developing the IP of the firm presumably didn’t...

jjtheblunt wrote at 2020-11-06 18:45:28:

oh, wow: that's how we had it at Sun in Chicagoland in 2000...it was novel and ok, but also felt unusual.

dporter wrote at 2020-11-06 17:34:08:

Treating desks like hotel rooms instead of apartments. Instead of having one desk that is "yours," you get assigned to any available desk every day. It seems like a terrible way to work, but it cuts down on the overall number of desks needed since large offices will never have 100% attendance.

spaetzleesser wrote at 2020-11-06 18:01:10:

That may work for people who constantly move around, like sales people or managers. Interestingly, managers keep their offices even though they rarely use them. For people like engineers I think it’s super important to have a steady work environment.

colinmhayes wrote at 2020-11-07 00:56:57:

Managing people involves intimate conversations that are suboptimal in open environments.

jjtheblunt wrote at 2020-11-06 18:46:16:

totally agree

hooch wrote at 2020-11-06 21:27:09:

I recall Intel intentionally took the foot off the accelerator in the midst of the GFC, and forfeited or skipped some next-generation development to wait things out for a while.

tyingq wrote at 2020-11-06 17:09:54:

Giving Jim Keller the boot is probably on Intel's list of regrets.

ethbr0 wrote at 2020-11-06 18:22:46:

Chip architecture isn't Intel's current problem; process architecture is.

There's some overlap between the 2, but 100% of Intel's current struggles can be chalked up to dysfunction on their process side.

Given the lead time of chip arch (2-3 years?), the chip side of the house is arriving at manufacturing start day, and the process and specs they'd been optimizing for just aren't available.

Until Intel's process catches up, other parts of the company have limited options. (TSMC!)

013a wrote at 2020-11-06 18:52:11:

Right, and that's my biggest fear.

The process side of this industry is in a really scary spot right now. TSMC is killing it. Nvidia is using Samsung to fab the RTX 3xxx chips, and there's some rumblings that low yields are a reason why those are in such short supply (not to mention, whenever Samsung releases a new phone only some regions get the Exynos chips, because historically they couldn't produce enough of them; Samsung used to have a pretty cool relationship with Qualcomm whereby Snapdragons were fabbed with Samsung, but more recently, they've moved to TSMC).

On the high performance side, everyone is moving to TSMC. If Samsung and Intel's fabs continue to exhibit issues at these smaller process node sizes, the monoculture is only going to get worse. We need Intel to get their act together, not just because it pushes better design innovation from AMD and Nvidia (not that they need it right now), but because their fabs are a critical, independent part of the software supply chain. They're the last western company with any form of high-yield fabrication of high-performance chips. At this point, we shouldn't just be worried about Intel's bottom line; we need to start being legitimately worried about national security (both in tactical cyber-warfare terms, and also in more nebulous economic terms with regard to western manufacturing).

ethbr0 wrote at 2020-11-06 19:05:37:

The geopolitical angle is not to be forgotten either.

TSMC is majority-based in a country whose land and government is claimed by another, nearby, much larger country. A country willing to leverage international economic options to further that claim.

TSMC's global importance has direct implications on the world's appetite for intervening or preemptively selling arms against a hypothetical Taiwan Strait invasion.

ip26 wrote at 2020-11-06 21:41:10:

Which in turn, I figure, makes keeping TSMC one of the best in the world an absolute top priority of the ROC.

mensetmanusman wrote at 2020-11-07 03:00:30:

That’s an interesting point.

I’m sure the importance of TSMC is not lost on folks in the region when Xi is telling troops to “prepare for war”

Hopefully their plan to deal with the 30 million extra men isn’t war (

https://www.chinadaily.com.cn/china/2017-02/13/content_28183...

)

AnthonyMouse wrote at 2020-11-06 23:12:06:

The interesting question is which company it makes the most sense to try to save. Intel seems to be plagued by various mismanagement and doesn't have a strong record as a contract fab for the other companies that may need a competitor to TSMC. If you're going to save somebody it may make as much sense to try to get one of the other players back into the state of the art game, like GlobalFoundries, which also operates several good (but not as good as TSMC) contract fabs in the US.

tyingq wrote at 2020-11-06 21:24:49:

Chiplets would have alleviated some of it.

UncleOxidant wrote at 2020-11-06 21:46:53:

Did he get the boot or just decide to leave on his own? It was my impression that he decided to leave to cut his losses since they weren't going to be able to be turned around.

cbozeman wrote at 2020-11-07 04:03:12:

Jim's child has leukemia, but most of Jim's work there, "was already done anyway".

I have a close friend, who is also a close friend of Jim's family. Apparently the arrogance of Intel management hindered his ability to actually put together the type of team that he did at AMD. I'm actually concerned about Intel now.

I thought Zen 3 would be maybe 5-8% better than Intel's current offerings, and newer chips would be at parity with them, but this is just embarrassing.

Devils-Avocado wrote at 2020-11-06 23:21:50:

H1B migrant workers building empires inside the company instead of delivering good products.

formercoder wrote at 2020-11-06 23:08:12:

Hi, MBA here, not sure what "training" you're referring to. Certainly nothing I learned in school taught me to focus on quarterly results and ignore R&D. I learned to attempt to push ROIC above WACC in order to ensure firms are generating economic profits, and I learned that we probably should capitalize R&D and include it in ROIC to show that it does impact value, among other things.
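For anyone unfamiliar with the acronyms (expanded in a reply below), here is a toy sketch of the ROIC-vs-WACC idea; all the numbers are invented, and the R&D adjustment is simplified (the real treatment amortizes R&D over a useful life rather than adding it back wholesale):

    # Toy illustration of "push ROIC above WACC" and of capitalizing R&D.
    # Every figure here is made up for the example.
    nopat = 800              # net operating profit after tax, $M
    invested_capital = 8000  # $M
    rd_spend = 400           # R&D expensed on the income statement, $M
    wacc = 0.09

    roic_expensed = nopat / invested_capital
    roic_capitalized = (nopat + rd_spend) / (invested_capital + rd_spend)

    print(f"ROIC with R&D expensed:    {roic_expensed:.1%} vs WACC {wacc:.0%}")
    print(f"ROIC with R&D capitalized: {roic_capitalized:.1%} vs WACC {wacc:.0%}")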

kelnos wrote at 2020-11-07 02:10:08:

> _capitalize R&D_

That's the problem with "MBA thinking". You can't put a dollar value on R&D. You just can't. Anything you do will be a gross approximation, and because the number is so fuzzy, you'll be encouraged (even subconsciously) to fudge it around to make other things look better, which will undoubtedly lead you to undervalue R&D to the point where it negatively impacts your business, but you have no idea why, because you're solely focused on dollar figures and not what really matters. Y'know, like developing the ability to solve actual customer problems in an exceptional manner. This sort of thing is very difficult to quantify, and any approach that starts from a finance perspective is always going to be sub-optimal at best, but often just flat-out wrong.

I see so many people optimizing the things that are quantifiable at the expense of the things that aren't, when the loss of the qualitative things is what's killing their business. MBA types seem to never get this, and the form of your reply is basically a textbook example of that.

formercoder wrote at 2020-11-07 03:07:56:

What? Sure I can. It’s on the income statement. Then I can show that some firms see more of a return on that R&D spending than others, and that impacts how valuable they are.

It’s the same as how some firms get more return out of spending the same amount on factories as others. They are better factories.

dougmwne wrote at 2020-11-07 05:46:42:

I have to say, you're really walking right into this one. Your approach, that everything about the business can be quantified and then optimized is exactly why MBAs kill companies. The relationship between the employees and the company and the customers and the product is fundamentally emotional and therefore beyond quantification. R&D is fundamentally hopeful and creative and that future potential cannot be quantified either. Accounting is a fine management tool, especially for optimization of companies and products that already have the magic. Don't let those nice cognitive tools turn you into a paperclip maximizer.

AnthonyMouse wrote at 2020-11-07 04:48:40:

> What? Sure I can. It’s on the income statement.

It's on the income statement five years from now, not the one you have when you're making the decision today. The R&D paid for five years ago will commonly have been under different market conditions.

> Then I can show that some firms see more of a return on that R&D spending than others, and that impacts how valuable they are.

The question is, how do you cause your company to be the one getting more of a return?

Torkel wrote at 2020-11-06 23:12:57:

ROIC: return on invested capital

WACC: weighted average cost of capital

jimbob45 wrote at 2020-11-07 00:53:16:

I love this website for blindly downvoting an MBA. Proud of y’all.

On the other hand, attempting to quantify R&D seems like a fool’s errand. Maybe comparing different _paths_ of R&D to see which could pay off more is beneficial, but cutting the R&D budget outright because it’s not profitable? Seems like you’re cutting away your company’s future at that point.

jeffrallen wrote at 2020-11-06 17:37:57:

After Rocket Lake comes Desperation Lake and then Up Shit Creek. The latter comes only in a low cost version, without paddle.

herodoturtle wrote at 2020-11-06 21:30:18:

At which point Del Knox on Intel's door ;-)

henriquez wrote at 2020-11-06 16:52:52:

My main question would be why Intel would even bother releasing Rocket Lake if, balancing between higher IPC and lower clocks, the performance would be _lower_ than the 10 series chips. So I disagree with the article that this will be an unqualified disaster. It's quite likely that they will be a little faster, at least core for core. But it also seems like these are notebook chips hacked into a desktop socket and limited to just 8 cores.

That means the best case scenario for Intel would be (barely) scraping back their "single threaded gaming performance" crown while completely giving up against the multi-threaded performance of AMD's higher core count Zen 3 chips. The only way Rocket Lake would make any sense in the market would be if these are priced less than $400 (probably a lot less), and so Intel's margins will be much lower on what is likely to be a much larger die with more transistors.

I don't think it's possible to call this anything other than a pure desperation move.

ChuckMcM wrote at 2020-11-06 18:13:52:

You ask: _"Why Intel would even bother releasing Rocket Lake if, balancing between higher IPC and lower clocks, the performance would be _lower_ than the 10 series chips?"_

My understanding from the article is that the answer to that question is that Intel is unable to produce chips at 10nm in any quantity that makes (enough) money. The author surmises that rather than go a year with AMD essentially unchallenged in the market, Intel backported a 10nm design to 14nm, a process where they have better margins, so that they could tout "improvements" on an architectural basis while skipping over the question of improvements on a system basis.

For me, what is most telling is the 500 series chipset, which seems uncharacteristic.

From a purely speculative point of view (that is code for pulling a wild-ass guess out of my butt here), I could see this as having been the design they had done for Apple's next-generation MacBooks before they lost to Apple's A14X chip. The factor that leads me to that guess is that it really looks like a 'point' product (specific changes, not a general set of family changes) to me, one that doesn't fit into the PC ecosystem as well as other chip introductions (like Ice Lake) did with respect to their overall roadmap.

selectodude wrote at 2020-11-07 00:47:21:

14nm+++++++++++++ still delivers higher performance when power is less of an issue. It clocks higher. It’s a backport of a higher-performance part to an older but more mature node.

skavi wrote at 2020-11-06 19:00:11:

Rocket Lake is desktop. Apple already uses 10nm Ice Lake in its smaller laptops, and 45W Tiger Lake (10nm) is expected to be released alongside Rocket Lake.

varispeed wrote at 2020-11-06 22:38:57:

Why couldn't Intel rent TSMC's facilities to produce their processors? Too proud?

wmf wrote at 2020-11-06 22:45:48:

TSMC's capacity for 2020-2021 is fully booked. Also, it would take Intel 1-2 years to redesign a processor for the TSMC process, but they thought their own 10nm was coming in less than a year so they never switched. And also pride.

colinmhayes wrote at 2020-11-07 01:00:25:

In addition to TSMC being booked and designs being fab-specific, using TSMC is a horrible sign for TSMC, basically showing investors that they've totally given up on their multi-billion-dollar investment. It would be a total admission of defeat.

justinclift wrote at 2020-11-07 02:23:44:

> using TSMC is a horrible sign for TSMC

Do you mean "is a horrible sign for Intel"?

tpxl wrote at 2020-11-06 23:19:09:

The designs are fab specific.

drewg123 wrote at 2020-11-06 17:07:19:

I think one reason is that they need to release a desktop CPU with PCIe Gen4. And if they can't do it in 10nm, they have to backport it to 14nm just to stay competitive.

AMD has had Gen4 out in desktop CPUs for over a year now.

muro wrote at 2020-11-06 18:17:16:

They don't need to release one at all. They can skip PCIe 4 completely and go to 5. Their roadmap lists Sapphire Rapids in about a year (so in about 3 years on the calendars everyone else uses).

phkahler wrote at 2020-11-06 17:52:31:

>> My main question would be why Intel would even bother releasing Rocket Lake...

Pure speculation, but maybe someone's bonus depended on delivering, and this technically satisfies the requirement. It's not the design's problem that the fab can't deliver the right node.

ogre_codes wrote at 2020-11-06 19:10:48:

This is just bad all around. Not just for Intel, but for the entire industry. I always prefer companies doing well because their products are successful, not because their competitors fall down.

More and more, Apple's switch to in-house ARM designs seems perfectly timed.

vbezhenar wrote at 2020-11-06 22:35:34:

Apple should have switched to AMD instead. They would have almost perfect compatibility with Intel hardware and they would not need to invest lots of money to develop chips.

Now they either need to halt some Mac lines or develop all kinds of CPUs, from mobile (which they probably can do, because those should be similar enough to the iPhone's) to server-grade (where they have no experience at all). And there's no way Mac Pros would sell enough to offset development costs. I just don't understand how they are going to manage that situation.

ogre_codes wrote at 2020-11-07 03:06:13:

> They would have almost perfect compatibility with Intel hardware and they would not need to invest lots of money to develop chips.

With the ARM transition, Apple has perfect compatibility with the iPhone, which is arguably more important for Apple than Intel compatibility.

Apple is already investing tons of money in CPU design; they are just doing it by proxy, with Intel building the chips for them. Bringing it in-house means they can add the features they want to the CPU on the time frames that meet their needs.

Going with AMD just shifts who they outsource their chip design to, it doesn't give them the sort of control over their architecture they want.

The_rationalist wrote at 2020-11-07 03:31:52:

"

With the ARM transition, Apple has perfect compatibility with the iPhone which is arguable more important for Apple than Intel compatibility."

You can as much emulate x86 on arm than you can emulate arm on x86, so the point is moot

breakfastduck wrote at 2020-11-07 00:36:23:

They shouldn't.

The main point of switching to their own silicon is to unify their entire product range onto the same architecture.

Investing in a switch to AMD would be doing literally the opposite of what they're trying to achieve. Plus, the shift of the big cloud players onto their own ARM chips means they're going to abandon x86 and move into a space that's already gaining traction at server grade.

'Now' they don't need to do anything but build the products. You act as if they only started working on the Mac chips after they announced it. They'll have been planning this since the first A-series processors started performing really, really well.

varispeed wrote at 2020-11-06 22:41:57:

I, on the other hand, hope that Apple will stay away from x86. If they started cooperating with AMD, it would only be a matter of time before AMD had to produce special versions of processors that only Apple could purchase, end users wouldn't be able to do any repairs, and a lot of manufacturing power would be wasted on Macs, making it more expensive for other people.

AnthonyMouse wrote at 2020-11-06 23:22:35:

> it would only be a matter of time before AMD had to produce special versions of processors that only Apple could purchase

How is that supposed to affect anybody else who is still buying the regular stuff, or be better for Apple customers than the same thing but with an architecture transition and the inability to natively virtualize the x64 editions of Windows and Linux?

> and a lot of manufacturing power would be wasted on Macs, making it more expensive for other people.

Apple and AMD are both TSMC customers. They come out of the same fabs.

varispeed wrote at 2020-11-06 23:51:56:

> How is that supposed to affect anybody else who is still buying the regular stuff, or be better for Apple customers than the same thing but with an architecture transition and the inability to natively virtualize the x64 editions of Windows and Linux?

For one, these CPUs would keep the fabs busy, which means AMD would produce fewer CPUs for the "masses", and that would probably make them more expensive for everyone.

I think the inability to virtualize would kind of fit the walled garden Apple is going for.

> Apple and AMD are both TSMC customers. They come out of the same fabs.

It's not like you can produce different CPUs at the same time. It requires setup, and a particular fab can probably only produce one type of CPU in a batch. TSMC's resources are finite as well.

Spooky23 wrote at 2020-11-07 05:00:19:

The industry is different. Cloud is a huge force and the mega tech companies literally have more money than they can spend.

We’re going to move towards AWS, Microsoft, and Apple chips. It’s really hard for a company like Intel or AMD to serve hyperscale customers.

HelloNurse wrote at 2020-11-06 16:38:59:

An unusually unfavourable article, even by the standards of semiaccurate.com.

Can this new processor family be interpreted as something less terrible than "palpable desperation" and effectively giving up on the 10nm process? For example, prices might be aggressively low.

pizza234 wrote at 2020-11-06 16:48:01:

> For example, prices might be aggressively low.

Although this is possible, it would be a huge hit to Intel's image.

I think one can state that Zen 3 is higher-performing than any Intel architecture/model across the board. What happens when they release Rocket Lake? They'll introduce a new(er), slower architecture, to tackle... the budget segment?

I think that, from an engineering point of view, Intel is going to be in deep for a few years, with the hope that they'll manage to pull off a new architecture in 3-4 years.

On the other hand, I've specifically written "engineering", because Intel still has copious amounts of "green persuasion", which shouldn't be underestimated.

eropple wrote at 2020-11-06 16:54:11:

This is one of the more oddly axe-grindy articles I've seen in a while. Poorly edited English, and writing that feels _angry_ at some pieces of silicon. Is this just the house style for this site?

Maybe somebody can provide some context.

rwmj wrote at 2020-11-06 17:41:16:

S|A is known for these ranty opinion articles - and dislike of Intel. On the other hand they do seem to have access to a lot of industry insiders and break important stories all the time.

sounds wrote at 2020-11-06 20:12:21:

I believe S|A's dislike of Intel and access to industry insiders are correlated.

Specifically I know a lot of industry insiders have a dislike of some of Intel's practices. When S|A talks about them, S|A knows to take the "ranty" tone, as a means of signalling up front to readers, "Hey, this isn't going to be a bland slide presentation..."

exmadscientist wrote at 2020-11-06 17:39:43:

Yes, this is Charlie's house style and has been for ages. He's always hyperbolic and his tone always favors some companies over others. Everything's either the best or the worst (usually the worst), and so on. It's pretty much exactly like reading a tabloid.

The counterbalance is that he's got one of the absolute best networks of moles in Silicon Valley and always seems to know things, often big things, before anyone else. Sometimes even before it's on anyone else's radar.

So if you're familiar enough with him to back out his style to see the underlying (likely) facts, which isn't too hard, you can learn a _lot_ about the big chip industry. And that's not just interesting to people like us, but very very valuable to a lot of people.

(Disclaimer: I haven't read SemiAccurate much since he paywalled it ~10 years ago, a move that I don't like at all but completely understand from the perspective of his business.)

dragontamer wrote at 2020-11-06 18:07:16:

I was curious and peeked behind the paywall years ago, when the PS4 / Xbox One stuff was leaking. Charlie did good: he knew details of those systems a year in advance.

He's ranty and very aggressive with his writing. His best stuff is kept behind the paywall. It wasn't really worth the money to me (and I only subscribed once as a curiosity). But those "moles" he talks about are clearly the real deal.

bnt wrote at 2020-11-06 17:14:19:

Why does “poorly edited English” matter when the content seems to be on point? Not everyone on the Internet is a “native” English speaker. Content matters, not who wrote it or how.

phaedryx wrote at 2020-11-06 17:33:32:

I tend to associate "poorly edited" with "on a passionate rant" and "edited" with "slow and thoughtful".

It might not be fair, but I do like to know the mindset the article is written from.

eropple wrote at 2020-11-06 22:32:35:

Doesn't matter how good somebody's points are when they metaphorically walk up to you half-dressed and screaming.

justin66 wrote at 2020-11-06 17:34:09:

This is typical of semiaccurate. It is awful.

intelisdead wrote at 2020-11-07 01:48:21:

Here are benchmarks of the top processors from each competitor:

Single Core:

AMD 5900X - 3643 points (+15% faster)

Intel 10900K - 3178 points

Multi Core:

AMD Ryzen Threadripper PRO 3995WX - 88673 points (+130% faster)

Intel Xeon Gold 6248R @ 3.00GHz - 38521 points

By the way, 5900X is not only faster than 10900K, but also consumes less power.
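
(Quick arithmetic check on those quoted percentages:

    \frac{3643}{3178} \approx 1.15 \;\Rightarrow\; \text{about 15\% faster}, \qquad
    \frac{88673}{38521} \approx 2.30 \;\Rightarrow\; \text{about 130\% faster}

so the single-core and multi-core deltas above line up with the raw scores.)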

kelnos wrote at 2020-11-07 02:12:33:

What's the comparison when it comes to mobile chipsets? I would love to have a laptop (12"-14", light and thin) with an AMD CPU and an actually decent GPU, but they seem to be incredibly uncommon.

pengaru wrote at 2020-11-07 05:36:22:

Does the Lenovo X13 AMD model not qualify?

justinclift wrote at 2020-11-07 02:27:49:

Maybe this?

https://www.anandtech.com/show/15762/the-acer-swift-3-sf314-...

Or if going up to 15" is acceptable:

https://www.anandtech.com/show/16153/the-acer-nitro-5-review...

causality0 wrote at 2020-11-07 00:34:27:

I'd be interested in an x86-64 processor that took the 1+3+4 approach of the Snapdragon 875. One big core with a super high clock rate and massive IPC, three smaller ones, and four that are smaller still. A desktop CPU with a single-core performance equal to half a normal 8-core chip would be an absolutely incredible tool for IPC-constrained applications like game console emulation.

wmf wrote at 2020-11-07 02:16:30:

_A desktop CPU with a single-core performance equal to half a normal 8-core chip_

This isn't possible. The cores are already as fast as they can be, so moving from 10 big to 1 big would only reduce performance.

MBCook wrote at 2020-11-07 02:40:36:

The bigger problem is that you need operating system support. Linux/Android has it. iOS has it. If iOS has it, there’s a very good chance that macOS has it (we’ll probably find out on Tuesday).

Does Windows support heterogeneous cores at all? I don’t think it does. And until that happens, would you be able to sell such a chip to the public?

wmf wrote at 2020-11-07 04:39:17:

Surface ARM laptops have big.LITTLE, so there must be some kind of support, but I don't have any confidence that it's good. It will take MS a while to get it refined. People are probably better off disabling the little cores so threads don't slow down by getting accidentally scheduled on them.
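
Short of disabling them in firmware, you can also just pin your latency-sensitive work onto the big cores yourself. A minimal sketch on Linux (Windows has SetThreadAffinityMask for the same idea); it assumes logical CPUs 0-3 happen to be the big cores, which you'd have to check on your actual machine:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        /* Assumption: CPUs 0-3 are the big cores on this machine. */
        for (int cpu = 0; cpu < 4; cpu++)
            CPU_SET(cpu, &set);
        /* pid 0 = the calling process; threads created later inherit the mask */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("restricted to CPUs 0-3\n");
        return 0;
    }

That only papers over the problem, of course; a scheduler that actually understands the core types is the real fix.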

stagger87 wrote at 2020-11-06 18:08:32:

Oh hey, AVX-512 is finally coming to consumer desktop CPUs... like 6 years after Intel was originally planning?

topspin wrote at 2020-11-06 18:38:08:

One wonders whether the beating Intel is taking will lead it to forego _license-based downclocking_ with Rocket Lake.

stagger87 wrote at 2020-11-06 18:52:41:

I hope so, but they might not have a choice. I think the next AMD chips are rumored to have AVX-512, and being on a smaller process node will help immensely with heat. At 14nm, and with a backport that may not be optimized for 14nm, heat management may be a problem and downclocking may still be required.

Despite that, it is still probably too late anyway. My industry has moved on to GPUs and FPGAs for real-time compute, but I remember when everyone was waiting for 512 after utilizing 256-bit AVX for some time. Although the thought of a 64-core Threadripper with AVX-512 at 5GHz on all cores is appealing.
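
For anyone who hasn't touched the intrinsics: the appeal is simply twice the lanes per instruction compared to 256-bit AVX2. A minimal sketch in C, assuming an AVX-512F-capable CPU and a compiler flag like -mavx512f (the add_f32 name is just for illustration):

    #include <stddef.h>
    #include <immintrin.h>

    /* Add two float arrays, 16 floats per iteration with 512-bit registers. */
    void add_f32(const float *a, const float *b, float *out, size_t n) {
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);
            __m512 vb = _mm512_loadu_ps(b + i);
            _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
        }
        for (; i < n; i++)   /* scalar tail for leftover elements */
            out[i] = a[i] + b[i];
    }

Whether you actually keep that throughput once the license-based downclocking kicks in is exactly the question raised above.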

The_rationalist wrote at 2020-11-07 03:34:29:

Source for the rumor?

Also, AVX-512 can be much faster than GPUs on many workloads.

wmf wrote at 2020-11-06 18:47:11:

The AVX512 downclocking was to prevent the chip from overheating; one does not just turn that off.

topspin wrote at 2020-11-06 19:44:23:

You don't improve thermal performance by backporting to less efficient nodes either.

api wrote at 2020-11-06 18:31:56:

The US needs to do whatever it takes to get TSMC to build not only top-tier fabs but centers of excellence here. As it stands, TSMC in Taiwan, which is threatened by China, has damn near a monopoly on competitive fabrication. A single hit on Taiwan could terminally stall the entire global supply chain for top-tier chips.

rasz wrote at 2020-11-06 21:23:33:

Why would Taiwan let TSMC move its operations away and thus weaken Taiwan's U.S. defense guarantees?

corty wrote at 2020-11-06 21:59:29:

It can be a mutually beneficial arrangement, perhaps letting TSMC US produce some defence-relevant silicon and providing a safe haven and nest egg for Taiwan's elites in case of invasion/unification.

hydroreadsstuff wrote at 2020-11-06 21:48:52:

Who would've thought that high-end chip manufacturing could be an (enormous) bargaining chip.

redisman wrote at 2020-11-07 03:05:36:

Someone who can think long term I guess

octoberfranklin wrote at 2020-11-07 04:48:05:

The same people behind the Intel ME, AMD PSP, etc, etc.

Silicon backdoors are the new nukes.

redisman wrote at 2020-11-07 03:00:49:

Taiwan's international relations are pretty delicate. I’m not sure there is a lot they want to change right now. Controlling the manufacturing of top-end microchips is exactly the kind of card a small “neutral” country with a scary neighbor wants up its sleeve. Sure, you could have a US embassy, recognize their sovereignty, put in a military base, and park some destroyers there, but that would be a serious escalation with China.

octoberfranklin wrote at 2020-11-07 04:47:02:

That is impossible. TSMC will never build that fab.

TSMC is now the only reason Taiwan can count on the US coming to its defense when China invades. You can't put a price tag on that.

mensetmanusman wrote at 2020-11-07 03:04:28:

Think on a decades-long time scale: electron-based Turing machines are asymptoting in design, and there isn’t much room for someone to be ahead until we start making optical computers.

epicureanideal wrote at 2020-11-07 04:49:10:

Apple has a huge war chest. Let's hope they use it to advance the state of the art.

octoberfranklin wrote at 2020-11-07 04:49:20:

Keep dreaming. Electrons interact with each other, photons don't.

rossdavidh wrote at 2020-11-06 22:19:15:

Well Samsung has quite a bit of top-notch fabrication, in South Korea and elsewhere. Not that what you're suggesting wouldn't still be a good idea.

sadness2 wrote at 2020-11-06 23:18:49:

For gamers, I expect these will be priced such that Intel will offer a better price-to-performance ratio than AMD. I've seen so many industry pundits talking about the defeat of Intel for the last 2 generations, but gamers just quietly look at the frames-per-second-to-$ ratio and keep buying Intel and the cheaper Intel-based motherboards. I think Rocket Lake will allow this trend to continue while Intel prepares Alder Lake.

FridgeSeal wrote at 2020-11-06 23:48:24:

Not sure how much that’ll hold; a lot of the gaming and PC subreddits and the communities there have been strongly favouring AMD chips in builds and advice. With the Zen 3 chips I expect that to continue.

sadness2 wrote at 2020-11-07 03:55:11:

There's always a loud contingent of AMD fanboys, but then there's the silent majority who don't care and just buy on raw gaming performance per $, which is reflected in market share.

https://www.cpubenchmark.net/market_share.html

yborg wrote at 2020-11-07 00:11:07:

Mentally, Intel isn't anywhere near the rock bottom they would need to hit to admit inferiority to AMD by actually underpricing them, even though they easily have the margin to do so. This might even be the right strategy: if they think they can come back in a couple of years and beat AMD, why damage the brand now?

sadness2 wrote at 2020-11-07 03:51:14:

This would not be a change in pricing strategy by Intel. Intel has consistently provided a better price-to-performance ratio for gaming than AMD. Since AMD bumped their pricing with this launch, Intel arguably still holds this edge, especially if you account for overclocking and board prices.

https://cpu.userbenchmark.com/Compare/Intel-Core-i9-10900K-v...

https://cpu.userbenchmark.com/Compare/Intel-Core-i5-10600K-v...

https://cpu.userbenchmark.com/Compare/Intel-Core-i5-10600K-v...

varispeed wrote at 2020-11-06 22:37:25:

It seems like Intel wasted at least 5 years. They thought they were invincible and the competition would never catch up; plus they thought that if they changed the model name and repackaged the chip for a new socket, people would still buy it. They didn't consider, however, that people bought those processors because there was nothing else, and that new people keep becoming teens and adults and need computers as well. Maybe they also counted on brand loyalty? I think brand loyalty ends when you have to waste your time waiting for a task to complete.

I am so happy to pre-order a 5950X. I will also buy the new Threadripper when it gets released.