💾 Archived View for dioskouroi.xyz › thread › 29370875 captured on 2021-11-30 at 20:18:30. Gemini links have been rewritten to link to archived content


-=-=-=-=-=-=-

The Yamaha DX7 synthesizer's clever exponential circuit, reverse-engineered

Author: picture

Score: 270

Comments: 65

Date: 2021-11-28 17:33:37

Web Link

________________________________________________________________________________

kens wrote at 2021-11-28 17:40:09:

Author here if anyone has questions about this classic synthesizer's internals.

raphlinus wrote at 2021-11-28 19:13:11:

The specific thing I've found least documented, yet most important for the distinctive percussive attacks of the DX7, is a random variation of the pitch envelope for the first few milliseconds of the note. That's almost short enough it could be done in the firmware, but I believe it might be in the hardware. It's not present in the msfa source, but might have been recovered by later Dexed authors (I haven't carefully looked at their code).

If you get to the envelope hardware, you'll find it's just as clever as the exponential and sine generators. There's some info at [1], but it doesn't capture every single thing I found - there are cases where there is a slight amount of additional noise in the amplitude; I'm not sure whether it's intentional (to give more character) or an unintentional artifact. That's also missing from the msfa source.

[1]:

https://github.com/google/music-synthesizer-for-android/blob...

ETA: Also see

https://levien.com/dx7-envelope.html

for a somewhat interactive JavaScript implementation of the envelope algorithm. This accurately emulates the envelope shape and quantization of the DX7 (i.e. it uses the same reduced number of bits to drive its state machine), but I do not claim it is bit-perfect.

analog31 wrote at 2021-11-28 19:23:32:

Interesting, Hammond organs had a similar effect due to bad key contacts and leaky capacitors. Simulated by Korg, of course.

mb_72 wrote at 2021-11-29 07:26:16:

Actually, the key-click occurred not for those reasons explicitly, but "...was caused by the audio signal being routed directly through the key contacts. As a key was depressed, the nine contacts under the key closed against their respective busbars at slightly different times and bounced as they closed. The sine waves from the constantly running generators would be connected at random points in their oscillation."

(from

https://www.dairiki.org/HammondWiki/KeyClick

)

This happened even in brand-new organs.

djmips wrote at 2021-11-29 04:39:45:

Do you have more information on this (Hammond) effect and how to simulate?

chuchurocka wrote at 2021-11-29 14:36:16:

https://www.dairiki.org/HammondWiki/KeyClick

is a pretty good starting point

kens wrote at 2021-11-28 19:19:39:

I haven't looked at the envelope chip, but I hope to examine it at some point.

odiroot wrote at 2021-11-28 19:33:51:

I've heard Yamaha needed to put a lot of R&D work into producing these digital FM chips.

I wonder if they (or any other manufacturer) ever attempted the same with purely analogue chips. I'm aware that FM has very narrow sweet spots and probably the analogue oscillator drift would make this idea totally impractical.

boomlinde wrote at 2021-11-29 01:13:22:

Analog FM is not so unusual, but it's usually "real" FM, e.g. a modulation of pitch or linear frequency, and it's not so easy to tame into consistent musical pitches.

"Yamaha style" FM is however more of a kind of phase modulation. The operators are sine tables driven by phase counters, and the outputs of the modulators are scaled and summed with the phase counter of the carrier before the sum goes into the carrier wavetable. So you modulate the phase irrespective of frequency, e.g.

    modulator(phase) = sin(phase * modulator_multiplier) * modulator_amplitude
    carrier(phase)   = sin(phase * carrier_multiplier + modulator) * carrier_amplitude

I suppose this could be replicated in the analog domain. A phase counter could simply be a ramping sawtooth waveform. Summing modulator outputs would be as simple as mixing. The hardest part I think would be to replicate the wavetable. You'd need a waveshaper that basically does f(x) = sin(x). There is an oscillator on the market that I think does everything except the waveshaping in the analog domain:

https://wmdevices.com/products/phase-displacement-oscillator...
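To make the operator equations above concrete, here is a minimal Python sketch of two-operator phase modulation (floating point, with invented parameter names; the real hardware works in fixed point with log-sin and exponential tables):

```python
import math

SAMPLE_RATE = 44100  # Hz, arbitrary choice for illustration

def pm_sample(t, carrier_freq, mod_ratio, mod_index):
    """One sample of two-operator phase modulation ("Yamaha-style FM").

    The modulator's output is added to the *phase* of the carrier,
    irrespective of the carrier's frequency.
    """
    phase = 2 * math.pi * carrier_freq * t
    modulator = mod_index * math.sin(phase * mod_ratio)
    return math.sin(phase + modulator)

# Render a short buffer: 220 Hz carrier, 2:1 modulator ratio
buf = [pm_sample(n / SAMPLE_RATE, 220.0, 2.0, 1.5) for n in range(512)]
```

Raising `mod_index` brightens the tone by spreading energy into sidebands, which is why a single envelope on the modulator's amplitude can sweep timbre the way a filter cutoff does on an analog synth.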

ssalazar wrote at 2021-11-28 21:38:48:

I do wonder if DCOs (much less susceptible to drift) would be more amenable to analog FM, though they were not common for musical instruments during the DX7's early R&D phase.

That said, it seems like music manufacturers were searching for a musically useful, cost-effective digital synthesizer around this time.

Early digital synths like the Synclavier and Fairlight CMI were prodigiously expensive, while analog synths in the DX7's price range were saddled with few/no polyphonic voices and/or limited single oscillator designs.

The DX7, with a varied tonal palette and 16 voices, must have seemed luxurious at the time.

TheOtherHobbes wrote at 2021-11-28 23:26:04:

You can't do analog FM with DCOs because they're numerically controlled. You'd have to modulate the frequency of the clock, which displaces the problem without really solving it, and also wouldn't work polyphonically.

The Synclavier and Fairlight CMI were hybrid designs with a digitally controlled clock per voice producing a variable rate sample clock driving memory and a DAC. The hardware was more expensive but the digital part was much simpler, because you basically just clocked the sample data straight into the DAC without having to do any interpolation or resampling.

The DX7 was the first successful fixed rate polyphonic DSP system. With some reworking, the FM subsystem could have been replaced with sample RAM. But RAM was still ridiculously expensive, so FM was a workable solution for expressive acoustic-like sounds that weren't sampled.

It absolutely was a game changer. I remember playing one for the first time and it wasn't just the novelty of the FM sounds or the 16 - 16! - voices. It also looked great, was easy to carry, was less than a quarter of the price of some of the bigger polysynths, and had an extremely playable velocity-sensitive keyboard.

When you put all of those together it completely eclipsed the older designs as a playable keyboard instrument.

jacquesm wrote at 2021-11-29 01:07:12:

And, not unimportantly, it was built like a tank, so if you were a musician playing gigs you could take it with you without damaging it and without being ridiculously careful, as you would have had to be with other synths of the day. DX7's are incredible in this respect, to the point where 4 decades later many of the original series 1's are still playing.

devin wrote at 2021-11-29 01:42:44:

Owner of a DX7 here. The only thing that needs servicing is the CMOS battery. Incredible.

AutumnCurtain wrote at 2021-11-29 01:18:34:

I have a DX7IID that has been in constant use since it was built in the late 80s and aside from a single battery replacement it's still in perfect condition. They are extremely durable.

ajxs wrote at 2021-11-29 00:39:31:

I recently watched Trent Reznor's clip from the _'I Dream of Wires'_ documentary, where he blames the DX7 for the decline in synthesiser culture. His hatred of the DX7 is pretty well known at this point. He complained that after its release the market for synthesisers shifted to accommodate consumers' desire for 'bell' and 'electric piano' sounds, rather than those of his preferred analog synthesisers.

Would you agree that having an affordable synthesiser capable of making credible piano, and percussive sounds really opened the world of synthesisers up to a more diverse range of musicians? The DX7 first hit shelves before I was born, so I can't speak from my own experience here. I think Reznor's opinion on the matter sounds pretty ignorant.

danbolt wrote at 2021-11-29 02:49:30:

While I don’t know much about synthesizers, I love when someone makes an interesting or new tone with the YM2612. I can understand how synthesizer sounds became standardized post-DX7, but I’m surprised Trent doesn’t appreciate the experimentation and accessibility of playing with operators.

jacquesm wrote at 2021-11-29 01:08:07:

Gatekeeping at its finest. The DX7 democratized the synth, and for that reason alone it deserves some kind of award.

klodolph wrote at 2021-11-28 22:32:07:

The other factor is the sheer number of oscillators you'd need to get polyphony. The DX7 has 6 oscillators per voice. With 16 voices, that's 96 oscillators. Typical polyphonic analog synthesizers have two. You occasionally see three. Polyphony is often lower. 8 voices is fairly common, but you saw smaller units... Prophet 5 was 5-voice, and Oberheim's OB-X was available in 4, 6, or 8 voice versions.

The DX7 had somewhere between 9 and 12 times as many oscillators, depending on which of those lines you compare it to.

ajxs wrote at 2021-11-28 23:53:39:

This isn't quite FM, though Jean-Claude Risset from Bell Laboratories did some contemporary experiments using additive synthesis with analog oscillators to achieve similar complex tones. I'm guessing that Yamaha probably experimented with all kinds of interesting prototypes before developing the VLSI FM chips that would go into their earliest FM synthesisers. It was a very large investment for them as a company, so they likely tried all kinds of things first with existing technology they had on hand.

jacquesm wrote at 2021-11-29 01:09:17:

True FM would have been cost prohibitive, phase modulation sounds just the same to all intents and purposes without the associated extra technical complexity (and with two massive custom ASICs this was already quite a complex machine).

turdnagel wrote at 2021-11-28 19:28:18:

How good is Chipsynth OPS7’s emulation?

https://www.plogue.com/products/chipsynth-ops7.html

ajxs wrote at 2021-11-28 23:07:45:

It's arguably the most 'authentic' emulation available. The amount of work they've put into the project of emulating the DX7's various nuances is enormous.

yowlingcat wrote at 2021-11-28 18:10:14:

Wow, very cool! The DX7 is one of my favorite synthesizers of all time, and its spiritual successor (in software form), FM8 is the one I've used the most for the past decade and a half in my compositions. Thank you for the incredible work breaking things down to the gate level -- if only this were around during the time I was taking my analog + digital electronics and computer organization class, I may have done a different final project!

You allude that you'll get to this in your next post of the series, but here's my question: what are the biggest differences in tone between the DX7 and current emulators such as FM8 and Dexed (both of which I believe can read DX7 patches)? And if present, where do they come from?

Thanks again for this write-up on one of my favorite machines.

raphlinus wrote at 2021-11-28 19:08:11:

I believe you will find two main sources of differences in tone between the various DX7 emulators. One is that there are fairly major differences between the original DX7 and the DX7 II (I used the latter for the original engine now adapted in Dexed). The other is the analog filter on output, approximately a 16kHz lowpass filter designed to reduce artifacts from the DAC (this is replaced by a more general Moog-style filter in the Android implementation but present as an accurate emulation in Dexed).

I think Dexed is quite accurate, but this work will allow the authors to take it to the next level. I suspect most people won't be able to hear the difference, however.

Rodeoclash wrote at 2021-11-28 21:10:34:

Nothing to add but I wanted to say thank you for making Dexed, we've used it in a few synthwave-esque songs.

jacquesm wrote at 2021-11-28 21:39:31:

This is what we were aiming for so I'm super happy to see confirmation of that :) Ken is a true wizard.

kens wrote at 2021-11-28 18:29:16:

Strangely, I haven't used any of the emulators or a DX7, so I can't comment on the differences in tone.

fab1an wrote at 2021-11-28 19:04:40:

Ha, that would have been my question as well...I've often come across the idea that digital synth emulations are necessarily "perfect" and indistinguishable from the original, but is that really true in your opinion, from a first principles standpoint? If not, which component would make the emulation most difficult?

kens wrote at 2021-11-28 19:17:00:

If you look at it from that perspective, there are two factors. First, are the digital values identical? This is something my research can help with, using exactly the same exponential values (bit width, rounding, etc.) Second, a digital synth produces an analog output, so even if the digital values are perfect, the digital-to-analog conversion is going to produce its own distortion, filtering, etc, which can be pretty substantial.

Also see raphlinus's answer.
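As a rough illustration of the first factor, the table-plus-shift style of exponential can be sketched like this (the bit widths and ROM contents here are made up for illustration, not the DX7's actual values):

```python
# Approximate 2**x using a small ROM for the fractional part of the
# exponent and a barrel shift for the integer part -- the classic
# hardware trick for turning log-domain values back into amplitudes.

FRAC_BITS = 8   # fractional address bits into the ROM (illustrative)
ROM_BITS = 10   # fixed-point precision of each ROM entry (illustrative)

# ROM holds 2**frac in fixed point, for frac in [0, 1)
EXP_ROM = [round((2 ** (i / 2 ** FRAC_BITS)) * 2 ** ROM_BITS)
           for i in range(2 ** FRAC_BITS)]

def exp2_fixed(x_int, x_frac):
    """2**(x_int + x_frac / 2**FRAC_BITS) as a fixed-point integer."""
    return EXP_ROM[x_frac] << x_int
```

Matching an emulation bit-for-bit means matching not just this structure but the exact table contents and rounding, which is the kind of detail the die-level reverse engineering recovers.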

pantulis wrote at 2021-11-28 18:22:17:

I'd like to add to the list Korg's own FM implementation MOD-7

kbob wrote at 2021-11-29 15:06:39:

Are there resources for delta encoding transcendental functions in general? A fast, compact tanh() function would be great for making ladder filters in an FPGA.

ChuckMcM wrote at 2021-11-28 20:07:22:

Another great write-up Ken, I'm wondering how the DX-11 differed (it was introduced after the DX7 and I've got a TX-11 (tone module)). The logarithm trick looks like it would fit easily in a small FPGA too!

S_A_P wrote at 2021-11-28 23:37:45:

The DX-7 has 6 operators per voice and 16-note polyphony. The DX-11 is a 4-operator synth with 8-voice polyphony, but it is multitimbral, where the DX-7 is monotimbral. I think both have their advantages, and most of the time 4 operators are enough.

I’ve been looking for a Yamaha FS1R, which was the 1990s return to FM synthesis for Yamaha. It’s a rare-ish classic that I wish Yamaha would reissue.

ajxs wrote at 2021-11-28 23:03:56:

I haven't peered under the hood of the DX11, though I do have a TX81Z, which is the rack-mounted equivalent. I'm pretty sure the fundamental design would be more or less identical. There definitely would have been many small improvements in the design between the DX7 and the DX11. Aside from the DX11 supporting different waveforms (actually truncated sine waves), people have said that the large difference in sound between the successive generations of Yamaha's FM synths comes from using a more abridged sine table. The shorter the table, the thinner and more metallic the sound.

knob wrote at 2021-11-29 01:13:05:

This was fascinating to read. Thank you for taking the time to post it up!!

hellbannedguy wrote at 2021-11-28 20:34:06:

I don't have a question about this synthesizer, but do you know much about the Hammond CMS-103?

Did Hammond have their own computer boards, or did they use a version of Yamaha?

EarlKing wrote at 2021-11-28 20:39:20:

It's really starting to scare me how many of my personal interests manage to crop up as articles here.

SavantIdiot wrote at 2021-11-28 21:16:07:

We're not as unique as we like to believe. I bet there are probably just ~400 different archetypal people that read HN.

EarlKing wrote at 2021-11-29 04:09:47:

Well, yes, but it can get eerie when I suddenly start seeing articles aimed at a _very specific machine_ I happen to have recently begun playing^H^H^Hworking with (i.e. the DX7).

Hmmmm....

I REALLY LOVE THE ROLAND D-50. GOSH, IT SURE WOULD BE NICE IF THERE WAS AN OPEN SOURCE EMULATOR FOR IT...

...and now we wait.

ajxs wrote at 2021-11-29 12:34:01:

To this day I don't think anyone (outside of Roland) really knows how the 'LA synthesis' in the D-50 actually works. Unlike Yamaha's work on FM synthesis, Roland didn't file any patents for their LA synthesis technique.

A really interesting fact I found out when I did my own research on the DX7's development was that Roland’s technical R&D director Tadao Kikumoto spent over a year researching Yamaha's FM patents to try and find a way to implement FM without infringement, and even longer trying to find prior art to invalidate the patents themselves. In the end, it was apparently this research that eventually led to the D-50.

EarlKing wrote at 2021-11-29 14:23:07:

Actually, over the years, a fair bit of information has been published over exactly how LA synthesis works. They never exactly hid the details. The problem, of course, is that the details are scattered over dozens of magazine articles, and interspersed with half-truths repeated by well-meaning (but simply wrong) gearheads.

If you're really masochistic there's also the D-50 plugin that Roland published not too long ago. One could always disassemble and/or instrument that to ascertain the exact details... not that it would be entirely necessary since at least one company did, by all accounts, come up with a passable replica of the D-50 as a VST, but immediately got hit by Roland's lawyers because they made the mistake of using the attack samples from the D-50's dumped ROMs.

TL;DR Someone will do it eventually.

SavantIdiot wrote at 2021-11-29 05:24:49:

> I REALLY LOVE THE ROLAND D-50.

Hahahaha!

I did my best work on an SY77 ... /waits/ ... and an Ensoniq EPS ... /crickets/

S_A_P wrote at 2021-11-28 23:41:43:

Yeah, I’m willing to bet that most folks here are either into music/audio, photography, video production, or building gear and software that supports these endeavors in some way. I know a lot of people who code, but few of them visit this site. The ones that do fall into one of those categories.

SavantIdiot wrote at 2021-11-29 00:20:31:

I think I've observed a strong correlation between programmers and real-world hobbies that involve making things. Carpentry is a biggie. So are photography and cooking. Also musical performance (guitar/drums/bass), but to a lesser extent.

Hmm.. seems like hobbies that involve lots (and lots) of gear are key. :)

austinthetaco wrote at 2021-11-29 08:45:03:

Your circle of friends who code is either small or young. This site was foundationally mostly coders for a very, very long time. Oftentimes technical articles and codebases would pop up and large discussions would be had. Sad to see that's the upcoming perception.

dave_sid wrote at 2021-11-28 21:27:05:

So cool an article about synths is at the top of HN.

geephroh wrote at 2021-11-29 01:14:23:

Thanks to the OP for the memories. Worked and scrimped for the DX100 when I was in high school. That keyboard, my buddy's CZ-101, his brother's TR-505, plus copious hair mousse were the recipe for some of the best and worst Depeche Mode covers of all time.

throwawaysea wrote at 2021-11-28 21:35:10:

Can someone explain what a synthesizer is to a non musician? It looks like a keyboard to me. Do the two terms mean something different? Also, in today’s world, do these need to exist in the same form? That is, can’t all these sounds simply be digitally produced rather than relying on circuitry? Why aren’t all new keyboards (or those other things DJs have on stage) simply software, maybe with a custom input device for easier live use?

ajuc wrote at 2021-11-28 22:20:13:

The synthesizer is the part that creates the sound; the keyboard is the part that tells the synthesizer what note to play, when, for how long, and possibly at what volume. Sometimes people call the whole thing a keyboard because it's the most distinctive part, but you can use the same keyboard connected to different synthesizers, and synthesizers can exist without keyboards.

What's important about synthesizers is that, for the first time in history, they allowed musicians to control the "character" of sound gradually in new dimensions (the parameters that let you distinguish the same note played on piano, flute, guitar, violin, etc.), creating sounds that were impossible previously and even changing the character of the sound in real time as another dimension of artistic expression. It's like you played a long note on a violin and it morphed slowly over time into a flute and then some instrument that doesn't exist. You couldn't do that before, and all these new possibilities and constraints changed music.

Also the particular UI of some synthesizers allowed easy exploration of these new dimensions and that's important too. It's one thing to be able to play any waveform you want (you can do that by editing .wav file in hex editor), it's another to have several knobs and sliders and hear the differences in real-time when you tinker with them.

We can simulate all of this in software but not 100% perfectly.

ssalazar wrote at 2021-11-28 22:16:27:

> That is, can’t all these sounds simply be digitally produced rather than relying on circuitry?

Even for an almost fully digital synth like the DX7 a bit-accurate emulation is difficult (there is a rich discussion of this in one of the other threads here).

For analog synthesizers, there is a lot of character and nuance in the individual circuits that are difficult to capture in a digital model, across all possible configurations.

In the mix of a full song, though, it's not often that people can notice the difference.

Aliasing--the introduction of extra unwanted and generally non-harmonic frequencies--is also really hard to avoid in digital systems.
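A toy Python sketch of the folding arithmetic (assuming a 44.1 kHz sample rate): any partial above the Nyquist frequency reflects back down to a generally non-harmonic frequency:

```python
SAMPLE_RATE = 44100
NYQUIST = SAMPLE_RATE / 2

def alias_of(freq):
    """Frequency (Hz) a partial actually lands on after sampling."""
    f = freq % SAMPLE_RATE
    return f if f <= NYQUIST else SAMPLE_RATE - f

# Harmonics of a naive 3 kHz sawtooth: the 8th partial (24 kHz) is
# above Nyquist (22.05 kHz) and folds down to an inharmonic 20.1 kHz.
partials = [alias_of(3000 * n) for n in range(1, 10)]
```

This is why digital oscillators need band-limited waveforms or oversampling, whereas an analog oscillator's harmonics just roll off naturally.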

For more esoteric instruments like Eurorack modules or Moog modular synthesizers, the physical interface is an integral part of the instrument--software versions of these exist but are obviously very different to interact with using a mouse or touchscreen.

> Why aren’t all new keyboards (or those other things DJs have on stage) simply software, maybe with a custom input device for easier live use?

For live use this is _very_ common nowadays, though physical single purpose keyboards (even fully digital ones) still are used widely.

In a fast moving stage show you may not want to mess around with the complexity of a full personal computer setup.

In modern operating systems, even with 100s of processes competing for CPU time, real-time audio may rarely drop out (the result of which is audible clicks and pops), but even this is too risky for a big professional live show.

adgjlsfhk1 wrote at 2021-11-29 02:27:25:

I honestly don't understand why none of the main consumer operating systems are optimized for low latency. Outside of servers, I feel like I would gladly take an OS that made my computer 2x slower, but never had a pause of more than half a ms or so.

thulle wrote at 2021-11-30 10:39:49:

A while after OpenZFS implemented ZSTD compression I started using it for backups, opting for the highest level of compression and scheduling it very early in the morning so it'd run when I wasn't at the PC.

Occasionally I'd be awake around then anyway, and I would experience 5-30 second UI freezes.

After fiddling around a bit I decided to try the MuQSS CPU scheduler and was blown away. Not only was the scheduled backup unnoticeable, on top of that I could throw a 16-thread compilation job on my 8-core/16-thread CPU and I'd not notice the system being under load. I might've lost a few frames per second when occasionally playing some FPS games, but possibly lowered throughput be damned, this was bliss.

Unfortunately it seems to not be updated anymore. So I tried the newcomers on the block, BMQ and PDS. With BMQ I would start dropping iSCSI connections since my PC wouldn't manage to reply to a ping in five seconds. PDS fit better. While MuQSS could handle a load of 40-45 before I started noticing UI-latency, PDS tops off at around 20. And when it starts to stutter, it's worse than with MuQSS, but since it's not that often that I'm at those loads I'm quite OK with the current situation.

All this with untuned schedulers, they might behave very differently with some tuning.

jerrysievert wrote at 2021-11-29 03:30:24:

half a ms is still 2 samples missed:

44000 / 1sec / 1000ms

hopefully we have better latency than that!

jacquesm wrote at 2021-11-29 04:21:57:

Half a millisecond is about 22 samples missed: 44000 * .0005 = 22.

But that's not what he meant, he meant that non-real time (soft real time, not even hard real time) OSs pause pretty frequently for relatively long periods. You might not miss any samples at all due to hardware buffering, but the effect on UI is tremendous.

jacquesm wrote at 2021-11-29 04:20:08:

Count me in.

jerrysievert wrote at 2021-11-29 03:27:49:

> Why aren’t all new keyboards (or those other things DJs have on stage) simply software, maybe with a custom input device for easier live use?

there are some sounds that rely on certain electronic oddities from weird components that cannot easily be modeled in software. some Russian transistors that are used for the Polivoks filters are some that I can think of off the top of my head.

personally, I've built a ton of software simulations of hardware equipment, but I still have a tx81z sitting around, and a rack full of eurorack gear. there are just some things I can't get quite right, and some weird things that I doubt I could ever reproduce[1].

we can't accurately model a SID chip in software or an FPGA due to its weird quirkiness under electrical load; some of the less sane chips are even harder. that's why there's such a demand for weird things that use older chips[2].

1.

https://www.nonlinearcircuits.com/modules/p/brain-custard

2.

https://www.youtube.com/watch?v=NrZZiTpK3f0&t=112s

recursive wrote at 2021-11-28 22:01:30:

The synthesizer is the part that makes the waveform. Some of them have keyboards attached. Some don't. Some performers do indeed use a MIDI controller and VST plugins on a laptop.

The main problem with this is that software runs on computers, and sufficiently portable computers are generally just not reliable enough for a lot of live performances.

klodolph wrote at 2021-11-28 22:28:40:

> Can someone explain what a synthesizer is to a non musician?

Keyboard = input device, usually generates MIDI. Synthesizer = creates sounds from scratch, usually generates audio from MIDI.

Some keyboards are not synthesizers (sometimes called a "master keyboard") and you have to plug them into something in order to get sound. Some synthesizers are not keyboards, and you have to plug something into them to control them. For example, the DX7 is both a synthesizer and a keyboard. The TX-802 is a synthesizer but not a keyboard... it is kind of like two DX7s in a 2U rack-mount unit without a keyboard. The Akai MPK249 is a keyboard but not a synthesizer. You can buy a TX-802 and an Akai MPK249 and plug them into each other, and it's kind of like having a DX7.

> Also, in today’s world, do these need to exist in the same form? That is, can’t all these sounds simply be digitally produced rather than relying on circuitry?

Circuit emulation has varying degrees of accuracy. I like to think of digital synthesizers as computers that don't ever need software updates, and are therefore more reliable than software running on a computer. They also often have purpose-built UIs (knobs, buttons, sliders) which are critical to some people using them.

If you are going to make a custom input device, why not just make the custom input device and the synthesizer one single package? This is called a "sound module" -- something that makes sounds but does not have a keyboard attached. They come in both rack-mount and desktop versions.

Bxbo wrote at 2021-11-29 04:28:55:

Fourier showed 200 years ago that you can create a curve of any shape you can imagine by adding sine waves. The synth applies that to music and speech.

But the concept is applied in hundreds of fields as you move from curves, to planes, to volumes.

The key magic is the sine wave, which, unlike other curves, always produces a sine wave on differentiation any number of times, due to its deep connection to anything in nature that repeats.

tasty_freeze wrote at 2021-11-29 04:45:48:

Sine waves are convenient, but aren't at all necessary for a Fourier-style transformation. The requirements are that you have a set of functions which are all pairwise orthogonal and which form a complete basis set.

At least, that is what I recall from my EE classes 35 years ago.
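As a textbook illustration of the Fourier point (not tied to any particular synth), summing odd harmonics at 1/n amplitude converges toward a square wave:

```python
import math

def square_partial_sum(t, fundamental, n_partials):
    """Partial Fourier series of a square wave: odd harmonics at 1/n."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * fundamental * (2 * k - 1) * t) / (2 * k - 1)
        for k in range(1, n_partials + 1)
    )

# With more partials, the sum flattens toward +/-1 over most of the
# cycle; t = 0.1 lies in the first (positive) half of a 1 Hz square.
x = square_partial_sum(0.1, 1.0, 50)
```

Early additive machines worked exactly this way, one oscillator per partial, which is why the operator-reuse tricks in FM hardware were such a big cost win.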

jacquesm wrote at 2021-11-28 21:40:26:

A synthesizer is a device that creates waveforms of a musical nature that do not have a natural equivalent.

throwawaysea wrote at 2021-11-28 22:15:09:

Is it not possible to have a fully digital (built in software) synthesizer?

klodolph wrote at 2021-11-28 22:43:25:

It is definitely possible... it is also easy to end up with software synthesizers that use up large chunks of your available CPU power, or even synthesizers that require more CPU than you have available. It is also easy to end up with a beloved software synthesizer that stops working because you updated your computer's OS. I have sound modules from the 1980s that still work exactly as they did almost 40 years ago. I can't say that about software I used in the 1980s.

A lot of modern synthesizers or sound modules are basically just software running on CPUs, DSPs, or even FPGAs.

Gigachad wrote at 2021-11-29 00:23:53:

That is what is used for modern music production in software. It's a somewhat recent development, and with too many in use at once you may have to prerender/cache the sound, as your computer may not be able to handle it all live.

But previously it was all analogue electronics. Which are extremely difficult to perfectly replicate in software and even models of the same synth would sound slightly different due to manufacturing differences/defects.

jacquesm wrote at 2021-11-28 22:44:05:

Sure, but that's still a device.

airstrike wrote at 2021-11-28 23:53:04:

Since the article didn't appear to include any samples, I resorted to Goodolegoogle and found this:

https://www.youtube.com/watch?v=U7Xnr0m-elw