A random suggestion for making nicer, more interesting music: the article takes its pitches from a table of equal-tempered note frequencies relative to A440:
> For example, here’s a familiar riff “E B D E D B A B”. It uses only 4 notes - D and E from one octave and A + B from the other, lower octave. If we look at the note frequency table, the pitches of those notes in octaves 5 and 4 would be 659.2Hz (E), 587.3Hz (D), 440Hz (A) and 493.8Hz (B).
Instead of equal-tempered pitches (which are generated by repeatedly multiplying by the 12th root of 2), use pitches that form whole-number ratios with respect to the root of your key.
A decent 12-note chromatic scale would be something like 1:1, 16:15, 9:8, 6:5, 5:4, 4:3, 45:32, 3:2, 8:5, 5:3, 9:5, 15:8, and 2:1. So, for instance, if your root is C = 256 Hz, you'd have 256 Hz for C, (256*16)/15 for C#, (256*9)/8 for D, and so on. This is a just intonation (JI) scale. An advantage of JI is that the musical intervals blend with each other better, whereas the advantage of equal temperament (ET) is that you can use any note as the root and change keys on the fly without causing problems. If you're doing electronic music, though, there's a lot less reason to stick with ET: you can always just multiply your whole tuning table by a constant if you want to switch key.
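To make that concrete, here's a minimal sketch in C (my own, not from the article; it assumes the C = 256 Hz root and the ratio table above) that prints the JI table next to its equal-tempered neighbors:

```
#include <stdio.h>
#include <math.h>

int main(void) {
    const double root = 256.0;  /* assumed root: C = 256 Hz */
    /* 5-limit just ratios for the chromatic scale, unison through octave */
    const int num[] = {1, 16, 9, 6, 5, 4, 45, 3, 8, 5, 9, 15, 2};
    const int den[] = {1, 15, 8, 5, 4, 3, 32, 2, 5, 3, 5,  8, 1};
    for (int i = 0; i <= 12; i++) {
        double ji = root * num[i] / den[i];     /* whole-number ratio */
        double et = root * pow(2.0, i / 12.0);  /* 12th root of 2 per step */
        printf("step %2d  JI %8.2f Hz  ET %8.2f Hz\n", i, ji, et);
    }
    return 0;
}
```

The two columns agree at the unison and the octave and drift by a handful of cents in between; switching key is just multiplying `root` by a constant, as noted above.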
JI scales can be made with any ratios: the above maps well to most conventional music, but there's no rule that you have to limit yourself to 12 steps per octave. Using the harmonic series is another option, or adding in ratios that have a 7 in them.
I've had lots of fun experimenting with JI, but I think you're greatly understating the difficulty of using a scale like that for conventional music! Basic chords (like ii) are horribly out of tune due to the syntonic comma, and while you can retune things as you go (like a singer or string player would), it takes a lot of finesse -- the IV ii V I progression is tricky, and if you're not careful you create a comma pump, transposing things down just a little bit by the end.
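To see the drift concretely (my own arithmetic, using the 5-limit ratios above): chase IV ii V I around with just intervals, starting from C = 1/1. F is 4/3; drop a just minor third (6/5) to the ii root, D = (4/3)/(6/5) = 10/9; rise a just fourth (4/3) to G = 40/27 rather than the expected 3/2; rise another fourth and you land on 160/81, which is 80/81 of the octave above your starting point. The final C comes out flat by exactly a syntonic comma, 81/80, about 21.5 cents.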
In practice, 5/4 major thirds are at the very low end of acceptable major thirds, with the Pythagorean 81/64 at the top. I find slightly sharper major thirds sound better generally [for western common practice harmony].
An interesting system, which I wish I'd had more time to experiment with before suggesting it, is 55EDO: each whole tone is broken into nine equal parts; a five-part interval is a major semitone and a four-part interval a minor semitone. The major scale is made from whole tones and two major semitones. A C# is slightly flatter than a Db in this system, and you only have to worry about enharmonics (old meaning: futzing with the small distances between similar pitches) in complicated chord progressions.
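(The arithmetic behind the name: five whole tones plus two major semitones gives 5*9 + 2*5 = 55 equal steps per octave, each 1200/55, roughly 21.8 cents. C# sits four steps above C while Db sits five, hence C# coming out flatter.)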
I'm not suggesting you skip JI, of course -- I'd recommend experimenting with it (including the suggested 7-limit JI as well)! Scales in the end can only be judged in the context of what you want to do with them -- they certainly all have tradeoffs.
Out of curiosity, how do you go about auditioning these systems in your own practice? Are you a performer playing an instrument by ear? Are you sequencing sounds in a DAW? Or are you tuning your sounds in a programming environment similar to the linked article?
I ask because I'm an electronic musician working in a DAW (Ableton mostly) and am trying to find the best workflow to start exploring these concepts. Ideally, there would be an interface for switching between tuning systems that's as easy as drawing in a time-signature change, but micro-tuning seems to be a low priority for most DAWs. The only one I know to even begin integrating alternate tuning systems is Logic, and those settings are tucked away deep in the preferences.
Does anyone here have a resource for starting to familiarize oneself with how to integrate alternate tunings into their composition? The simpler the better, I suppose. As soon as you set foot in the world of alternate tunings, you're usually flooded with 40+ of them. As someone who's worked in equal temperament my whole life, I would like to pick ONE system and really drill down on it until I get a grip on how to integrate these tunings into programs that are built for ET.
> comma pump
Well this is a fun rabbit hole!
https://en.xen.wiki/w/Comma_pump
Maybe a decade ago, I read a story here on HN where somebody was analyzing a singer's audio to detect how close their pitch was, coming to the conclusion that the root was spot on but other notes were slightly sharp or flat.
Someone in the comments pointed out that the analysis was done assuming equal temperament, and the singer was actually totally nailing just intonation.
It's one of those anecdotes that has stuck with me since as a reminder to try to not get tunnel vision on a specific metric.
This is a great article! I've been interested in music software and hardware for decades and my lockdown hobby has been finally spending the time to really dig into it. It is both very challenging and very rewarding.
If you want to go a little deeper than the article, here's a few more notes:
The way the author generates the sawtooth and square waves is "analytically". That means treating them like a function of time and simply calculating the waveform's position at each point in time. As you can see, it's really simple. Unfortunately, it also introduces a ton of aliasing and will sound pretty harsh, especially at higher pitches. If you're familiar with nearest-neighbor sampling in image resizing, think about how nasty and chunky the resulting image looks when you scale it down using nearest neighbor. Analytic waveforms do the equivalent thing for audio.
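To show what "analytically" means in code, here's a minimal sketch (mine, not the article's; it writes unsigned 8-bit samples at 8000 Hz, which happens to match aplay's raw defaults):

```
#include <stdio.h>
#include <math.h>

/* Naive "analytic" oscillator: evaluate the wave as a pure function of time.
   Simple, but it aliases badly at higher pitches. */
int main(void) {
    const double rate = 8000.0;  /* assumed sample rate */
    const double freq = 440.0;   /* A4 */
    for (int i = 0; i < 2 * 8000; i++) {
        double phase = fmod(freq * i / rate, 1.0);  /* position in the cycle, 0..1 */
        double saw = 2.0 * phase - 1.0;             /* sawtooth: ramp from -1 to 1 */
        /* a square would be: phase < 0.5 ? 1.0 : -1.0 */
        putchar((unsigned char)(127.5 + 127.5 * saw));
    }
    return 0;
}
```

Pipe it to aplay and sweep `freq` upward: the stray inharmonic partials from aliasing become obvious well before you reach the top of the keyboard.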
Fixing that problem is surprisingly hard. It's a lot like texture rendering in games where there's a bunch of filtering, tricks, and hacks you can do to make things look smooth without burning a ton of CPU cycles.
---
The clever trick the author uses to simulate strings is:
https://en.wikipedia.org/wiki/Karplus%E2%80%93Strong_string_...
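A bare-bones version of the trick, as I understand it (my sketch, not the author's code): fill a short delay line with noise, then keep averaging neighboring samples as the buffer recirculates. The buffer length sets the pitch, and the averaging acts as a gentle low-pass that makes the "string" ring and decay:

```
#include <stdio.h>
#include <stdlib.h>

/* Minimal Karplus-Strong pluck: noise burst + recirculating averaging filter. */
int main(void) {
    const int rate = 8000;          /* assumed sample rate */
    const int period = rate / 220;  /* delay length sets the pitch (~A3) */
    double buf[64];                 /* period is 36 here, so this fits */
    for (int i = 0; i < period; i++)
        buf[i] = rand() / (double)RAND_MAX * 2.0 - 1.0;  /* the "pluck" */
    for (int n = 0; n < rate * 2; n++) {  /* two seconds of output */
        int i = n % period;
        double out = buf[i];
        buf[i] = 0.5 * (buf[i] + buf[(i + 1) % period]);  /* low-pass + decay */
        putchar((unsigned char)(127.5 + 120.0 * out));
    }
    return 0;
}
```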
---
The low-pass filter they show is a simple first-order 6 dB/octave digital low-pass filter. Filtering is fundamental to electronic music. The majority of synthesizers you hear in electronic music use "subtractive synthesis", which means starting with the simple sawtooth and square waves introduced early in the article and using filters to tame their harsh overtones. I find the math behind filter design, especially the moving filters used for synths, really difficult, but also interesting.
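For reference, the whole first-order filter fits in one line of arithmetic. Here's a hedged sketch (my variable names; the coefficient formula is the standard one-pole approximation), using it to darken white noise:

```
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* One-pole (6 dB/octave) low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1]). */
int main(void) {
    const double rate = 8000.0;      /* assumed sample rate */
    const double cutoff_hz = 400.0;  /* try sweeping this over time */
    const double a = 1.0 - exp(-2.0 * M_PI * cutoff_hz / rate);
    double y = 0.0;
    for (int n = 0; n < 2 * 8000; n++) {
        double x = rand() / (double)RAND_MAX * 2.0 - 1.0;  /* white noise in */
        y += a * (x - y);                                  /* filtered output */
        putchar((unsigned char)(127.5 + 127.5 * y));
    }
    return 0;
}
```

Modulating `cutoff_hz` while notes play is exactly the "moving filter" sound that subtractive synths are known for.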
Small nitpick: CD quality is in fact 44.1 kHz. The Nyquist theorem essentially says you need a sampling rate twice as high as the highest frequency you want to record and reproduce; sampling at 44.1 kHz captures everything up to 22.05 kHz, comfortably above the roughly 20 kHz ceiling of human hearing. The article gets the gist of this right, just the CD-quality numbers wrong. iZotope has a really good article that goes into digital-to-analog conversion, sampling rates, and the history behind them.
https://www.izotope.com/en/learn/digital-audio-basics-sample...
Also, this
https://www.youtube.com/watch?v=cIQ9IXSUzuM
This somewhat related talk about functional composition (in Lisp) is also quite interesting:
https://www.youtube.com/watch?v=jyNqHsN3pEc
I think I've seen that one. I also like this presentation by Extempore's author [1]. It is pretty cool in that it shows how to model instruments out of simple waves and "attack, decay, sustain and release" (ADSR) envelope modeling. Pretty quickly he comes up with something that kind of starts sounding like real instruments (squinting a bit... :-).
[1]
https://www.youtube.com/watch?v=phYHOUICe7Q
A Sorensen lecture I haven't seen yet! Thank you!
Sorensen is low-key one of the most innovative minds in the scene. Extempore is such a blast to work with.
My first hacking, as it were, was loading the Apple ][ game Lemonade and then writing my own programs: Lemonade loaded a bit of code that played music, and once that magic code was loaded I could figure out how to play notes and durations, but not how to do the magic code myself. It took quite a while, a book on 6502 assembly, and my high school's Franklin Apple clones (Franklin had a built-in assembler in the ROM) to figure out how to make that speaker beep actually play notes. My first great hack was finally being able to write a program that played music on the other Bell & Howell Black Apple machines. Then I got an Amiga and playing musical notes became trivial. The article seems a bit bogus for not going all the way down to speaker-on/speaker-off and delay loops.
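For anyone curious what that bottom layer looks like, here's a rough modern stand-in (my sketch; byte output to aplay plays the role of the Apple's speaker softswitch): the output is only ever "on" or "off", the delay between flips sets the pitch, and the flip count sets the duration.

```
#include <stdio.h>

/* 1-bit "speaker" music: toggle, wait, repeat. */
int main(void) {
    const int rate = 8000;                    /* assumed sample rate */
    int level = 0;
    int freqs[] = {262, 294, 330, 349, 392};  /* roughly C D E F G, in Hz */
    for (int n = 0; n < 5; n++) {
        int half_period = rate / (2 * freqs[n]);  /* samples between flips */
        for (int f = 0; f < freqs[n]; f++) {      /* about half a second per note */
            level = 255 - level;                  /* speaker on / speaker off */
            for (int d = 0; d < half_period; d++)
                putchar(level);
        }
    }
    return 0;
}
```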
Back when I was a kid playing with a C64, it was the DATA section at the end of BASIC programs that blew my mind. If you programmed line 10 wrong with a typo or other syntax sin, the entire thing would fail. But if you had a typo in the DATA section, which was (what I now know as) hex, the program would not fail but glitch, typically in the sound/music of the game. I had no idea what hex was, but I did realize that it only used 0-9 and A-F. That made the confusion of 0 (zero) and O (upper-case 'oh') less of a problem during transcription. I never understood it enough to modify any of it in a useful manner, but it definitely helped a kid avoid hunt&peck typos.
That reminds me of the evolution of magazine code listings, which eventually came with per-line CRC values. If I remember correctly, there was a mod for Atari 800 BASIC where it would spit out the CRC after you typed each line of code.
That's actually where I was getting the BASIC code I transcribed. I don't remember exactly which magazine, but in my fuzzy memory, I think it was Byte.
The "code poem" at the beginning of the post reminds me of the "Bit Shift Variations in C-Minor" [1] by Robert S K Miles (chiptune music in 214 bytes of C; featured in the computerphile video "Code Golf & the Bitshift Variations").
[1]
http://txti.es/bitshiftvariationsincminor
Thank you! For weeks now I've been thinking about how handy it'd be if I could 'listen' to the output of a command. I had no idea it was as easy as piping it to aplay.
`tail -f output.txt | aplay`
I know this is a lame takeaway, but I'm just so happy to learn about this.
This is most decidedly _not_ lame! I'm sitting there looking at the code poem trying to figure out how the sound happens; it looks like it's just writing bytes to a file. I figured there was some driver or something, but "sound driver from file" is not super googleable.
_aplay_ was the missing link.
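For anyone else who wants to try it end to end, here's a classic minimal example (a well-known bytebeat one-liner from the demoscene, not the article's exact poem): every byte written to stdout becomes one sample under aplay's raw defaults (unsigned 8-bit, 8 kHz, mono).

```
#include <stdio.h>

/* bytebeat.c: each iteration emits one 8-bit sample. */
int main(void) {
    for (unsigned t = 0;; t++)
        putchar(t * ((t >> 12 | t >> 8) & 63 & t >> 4));
}
```

Build and listen with something like `cc bytebeat.c -o bytebeat && ./bytebeat | aplay`.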