💾 Archived View for dioskouroi.xyz › thread › 29368549 captured on 2021-11-30 at 20:18:30. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
________________________________________________________________________________
This looks like a cool project and I look forward to trying it later.
General comment: one reason that the R package ecosystem often feels creaky and brittle, I think, is that a lot of packages (like this one) are pretty far afield from R’s original intended uses of statistics and visualization. Here, we see the use of integers to denote pitches, strung together as a vector to generate a sequence. That’s not precisely a misuse, but it’s definitely misaligned with the original vision, and building on top of that misaligned infrastructure is a potential vector for package breakage and general irreproducibility.
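For readers who don't use R, the pattern being described might look roughly like this minimal sketch — purely illustrative, not this package's actual API — with MIDI-style integer note numbers held in a plain vector:

```r
# Hypothetical illustration (not the package's real interface):
# MIDI-style integer pitches in a plain R vector, transposed to
# extend the sequence.
motif <- c(60, 64, 67)            # C4, E4, G4 as MIDI note numbers
sequence <- c(motif, motif + 2)   # repeat the motif a whole tone up
sequence
#> [1] 60 64 67 62 66 69
```

Vector arithmetic like `motif + 2` is idiomatic R, which is arguably why this kind of "off-label" use is tempting in the first place.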
R’s package ecosystem is creaky and brittle even for statistics and visualization. I don’t think “misaligned with the original vision” is really relevant. Like, it has nothing to do with someone making a package to work with music in R.
Not sure how any of this is really relevant here. This is a general R gripe.
> R’s package ecosystem is creaky and brittle even for statistics and visualization
Explain what you mean by this. It’s a big claim without any supporting evidence.
Sure, I thought this was a common experience, and I’d love to hear how you get around it.
The tidyverse is absolutely full of breaking changes. Two big ones off the top of my head: dropping underscore functions like `mutate_`, and turning common column names into new function names. So in different organizations you might see two patterns:
* Monolith - everyone uses a set of R packages from 2017 or whatever (frozen in time either through MRAN or Nexus)
* Wild dockerization - one team’s internal tooling is incompatible with the other team’s, so working across teams you gotta rely on Docker a lot
Why? Cause there’s not really a good way to pin versions besides shrinkwrapping, at least in my experience. We tried Drake, but Drake itself had so many breaking changes that it gained a really bad rep (from what I hear from colleagues, haven’t used it myself). Tried renv but it was destroying some of our analysts’ setups (like, you just get a bomb icon in RStudio). Didn’t troubleshoot it too much, kinda just gave up.
I don’t like forcing a specific vintage of packages, but I like everything else less.
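For reference, the "frozen in time" monolith approach above usually comes down to pointing R at a dated snapshot repository. A minimal sketch, assuming an MRAN-style snapshot URL (the date is arbitrary — pick the vintage your code was written against):

```r
# Pin the whole CRAN universe to one day: every install.packages()
# call then resolves to the versions current on that date.
# The snapshot date below is illustrative.
options(repos = c(CRAN = "https://mran.microsoft.com/snapshot/2017-06-01"))
install.packages("dplyr")  # installs the dplyr version of 2017-06-01
```

Baking that `options()` line into a Docker image's `Rprofile.site` is how the "Monolith" pattern is typically enforced organization-wide.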
For ad-hoc data science work R is always the first (and last) tool I reach for. It's just so fast and reliable to develop in, and with tidyverse, such a joy!! When working on python projects, I've seen _very_ experienced python people (gold for the python tag on stack overflow) get stuck on some frustrating and mundane dependency installation issue and spend an entire day with attempt after attempt to install and get on with their analysis.
By contrast, R has the occasional but rare installation issue, and almost all the common ones are solvable in a few minutes with a re-install.
I think for large software projects where dependencies could be an issue, especially long term, R might not be the best choice, for now anyway.
I feel some of your pain on the tidyverse front, but for that reason I stick mostly with fairly common functions and avoid exotic new ones. Even if I do venture into new ones, I comment them so I know precisely what's going on, so in the event of a refactor a few years down the track I'll be fairly safe. With R there's usually 10 ways to do the same thing, so you're never completely stuck; you'll just get an error which typically has some sensible way to handle it.
To spin a positive light on the deprecations, most are replaced with something nicer, so although you have the pain of updating old code, it _forces_ you to be aware of the new approach, which is a silver lining of sorts.
R is my go-to for ad-hoc data science as well. And you’re right, deprecations are generally replaced with something nicer. From the perspective of a language used for ad-hoc analysis, these are all great features. Hadn’t thought of it that way.
Which further emphasizes the idea of: R might not be the right tool for production processes. I used to fight that concept, but in the past year or so I’ve come to embrace it.
It's not that difficult to fix package versions for any prod deployment, as long as you deploy the projects as R packages. You can fix the versions in the `DESCRIPTION` files, which are easy to manage through, e.g., Docker and `devtools::install_version`. Essentially, not a huge difference from the `requirements.txt` that we're used to.
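A minimal sketch of that setup (package name and version numbers are illustrative): a constraint like `dplyr (== 0.7.4)` under `Imports:` in the `DESCRIPTION` file, enforced at build time with something like:

```r
# In the Docker build step, install the exact pinned version before
# installing the analysis package itself. Versions here are examples.
devtools::install_version("dplyr",
                          version = "0.7.4",
                          repos = "https://cran.r-project.org")
```

The exact-version constraint in `DESCRIPTION` plus `install_version` together play roughly the role that `pip install -r requirements.txt` plays in Python.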
I think the breaking packages you mention are not exclusive to R; that's just the result of community-driven, open software development. You have to take care of reproducibility in any programming environment, and it doesn't apply to the entire package ecosystem. The tidyverse is especially notable here, since it is very fast paced and constantly evolving, breaking a lot of old code.
Many popular packages like data.table rarely break after years of version upgrades. But relying on that in any kind of programming language is asking for trouble...
We've been using renv for about a year at our workplace and haven't had any major issues. Maybe try packrat? Or try conda with R. Otherwise using Docker should eliminate most reproducibility issues.
Breaking changes in packages are not exclusive to R, you’re right. But dependency issues kinda are?
Let’s say two competent analysts wrote some code 5 years ago and you need to reproduce it today, one in R and the other in Python. The Python code probably includes a `pip freeze` or some equivalent, but the author of the R code had no equivalent. There’s no good agreed upon solution. And I don’t think I could reasonably fault that R analyst.
Packrat seems to be falling out of favor, I should read more about conda with R.
(The way I’d solve this problem in R is with an MRAN snapshot from 5 years back in a Docker image, but that’s basically accepting I won’t be able to integrate it with more recently written R)
I think renv (rather than packrat) is the current go-to for package management, although I haven't used it yet.
Ok, this is not what I thought you meant. I don't know how anyone can solve this problem in a constantly growing and evolving open-source ecosystem though.
Well the very first step would be: native R support for installing specific package versions, and sharing that with others. There are too many third-party solutions that come and go over the years. And there’s no guarantee that what you select today will be understood, or work, tomorrow.
There is `renv` for that
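For anyone unfamiliar, the usual renv workflow is roughly the following (a sketch; see renv's own documentation for the details):

```r
renv::init()      # create a project-local library and an renv.lock file
renv::snapshot()  # record the exact package versions currently in use
# commit renv.lock; a colleague (or future you) then runs:
renv::restore()   # reinstall exactly the recorded versions
```

The `renv.lock` file committed alongside the code is the closest R currently gets to a shared `pip freeze` output.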
Which is a third party solution. A few years ago you may have said packrat. A few years from now you may say some other thing.
The idea behind a native R solution is that it is far less likely to be deprecated.
Isn't this fundamentally describing an arpeggiator?
Using Reaper I made a simple arpeggiator extension: it takes input from one track for the chords and from another track for the notes, and it maps the notes from the notes track to the available notes from the chords track. It's like a "snap to scale" but it snaps to the current chord.
It's much better than a normal arpeggiator plugin IMHO because you don't have to program patterns in the UI of a plugin; you can do anything in the notes track and it will always work.
The limitation, as with any arpeggiator, is that it can't play notes outside of the current chord (but it's possible to have "chords" with many notes on the chords track to allow more freedom).
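The core mapping can be sketched in a few lines of R (keeping with the thread's language; the function name and note numbers are illustrative, not the actual Reaper extension code), assuming MIDI pitch numbers: each incoming note snaps to the nearest pitch whose pitch class belongs to the current chord.

```r
# "Snap to chord": move each note to the closest pitch whose pitch
# class (note mod 12) belongs to the current chord.
snap_to_chord <- function(notes, chord) {
  chord_classes <- chord %% 12
  vapply(notes, function(n) {
    candidates <- n + (-11:11)                            # +/- 11 semitones
    candidates <- candidates[candidates %% 12 %in% chord_classes]
    candidates[which.min(abs(candidates - n))]            # nearest chord tone
  }, numeric(1))
}

# C major triad (C, E, G): off-chord notes get pulled onto chord tones
snap_to_chord(c(60, 61, 62, 65, 66), chord = c(60, 64, 67))
#> [1] 60 60 60 64 67
```

Ties (a note equidistant from two chord tones) resolve downward here because the candidates are scanned in ascending order; a real implementation would make that a musical choice.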
One way that I use it is I write a chord progression, then I play over it using the extension, and then I adjust the recording by moving notes to more interesting positions. This song was made this way:
https://open.spotify.com/track/5TxVfIf9JUAhCEL3O5cWXT
(It's not perfect and the sound is kind of bad, but it's a start).
Edit: your stuff is beautiful btw.
Less than an arpeggiator, I think it's closer to 1/f or fractal iteration.
I've spent the last few days working out a similar hypothesis using a "fractal sequencer" module for my eurorack rig. (
https://www.qubitelectronix.com/shop/bloom
)
The essential idea is that it takes your sequence of notes, creates a bunch of branches that are essentially folds, and then iterates through them. The track linked is two sequences iterating the 1st, 3rd, 6th, and 8th notes of a scale across two different oscillators, creating a kind of bleep-bloopy counterpoint in the background, plus a manual three-note lead with some drum breaks and drops to tie it together. I wanted to see how different fractal randomness sounded from intentional complexity. (
https://soundcloud.com/n-gram-music/beatrice
) It still sounds random-ish because it plods, where a real composer would add intention, emphasis, and ornamentation, but it's still within the domain of the clearly musical.
I'd say it's based on minimalism and Arvo Pärt's tintinnabuli method, but I think one should be an actual serious musician before making lofty claims like that (I am not). However, this idea of whether there is a big difference between 1/f fractal complexity in a consonant set of notes and actual human _intent_ is interesting. I suspect Bach (and before him, Vivaldi) had an underlying system based on fractal iterations of each theme, which is what sent me down this (rather nutty) path, but the journey continues to amuse.
I've been doing one-hour time limit song writing challenges and in those I need to write accompaniment fast. I gotta say, this generation is a really close reduction of what I actually do -- come up with some accompaniment figure over a chord or two, then transpose it and tweak chord tones for the next measure. Maybe add a part or do something different if it's a section change. IMO this post is a great example of how to mix variation and repetition to balance interest and familiarity.
Even if the generation itself has limitations it seems like a great springboard for manual touches like dissonance tweaking, or new sections, or fills / transitions -- arguably the cool creative part after all anyway! There's a lot of repetition in music, and when writing, it's hard to decouple it from points of interest. A workflow that enables such is very exciting.
(Taking this opportunity to share my multiplayer piano roll editor:
https://yuxshao.github.io/ptcollab/
)
I’ve been working on an IDE for music composition. Launching soon.
Will it have a tracker interface?
No, an advanced piano roll.
Cool! Machine learning analysis of music seems to largely be at the audio or midi level, and not really at the symbolic level. What kind of learning techniques would be applicable to structures like musical meter?
MIDI is semi-symbolic.
It's still on the physical time scale and presents music as a linear sequence. I believe symbolic music representation is more of a graph than a list.
MIDI happens at the note level, and the controller level for parameter management. MIDI has no concept of chords or chord languages.
system('vlc Lorde-Melodrama.avi')