I've been writing multi-threaded code for a long time, and for much of that time I've suspected that the threads-and-mutexes model is too difficult for humans to reason about.
I'm currently trying to upskill from this pthreads view of the hardware to thread pools, work stealing, async/coroutines, all that good stuff. This is similar to the way I try to use higher-level abstractions like folds and maps instead of raw recursion. I first heard of concurrency primitives like this in the .NET framework, which I only briefly dabbled with; now they're integral to WinRT too, which personally impresses me more. I hadn't really grasped the implications of standard ISO C++20 supporting coroutines until now. Finally, the recent release of OCaml 5 offers yet another very similar way to use the many cores in my computer.
Many projects seem to be converging on a similar model:
OCaml is probably the simplest presentation of this.
I'm amused that coroutines are making such a comeback. I remember thinking in college that they were a curiosity from the 70s and 80s: nice and simple to reason about, but unlikely to be relevant. Wrong again.
A technology that I did think would help at that time was concurrent logic programming. That didn't work out; bits of it survive, like the Andorra model in Ciao Prolog, but even there it's all POSIX-like threads. And the last time I tried, I couldn't get the KLIC implementation of KL1 to compile; I think it's 32-bit only? Hypothetically, Prolog would make a good, productive competitor to neural networks writing code, but I don't expect that to happen.