________________________________________________________________________________
If anyone wants to look through Chris Lattner's original Swift Concurrency Manifesto, he goes into the bigger picture there:
https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9...
Can anyone fill me in on how this compares to Java's Project Loom approach, where they have said a hard no to coloring functions? What are the advantages and disadvantages of async/await versus introducing a green-thread-like approach to these problems?
Is async/await more suitable for user interfaces?
In Loom we can make some trade offs around our knowledge of the Java stack and the standard library.
1. We know that no JVM frame contains a raw pointer to anywhere else in the stack.
2. We know which stack frames are JVM frames, and which are native frames.
3. We know that there are unlikely to be native frames in the portion of the stack between the point where we yield control and the point we want to yield to.
4. We can change the standard library to check if we are in our new lightweight thread or not and act appropriately.
Knowing these, we can avoid function coloring, push the complex management of green threads into the standard library, reuse existing task-scheduler code to manage these virtual threads, and make thread-local variables and the like work seamlessly with this new concurrency abstraction.
This puts us in a very different design space. We can move a portion of the stack to the heap when a thread is unmounted, and we can change things about the GC and other internal systems to make them aware of this new thing that can be in the heap.
This type of approach would be much harder in a language like Swift that tends to coexist with C and other languages that use raw pointers or a non-moving GC, so I think the question is not which is the better approach but which is the better approach within your language ecosystem.
If Loom ever comes out :p. At this point I am starting to wonder whether this is a case of "the right thing" against "worse is better", and maybe we will just never see the light of Loom.
You can download Loom and use it right now. I am pretty sure it will one day be released.
Implementing async/await is only a compiler change, so it can be implemented rather easily on top of almost any language: the compiler transforms the code into a state machine.
Coroutines (Java Loom's, Go's, Scheme's, etc.) require being able to serialize and deserialize part of the stack (moving the stack onto the heap and vice versa), so they are hard or impossible to implement with unmanaged runtimes like those of C or Objective-C.
In C, you can take the address of (a pointer to) a value on the stack (using &), but with a coroutine mechanism, the addresses of parts of the stack are not constant.
If you have a managed runtime, you rewrite those kinds of pointers when you copy parts of the stack back and forth.
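To make the state-machine point concrete, here is a minimal Swift sketch; fetchScore, loadData, and parse are made-up names, and this only mirrors the spirit of the transformation, not the compiler's actual output.

    import Foundation

    // Hypothetical helpers, for illustration only.
    func parse(_ data: Data) -> Int { data.count }
    func loadData(for player: String) async -> Data { Data(player.utf8) }
    func loadData(for player: String, completion: @escaping (Data) -> Void) {
        completion(Data(player.utf8))
    }

    // What the programmer writes:
    func fetchScore(for player: String) async -> Int {
        let data = await loadData(for: player)   // suspension point
        return parse(data)                       // runs after resumption
    }

    // Roughly what the state-machine transform amounts to: the body is split
    // at the `await`, and everything after the suspension point becomes a
    // continuation that runs once the awaited work finishes.
    func fetchScoreSplit(for player: String, completion: @escaping (Int) -> Void) {
        loadData(for: player) { data in    // "state 0": up to the suspension point
            completion(parse(data))        // "state 1": the code after the await
        }
    }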
With one stack per coroutine you do not have to move stack frames. It has different tradeoffs of course.
async/await is more flexible in some sense. If you run the tasks on a thread pool, it's mostly equivalent to green threads.
But if you run the tasks on a single thread, you get cooperative multi-tasking. You can access mutable shared state without using locks, as long as you don't use "await" while the shared state is in an inconsistent state.
For user interfaces this is a huge advantage: you can run all your code on the UI thread, but the UI stays responsive while awaiting tasks.
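A minimal Swift sketch of that pattern, assuming the main actor plays the role of the UI thread; ScoreboardModel and fetchScores are made-up names.

    // Everything below runs on the main actor (the UI thread), so `scores`
    // can be mutated without locks; the thread is only given up at explicit
    // `await` points.
    @MainActor
    final class ScoreboardModel {
        private var scores: [String: Int] = [:]   // mutable shared state, no lock

        func refresh() async {
            scores["status"] = 0                   // safe: nothing can interleave here

            // Anything else can run on the main actor while we are suspended,
            // so the state should be consistent before this await.
            let fresh = await fetchScores()

            scores = fresh                         // back on the main actor
        }

        // Hypothetical async work, e.g. a network call off the main thread.
        private func fetchScores() async -> [String: Int] {
            await Task.yield()                     // stand-in for real async work
            return ["alice": 3, "bob": 5]
        }
    }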
Also, here's an interesting use of async/await: software hyperthreading:
https://isocpp.org/blog/2019/09/cppcon-2018-nano-coroutines-...
Loom's virtual threads allow you to do the same with a pluggable scheduler.
>For user interfaces this is huge advantage: you can run all your code on the UI thread; but the UI stays responsive while awaiting tasks.
... assuming any other tasks/coroutines running on the UI thread are being cooperative and not doing bad things like waiting on synchronous functions or otherwise hogging the UI thread too much. Done right, it's a big performance and maintainability win over multithreading, but when done poorly, it can result in large variance in latency/responsiveness of any individual task.
async/await is really just another formulation of linked stack frames, where control is explicitly yielded. At the end of the day, the thread stack state needs to be reified somehow, and all of these approaches are equivalent in power. The only difference is that now you have a type system that delineates between functions that can and cannot be shuffled between kernel threads.
The benefits that Go (and potentially Loom) provide are with the scheduler. When Go code calls into other Go code, it's fast because preemption points can be inserted by the compiler. The goroutine is parked when it is blocked (on I/O or some foreign function), and in the slow path this logic is executed on a new kernel thread.
Although the Go approach involves more overhead in the slow path, wrangling blocking code to work with the scheduler has cross-cutting implications for library design. I.e., I don't have to worry that a library I import that does file I/O will pin my goroutine to a blocked thread by using a blocking syscall.
Does Swift's concurrency plan include thread-safety guarantees? I would like to see a higher-level language than Rust that includes a similar thread-safety guarantee: "Thread safety isn't just documentation; it's law".
Yes, that’s _The Second Phase._
Oh great, so eliminating data races effectively means it's also thread-safe.
I’m so excited to see them embracing actors. I don’t use Swift and likely won’t in the future but I really hope this decision bleeds over into other mainstream languages.
If all private state is managed by a serial queue, doesn't it mean you can easily deadlock yourself? Will Swift statically reject deadlocks on reentrance?
Queues are a dynamic concept; which queue you’re on is a runtime property. For this reason you may be familiar with assertions to check if a function is running on some queue.
Actors however are a static concept, we know at compile time which actor is local, and if we have the right one active. So the check about whether you’re on right actor happens at compile time.
You can think of it as if queues are part of the type system, and the compiler can work out statically what queue is used by any code, and so it can label an entire call tree’s queues by control flow analysis.
Because of this you wouldn’t dispatch onto the same queue twice and deadlock. Rather, the compiler would see that the correct actor is local already in the call tree and there’s nothing to do so the dispatch is elided. It would only “switch queues” if it needs a new one.
Besides the deadlock issue, the other advantage of this is it gets optimized out if you have several calls with the same actor.
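A small Swift sketch of the difference; BankAccount is a made-up actor, and the comments describe the model as pitched, not compiler internals.

    actor BankAccount {
        private var balance = 0

        func deposit(_ amount: Int) {
            balance += amount
        }

        func applyBonus() {
            // Already on this actor: a plain synchronous call, no re-dispatch
            // onto the same "queue", so there is nothing here that could deadlock.
            deposit(10)
        }
    }

    func payday(_ account: BankAccount) async {
        // From outside the actor the compiler requires `await`: a hop onto the
        // actor's executor may (or may not) be needed, and the compiler/runtime
        // decides that, rather than the caller dispatching manually.
        await account.deposit(100)
        await account.applyBonus()
    }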
Yikes. Swift is becoming horribly complex and opaque. Yet another set of @shoehornedKeywords... Colored functions...
Let me ramble a bit here.
Apple’s platforms had always put users before developers; that’s why they were so successful. They’ve built the best user experience by far using nothing but a simple Smalltalk-esque language from the 80s. Look at their frameworks: CoreAudio, CoreAnimation, UIKit... Tremendously powerful, and the best in their league - _in terms of possibilities, not necessarily developer experience!_ Others were shoving garbage-collected virtual machines in their devices, piling abstraction over abstraction, “fluent APIs” (remember those?); meanwhile Apple built an empire with this quirky dynamic language (plus some C++ under the hood), where you had to do manual reference counting as late as 2012! Developers were _livid_. “How am I supposed to program in this weird language?” “AutoLayout? Why can’t it be just like CSS or something??”
But Apple didn’t give a shit, they knew their platform, their frameworks were the best, period. And whiny devs had to adapt to gain access to the richest cohort of users in the computing landscape.
With Swift and SwiftUI we seem to be heading towards a different future, and I’m yet to be convinced that it’s the right one.
Glad to see a mention of CoreAudio for once in a public forum.
No matter the value for money or compatibility, I will never leave macOS for Windows because of it. I bought a hugely powerful Windows PC in 2018 and the audio performance is awful compared to a 2011 MacBook, and my 2019 MacBook is on another level.
Windows' lack of attention to audio drivers is frankly staggering. CoreAudio 'just works'.
AutoLayout vs CSS is just a trade-off over which layout algorithm you want. Constraint-solver layout systems like AutoLayout's tend to be more computationally expensive than flexbox, actually, but more powerful. Yes, the original programming API was horrible, but that was a quirk of how Apple does API review, and they were mostly expecting you to use IB to do your AutoLayout; stuff like SnapKit put a sane API on top of it soon after.
TBH, Obj-C, once you looked past its ugly syntax, is a better developer experience than Swift today if you're making large-ish apps. Today the debugger is still buggy, build times are still way slower, and Swift codebases cause stutters and unresponsiveness inside Xcode to this day. And it was way worse back in the Swift v1-3 days.
TBH I think Swift could solve a lot of its build-speed issues if it removed type inference beyond a very basic set and brought back fine-grained, file-level importing like you have with C and Java. You'd still get 95% of the benefits, and the couple of features you'd lose are ones that most IDEs (like IntelliJ with Java) have shown to be NBD.
Stuff like Flutter and Kotlin Multiplatform might make a lot of this moot anyway, and Swift will be relegated to being an iOS compatibility layer and a nicer-looking C++.
Funny, I started with iOS development in 2008. I haven't done much recently, but laying things out with UIKit always felt pretty sensible to me. CSS, not so much. CSS was always the "WTF" of development. Maybe it's better now with Flexbox.
> With Swift and SwiftUI we seem to be heading towards a different future, and I’m yet to be convinced that it’s the right one.
I have some feelings myself, but they are badly formulated so I won't say them here...
But I'm curious: specifically, why do you feel this way?
> They’ve built the best user experience by far...
...in your opinion. In the opinions of people who manage IT for the vast, vast majority of businesses around the world, where most of the serious software users exist, that title belongs to Microsoft Windows.
Apple is so overtly anti-consumer that I just can’t take opinions like this seriously. And Swift is a trash language with a shitty developer experience just like Obj-c was before it. They couldn’t even get strings right. Nobody wants it outside of Apple’s little bubble.
> And whiny devs had to adapt to gain access to the richest cohort of users in the computing landscape
> With Swift and SwiftUI we seem to be heading towards a different future, and I’m yet to be convinced that it’s the right one.
The more things change, the more they stay the same...
What makes you think "They’ve built the best user experience by far"?
I can't even say "best" experience let alone "by far".
Apple is notoriously anti consumer on everything from hardware to software.
I don't know what you are talking about. They had a fun signup screen when you first boot your iPhone. I have no idea what you are talking about for macOS.
I think their primary reason for success was their advertising/marketing.
Could you imagine having a function with many arguments and trying to find the async?
internal func refreshPlayers(firstParameter: String, secondParameter: Int, thirdParameter: Float) async {
}
These small mistakes are starting to add up with Swift. They should really nip these things in the butt instead of piling on the inconsistencies. It's better to make bold decisions now, when you know they're right, than to change them 10 years from now when everyone is already used to them, which is what some older languages are dealing with now.
Why not just have it be async func refreshPlayers() { }
That's like saying: Could you imagine having a function with many arguments and trying to find the return type?
Swift already uses the space at the end of the function declaration for things like throws and generic constraints. I personally don't see an issue with where it is, other than that I also write a lot of JavaScript and the context switching between languages might take a couple of seconds.
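For comparison, a sketch of how those pieces line up: effect keywords such as async and throws sit between the parameter list and the return type, and where clauses come after that. The names here are made up.

    func refreshPlayers<S: Sequence>(
        from source: S,
        limit: Int
    ) async throws -> [String] where S.Element == String {
        await Task.yield()                 // placeholder for real async work
        return Array(source.prefix(limit))
    }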
There's no reason for "throws" to be there either; that's what I mean by these small mistakes adding up.
If there was an argument that actually made sense, I'd understand, but there is none.
This is the argument: https://forums.swift.org/t/swift-concurrency-roadmap/41611/9
The fact that you personally dislike something does not make it a mistake.
A fact sorely unappreciated by a lot of engineers.
Well, you can instantly tell it's an async func, and it reads like English. Why do you think this would be a mistake? I'm just looking for a decent reasoning.
Nip in the bud, not butt. It means to cut off the bud (i.e. flower bud) before it grows into something larger.
I meant what I said lol
I agree that it feels like the language maintainers are backed into corners and cannot correct old mistakes.
Which feels strange coming from Apple. Google showed how to handle this with Go: write a tool that updates code from version "x" to version "y" instead of being beholden to source-compatibility issues in situations like this.
Nip in the bud.
All languages end up with simple concurrency primitives such as async/await.
No one takes the next step and introduces the high-level primitives you actually need to work with actors and concurrency in a sane manner: monitors, messages, supervisor trees. Erlang has been around for thirty years, people.
FTA: _”This is a common pattern: a class with a private queue and some properties that should only be accessed on the queue. We replace this manual queue management with an actor class
[…]
Things to note about this example:
- Declaring a class to be an actor is similar to giving a class a private queue and synchronizing all access to its private state through that queue.
- Because this synchronization is now understood by the compiler, you cannot forget to use the queue to protect state: the compiler will ensure that you are running on the queue in the class's methods, and it will prevent you from accessing the state outside those methods.
- Because the compiler is responsible for doing this, it can be smarter about optimizing away synchronization, like when a method starts by calling an async function on a different actor.”_
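A rough Swift sketch of the before/after the article describes; the names are made up, the queue-based version mirrors the manual pattern, and the actor version is what the pitch proposes to replace it with.

    import Dispatch

    // Before: manual queue management. Nothing stops this class (or a future
    // edit to it) from touching `value` off the queue.
    final class CounterWithQueue {
        private let queue = DispatchQueue(label: "counter")
        private var value = 0

        func increment(completion: @escaping (Int) -> Void) {
            queue.async {
                self.value += 1
                completion(self.value)
            }
        }
    }

    // After: the actor makes the same isolation a compiler-checked rule.
    actor Counter {
        private var value = 0

        func increment() -> Int {
            value += 1
            return value
        }
    }

    func demo() async {
        let counter = Counter()
        let n = await counter.increment()   // `await` marks the possible hop
        print(n)
        // Accessing the counter's stored state directly from here would be a
        // compile-time error, not a forgotten dispatch.
    }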
The article also links to:
- https://forums.swift.org/t/concurrency-actors-actor-isolatio... (pitch for implementing actors)
- https://github.com/DougGregor/swift-evolution/blob/actors/pr..., which links to https://github.com/DougGregor/swift-evolution/blob/actors/pr... (proposal for implementing actors)
I don’t know Erlang well, but what’s missing?
Monitors, linking, supervision trees, vm-level introspection into the state of the actors, distribution primitives that make actor identity nonlocal across clusters, actor cancellation (like kill -KILL), graceful actor shutdown, sane id serialization (how easy is it for me to serialize an actor Id, put it on a kafka queue and have it come back in a response so I can route the response back to the actor) etc, etc, etc.
Additionally, if you go to Elixir, they've implemented async/await on top of the not-really-"actors" of the Erlang VM in its standard library, and it's super easy to use and understand. Arguably easier than the async/yield/await that exists in most languages that use async as a coroutine serialization primitive.
async/await are not primitives. Mutexes, semaphores, atomic counts etc... those are true primitives in multithreading and they have been around forever (since the 70s at least).
I feel the Swift language design is amateur hour at its best: trying to reinvent the wheel, but still ending up where it started, with worse overall usability. Just rearranging chairs. Eight years later, the Objective-C + GCD combo is still better at multithreading.
In comparison: Java had decent multithreading support since version 1.1 (one year after its release), it had NIO by JDK 1.4, and it had full modern multithreading by JDK 1.5.
Swift is a couple of years behind because it is trying too hard to be cool and different.
> worse overrall usability
You can definitely find iOS/macOS developers who prefer Objective-C, but they're in the minority. The vast majority would say that Swift is way, way more usable than Objective-C. Obj-C still has some advantages (dynamism being a big one), but for most tasks Swift makes development both easier and safer.
Coming from several years of writing Obj-C, Swift has certainly been a net benefit for me. Writing it well does require a bit of a shift in one's way of thinking (writing "Obj-C in Swift" is a recipe for pain) but I find that once that hurdle is cleared it's a very productive language to work with.
> async/await are not primitives. Mutexes, semaphores, atomic counts etc... those are true primitives in multithreading
async/await, a way to model concurrency, and mutexes/semaphores/etc., a way to safely share data, belong to separate categories, and one does not preclude the usage of the other, especially if your coroutines are allowed to run on different threads.
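A small Swift sketch of that separation; Stats and tally are made-up names. The task group decides what runs concurrently, while a plain lock protects the shared counter those tasks touch.

    import Foundation

    final class Stats: @unchecked Sendable {
        private let lock = NSLock()
        private var total = 0

        func add(_ n: Int) {
            lock.lock(); defer { lock.unlock() }
            total += n
        }

        var value: Int {
            lock.lock(); defer { lock.unlock() }
            return total
        }
    }

    func tally() async -> Int {
        let stats = Stats()
        // async/await (here, a task group) structures the concurrency...
        await withTaskGroup(of: Void.self) { group in
            for i in 1...100 {
                // ...while the lock inside Stats protects the shared data,
                // because these child tasks may run on different threads.
                group.addTask { stats.add(i) }
            }
        }
        return stats.value
    }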
I don’t think they meant one precludes the other, and “modeling concurrency” definitely is a “sharing data” problem. In other words, you would have to build a nicer concurrency abstraction out of a lower level primitive at some point.
Really?
https://docs.microsoft.com/en-us/dotnet/standard/parallel-pr...
https://dotnet.github.io/orleans/
https://microsoft.github.io/coyote/
OK, OP was somewhat remiss in saying "no-one". But you have, I think, missed his point.
Instead, substitute "few mainstream language designers" and it stands up. By mainstream I mean Java, Javascript/Typescript, C#, C, C++, Python and such. Most have introduced async/await. None has meaningfully gone beyond that as far as I'm aware. Erlang's concurrency model offers a refreshingly simple, consistent mental model compared to the mishmash of concurrency features provided by the mainstream. In Erlang, it's as simple as:
1. Do these things need to happen concurrently?
   No: regular functions.
   Yes: spawn regular functions.
Compare that to the mainstream:
1. Do these things need to run concurrently?
   No: regular functions.
   Yes: are there only a few, and/or do I need strong isolation?
      Yes: use OS-level processes.
      No: do I want the OS to take care of scheduling / preemption?
         Yes: use threads.
         No: use async/await.
            Is there a chance that my async operations will be scheduled across multiple OS threads?
            No: get a speed boost from no scheduling overhead, but remember to yield if there are any long-running actions.
            Yes: build my own likely-buggy, half-baked scheduler.
Oh, and as a bonus: run back up the entire call stack to make all functions that call mine async.
And that's before we get to error handling. I'd take Erlang supervision trees _every day_ over trying to figure out which nested async callback function generated an exception.
Most of my links are for .NET frameworks that went beyond async/await.
One of them (Orleans) is used to power Halo's backend.
Really. Only one of them is barely beyond experimental (Pony). The rest are experimental projects.
Today I learned that Halo is an experimental game.
Because it's not a language feature; it's an entire runtime / paradigm.
No idea why you're getting downvoted, but you're right. One of the reasons Erlang allows for monitors, supervision trees, and many other niceties is precisely that the VM is built that way: processes are isolated. Even if one process suddenly dies, the VM will take care of cleanup and will notify any monitoring processes, etc.