Somewhat related is MirageOS, a library operating system for producing unikernel applications that run directly on a hypervisor such as Xen, so no conventional guest OS is necessary. And thanks to the memory safety of OCaml (given certain assumptions about programming style, because any sufficiently versatile language comes with footguns) you don't even really need the virtual memory system and other conveniences of a modern operating system.
From the FAQs:
Does this Work for My Mac M1 or M2?
No one has reported trying to run this on a Mac M1 or M2 yet but since these run ARM we don't feel that it is a good laptop to be using if you are deploying to X86 servers. At best you will experience slowness as the machines will need to emulate a different architecture. This isn't something we expect software updates to fix and both Docker and VMWare state the same thing. Even if you wish to deploy to ARM servers we don't feel that the M1s and M2s are going to be helpful as they are very different from most commodity ARM servers.
Erm... they're not Mac people then...
EDIT: I was probably being a bit grumpy - read the full replies below for further context...
We use Macs. :) I'm typing on one right now.
There are two large problems with Macs for shipping to x86 servers in the cloud (our main target):
* Different file format: ELF vs Mach-O - this is why many devs who use Macs rely on things like Docker or Vagrant.
* x86 vs ARM: we do support ARM to a degree right now, but the vast majority of our end users are deploying to x86 VMs.
The problem here, of course, is that ops produces machine images that are run on a hypervisor, so this works great on an x86 Mac (for dev/test) but is very slow on Apple silicon because of the translation involved.
Looks like one of our users has been playing around with it though so YMMV:
https://github.com/imarsman/dockerops
> I'm typing on one right now
Is it a Mac M2? :-)
> The problem here, of course, is that ops produces machine images that are run on a hypervisor, so this works great on an x86 Mac (for dev/test) but is very slow on Apple silicon because of the translation involved
But presumably most people will be doing this step in a CI system for deploying to production, right? I certainly don't build anything locally on my Mac that I then deploy to a production server. Nor do I expect anything I do build and run locally to run like it will on my production servers (in terms of performance).
I get that it might not be an _optimal_ experience to develop on an M1, but the wording of your FAQ is a little off-putting as it currently stands - it advises that ARM Macs are likely not suitable, yet says that no one has actually tried one... while simultaneously referring to a Mac chip (M2) that doesn't exist :-)
The website and project look great. I'm just giving you my honest first impressions!
Nope.
Appreciate the feedback. The community site could definitely use massive amounts of documentation and it hasn't been updated recently either.
I'd agree that anyone actually taking something to prod will be using a CI/CD server of some kind, but in terms of just monkeying around or using it as a dev laptop, the M1s don't have the same ergonomics.
We're not against M1s at all. If enough people want that support it can be added - whether that is native ARM builds or binary-translated x86 - it is mostly just a word of warning on expectations.
Ok cool, thanks for engaging and clarifying. You've undone my first impressions :-)
I may take a closer look!
Is that a bad thing?
I probably should have clarified:
There's no such thing as a "Mac M2" for a start. (Yet)
But I also found this paragraph a bit odd, because the machines I use locally never match the architecture of what I'm deploying to. In fact (in my view) it's largely irrelevant.
I develop on Macs and deploy to Linux and Windows servers, with a mixture of ARM/Intel. I don't quite understand this sentiment. Not to mention the fact that they start off by saying that no one has even tried it yet (presumably to run Nanos in a VM on an M1 Mac).
It just seems a bit uninformed and unnecessarily opinionated. As a Mac user it puts me off digging much deeper into the project. Maybe that's the wrong takeaway, but that was my first reaction.
A couple of other interesting articles from the nanos team by way of the /r/unikernel subreddit:
https://nanovms.com/dev/tutorials/debugging-nanos-unikernels...
https://nanovms.com/dev/tutorials/finding-memory-management-...
https://nanovms.com/dev/tutorials/profiling-and-tracing-nano...
It would be great if the demo picture of running the Node application included timestamps. The landing page keeps using "fast" to mean "bandwidth" without any mention of latency - my primary question is how long it takes to boot the kernel and start the userland process (i.e. cold-start time), but there's no mention of that.
We've seen boot times in the 60-70 ms range but have put absolutely no work into optimizing that. We could drive that down substantially for payloads like virtual network functions.
I should point out that boot time is highly dependent on two things: infrastructure of choice and application payload. For instance, your typical Rails or JVM payload might take longer to init than the actual boot time. Similarly, booting on Azure can be different from booting under Firecracker on your own hardware.
See also
https://github.com/direktiv/vorteil
The folks above also wrote a unikernel, but it seems they abandoned that as too large a problem and now just link and package.
Mmm. Fascinating... am wondering already how well the JVM would run on it. Also, maybe this could be the solution I've been looking for for "Just enough OS for VirtualBox", running it on bare metal. VBox has one of the best VM management UIs, in my opinion.
There are several pre-built JVM packages, and VBox and OCI work.
What's the difference from OSv?
https://github.com/cloudius-systems/osv
See
where there is a comparison table, but it's incomplete or out of date, because the OSv website says:
"OSv supports many managed language runtimes including unmodified JVM, Python 2 and 3, Node.JS, Ruby, Erlang as well as languages compiling directly to native machine code like Golang and Rust"
This seems like a big deal on the Nanos side:
"Another big difference is that Nanos keeps the kernel/user boundary. In our testing removing the large process to process context switching that general purpose operating systems still removes quite a lot of the perceived cost in other systems. We keep the internal kernel <> user switch for security purposes. Without it page protections are basically useless as an attacker can adjust the permissions themselves with privileged instructions."
This would seem to suggest that they are slower than OSv.
they look pretty similar from the outside
nanos is recently written for this particular use case and uses lwip for networking
osv looks like FreeBSD with some machinery around it to package single applications and run them on boot
>osv looks like FreeBSD with some machinery around it to package single applications and run them on boot
Just because of ZFS? Everything else is not from the BSDs; it's a unikernel made to run on a hypervisor and to run Linux binaries. I really don't see a difference... or any plus to using Nanos.
I was just looking at the source; there are big hunks of the FreeBSD kernel in there, but I don't know the real functional decomposition.
I didn't suggest that Nanos was better - if it does indeed use big hunks of FreeBSD code, that's had considerably more cooking time than Nanos.
OSv describes the project as BSD code with a Linux compatibility layer.
ZFS is optional.
ah, the classic blind download this script and run, trust us guiz
You can also build from source here:
https://github.com/nanovms/nanos
&&
https://github.com/nanovms/ops
There are also packages available through AUR, Homebrew, and the like.
The script is only there to facilitate the 'install', such as ensuring you have QEMU installed locally or assessing whether you have KVM/HVF rights, etc.
Also, I don't think this is documented yet, but you can target various PRs/builds with ops like this:
ops run /bin/ls --nanos-version d632de2
I mean... if we go this way, why even have a kernel? Why not just have a single application with everything that is needed to run it as a VM?
The scope of "everything that is needed to run" is a lot higher than might appear and since it is common code that is applicable to every app that would be deployed it is packaged as a 'kernel'. Something has to talk to the network, to the disk, keep track of time, etc. You might be surprised at how much code is involved in merely writing a byte to disk, efficiently and in a manner that a random webapp might use.
One very common misconception about unikernels is that they don't have kernels - every single implementation out there has one; it just might be smaller or less-featured than others.
So, at least in our view, it's not about having a 'small' kernel; it is more about the architecture.
You can have libraries that implement device-driver functionality and talk directly to devices. Actually, there are some (DPDK - the Data Plane Development Kit - and SPDK - the Storage Performance Development Kit, for example).
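For a flavour of what that looks like, here is a minimal DPDK bring-up sketch (hedged: exact build flags and NIC binding depend on your DPDK install) - after rte_eal_init the application, not the kernel, owns the ports on the data path:

    // dpdk_min.cpp - minimal EAL bring-up; build against your DPDK's pkg-config flags.
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        // Sets up hugepage memory, probes PCI devices, starts worker lcores:
        // work a general-purpose kernel would otherwise be doing for you.
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        // Ports counted here are driven from userspace, bypassing the kernel
        // network stack entirely.
        printf("usable ports: %u\n", (unsigned)rte_eth_dev_count_avail());
        return 0;
    }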
> The scope of "everything that is needed to run" is a lot higher than might appear
Having written an algotrading framework with full kernel bypass, which required me to account for every single piece of kernel functionality in use by the application (mostly to eliminate its use), I actually think it is the opposite. Most applications do not need a lot from the kernel to function, and what they do use could be supplied as a library.
The main reasons to have a kernel -- protecting shared resources and imposing security constraints -- are not present when you intend to have only one application in the system.
Whether code is packaged as a library or inside a base kernel is definitely open to interpretation/design. We, for instance, have the concept of a 'klib', which is code we don't want packaged in the base kernel but which is optional and can be included at build time. For instance, deploying to Azure requires a cloud-init call to check in to the metadata server to tell it the instance has booted - not something you want to put inside every image if you are only deploying to, say, Google Cloud. Likewise, we have another klib that provides APM functionality but checks in to a proprietary server, so clearly not everyone wants that inside their image either.
However, there is a _lot_ more than just networking/storage drivers, and some of it is very common code. Page table management, for instance. Do you have 2, 3, or 4 page table levels? How do you allocate memory to the app? IOAPIC vs PIC? Do you support calls the way glibc wants? RTC vs TSC vs PIT? I'm not saying any of these can't be librarized, but they are most definitely not choices I would expose to the vast majority of end users at build time.
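Just to illustrate one of those choices (the timing source), here is a small userspace sketch - again hedged, x86/Linux only - contrasting the raw TSC with the calibrated clock that some layer below the app has to provide:

    // time_sources.cpp - raw cycle counter vs. the already-calibrated clock.
    #include <x86intrin.h>   // __rdtsc(), x86 only
    #include <ctime>
    #include <cstdio>

    int main() {
        unsigned long long t0 = __rdtsc();     // raw, unscaled TSC

        timespec ts{};
        clock_gettime(CLOCK_MONOTONIC, &ts);   // someone (kernel or unikernel) already
                                               // calibrated this against TSC/HPET/PIT

        unsigned long long t1 = __rdtsc();
        printf("clock_gettime took ~%llu cycles; now = %lld.%09ld s\n",
               t1 - t0, (long long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }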
This is the approach of IncludeOS. You include some headers and a library, compile, and you can launch your application as a VM.
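The classic IncludeOS hello world is roughly the following (a sketch from memory - check the IncludeOS docs for the exact Service::start signature in the current release): you implement Service::start() instead of main(), and the build links it with the library OS into a bootable image.

    // service.cpp - an IncludeOS-style service (sketch; API details vary by version).
    #include <os>        // IncludeOS umbrella header
    #include <string>
    #include <iostream>

    // IncludeOS calls Service::start() instead of main().
    void Service::start(const std::string& args)
    {
        std::cout << "Hello from a unikernel VM, args: " << args << std::endl;
    }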
There are advantages to an approach like Nanos or OSv in that development is easier and you have better compatibility.
You mean, run an 'application' on a 'server'. Maybe we should call it.. an application server.
And have it 'operate' it directly on hardware with only a minimal 'system' layer for common operations.
I feel old.