Thursday, 10. December 2020
[This article has been bi-posted to Gemini and the Web]
On December 8, 2020, a Red Hat employee on the CentOS Governing Board announced that CentOS would continue only as CentOS Stream. For classic CentOS 8 the curtain will fall at the end of 2021 already - instead of 2029 as communicated before! But thanks to "Stream" the brand will not simply go away; it will remain and add to the confusion. CentOS Stream is a rolling-release distribution that takes a middle ground between the cutting-edge development happening in Fedora and the stability of RHEL.
Many users feel betrayed by this action, and companies that have deployed CentOS 8 in production trusting in the promised 10 years of support are facing hard times. It is even worse for companies that have a product based on CentOS (a pretty popular choice e.g. for business telephone systems and many other things) or that offer a product targeted only at RHEL or CentOS. For them the new situation is nothing but a nightmare come true.
IBM-controlled Red Hat obviously hopes that many users will now go and buy RHEL. While this is certainly technically an option I would not suggest going down that road. Throwing money at a company that just killed a community-driven distribution 8 years early? Sorry Red Hat but nope. And Oracle is never an option, either!
The reaction from the community has been overwhelmingly negative. Many people announced that they will migrate to Debian or Ubuntu LTS, some will consider SLES. A few said that they will consider FreeBSD now - being a happy FreeBSD user and advocate, this is of course something I like to read. And I'd like to help people who want to go in that direction. A couple of weeks ago I announced that I'd write a free ebook called _The Penguin's Guide to Daemonland: An introduction to FreeBSD for Linux users_. It is meant as a resource for somewhat experienced Linux users who are new to FreeBSD (get the very early draft here - if anybody is interested in this, feedback is welcome).
http://www.elderlinux.org/temp/draft_penguin_guide.pdf
But this post is not about BSD, because there simply are cases where you want (or need) a Linux system. And when it comes to stability, CentOS was a very good choice that is very hard to replace with something else. Fortunately Rocky Linux was announced - an effort by the original founder of CentOS who wants to basically repeat what he once did. I wish the project good luck. However, I'd also like to take the chance, as an admin and hobby distro tinkerer, to discuss what CentOS actually stood for and whether we could even accomplish something better! Heresy? Not so much. There's always room for improvement. Let's at least talk about it.
The name CentOS (which stood for _Community Enterprise Operating System_) basically states that it's possible to provide an enterprise OS that is community-built. We all have an idea of what a community is, and while a lot could be written about how communities work and such, I'd rather focus on the other term for now. What exactly is "enterprise"? It helps to distinguish between the various grades of software. Here's my take on several levels of how software can be graded:
__Hobbyist__ software is something that is written by one person or a couple of people basically for fun. It may or may not work, collaboration may or may not be desired and it can vanish from the net any day when the mood strikes the decision maker. While this sounds pretty bad that is not necessarily the case. Using a nifty new window manager on your desktop is probably ok. If the project is cancelled tomorrow it just means that you won't get any more bug fixes or new features and you can easily return to your previous WM. But you certainly don't want to use such software in your product(s).
__Semi-professional__ software is developed by one person or a team that is rather serious about the project and aiming for professionalism (but commonly falling short due to limitations of time and resources). Usually the software will at least have releases that follow a versioning scheme, and source tarballs will not be re-rolled (re-using the same version number). There will be at least some tests and documentation. Patches as well as feedback and issue reports are almost certainly welcome. The software will be properly licensed and come with things like change logs and release notes. If the project ends, you won't find out because the repo on GitHub was deleted but because at least an announcement was made.
__Professionally__ developed software does what the former paragraph talked about and more. It has a more complex structure with multiple people having commit rights, so that a single person e.g. being on holiday when a severe bug is found doesn't mean nobody can fix things for days. The software has good test coverage and uses CI. There are several branches, with at least the previous one still receiving bug fixes while the main development takes place in a newer branch. Useful documentation is available and there is a table with dates that shows when support ends for which version of the software. There's some form of support available. Such software is most often developed or at least sponsored by companies.
__Enterprise__ products take the whole thing to another level. Enterprise software means that high-quality support options (most likely in various languages) are available. It means that the software is tested extensively and has a long life cycle (long-term support, LTS). There is probably a list of hardware on which the software is guaranteed to work well. And it's usually not exactly cheap.
"__Mission critical__" software has very special requirements. For example it could be that it has to be written in Spark (a very strict Ada dialect) which means that formal verification is possible. Most of us don't work in the medical or aerospace industries where lives and very expansive equipment may be at stake and fortunately we don't have to give such hard guarantees. But without going deeper into the topic I wanted to at least mention that there's something beyond enterprise.
Having a community-run project that falls into the _professional_ category is quite an accomplishment even with some corporate backing. Enterprise-grade projects seem to be very much without reach of what a community is able to do. Under the right circumstances however it is doable. CentOS was such a project: They didn't need to pay highly skilled professionals to patch the kernel or to backport fixes into applications and such. Thanks to the GPL Red Hat is forced to keep the source to the tools they ship open. The project can build upon the paid work of others and create a community-built distribution.
Whenever this is not the case the only option is probably to start as professional as possible and create something that becomes so useful to companies that they decide to fund the effort. Then you go enterprise. Other ways a project can go are aiming to become "Freemium" with a free core and a paid premium product or asking for donations from the community. Neither way is easy and even the most thoughtful planning cannot guarantee success as there's quite a bit of luck required, too. Still, good planning is essential as it can certainly shift the odds in favor of success.
Another interesting problem is: how do you create a community of both skilled and dedicated supporters of your project? Or really: any community at all? How do you let people who might be interested know about your project in the first place? It requires a passionate person to start something, a person who can convince others not only that the idea is worthwhile but also that the goal is in fact achievable and thus worth the time and effort that needs to be invested.
As mentioned above, I'm a FreeBSD user. One of the things that I've really come to appreciate on *BSD and miss on Linux is the concept of a _base system_ (Gentoo, being FreeBSD-inspired, kind of has "system" and "world", though). It's the components of the actual operating system (kernel and basic userland) developed together in one repository and shaped in a way to be a perfect match. Better yet, it allows for a clean separation of the OS from third-party packages, whereas on Linux the OS components are simply packages, too. On FreeBSD third-party software has its own location in the filesystem: the operating system configuration is in /etc, the binaries are in /{s}bin and /usr/{s}bin, the libraries in /lib and /usr/lib, but anything installed from a package lives in /usr/local. There's /usr/local/etc, /usr/local/bin, /usr/local/lib and so on.
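To make that separation tangible, here's a small illustration (assuming a stock FreeBSD box that happens to have the nginx package installed - any third-party package would do):
```
# Base system files live in the classic locations:
ls /etc/ssh/sshd_config              # config of a base system daemon
which sh grep                        # /bin/sh, /usr/bin/grep - base userland
# Everything installed from packages is confined to /usr/local:
ls /usr/local/etc/nginx/nginx.conf   # config of a packaged daemon
pkg which /usr/local/sbin/nginx      # pkg(8) only ever manages /usr/local files
```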
Coming from a Linux background this is a bit strange initially, but it soon feels completely natural. And this separation brings benefits with it, like a much, much more solid upgrade process! I maintain servers that were set up with FreeBSD 5.x in the mid-2000s and still happily serve their purpose today with version 12.x. The OS (and the hardware) has been upgraded multiple times, but the systems were never reinstalled. This is an important aspect of an enterprise OS if you ask me. I wonder if we could achieve it on Linux, too?
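Just to give an idea of what "solid upgrade process" means in practice: a release-to-release upgrade on FreeBSD boils down to a handful of well-documented steps (the release number below is merely an example, and of course you'd read the release notes first):
```
freebsd-update -r 12.4-RELEASE upgrade   # fetch and merge the new release
freebsd-update install                   # install the new kernel
shutdown -r now                          # reboot into it
freebsd-update install                   # install the new userland
pkg upgrade                              # refresh third-party packages
freebsd-update install                   # final pass: remove stale OS libraries
```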
Doing things the BSD way would mean downloading the source packages for the kernel, glibc and all the other libraries and tools that would form a rather minimal OS, extracting them and putting the code into a common repository. Then a Makefile would be written and added that can build all the OS components in order with a single command (e.g. "make buildworld buildkernel"). Things like ALFS (Automated Linux From Scratch) already exist in the Linux world, so building a complete Linux system isn't something completely new or revolutionary.
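No such source tree exists yet, so the following is only a sketch of what the top-level build driver might look like - the component list, the directory layout and the buildworld/buildkernel split are all assumptions made up for illustration:
```
#!/bin/sh -e
# Hypothetical build driver for a BSD-style Linux source tree under /usr/src.
# Real component build systems differ (glibc for example insists on an
# out-of-tree build), so treat this as a sketch of the workflow only.
SRC=/usr/src
COMPONENTS="glibc zlib openssl coreutils util-linux iproute2 openssh"

buildworld() {
    for comp in $COMPONENTS; do          # build the userland in dependency order
        make -C "$SRC/$comp" all
    done
}

buildkernel() {
    make -C "$SRC/linux" olddefconfig    # take over the existing kernel config
    make -C "$SRC/linux" -j"$(nproc)" bzImage modules
}

[ $# -gt 0 ] || { echo "usage: $0 buildworld [buildkernel]" >&2; exit 1; }
for target in "$@"; do
    case "$target" in
        buildworld)  buildworld ;;
        buildkernel) buildkernel ;;
        *) echo "unknown target: $target" >&2; exit 1 ;;
    esac
done
```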
As vulnerabilities are found, fixes are committed to the source repo. It can then be cloned and rebuilt. Ideally we'd have our own tool, "os-update", which does differential updates to bring the OS to the newest patch level. My suggestion would be to combine the components of a RHEL release - e.g. kernel 4.18, glibc 2.28, etc., following RHEL 8. So more or less like CentOS, but more minimal and focused on the base system. That system would NOT include any package manager! This is because it is intended as a building block for a complete distribution which adds a package manager (be it rpm/dnf, dpkg/apt, pacman or something else) on top and consumes a stable OS with a known set of components and versions, allowing the distributors to focus on a good selection of packages (and giving them the freedom to select the means of package management).
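Nobody has written that os-update tool, of course, so the following is pure fantasy - the mirror URL, the manifest format and the tool itself are made up - but it shows the idea of a differential base system update:
```
#!/bin/sh -e
# Fictional os-update client: fetch the manifest of the newest patch level and
# replace only those base system files whose checksums differ. A real tool
# would verify signatures and treat the kernel and bootloader specially.
MIRROR=https://update.example.org/base/8     # invented URL, "8" tracking RHEL 8
MANIFEST=/var/db/os-update/MANIFEST

mkdir -p "$(dirname "$MANIFEST")"
curl -fsSL "$MIRROR/latest/MANIFEST" -o "$MANIFEST"   # lines: "<sha256>  <path>"

while read -r sum path; do
    # Only files that changed since the installed patch level get downloaded.
    if ! echo "$sum  $path" | sha256sum -c --status 2>/dev/null; then
        echo "updating $path"
        curl -fsSL "$MIRROR/latest/files$path" -o "$path.new"
        mv "$path.new" "$path"
    fi
done < "$MANIFEST"
```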
Just to give it a working title, I'm going to call this BaSyL (Base System Linux). For the OS version I would prefer something like the year of the corresponding RHEL release, e.g. BaSyL 2019-p0 for RHEL 8. The patch level increases whenever security updates are applied.
The other part of creating an enterprise-like distribution is the packages. Let's call it SUP-R (Stable Unix-like Package Repo) for now. So you've installed BaSyL 2019 and need packages. There's the SUP-R 2019 repo that you can use: it contains the software that RHEL ships. Ideally a team forms that will create and maintain a SUP-R 2024 repo after five years, allowing users to optionally switch to newer package versions. The special thing here would be that this SUP-R 2024 package set would be available for both BaSyL 2019 and BaSyL 2025 (provided that's when the next version is released). That way BaSyL 2019 users who have already made the switch to SUP-R 2024 can stay on that package set when updating the OS and don't have to do both at the same time! A package repo change requires consulting the update documentation for your programs, e.g. the Apache HTTPd, the PostgreSQL database, etc. Most likely there are migration steps like changing configuration and such.
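None of this exists, but to make the idea of switching package sets a little more concrete: if SUP-R were shipped as, say, a dnf repository (the repo names, URLs and file paths below are entirely made up), moving a BaSyL 2019 system to the SUP-R 2024 set might look like this:
```
# Entirely hypothetical - add the SUP-R 2024 package set as a dnf repository:
cat > /etc/yum.repos.d/supr-2024.repo <<'EOF'
[supr-2024]
name=SUP-R 2024 package set (usable on BaSyL 2019 and BaSyL 2025)
baseurl=https://pkg.example.org/supr/2024/$basearch/
gpgcheck=1
EOF
# Move the installed packages over to the new set; the OS underneath stays
# at BaSyL 2019 and can be updated to BaSyL 2025 independently later on.
dnf --disablerepo='supr-2019' --enablerepo='supr-2024' distro-sync
```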
Maintaining LTS package sets is not an easy task, though. However, there are great tools available to us today which make this somewhat easier. Still, it would require either a really enthusiastic community or some corporate backing. Especially the task of selecting software versions that work well together is a pretty big one. It would probably make sense to keep the package count low to start with (quality over quantity) and to think of test cases we can build to ensure that common workloads can be simulated beforehand.
In addition to experienced admins for software evaluation, package maintainers and programmers for some patch backporting, people writing docs (both in general and for version migration) would also be needed. Yes, this thing is potentially huge - much, much more involved than doing BaSyL alone.
I'd love to see a continuation of a community-built enterprise Linux distro worth the title. But at the same time, getting to know BSD in addition to Linux made me think a little differently about certain things. De-coupling the OS and the packaging efforts could open up interesting possibilities. At the same time it could even help both projects succeed: other operating systems might also like (optional) enterprise package sets. And since they'd be installed into a different prefix (e.g. /usr/supr) they would not even conflict with native packages in e.g. Debian or any other glibc-based distribution.
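How such a foreign prefix would be wired up is an open question; on a glibc-based system the minimal (and again purely hypothetical) version might be nothing more than this:
```
# Assumed integration of a /usr/supr prefix - the prefix comes from the idea
# above, the wiring below is just the standard glibc distro mechanisms:
echo 'export PATH="$PATH:/usr/supr/bin:/usr/supr/sbin"' > /etc/profile.d/supr.sh
echo '/usr/supr/lib' > /etc/ld.so.conf.d/supr.conf
ldconfig    # make the dynamic linker pick up the additional library directory
```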
If a portable means of packaging were chosen, it would potentially also be interesting to the BSDs and OpenSolaris derivatives. And the more potential consumers there are, the larger the group of people who might have the motivation to make something like this come true.
This article is meant to give food for thought. Interested in talking about going new ways? Please share your thoughts!