Here I'm republishing an old blog post of mine originally from August 2017. The article has been slightly improved.
Very few people will argue against the statement that Unix-like operating systems conquered the (professional) world thanks to a whole lot of strong points - one of which is package management. Whenever you take a look at another *nix OS or even just another Linux distro, one of the first things to do (if not the first!) is to get familiar with how package management works there. You want to be able to install and uninstall programs after all, right?
If you're looking for another article on using jails on a custom-built OPNsense BSD router, please bear with me. We're getting there. To make our jails useful, we will use packages. And while you can safely expect any BSD or Linux user to understand that topic pretty well, products like OPNsense are popular with Windows users, too. So while this is not exactly a follow-up to the BSD router series, I'm working towards it. Should you not care how that package management stuff all came to be, just skip this post.
There's this myth that Slackware Linux has no package manager, which is not true. However, Slackware's package management lacks _automatic dependency resolution_. That's a very different thing - but it's probably the reason for the confusion. So what are package management and dependency resolution? We'll get to that in a minute.
To be honest, it's not very likely today to encounter a *nix system that doesn't provide some form of package manager. If you have such a system at hand, you're probably doing _Linux from Scratch_ (a "distribution" meant to teach you the nuts and bolts of Linux by building everything yourself) or have manually installed a Linux system and deliberately left out the package manager. Both are special cases. Well, or you have a fresh install of FreeBSD. But we'll talk about FreeBSD's modern package manager in detail in the next post.
Even Microsoft has included _Pkgmgr.exe_ since Windows Vista. While it goes by the name "package manager", it pales in comparison to *nix package managers. It is a command-line tool that can install and uninstall packages, yes. But those are limited to operating system fixes and components from Microsoft. Nice try, but what Redmond offered in late 2006 is vastly inferior to what the *nix world had more than 10 years earlier.
There's the somewhat popular _Chocolatey_ package manager for Windows, and Microsoft announced that they'd finally include a package manager called "_OneGet_" (apt-get anyone?) with Windows 10. I haven't read a lot about it on major tech sites, though, and thus have no idea if people are actually using it and if it's worth trying out (I would, but I disagree with Microsoft's EULA and thus haven't had a Windows PC in roughly 10 years).
But how on earth are you expected to work with a *nix system when you cannot install any packages?
Unix began its life as an OS by programmers for programmers. Want to use a program on your box that is not part of your OS? Go get the source, compile and link it and then copy the executable to _/usr/local/whatever_. In times when you had maybe 100 MB of storage in total (or even less), this worked well enough. You simply couldn't go on a rampage and install unneeded software anyway, and by sticking to the /usr/local scheme you kept optional stuff separate from the actual operating system.
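To illustrate, here's roughly what getting a new program onto your system looked like in those days - a minimal sketch with made-up program and file names:

```
# Unpack the source you got hold of
tar xf foo-1.0.tar
cd foo-1.0

# Compile and link everything by hand
cc -c main.c util.c
cc -o foo main.o util.o

# Put the result where optional software lives
cp foo /usr/local/bin/
```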
More space became available, however, and software grew bigger and more complex. Unix got the ability to use shared libraries ("shared objects"), ELF executables, etc. To make building more complicated software easier, _make_ was developed: a tool that reads a _Makefile_ which tells it exactly what to do. Software began shipping not just with the source code but also with Makefiles. Provided that all dependencies existed on the system, it was quite simple to build the software again.
Compilation process, invoked by make (PNG)
Makefiles also provide a facility called "targets", which lets a single file support multiple actions. In addition to a plain _make_ invocation that builds the program, it became common to add a target that allows _make install_ to copy the program files into their assumed places in the filesystem. Doing an update meant building a newer version and simply overwriting the files in place.
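A stripped-down Makefile for our hypothetical foo program could look like this (names and paths are invented for illustration):

```
$ cat Makefile
foo: main.o util.o
	cc -o foo main.o util.o

install: foo
	cp foo /usr/local/bin/
	cp foo.1 /usr/local/man/man1/
```

Typing _make_ builds the program; _make install_ copies it (and its manpage) into place.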
Make can do a lot more, though. Faster recompiles by looking at the generated files' timestamps (and only rebuilding what has changed and needs to be rebuilt) and other features like this are not of particular interest for our topic. But they certainly helped with the quick adoption of make by most programmers. So the outcome for us is that we use _Makefiles_ instead of _compile scripts_.
Being able to rely on make to build (and install) software is much better than always having to invoke the compiler, linker, etc. by hand. But that didn't mean that you could just type "make" on your system and expect it to work! You had to read the readme file first (which is still a good idea, BTW) to find out which dependencies you had to install beforehand. If those were not available, the compilation process would fail. And there was more trouble: Different implementations of core functionality in the various operating systems made it next to impossible for programmers to make their software work on multiple Unices. The introduction of the POSIX standard helped quite a bit, but operating systems still had differences to take into account.
Configure script running (PNG)
Two of the answers to the dependency and portability problems were _autoconf_ and _metaconf_ (the latter is still used for building Perl, where it originated). Autoconf is a tool used to generate _configure_ scripts. Such a script is run first, after extracting the source tarball, to inspect your operating system. It will check if all the needed dependencies are present and if core OS functionality meets the expectations of the software that is going to be built. This is a very complex matter - but thanks to the people who invested that tremendous effort in building those tools, building fairly portable software became much, much easier!
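With an autoconf-based program the build procedure became the classic three-step dance that older *nix users know by heart (again using our made-up foo tarball):

```
tar xzf foo-1.0.2.tar.gz
cd foo-1.0.2

# Inspect the OS, check for dependencies, generate a fitting Makefile
./configure

# Build and install using the generated Makefile
make
make install
```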
Back to _make_. So we're now in the comfortable situation that it's quite easy to build software (at least when you compare it to the dark times of the past). But what would you do if you want to get rid of some program that you installed previously? Your best bet might be to look closely at what _make install_ did and remove all the files that it installed. For simple programs this is probably not _that_ bad but for bigger software it becomes quite a pain.
Some programs also came with an _uninstall_ target for make, however, which would delete all installed files again. That's quite nice, but there's a problem: After building and installing a program you would probably delete the source code. Having to unpack the sources again just to uninstall the software is quite some effort - especially since you need the source for exactly the same version, as newer versions might install more or different files!
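If you do still have the source around, make can at least tell you what an installation would do without actually doing it, which helps with manual cleanup. This relies on the standard -n flag (a dry run; what it prints depends entirely on the Makefile):

```
# Print the commands 'make install' would run instead of executing them,
# revealing which files would be copied where
make -n install
```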
This is the point where _package management_ comes to the rescue.
So how does package management work? Well, let's look at _packages_ first. Imagine you just built version 1.0.2 of the program _foo_. You probably ran _./configure_ and then _make_. The compilation process succeeded and you could now issue _make install_ to install the program on your system. The package building process is somewhat similar - the biggest difference is that the _install destination_ is changed! Thanks to that modification, make won't put the executable into _/usr/local/bin_, the manpages into _/usr/local/man_, etc. Instead make will put the binaries into e.g. the directory _/usr/obj/foo-1.0.2/usr/local/bin_ and the manpages into _/usr/obj/foo-1.0.2/usr/local/man_.
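With many Makefiles this redirection is achieved by setting the DESTDIR variable - a common convention, though not every Makefile of that era supported it:

```
./configure
make

# Stage the installation into a separate directory tree instead of /
make DESTDIR=/usr/obj/foo-1.0.2 install

# The executable now sits in /usr/obj/foo-1.0.2/usr/local/bin/foo
```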
Slackware: Installing tmux with installpkg (PNG)
Since this location is not in the system's _PATH_, it's not of much use on this machine. But we wanted to create a package and not just install the software, right? As a next step, the contents of _/usr/obj/foo-1.0.2/_ could be packaged up nicely into a tarball. Now if you distribute that tarball to other systems running the same OS version, you can simply untar the contents to / and achieve the same result as running _make install_ after an unmodified build. The benefit is obvious: You don't have to compile the program on each and every machine!
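Creating and "installing" such a primitive package could look like this (a sketch, still using our made-up foo program):

```
# On the build machine: tar up the staged directory tree
cd /usr/obj/foo-1.0.2
tar czf /tmp/foo-1.0.2.tgz .

# On any other machine running the same OS version:
# extract to /, preserving file permissions
tar xzpf foo-1.0.2.tgz -C /
```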
So much for _primitive_ package usage. Advancing to actual package _management_, you would include a list of files and some metadata in the tarball. Then you wouldn't extract packages by hand but leave that to the package manager. Why? Because it would not only extract all the needed files. It would also record the installation in its package database and keep the file list around in case it's needed again (e.g. to uninstall the package).
Slackware: Uninstalling tmux and extracting the package to look inside (PNG)
When software is installed via the package manager, you can query it for a list of installed packages on a system. This is _much_ more convenient than, say, _ls /usr/local_ - especially if you want to know _which version_ of some package is installed! And since the package manager keeps the list of files installed by a package around, it can also take care of a clean uninstall without leaving you wondering if you missed something when deleting stuff manually. Oh, and it will be able to lend you a hand in upgrading software, too!
That's about what Slackware's package management does: It enables you to install, uninstall and update packages. Period.
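On Slackware this functionality is provided by the _pkgtools_; basic usage looks like this (a sketch - see the respective man pages for details):

```
# Install a package and record it in the package database
installpkg tmux-2.5-x86_64-1.txz

# The database is simply a directory holding one file list per package
ls /var/log/packages

# Upgrade to a newer package or remove it cleanly again
upgradepkg tmux-2.6-x86_64-1.txz
removepkg tmux
```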
But what about programs that require _dependencies_ to run? If you install them from a package you never ran _configure_ and thus might not have the dependency installed, right? Right. In that case the program won't run. As simple as that. This is the time to _ldd_ the program executable to get a list of all libraries it is dynamically linked against. Note which ones are missing on your system, find out which other packages provide them and install those, too.
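Checking a binary by hand works roughly like this (output shortened and invented for illustration):

```
$ ldd /usr/local/bin/foo
        libreadline.so.7 => not found
        libncursesw.so.6 => /usr/lib/libncursesw.so.6
        libc.so.6 => /usr/lib/libc.so.6
```

Here _libreadline_ is missing, so the hunt for the package that provides it begins.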
Arch Linux: Pacman handles dependencies automatically (PNG)
If you know your way around, this works OK. If not... well, while there are a lot of libraries whose names let you guess which packages they belong to, there are others, too. Happy hunting! Frustrated already? Keep telling yourself that you're learning the hard way - and fast. This might ease the pain. Or go and use a package management system that provides dependency handling!
Here's an example: You want to install BASH on a *nix system that provides only the old Bourne shell (/bin/sh). The package manager will look at the packaging information and see: BASH requires _readline_ to be installed. Then the package manager will look at the package information for that package and find out: readline requires _ncurses_ to be present. Finally it will look at the ncurses package and nod: no further dependencies. It will then offer to install ncurses, readline _and_ BASH for you. Much easier, eh?
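On Arch Linux, for example, a single pacman command triggers that whole resolution process (the transcript below is illustrative, not verbatim output):

```
# pacman -S bash
resolving dependencies...

Packages (3)  ncurses-6.0  readline-7.0  bash-4.4

:: Proceed with installation? [Y/n]
```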
Arch Linux: Xterm and all dependencies downloaded and installed (PNG)
A lot of people claim that the _Red Hat Package Manager_ (RPM) and Debian's _dpkg_ are examples of the earliest package managers. While both of them are indeed so old that using them directly is inconvenient enough to justify the existence of front-end programs that drive them indirectly (_yum_ / _dnf_ and e.g. _apt-get_), the claim is not true.
_PMS_ (short for "package management system") is generally regarded to be the first (albeit primitive) package manager. Version 1.0 was ready in mid-1994 and was used by the _Bogus Linux_ distribution. With a few intermediate steps this led to the first incarnation of RPM, Red Hat's well-known package manager, which first shipped with Red Hat Linux 2.0 in late 1995.
FreeBSD 1.0 (released in late 1993) already came with what is called the _ports tree_: A very convenient package building framework using _make_. It included version 0.5 of _pkg_install_, the pkg_* tools that would later become part of the OS! I'll cover the ports tree in some detail in a later article because it's still used to build packages on FreeBSD today.
Part of a Makefile from a FreeBSD port (PNG)
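Just to give you a first taste: building and installing a program from ports boils down to a make invocation in the right directory (tmux serving as the example here):

```
cd /usr/ports/sysutils/tmux

# Fetch the source, build it along with all dependencies, install the
# result and clean up the work directory afterwards
make install clean
```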
Version 2.0-RELEASE (late 1994) shipped the pkg_* tools: _pkg_add_ to install a package, _pkg_info_ to list installed packages, _pkg_delete_ to delete packages and _pkg_create_ to create them.
FreeBSD's pkg_add got support for using remote repositories in version 3.1-RELEASE (early 1999). But those tools were really showing their age when they were finally put to rest with 10.0-RELEASE (early 2014). A replacement had been developed in the form of the much more modern solution initially called _pkgng_ or simply _pkg_. Again, that will be covered in another post (the next one, actually).
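For the record, basic usage of the old tools looked like this (package name and version invented; the tools are gone from modern FreeBSD):

```
# Install from a local package file
pkg_add tmux-1.9.tbz

# Fetch and install from a remote repository (since 3.1-RELEASE)
pkg_add -r tmux

# List the installed packages
pkg_info

# Remove the package again
pkg_delete tmux-1.9
```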
With the ports tree, FreeBSD undoubtedly had the most sophisticated package building framework of its time. It's still one of the most flexible ones and a joy to work with compared to creating DEB or RPM packages... And since Bogus's PMS was started at least a month after pkg_install, it's entirely possible that the first working package management tool was in fact another FreeBSD innovation.