Sunday, 15. January 2023

Exploring the CBSD virtual environment management framework - part 2: Setup

[This article has been bi-posted to Gemini and the Web]

In this article we're setting up CBSD so that we can start using it.

Part 0 of this series is a general discussion of what virtualization actually is and what you should know to make best use of what will be covered in the later parts. Part 1 covered an introduction of CBSD as well as the installation.

(November 2022) Exploring the CBSD virtual environment management framework - part 0: Virtualization overview

(December 2022) Exploring the CBSD virtual environment management framework - part 1: Introduction and installation

Setting up CBSD

Let's continue right where we left off last time. CBSD has been installed on a FreeBSD 13.1 test system that uses ZFS and has been upgraded to version _13.1.21_ in the meantime. I will go with _/cbsd_ as the path for the workdir.

You can do the setup interactively (the standard way) or non-interactively (which can be useful e.g. for automation). We're going for the former method here:

# env workdir=/cbsd /usr/local/cbsd/sudoexec/initenv
-------[CBSD v.13.1.21]-------
This is install/upgrade scripts for CBSD.
Don't forget to backup.
-----------------------------
Do you want prepare or upgrade hier environment for CBSD now?
[yes(1) or no(0)]

Since we're doing a fresh install here, there's no need to back up anything. At the prompt you can either enter "yes" or simply "1" to proceed, or "no" / "0" to quit without doing the setup. I'm a lazy person, so typing one character instead of two or even three is a natural choice. I'll mostly go with the longer form here, though. Of course I want to continue, that's why I came here after all!

yes
>>> Installing or upgrading
[Stage 1: account & dir hier]
* Check hier and permission...
* write directory id: jaildatadir
* write directory id: jailsysdir
* write directory id: jailrcconfdir
* write directory id: dbdir
[Stage 2: build tools]
Shall I add the cbsd user into /usr/local/etc/sudoers.d/cbsd_sudoers sudo file to obtain root privileges for most of the cbsd commands?
[yes(1) or no(0)]

The _initenv_ script offers to put a sudo configuration fragment into sudoers.d which allows the CBSD subcommands to run with the required elevated privileges. Its contents look like this:

Defaults env_keep += "workdir DIALOG NOCOLOR CBSD_RNODE"
Cmnd_Alias CBSD_CMD = /usr/local/cbsd/sudoexec/*
cbsd ALL=(ALL) NOPASSWD:SETENV: CBSD_CMD

This is not _strictly_ necessary, but it is a convenience feature that you most likely want to make use of. If you choose not to configure sudo for CBSD, you will be prompted for the root password whenever you want the tool to do something that requires root privileges (which generally means: frequently!). I prefer not to enter passwords all the time, so I always let the file be installed.
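
If you want to verify the result afterwards, sudo itself can list the rules that apply to the cbsd user; the output should reflect the fragment shown above. This is plain sudo functionality, nothing CBSD-specific:

# sudo -l -U cbsd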

yes
[Stage 3: local settings]
Shall i modify the /etc/rc.conf to sets cbsd_workdir="/cbsd"?:
[yes(1) or no(0)]

The setup script offers to set the cbsd_workdir variable in rc.conf to the workdir that you chose (/cbsd in my case). Should you do that? If you're just starting your CBSD journey, most certainly yes. Later, if you ever use more complex setups with multiple workdirs, you might still want to define the default workdir - or maybe not. If you don't find a good reason not to set it, just say "yes".
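
For the record: this is nothing you couldn't do by hand later. Setting the variable yourself with sysrc(8) achieves the same thing (the value reflects my workdir choice):

# sysrc cbsd_workdir="/cbsd"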

yes
/etc/rc.conf: cbsd_workdir: -> /cbsd
[Stage 4: update default skel resolv.conf]
[Stage 5: refreshing inventory]
nodename: CBSD Nodename for this host e.g. the hostname. Warning: this operation will recreate the ssh keys in /cbsd/.ssh dir: cbsddemo.advancebsd.net

CBSD wants to know the node name for this instance. It defaults to the machine's hostname, and that's usually a good choice. Feel free to change it to something else if you like chaos and confusion, though! This name is used to create the SSH keys (which you can ignore when running in single node mode) and the main SQLite database for the instance. If you use something other than the hostname and plan on using clusters, make sure that all participating instances have different node names! If you want to accept the default, just press RETURN.

Empty inventory database created: /cbsd/var/db/inv.cbsddemo.advancebsd.net.sqlite
nodeip: Node management IPv4 address (used for node interconnection), e.g: 192.168.12.34

Now CBSD wants to know the management IP for this machine. It defaults to the primary IPv4 address, and this best guess is usually fine for simple networks. If it's not, you'll know, because it's your network, right? For a production-grade setup it is a good idea, for example, to interconnect nodes via secondary NICs that carry only management traffic. If you are planning to run in single node mode, this does not really matter. Again, just press RETURN to accept the default.

jnameserver: environment default DNS name-server (for jails resolv.conf), e.g.: 9.9.9.9,149.112.112.112,2620:fe::fe,2620:fe::9

CBSD manages the resolv.conf files inside the jails for you. To be able to do that, it needs to know your preferred DNS servers. It defaults to Quad9's IPv4 and IPv6 addresses, but if you have other preferences (or cannot reach them from your network), pick others to use. Pressing RETURN will accept the proposed defaults.
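
Just to make clear what this setting ends up controlling: with the defaults accepted, the resolv.conf inside a jail will essentially contain one nameserver line per address. This is a sketch, not a verbatim copy of what CBSD generates:

nameserver 9.9.9.9
nameserver 149.112.112.112
nameserver 2620:fe::fe
nameserver 2620:fe::9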

nodeippool: (networks for jails)
Hint: use space as delimiter for multiple networks, e.g.: 10.0.0.0/16

If you want CBSD to be able to manage networking for you, it needs to be told which network(s) it may assign IP addresses from. Think about what you want your network of virtual environments to look like and what the consequences of your choices are. For your first home lab installation any free range may be fine. But when you are going for something more sophisticated, do some planning ahead. No, seriously: getting a sheet of paper and a pencil before making a decision _will_ likely save you some pain and grief in the future. Of course you already knew that.
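
As the hint says, multiple networks are simply separated by spaces. Something like the following (the ranges are just examples, adjust them to your environment) would give CBSD two pools to hand out addresses from:

10.0.0.0/16 192.168.200.0/24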

nat_enable: Enable NAT for RFC1918 networks?
[yes(1) or no(0)]

Do you want to use Network Address Translation for private IP ranges? If you don't have the luxury of many spare v4 addresses, you probably want to. Inform CBSD of your decision! I'll say yes here, if only to show which additional choices that opens.

yes
Which NAT framework do you want to use: [pf]
(type FW name, eg pf,ipfw,ipfilter, 'disable' or '0' to CBSD NAT, "exit" for break)

Select the means by which the NATing is to be performed by choosing one of FreeBSD's three supported firewalls. If you're setting up a new system with CBSD and do not have any firewall running yet, pick whichever you like (with pf being the recommended default). Otherwise use the one that's already active on your system. Use 'disable' or '0' if you intend to take care of NAT yourself and want CBSD not to interfere with your settings.
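
If you're not sure what is already active on the host, a quick look before answering doesn't hurt. These are standard FreeBSD commands, not part of CBSD:

# kldstat | grep -E 'pf|ipfw|ipl'
# service pf onestatus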

Set IP address or NIC as the aliasing NAT address or interface, e.g: 192.168.12.34

Now CBSD needs to know which address or NIC should be used as the NAT target. This will default to your primary IPv4 address.

Do you want to modify /boot/loader.conf to set pf_load=YES ?
[yes(1) or no(0)]

In case no firewall was active so far, CBSD will offer to add the respective line to loader.conf(5) so that the module gets loaded during system boot. You probably want to do this as it is required for the firewall to work.
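
Again, this is equivalent to what you could do by hand with sysrc(8), optionally followed by loading the module right away so you don't have to wait for the next reboot:

# sysrc -f /boot/loader.conf pf_load=YES
# kldload pf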

yes
/boot/loader.conf: pf_load: -> YES
fbsdrepo: Use official FreeBSD repository? When no (0) the repository of CBSD is preferred (useful for stable=1) for fetching base/kernel?
[yes(1) or no(0)]

While there are several sources to create basejails from, the simplest method is to fetch the base system distribution set and extract it. CBSD can use either FreeBSD's official repository or the CBSD one. If you choose the latter, you can also use -STABLE versions of FreeBSD instead of only -RELEASE. If you don't plan to use -STABLE, the official mirrors may be faster for you, depending on where you live (or where your server is located).

yes
zfsfeat: You are running on a ZFS-based system. Enable ZFS feature?
[yes(1) or no(0)]

If CBSD detects that its workdir is on a ZFS pool, it offers to enable additional ZFS features. This allows for the use of ZFS snapshots and clones. Honestly, I cannot think of any good reason to say no here!
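
What this buys you, expressed in raw ZFS terms, are cheap snapshots, rollbacks and clones of an environment's data. The dataset and snapshot names below are made up purely for illustration:

# zfs snapshot zroot/cbsd/jails-data/myjail-data@before-upgrade
# zfs rollback zroot/cbsd/jails-data/myjail-data@before-upgrade
# zfs clone zroot/cbsd/jails-data/myjail-data@before-upgrade zroot/cbsd/jails-data/myjail-clone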

yes
parallel: Parallel mode stop/start ?
(0 - no parallel or positive value (in seconds) as timeout for next parallel sequence) e.g: 5

Even servers that host jails or VMs have to be rebooted now and then after kernel updates have been installed. If some or all of your jails / VMs have the autostart flag set and you choose 0 here, they will be started strictly one after another. This means that if some jail's rc "hangs" and doesn't finish, the next in line will never be started. If you choose 5, the next one will be started after 5 seconds, even if the one before has not completed its start procedure yet. If it finishes quicker than the defined timeout, CBSD will of course not wait the remaining seconds but fire up the next one.

5
stable: Use STABLE branch instead of RELEASE by default? Attention: only the CBSD repository has a binary base for STABLE branch ?
(STABLE_X instead of RELEASE_X_Y branch for base/kernel will be used), e.g.: 0 (use release)

If you decided to use the CBSD repository (and only then!) you can opt to use base system distributions like 13-STABLE instead of 13.1-RELEASE. Why would you want to do this? Well, if you're a long-time FreeBSD user, you certainly have a preference. You may like -STABLE because MFCs ("Merge From Current") bring in newer features from the development branch before any new release is cut. It's more lightly tested, though. If you're new to FreeBSD, I'd recommend sticking to -RELEASE in general. When I'm working with obsolete versions (e.g. I need FreeBSD 11 for something special), I usually go with -STABLE because this branch may include additional fixes that will never be part of any further release anyway.

0
sqlreplica: Enable sqlite3 replication to remote nodes ?
(0 - no replica, 1 - try to replicate all local events to remote nodes) e.g: 1

This is exactly what it says: If enabled, CBSD will try to replicate local events to the remote nodes. Since we're exploring CBSD in a single-node setup first, I'm going to disable this.

0
statsd_bhyve_enable: Configure CBSD statsd services for collect RACCT bhyve statistics? ?
(EXPERIMENTAL FEATURE)? e.g: 0

This is an experimental feature, so we will not enable it right now. But what is it actually about? I would suggest some reading of rctl(4), rctl(8) and rctl.conf(5) if you are interested in what RACCT/RCTL can do for you on FreeBSD (and then decide if you like living on the edge and enable it for CBSD).
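
To give you a rough idea of what RACCT/RCTL offers once it is enabled (it needs the kern.racct.enable=1 tunable at boot): you can attach resource rules to a jail and query its consumption. The jail name below is of course just a placeholder, and none of this is specific to CBSD:

# sysrc -f /boot/loader.conf kern.racct.enable=1
# rctl -a 'jail:myjail:memoryuse:deny=2g'
# rctl -u jail:myjail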

0
statsd_jail_enable: Configure CBSD statsd services for collect RACCT jail statistics? ?
(EXPERIMENTAL FEATURE)? e.g: 0

Same thing but this time for jails.

0
statsd_hoster_enable: Configure CBSD statsd services for collect RACCT hoster statistics? ?
(EXPERIMENTAL FEATURE)? e.g: 0

One more stats collecting mechanism: Host machine data.

0
[Stage 6: authentication keys]
Generating public/private ed25519 key pair.
Your identification has been saved in /cbsd/.ssh/cd0c84dc9511e69c6a7fe2f15817346e.id_rsa
Your public key has been saved in /cbsd/.ssh/cd0c84dc9511e69c6a7fe2f15817346e.id_rsa.pub
The key fingerprint is:
SHA256:adiLZDHMTMZbYqn17KG2ghOkUbdvGJN8HI2PQ8gEq6k root@cbsddemo.advancebsd.net
The key's randomart image is:
+--[ED25519 256]--+
| .+.oo+ |
| ..+*X o |
| ..o O*X |
|.o. B *==. |
|o+ *++S. |
|o . .o=o.. |
|E o o... |
| o . . |
| . . |
+----[SHA256]-----+
[Stage 7: nodes]
[Stage 8: modules]
Installing module pkg.d cmd: pkg
Installing module bsdconf.d cmd: tzsetup
Installing module bsdconf.d cmd: ssh
Installing module bsdconf.d cmd: ftp
Installing module bsdconf.d cmd: adduser
Installing module bsdconf.d cmd: passwd
Installing module bsdconf.d cmd: service
Installing module bsdconf.d cmd: sysrc
Installing module bsdconf.d cmd: userlist
Installing module bsdconf.d cmd: grouplist
Installing module bsdconf.d cmd: adduser-tui
Installing module bsdconf.d cmd: pw
Installing module bsdconf.d cmd: cloudinit
Installing module zfsinstall.d cmd: zfsinstall
[Stage 9: cleanup]
* Remove obsolete files...
Configure RSYNC services for jail migration?
[yes(1) or no(0)]

CBSD has now created the SSH keypair for the node and installed its standard set of modules. Now it wants to know whether to configure the _rsync_ tool to allow for jail migration across nodes in one cluster. We're going for a single node installation right now, so this does not make sense in our case.

no
cbsdrsyncd_enable: -> YES
Do you want to enable RACCT feature for resource accounting?
[yes(1) or no(0)]

More about the experimental RACCT support. You may want to look into this one if you're interested in writing statistics to an SQLite DB and/or exporting metrics via Prometheus/VictoriaMetrics. Have a look at the 'share/grafana/CBSD_Jail_cluster_v0.0.json' file in your CBSD installation if you're interested in using those for building a dashboard!

no
Shall i modify the /etc/rc.conf to sets cbsdd_enable=YES ?
[yes(1) or no(0)]

The CBSD daemon is a topic in its own right and warrants more than the few lines in this initial setup article. If you are deploying a clustered setup, you definitely want to enable it, as it is responsible for keeping the node communication going (e.g. for knowing which nodes are active). It is useful in single node mode, too, though: there it makes certain actions non-blocking, which allows for concurrency. Again, it is good to keep in mind that this feature exists in case you're ever going to need it. However, we're going to start with a simple setup here, so let's not enable it for now.
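
Should you change your mind later, turning the daemon on is just the usual FreeBSD two-step (the same commands CBSD suggests in its closing summary):

# sysrc cbsdd_enable=YES
# service cbsdd start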

no
/etc/rc.conf: cbsdd_enable: -> NO
Shall i modify the /etc/rc.conf to sets rcshutdown_timeout="900"?
[yes(1) or no(0)]

This question and the next one belong together; I'll discuss both after the second prompt:

yes
/etc/rc.conf: rcshutdown_timeout: 90 -> 900
Shall i modify the /etc/sysctl.conf to sets kern.init_shutdown_timeout="900"?
[yes(1) or no(0)]

These two values are directly related. Take a look at the output and you'll see that CBSD is increasing FreeBSD defaults considerably (ten times in the example above). What is this and why does it do that? Safety. Let's say you need to run a service in one of your jails that has a known (but hard to fix) bug which prevents proper shutdown. With infinite patience, that jail would never shut down after you asked it to, because it would be forever waiting for said process to terminate. To avoid this, there is a timeout after which remaining processes will be killed forcefully (signal 9).

If a pretty busy (and large) database runs in one of your other jails, however, you can see the problem with just killing processes after a somewhat short grace period. Wishing that you had given it a minute or two more to shut down properly is pretty common among sysadmins when they have to fix that darn MySQL service afterwards. Next time you'll be smarter. Waiting a bit longer often actually saves you time (and unnecessary pain).

CBSD's defaults are pretty generous, but you might have to increase them further under special circumstances. More often you might want to tune them in the opposite direction. But this is completely up to you after considering your use case(s).
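
Should you later decide to tune them, both are ordinary FreeBSD knobs and can be adjusted without re-running initenv. The value of 600 is just an example:

# sysrc rcshutdown_timeout=600
# sysctl kern.init_shutdown_timeout=600
# echo 'kern.init_shutdown_timeout=600' >> /etc/sysctl.conf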

yes
kern.init_shutdown_timeout: 120 -> 900
[Stage X: upgrading]
>>> Done
Congratulations! First CBSD initialization complete!
Now your can run:
service cbsdd start
to run CBSD services.
For change initenv settings in next time, use:
cbsd initenv-tui
Also don't forget to execute:
cbsd initenv
every time when you upgrade CBSD version.
For an easy start:
cbsd help
General information:
cbsd summary
To start with jail:
cbsd jcreate --help
or: cbsd jconstruct-tui
To start with bhyve:
cbsd bcreate --help
or: cbsd bconstruct-tui
To start with XEN:
cbsd xcreate --help
or: cbsd xconstruct-tui
To start with QEMU/NVMM:
cbsd qcreate --help
or: cbsd qconstruct-tui
Enjoy CBSD!
preseedinit: Would you like a config for "cbsd init" preseed to be printed?
[yes(1) or no(0)]

And that's it! We've completed the initial post-install setup of CBSD. It's now ready to be used. The last question is whether you want a config to be printed. This means: "Do you want me to print the selections that you made in a form fit for a config file?" You would do this if you intend to set up more nodes, changing just a few options and using the config to do the initialization from that source instead of answering all the questions interactively again. Let's do it now just to get an idea of what it looks like:

yes
---cut here ---
# cbsd initenv preseed file for cbsddemo.advancebsd.net host
# refer to the /usr/local/cbsd/share/initenv.conf
# for description.
#
nodeip="192.168.12.34"
jnameserver="9.9.9.9,149.112.112.112,2620:fe::fe,2620:fe::9"
nodeippool="10.0.0.0/16"
nat_enable="pf"
fbsdrepo="1"
zfsfeat="1"
parallel="5"
stable="0"
sqlreplica="0"
statsd_bhyve_enable="0"
statsd_jail_enable="0"
statsd_hoster_enable="0"
ipfw_enable="0"
nodename="cbsddemo.advancebsd.net"
racct="1"
natip="192.168.12.34"
initenv_modify_sudoers="0"
initenv_modify_rcconf_hostname=""
initenv_modify_rcconf_cbsd_workdir="1"
initenv_modify_rcconf_cbsd_enable="1"
initenv_modify_rcconf_rcshutdown_timeout="1"
initenv_modify_syctl_rcshutdown_timeout="1"
initenv_modify_rcconf_cbsdrsyncd_enable=""
initenv_modify_rcconf_cbsdrsyncd_flags=""
initenv_modify_cbsd_homedir="1"
workdir="/cbsd"
---end of cut---

If you were to use these settings again on a fresh system, paste the configuration snippet that CBSD printed for you into a file, say /tmp/cbsd-init.conf. Then, instead of the command we started the interactive setup with, run:

/usr/local/cbsd/sudoexec/initenv inter=0 /tmp/cbsd-init.conf

The result would be the same thing but you wouldn't have to answer all the questions again. This is particularly useful if you're provisioning your systems with Salt, Ansible, Puppet or the like. Just make that file a template and have your configuration management system fill in variables for hostname, IP addresses and so on.
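
As a very small sketch of that idea, here's what rendering such a file from a shell script with a couple of variables could look like. The second node's name and address are made up, and in practice you would let your configuration management tool template the full preseed shown above:

NODENAME="node2.example.com"
NODEIP="192.168.12.35"
sed -e "s/cbsddemo.advancebsd.net/${NODENAME}/g" \
    -e "s/192.168.12.34/${NODEIP}/g" \
    /tmp/cbsd-init.conf > /tmp/cbsd-init-node2.conf
/usr/local/cbsd/sudoexec/initenv inter=0 /tmp/cbsd-init-node2.conf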

What's next?

After doing the initial configuration, we're now ready to do some work. In the next article we're going to create a jail using the nice dialog menus (and do something with it).

(February 2023) Exploring the CBSD virtual environment management framework - part 3: Jails (I)
