Plan 9 is a simple system that is fun to learn. Many things are more complex to describe than they are to use. It is much simpler to ride a bicycle than it is to understand how the mechanical engineering of the gears combines with the laws of physics to make riding it possible! It is easy to confuse different with bad. We all think that we know better than to do that, but it's hard to avoid. Plan 9 was created by several of the most talented and intelligent system architects and programmers the world of computing has ever known, and we strongly recommend you take the time to read some of the papers about the design of the system (located in /sys/doc) and learn to read the manpages carefully.
A namespace is just how you label and organize information. In whatever OS you are used to, the primary namespace you work within is the filesystem saved on your hard drive. Another common namespace is the www* namespace of websites. Both of these systems illustrate how important namespaces are. A badly organized hard drive means you can't find your own files. The designers of Plan 9 decided to put a powerful new namespace paradigm at the heart of the system. There are several important principles you must know.
Learning to work with multiple independent namespaces is like the difference between riding in a train on a set of preset tracks, and flying in an airplane in three dimensions. You can't get lost if you follow the tracks, but you also don't have freedom of where to go.
You've probably heard of this classic UNIX principle. Well, the original UNIX guys saw the clarity of that abstraction become muddied over time. The standard UNIX graphics and networking systems don't quite fit. In Plan 9, all of the system components present a filesystem interface. The network is accessible via a set of filesystems mounted on /net and the GUI is structured as a filesystem where windows are represented as files. If this seems strange, remember that a "filesystem" is really just a very simple model for basic operations of transmitting and receiving data. The basic principles of a filesystem (named files containing data that can be read and written) are infinitely flexible. The specific problem of how a file-driven system interacts with networks leads us to 9p.
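You can poke at this directly from the shell. A small illustration (assuming the standard /net served by the ip device; conversation 0 only exists if a connection is open):
ls /net
cat /net/tcp/0/status
Each tcp conversation is a directory of files like ctl, data, local, remote, and status, all manipulated with ordinary file reads and writes.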
9p is the glue that ties everything together in Plan 9 and allows the system to distribute itself across a network. The major components of a Plan 9 system (and much additional software written for it) all use the 9p protocol.
This is a brief guide to working with a Plan 9 system. If you need a guide to the basics of controlling the interface, check out
the newbie guide [PDF 282 KiB]
The assumption is that you are working in an rc shell window in a Rio session. Much of this information assumes the standard initial setup with Glenda as the user; divergences will be noted. Most very basic shell operations (ls, cd, cat, grep) behave similarly to any UNIX-family OS. Use lc for column-formatted ls.
Your total environment is determined by and composed of the set of namespaces accessible to the processes you control. The command
ns
shows the namespace structure of the process group (probably an rc shell in rio) it was forked from. The namespace is presented in the form of executable commands to construct it, one per line. To view the namespace of a different process, do
ps -a
to list all the running processes on the current cpu. The second column is the process ID (PID) of each process. Make note of the PID of a different process (a low-numbered one works well) and do
ns PID
to view the namespace of the process with that ID. This will most likely be different from the ns of the foreground shell you checked earlier. These bind/mount structures show how file paths invoked by the process will be interpreted by the kernel. In Plan 9, everything is a file, so the working environment of a process is defined by its namespace.
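Namespaces are modified with bind and mount. A standard idiom (it appears in the default profile) is union-mounting a personal script directory onto /bin so the shell finds your own commands:
bind -a /usr/glenda/bin/rc /bin
The -a flag places the new directory after the existing contents of /bin in the union; only the namespace of the current process group changes, not the system as a whole or any other user's view.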
The base install from the Bell Labs CD, and the initial setup of the /9/grid VM node image, is as a "standalone terminal". In the Plan 9 model, terminals are the "endpoints" of the system, where users create their environment from available resources. A standalone terminal uses a local filesystem and processor to support the interface. Any terminal can also be used to connect to other network resources in addition to the local ones. A terminal setup assumes that the local physical hardware is owned by the user who logged in.
A CPU server is a different configuration that can support remote login and multiple simultaneous users along with authentication and possibly associated file/venti service. CPU servers are the heart of the system. Because a CPU server is designed to make all of its capabilities available via the network, a virtualized CPU server is functionally identical to a native install. The importance of CPU servers to the Plan 9 architecture is why the /9/grid node image is preconfigured to act as a CPU server with minimal additional setup.
9p is the filesystem protocol Plan 9 uses for just about everything. It was designed for use over networks, which is why Plan 9 can separate its components at will. The g/script toolkit is designed for indexing and making 9p connections to other grid nodes. Inferno also uses 9p but calls it "styx". The basic Plan 9 command to connect to an external 9p filesystem is srv. An example:
srv tcp!9gridchan.org!7888 registry
mount /srv/registry /n/registry
(you could do the same thing in one command with srv tcp!9gridchan.org!7888 registry /n/registry, which incorporates the mount as the final argument)
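When you are finished with a connection, detach it and remove its /srv entry (the entry name is the one given to srv):
unmount /n/registry
rm /srv/registry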
The srv command connects to a 9p filesystem (in this case identified with a tcp dialstring giving the system name (or IP) and the port to connect to) and creates an entry for it in /srv, which can then be mounted anywhere in the namespace. /n is a conventional directory for attaching network services, and by default a mntgen is running to create mountpoints for services placed there. For serving 9p, the basic method is to use aux/listen1 to run exportfs listening on a given port. For instance:
aux/listen1 tcp!*!5555 /bin/exportfs -R -r /usr/glenda/tmp/public
will start a listener on port 5555 for a read-only export of glenda's tmp/public.
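A client elsewhere on the network could then attach that export, where mycpu is a stand-in for the serving machine's name or IP:
srv tcp!mycpu!5555 public /n/public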
The g/scripts are designed to provide a framework for dynamic, emergent grids by handling attaching and maintaining 9p connections and service registries. They are built on the basic Plan 9 tools and written as rc scripts that use a static filesystem to track and maintain state. They use a few special techniques such as making connections with recover instead of srv and sharing a portion of the namespace at /g/ using srvfs and a plumber to handle namespace modifications.
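srvfs is the standard tool for that kind of namespace sharing: it posts a piece of the current namespace in /srv, where other processes can mount it. A minimal sketch, using the /g share mentioned above:
srvfs g /g
Another process can then attach the same tree with mount /srv/g /n/g.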
Plan 9 modularity and abstraction mean that on-disk data storage is provided by semi-independent modules called Fossil and Venti. Fossil is a file server for Plan 9; Venti is a data block storage server that has broad general utility in addition to its use in Plan 9. It is possible to run Plan 9 purely from fossil, or to use fossil+venti as an interdependent system with automatic archival snapshotting. Venti has powerful functionality but is also resource intensive in comparison to the rest of the Plan 9 components.
Using Venti as a general purpose data/file archival and sharing server is based on the .vac file tools. A file or file tree of any size can be uploaded to a Venti server and stored for retrieval with a tiny piece of data representing the fundamental index block of the data structure. To store:
vac -f OUTPUTFILENAME.vac -h ADDRESS.OF.VENTI FILEORDIR
To "expand" a .vac file by downloading from a venti:
unvac -h ADDRESS.OF.VENTI FILE
To make the contents of a .vac file visible at /n/vac without downloading and saving them locally:
vacfs -h ADDRESS.OF.VENTI FILE
You can save yourself the trouble of typing in the address every time by setting the environment variable venti=ADDRESS.OF.VENTI and then omitting the -h flag. venti.9gridchan.org is a small public venti for collaborative testing and development. The service registry at tcp!9gridchan.org!7888 should have some .vac scores for downloading from it, and the graffiti wall or a 9p fileshare can be used to publish the .vacs for any material you have uploaded.
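For example, in rc (assuming the default venti port, 17034, and a hypothetical directory to archive):
venti=tcp!venti.9gridchan.org!17034
vac -f notes.vac /usr/glenda/tmp/notes
rc variables are exported to the environment automatically, so vac and unvac will pick up $venti without the -h flag.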
The Fossil file server is controlled by connecting to a program called fossilcons. On the machine running fossil you can connect with
con /srv/fscons
to administer the fileserver and disconnect with ctrl-\ then q. To view your fossil configuration, try
fossil/conf /dev/sdC0/fossil
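Once connected to fscons, a couple of common console commands (a sketch; see fossilcons(8) for the full set):
fsys main
snap -a
fsys main selects the main file system, and snap -a takes an archival snapshot into venti.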
Administering Fossil and/or Venti is beyond the scope of this small intro; check man fossil, man fossilcons, man venti, man venti-fmt, and the wiki for information.
Plan 9 expands on the traditional hosts file with the network database system. Most users will only need to deal with the system on the level of adding their machine and commonly used network resources to the /lib/ndb/local file. A CPU server, for instance, needs to at least define itself in a manner similar to the following:
ip=192.168.1.100 sys=macbeth dom=macbeth authdom=macbeth auth=macbeth
Other systems connecting to the CPU server will want to have similar information for it in their /lib/ndb/local as well. To refresh the ndb after editing it, send a message to the connection server:
echo -n refresh >/net/cs
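You can verify that an entry resolves by querying the database directly; using the macbeth entry above:
ndb/query sys macbeth
This prints the full entry matching sys=macbeth.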
Connecting to a CPU server defined in the ndb is as simple as making sure factotum is active and issuing:
cpu -h SYSNAME
and entering correct user and password information for an account on that system.
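cpu can also run a single remote command instead of an interactive shell, e.g. to view the remote namespace:
cpu -h SYSNAME -c ns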
Factotum is the user's agent for passwords/keys. It can store information for many protocols, and can optionally be used with a saved "secstore" of keypairs. You almost always want a factotum service mounted in your namespace. If you don't have a factotum active, start it with
auth/factotum
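Keys are added by writing to factotum's ctl file. For example, a p9sk1 key for the macbeth auth domain (placeholder values, following the format described in factotum(4)):
echo 'key proto=p9sk1 dom=macbeth user=glenda !password=secret' >/mnt/factotum/ctl
The ! prefix marks the password attribute as secret, so it is never displayed when the key is read back.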
The other side of authentication services is provided by auth servers, usually integrated with CPU service. These services will usually be started in the cpurc by using auth/keyfs and starting auth/authsrv listening on port 567, with user/password information stored in a special nvram file or small partition, probably /dev/sdC0/nvram.
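On the auth server itself, user accounts are typically created with auth/changeuser, which prompts for the password and stores the key through keyfs:
auth/changeuser glenda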
9fat:
provides access to a small FAT partition used during bootup by mounting it at /n/9fat. Here you will find the compiled Plan 9 kernels, the 9load bootloader, and the plan9.ini configuration file. Many important system configuration details are set here by default, such as the device and kernel to boot, the graphics mode, the type of mouse, and others.
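A few typical plan9.ini entries (a sketch; the right values depend on your hardware, see plan9.ini(8)):
bootfile=sdC0!9fat!9pcf
mouseport=ps2
monitor=xga
vgasize=1024x768x8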
The /9/grid node uses a bootup menu in plan9.ini to select which kernel is booted. If this selection were made manually, the user would choose the CPU kernel with:
sdC0!9fat!9pccpuf
or the terminal kernel with:
sdC0!9fat!9pcf