💾 Archived View for gemini.complete.org › filespooler captured on 2024-07-09 at 00:59:31. Gemini links have been rewritten to link to archived content


Filespooler

What is Filespooler?

1. Filespooler lets you request the remote execution of programs, including stdin and environment. It can use tools such as S3, Dropbox, Syncthing[1], NNCP[2], ssh, UUCP[3], USB drives, CDs, etc. as transport; basically, a filesystem is the network for Filespooler. Filespooler is particularly suited to distributed and Asynchronous Communication[4].

1: /syncthing/

2: /nncp/

3: /uucp/

4: /asynchronous-communication/

2. Filespooler is a tool in the Unix tradition of "do one thing and do it well." It is designed to integrate nicely with decoders (to handle compressed or Encrypted[5] packets, for instance). It can send and receive packets via pipes. Its on-disk format is simple and is designed to interface well with other tools.

5: /encrypted/

3. Filespooler is *strictly ordered* by default; that is, it executes jobs in the order they were created, even if they arrive out of order. However, it also supports looser operation modes for scenarios such as certain Many-To-One[6] setups.

6: /many-to-one-with-filespooler/
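The strict ordering described above can be illustrated with a short sketch. This is a hypothetical illustration of the concept only, not Filespooler's actual code or on-disk format: jobs carry a creation-time sequence number, and the processor holds early arrivals until every lower-numbered job has run.

```python
# Illustration only: strictly ordered processing of jobs that arrive
# out of order, keyed by a creation-time sequence number. This is a
# hypothetical sketch, not Filespooler's implementation.

def process_in_order(arrivals, start_seq=1):
    """Yield job payloads in sequence order, holding early arrivals."""
    pending = {}                     # seq -> payload, waiting for its turn
    next_seq = start_seq
    for seq, payload in arrivals:
        pending[seq] = payload
        while next_seq in pending:   # release every job whose turn has come
            yield pending.pop(next_seq)
            next_seq += 1

# Jobs created as 1, 2, 3 but arriving as 3, 1, 2 still run as 1, 2, 3.
arrivals = [(3, "job-c"), (1, "job-a"), (2, "job-b")]
print(list(process_in_order(arrivals)))  # ['job-a', 'job-b', 'job-c']
```

The looser modes mentioned above would relax the `while next_seq in pending` condition, e.g. accepting duplicate or gapped sequence numbers.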

4. Filespooler is an example of scalable Small Technology[7]:

7: /old-and-small-technology/

* The file format is lightweight, with less than 100 bytes of overhead in most cases.

* The queue format is lightweight; even a Raspberry Pi[8] could easily process thousands of different queues if needed.

* The main CLI tool, fspl, uses less than 10MB of RAM on x86_64.

8: /raspberry-pi/

5. Filespooler processes packets as streams, and can easily accommodate multi-terabyte payloads.

6. Filespooler is extremely versatile. In addition to the various transports it can easily work with, it can also work with encoders/decoders such as compressors and encryption tools. Basically, if you can pipe stuff to or from it, Filespooler can integrate with it. Thanks to its flexible design (it's the "find of command execution"), Filespooler also supports advanced queue topologies such as One-To-Many[9], Many-To-One[10], Feeding Queues from Other Queues[11], and Parallel Processing[12] -- all with very little effort.

9: /one-to-many-with-filespooler/

10: /many-to-one-with-filespooler/

11: /feeding-filespooler-queues-from-other-queues/

12: /parallel-processing-of-filespooler-queues/

Main Links

13: https://salsa.debian.org/jgoerzen/filespooler/

14: /filespooler-reference/

15: https://crates.io/crates/filespooler

Learning about Filespooler

16: /introduction-to-filespooler/

17: https://salsa.debian.org/jgoerzen/filespooler

18: https://salsa.debian.org/jgoerzen/filespooler/-/blob/main/doc/fspl.1.md

Once installed, learn about using it in different situations.

19: /using-filespooler-over-syncthing/

Installation

The Filespooler Reference[20] discusses installation. You can install it via Rust's cargo with a one-line command, but binaries are also available from the releases[21] page. They are built for these platforms:

20: /filespooler-reference/

21: https://salsa.debian.org/jgoerzen/filespooler/-/releases

22: /raspberry-pi/

23: /installing-debian-backports-on-raspberry-pi/

These binaries are built on the trusted infrastructure maintained by Debian, using the official Rust Docker images and the build logic contained within the Filespooler repo.

Integrations

Transports

These pages are introductions that explain how to use Filespooler with different transports:

24: /using-filespooler-over-syncthing/

25: /using-filespooler-over-nncp/

26: /using-filespooler-over-rclone-and-s3-rsync-net-etc/

27: /guidelines-for-writing-to-filespooler-queues-without-using-filespooler/

Encoders and Decoders

28: /compressing-filespooler-jobs/

Programs to Execute

29: /using-filespooler-for-backups/

30: /gitsync-nncp-over-filespooler/

Security

By default, Filespooler packets are unencrypted and unsigned. But Filespooler is designed to integrate nicely with encryption tools, thanks to the `--decoder` option. Here are some examples.

31: /encrypting-filespooler-jobs-with-gpg/

32: /encrypting-filespooler-jobs-with-age/

33: /verifying-filespooler-job-integrity/
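The decoder idea behind these pages can be sketched in a few lines. This is an illustration of the concept only, not fspl's internals; `read_via_decoder` is a hypothetical helper, and gzip stands in for a real encryption tool such as gpg or age.

```python
# Illustration of the decoder concept: a queue reader pipes each stored
# file through an arbitrary decoder command and only ever sees the
# decoded bytes. Here gzip stands in for an encryption tool; with fspl
# you would name the command via --decoder instead.
import gzip
import subprocess
import tempfile

def read_via_decoder(path, decoder_cmd):
    """Run decoder_cmd with the stored file on stdin; return decoded bytes."""
    with open(path, "rb") as f:
        result = subprocess.run(decoder_cmd, stdin=f,
                                capture_output=True, check=True)
    return result.stdout

# Store an "encoded" job, then read it back through the decoder pipeline.
with tempfile.NamedTemporaryFile(suffix=".gz", delete=False) as tmp:
    tmp.write(gzip.compress(b"echo hello from the job payload"))
    job_path = tmp.name

print(read_via_decoder(job_path, ["gzip", "-dc"]))
# b'echo hello from the job payload'
```

Because the decoder is just a command in a pipeline, the same mechanism covers compression, encryption, and cryptographic verification without Filespooler needing to know anything about the specific tool.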

Management

34: /filespooler-in-cron-and-systemd/

35: /handling-filespooler-command-output/

Tips and Tricks

* Includes a conversation about non-sequence-based queue processing and accepting duplicate sequence numbers

36: /parallel-processing-of-filespooler-queues/

37: /feeding-filespooler-queues-from-other-queues/

38: /one-to-many-with-filespooler/

39: /many-to-one-with-filespooler/

40: /processing-multiple-commands-in-a-single-filespooler-queue/

41: /processing-filespooler-queues-without-filespooler/

42: /filespooler-append-only-queues/

43: /using-filespooler-without-queues-to-pass-more-metadata/

--------------------------------------------------------------------------------

Links to this note

44: /how-gapped-is-your-air/

Sometimes we want better-than-firewall security for things. For instance:

45: /an-asynchronous-rsync-with-dar/

In my writing about dar[46], I recently made the point that dar is a filesystem differ and patcher.

46: /dar/

47: /roundup-of-data-backup-and-archiving-tools/

Here is a comparison of various data backup and archiving tools. For background, see my blog post[48] in which I discuss the difference between backup and archiving. In a nutshell, backups are designed to recover from a disaster that you can fairly rapidly detect. Archives are designed to survive for many years, protecting against disaster not only impacting the original equipment but also the original person that created them. That blog post goes into a lot of detail on what makes a good backup or archiving tool.

48: https://changelog.complete.org/archives/10500-recommendations-for-tools-for-backing-up-and-archiving-to-removable-media

49: /building-an-asynchronous-internet-optional-instant-messaging-system/

I loaded up this title with buzzwords. The basic idea is that IM systems shouldn't have to only use the Internet. Why not let them be carried across LoRa radios, USB sticks, local Wifi networks, and yes, the Internet? I'll first discuss how, and then why.

50: /dead-usb-drives-are-fine-building-a-reliable-sneakernet/

"OK," you're probably thinking. "John, you talk a lot[51] about things like Gopher[52] and personal radios[53], and now you want to talk about building a reliable network out of... *USB drives*?"

51: /the-pc-internet-revolution-in-rural-america/

52: /gopher/

53: /the-joy-of-easy-personal-radio-frs-gmrs-and-motorola-dlr-dtr/

54: /using-filespooler-without-queues-to-pass-more-metadata/

One frustration people sometimes have with ssh or NNCP[55] is that they'd like to pass along a lot of metadata to the receiving end. Both ssh and nncp-exec allow you to pass along command-line parameters, but neither of them permit passing along more than that. What if you have a whole host of data to pass? Maybe a dozen things, some of them optional? It would be very nice if you could pass along the environment.

55: /nncp/

56: /dar/

dar is a Backup[57] and archiving tool. You can think of it as a more modern tar. It supports both streaming and random-access modes, correct incrementals (unlike GNU tar's incremental mode), Encryption[58], various forms of compression, and even integrated rdiff deltas.

57: /backups/

58: /encrypted/

59: /gnupg-gpg/

GnuPG (also known by its command name, gpg) is a tool primarily for public key Encryption[60] and cryptographic authentication.

60: /encrypted/

61: /processing-filespooler-queues-without-filespooler/

All of the Filespooler[62] examples so far have focused on using `fspl queue-process` to process queue items.

62: /filespooler/

63: /guidelines-for-writing-to-filespooler-queues-without-using-filespooler/

Filespooler[64] provides the `fspl queue-write` command to easily add files to a queue. However, the design of Filespooler intentionally makes it easy to add files to the queue by some other command. For instance, Using Filespooler over Syncthing[65] has Syncthing do the final write, the nncp-file (but not the nncp-exec) method in Using Filespooler over NNCP[66] had NNCP do it, and so forth.

64: /filespooler/

65: /using-filespooler-over-syncthing/

66: /using-filespooler-over-nncp/

67: /gitsync-nncp/

gitsync-nncp is a tool for using Asynchronous Communication[68] tools such as NNCP[69] or Filespooler[70], or even (with some more work) Syncthing[71] to synchronize git[72] repositories.

68: /asynchronous-communication/

69: /nncp/

70: /filespooler/

71: /syncthing/

72: /git/

73: /many-to-one-with-filespooler/

Since Filespooler[74] is an ordered queue processor by default, it normally insists on a tight mapping between the sequence numbers in job files and execution order in a queue.

74: /filespooler/

75: /handling-filespooler-command-output/

By default, Filespooler[76] doesn't do anything special with the output from the commands that `fspl queue-process` executes. If they write to stdout or stderr, you'll see this on the controlling terminal or wherever you have piped or redirected it.

76: /filespooler/

77: /filespooler-in-cron-and-systemd/

Filespooler[78] is designed to work well in automated situations, including when started from cron or systemd. It is a fairly standard program in that way. I'll discuss a few thoughts here that may help you architect your system.

78: /filespooler/

79: /gitsync-nncp-over-filespooler/

You can use gitsync-nncp[80] (a tool for Asynchronous[81] syncing of git[82] repositories) atop Filespooler[83]. This page shows how. Please consult the links in this paragraph for background on gitsync-nncp and Filespooler.

80: /gitsync-nncp/

81: /asynchronous-communication/

82: /git/

83: /filespooler/

84: /using-filespooler-for-backups/

Filespooler[85] makes an *excellent* tool for handling Backups[86]. In fact, this was the use case that prompted me to write it in the first place.

85: /filespooler/

86: /backups/

87: /processing-multiple-commands-in-a-single-filespooler-queue/

You'll notice that Filespooler[88]'s `fspl queue-process` command takes a single command. What if you want to permit the sender to select any of several commands to run?

88: /filespooler/

89: /compressing-filespooler-jobs/

Filespooler[90] has a powerful concept called a *decoder*. A decoder is a special command that any Filespooler command that reads a queue needs to use to decode the files within the queue. This concept is a generic one that can support compression, encryption, cryptographic authentication, and so forth.

90: /filespooler/

91: /filespooler-reference/

The reference documentation for Filespooler[92] is here:

92: /filespooler/

93: /introduction-to-filespooler/

It seems that lately I've written several shell implementations of a simple queue that enforces ordered execution of jobs that may arrive out of order. After writing this for the nth time in bash, I decided it was time to do it properly. But first, a word on the *why* of it all.

94: /one-to-many-with-filespooler/

In some cases, you may want to use Filespooler[95] to send the data from one machine to many others. An example of this could be using gitsync-nncp over Filespooler[96] where you would like to propagate the changes to many computers.

95: /filespooler/

96: /gitsync-nncp-over-filespooler/

97: /feeding-filespooler-queues-from-other-queues/

Sometimes with Filespooler[98], you may wish for your queue processing to effectively re-queue your jobs into other queues. Examples may be:

98: /filespooler/

99: /parallel-processing-of-filespooler-queues/

Filespooler[100] is designed around careful sequential processing of jobs. It doesn't have native support for parallel processing; those tasks may be best left to the queue managers that specialize in them. However, there are some strategies you can consider to achieve something of this effect even in Filespooler.

100: /filespooler/

101: /verifying-filespooler-job-integrity/

Sometimes, one wants to verify the integrity and authenticity of a Filespooler[102] job file before processing it.

102: /filespooler/

103: /encrypting-filespooler-jobs-with-age/

Like the process described in Encrypting Filespooler Jobs with GPG[104], Filespooler[105] can handle packets Encrypted[106] with Age (Encryption)[107]. Age may be easier than GnuPG in a number of cases, particularly because it can use a person's existing SSH keypairs for encryption.

104: /encrypting-filespooler-jobs-with-gpg/

105: /filespooler/

106: /encrypted/

107: /age-encryption/

108: /encrypting-filespooler-jobs-with-gpg/

Thanks to Filespooler[109]'s support for decoders, data for Filespooler can be Encrypted[110] at rest and only decrypted when Filespooler needs to scan or process a queue.

109: /filespooler/

110: /encrypted/

111: /using-filespooler-over-nncp/

NNCP[112] is a powerful tool for building Asynchronous Communication[113] networks. It features end-to-end Encryption[114] as well as all sorts of other features; see my NNCP Concepts[115] page for some more ideas.

112: /nncp/

113: /asynchronous-communication/

114: /encrypted/

115: /nncp-concepts/

116: /using-filespooler-over-syncthing/

Filespooler[117] is a way to execute commands in strict order on a remote machine, and its communication method is by files. This is a perfect mix for Syncthing[118] (and others, but this page is about Filespooler and Syncthing).

117: /filespooler/

118: /syncthing/

119: /syncthing/

Syncthing is a serverless, peer-to-peer file synchronization tool. It is often compared to Dropbox. However, unlike Dropbox, there is no central server with Syncthing; your devices talk directly to each other to sync data. Syncthing has various effective methods for firewall traversal, including public relays for the worst case. All Syncthing traffic is fully encrypted and authenticated.

120: /old-and-small-technology/

Old technology is any tech that's, well... old.

121: /john-goerzen-s-software/

This page gives you references to software by John Goerzen[122].

122: /john-goerzen/

123: /the-pc-internet-revolution-in-rural-america/

Inspired by several others (such as Alex Schroeder's post[124] and Szczeżuja's prompt[125]), as well as a desire to get this down for my kids, I figure it's time to write a bit about living through the PC and Internet revolution where I did: outside a tiny town in rural Kansas. And, as I've been back in that same area for the past 15 years, I reflect some on the challenges that continue to play out.

124: https://alexschroeder.ch/wiki/2021-11-14_The_early_years_on_the_net

125: https://mastodon.online/@szczezuja/108902027541781265


(c) 2022-2024 John Goerzen