To recap setbuf(3), there are three forms of stream buffering available in the standard library on unix: block, line, and unbuffered. Standard error is unbuffered. Standard output is line buffered if the output is to a terminal, or block buffered otherwise. Buffering is most apparent when expected output does not appear, usually because one has switched from writing to the terminal to a file or pipe, which changes the buffering from line-based to block-based.
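In C the mode can be picked explicitly with setvbuf(3), which must happen before the first use of the stream. A minimal sketch, not from the original article:

```
/* sketch: choosing a buffering mode with setvbuf(3); the call must
 * precede the first read or write on the stream */
#include <stdio.h>

int
main(void)
{
	/* line buffered, as if stdout were a terminal; _IONBF gives
	 * unbuffered, _IOFBF gives block (full) buffering */
	if (setvbuf(stdout, NULL, _IOLBF, 0) != 0)
		return 1;
	printf("hi\n");		/* the newline flushes under _IOLBF */
	return 0;
}
```

With nothing adjusting the defaults, the missing-output symptom looks like this: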
```
$ perl -E 'say "hi"; sleep 7'
hi
^C
$ perl -E 'say "hi"; sleep 7' > out &
[1] 11686
$ cat out
$ cat out
$ sleep 10 ; cat out
hi
[1] + Done       perl -E "say \"hi\"; sleep 7" > out
$ rm out
```
Where was the output? Stuck in a buffer that was flushed only when the program exited. Some programs have flags to line buffer or unbuffer their output (tcpdump -l); programming language support for what setbuf(3) and fflush(3) provide varies from complete (Tcl) to utterly lacking (the shell). Other options include unbuffer(1) or stdbuf(1). Buffered output risks being lost should the buffer not be flushed--so why not unbuffer everything?
First, though, a look at lost output. Causes of lost output vary from the simple
```
$ python2 -c 'import time;print "ugh";time.sleep(99)' >out &
[1] 6604
$ kill -9 $!
$ cat out
[1] + Killed     python2 -c "import time;print \"ugh\";time.slee
$
```
to issues more subtle
```
$ ruby -e 'puts "hi"; exec "echo hi"'
hi
hi
$ ruby -e 'puts "hi"; exec "echo hi"' | grep hi
hi
$ ruby -e 'puts "hi"; STDOUT.flush; exec "echo hi"' | grep hi
hi
hi
```
The first command uses line-based buffering, as the output is to the terminal. Data is lost in the second command because the block-based buffer feeding the pipe to grep is not flushed prior to the exec. The third command avoids the data loss. Buffering issues can also occur across fork(2), where both the parent and child share the same file descriptors.
This program appears to be correct when output is to the terminal, but duplicates data under block-based buffering:
```
$ make buffork
cc -O2 -pipe -o buffork buffork.c
$ ./buffork
buffered in 29001
after in 29001
after in 50790
$ ./buffork > out ; cat out
buffered in 56870
after in 56870
buffered in 56870
after in 374
$ rm out
```
As with the Ruby code, an fflush(3) prior to the fork avoids this issue. Another design would be for the parent to emit nothing to standard output, or to emit only after the child processes have all been forked. Or to unbuffer standard output.
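buffork.c itself is not shown here; a hypothetical reconstruction that produces the same sort of output might look like:

```
/* sketch of a buffork.c-like program: output written to a block
 * buffered stdout before fork(2) is duplicated, because the child
 * inherits a copy of the still unflushed stdio buffer */
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	printf("buffered in %d\n", (int) getpid());
	/* an fflush(stdout) here would prevent the duplication */
	if (fork() == -1)
		return 1;
	printf("after in %d\n", (int) getpid());
	return 0;
}
```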
A single process can create the same error should buffered and unbuffered output calls be mixed,
```
$ make mixbuf
cc -O2 -pipe -o mixbuf mixbuf.c
$ ./mixbuf
duck
duck
goose
$ ./mixbuf > out ; cat out
goose
duck
duck
$ rm out
```
The puts(3) call buffers while write(2) does not. Avoid this by not mixing buffered and unbuffered calls (hint: FILE * calls buffer, int fd ones do not).
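mixbuf.c is likewise not shown; a hypothetical version consistent with the output above:

```
/* sketch of a mixbuf.c-like program: puts(3) goes through the
 * stdio buffer while write(2) goes straight to the descriptor, so
 * the output order depends on when the stdio buffer is flushed */
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	puts("duck");				/* buffered by stdio */
	puts("duck");				/* buffered by stdio */
	write(STDOUT_FILENO, "goose\n", 6);	/* bypasses the buffer */
	return 0;
}
```

To a terminal each puts flushes at the newline, so the ducks come first; to a file or pipe the ducks wait in the buffer until exit, so goose wins.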
With an input file containing shell commands,
```
$ cat input
echo a
echo bb
echo ccc
$ wc -c input
24 input
```
A process tracing tool may reveal differences in how shells read input:
```
$ ktrace -id sh < input > /dev/null
$ kdump | fgrep -2 echo
  8712 sh       CALL  read(0,0x8d1b6af858,0x200)
  8712 sh       GIO   fd 0 read 24 bytes
       "echo a
        echo bb
        echo ccc
       "
  8712 sh       RET   read 24/0x18
--
  8712 sh       CALL  read(0,0x8d1b6af858,0x200)
  8712 sh       GIO   fd 0 read 17 bytes
       "echo bb
        echo ccc
       "
  8712 sh       RET   read 17/0x11
--
  8712 sh       CALL  read(0,0x8d1b6af858,0x200)
  8712 sh       GIO   fd 0 read 9 bytes
       "echo ccc
       "
  8712 sh       RET   read 9
```
Note how the read(2) calls start at the beginning of the file and thereafter at the beginning of each subsequent line: the shell reads a chunk, runs what it needs, then seeks back to just past the line it consumed. Now compare the reads when the input is unseekable:
```
$ perl -E 'say "echo $_" for qw(a bb ccc)' | ktrace -id sh >/dev/null
$ kdump | perl -ne '/fd 0 read/ && do { chomp;$n=readline;print "$_ $n" }'
 54769 sh       GIO   fd 0 read 1 bytes "e"
 54769 sh       GIO   fd 0 read 1 bytes "c"
 54769 sh       GIO   fd 0 read 1 bytes "h"
 54769 sh       GIO   fd 0 read 1 bytes "o"
 54769 sh       GIO   fd 0 read 1 bytes " "
 54769 sh       GIO   fd 0 read 1 bytes "a"
 54769 sh       GIO   fd 0 read 1 bytes "
 54769 sh       GIO   fd 0 read 1 bytes "e"
 54769 sh       GIO   fd 0 read 1 bytes "c"
 54769 sh       GIO   fd 0 read 1 bytes "h"
 54769 sh       GIO   fd 0 read 1 bytes "o"
 54769 sh       GIO   fd 0 read 1 bytes " "
 54769 sh       GIO   fd 0 read 1 bytes "b"
 54769 sh       GIO   fd 0 read 1 bytes "b"
...
```
Here sh instead chugs along making one-byte reads from standard input; seeking is not possible on a pipe. Unlike with a seekable file, there is no way for the shell to efficiently read a chunk and then seek back to the beginning of the next line, so it must stop exactly at each newline lest it consume input that belongs to the commands it runs.
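A rough sketch of the difference (hypothetical, and not how any particular shell is actually written): the read_line function below grabs a chunk and lseek(2)s back when the descriptor is seekable, and falls back to one-byte reads when lseek(2) fails, as it does with ESPIPE on a pipe.

```
/* sketch: read one line from fd without consuming anything past it */
#include <string.h>
#include <unistd.h>

ssize_t
read_line(int fd, char *buf, size_t len)
{
	if (lseek(fd, 0, SEEK_CUR) != -1) {	/* seekable */
		ssize_t n = read(fd, buf, len);
		if (n <= 0)
			return n;
		char *nl = memchr(buf, '\n', n);
		if (nl != NULL) {
			ssize_t used = nl - buf + 1;
			/* put back whatever follows the newline */
			lseek(fd, used - n, SEEK_CUR);
			return used;
		}
		return n;
	}
	/* unseekable (pipe, socket): one byte at a time */
	size_t i = 0;
	while (i < len) {
		ssize_t n = read(fd, buf + i, 1);
		if (n <= 0)
			return i > 0 ? (ssize_t) i : n;
		if (buf[i++] == '\n')
			break;
	}
	return (ssize_t) i;
}
```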
stdbuf(1) advertises the ability to "run a command, with modified buffering operations for its standard streams." Sounds great! Language X or program Y lacks options to adjust the buffering, and stdbuf claims that it can. So what's not to like? And how is a parent process even able to influence how a subsequent child process handles buffering? Let's peek at the stdbuf source code...
```
...
static void
set_LD_PRELOAD (void)
{
...
```
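The gist of the trick, as a minimal sketch and not the actual stdbuf code: a tiny shared object runs a constructor in the child before main and adjusts the buffering there, and LD_PRELOAD is what injects it.

```
/* sketch: unbuffer.c - runs in the target process before main()
 * and disables stdout buffering; build as a shared object and
 * inject it with LD_PRELOAD */
#include <stdio.h>

__attribute__((constructor))
static void
unbuffer(void)
{
	setvbuf(stdout, NULL, _IONBF, 0);
}
```

Something along the lines of `cc -shared -fPIC -o unbuffer.so unbuffer.c` and then `LD_PRELOAD=./unbuffer.so some-program` would apply it, at least where the dynamic linker honors LD_PRELOAD and the program actually uses stdio.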
Support for this method may well depend on your feelings about monkey patching programs with new code that introduces new behavior. And there can be problems regardless: processes that maintain their own buffers will not be influenced by this, LD_PRELOAD hopefully will not be allowed anywhere sudo is involved, and hopefully the changed behavior will not run afoul of any security policies...
Another method is to use a pty, as unbuffer(1) and similar tools do. This is perhaps less bad than LD_PRELOAD but comes with its own complexity. Opinions will vary here.
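A minimal sketch of the pty approach, assuming forkpty(3) (found in <util.h> on OpenBSD, <pty.h> plus -lutil on Linux): the child sees a terminal on standard output and so falls back to line buffering, while the parent relays whatever comes out of the master side.

```
/* sketch: run a command with its output attached to a pty so that
 * stdio in the command line buffers; copy the output to our stdout */
#include <err.h>
#include <stdio.h>
#include <unistd.h>
#include <util.h>	/* forkpty(3); <pty.h> on Linux */

int
main(int argc, char *argv[])
{
	int master;
	char buf[4096];
	ssize_t n;

	if (argc < 2)
		errx(1, "usage: unbuffer command [args ...]");
	pid_t pid = forkpty(&master, NULL, NULL, NULL);
	if (pid == -1)
		err(1, "forkpty");
	if (pid == 0) {			/* child: stdio is on the pty */
		execvp(argv[1], argv + 1);
		err(1, "execvp");
	}
	while ((n = read(master, buf, sizeof buf)) > 0)
		if (write(STDOUT_FILENO, buf, n) == -1)
			err(1, "write");
	return 0;
}
```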
Anyways, the question "why not unbuffer everything?" was asked above, so we probably should answer that.
Inefficient code is quite easy to write; we need only remove buffering and operate byte by byte, printing 9,999 "a" characters to standard output. Assembly is used to ensure minimal bloat--who knows what all the standard library will drag in. This is for OpenBSD running on AMD64, a downside of tossing out all the portability (and bloat) that comes with C.
```
$ nasm -f elf64 maxcalls.txt -o maxcalls.o
$ ld.bfd -m elf_x86_64_obsd -nopie maxcalls.o -o maxcalls
$ ./maxcalls | wc -c
9999
```
And the same output, only using a minimum of syscalls.
```
$ ./mincalls | wc -c
9999
```
These two programs emit the same output, only maxcalls does it a great deal more slowly, as the delta time in the third column of the kdump output shows.
```
$ ktrace ./maxcalls >/dev/null
$ kdump -TR | sed -n '$p'
 89999 maxcalls 0.126733 CALL  exit(0)
$ ktrace ./mincalls >/dev/null
$ kdump -TR | sed -n '$p'
 58247 mincalls 0.000241 CALL  exit(0)
```
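The assembly is not reproduced here; in C the two strategies amount to something like the following sketch (hypothetical, compile with or without -DMAXCALLS), where the only difference is whether write(2) is called once per byte or once in total.

```
/* sketch: emit 9999 "a" bytes with either one write(2) per byte
 * (maxcalls-style) or a single write(2) (mincalls-style) */
#include <unistd.h>

#define COUNT 9999

int
main(void)
{
	static char buf[COUNT];

	for (int i = 0; i < COUNT; i++)
		buf[i] = 'a';
#ifdef MAXCALLS
	for (int i = 0; i < COUNT; i++)
		write(STDOUT_FILENO, buf + i, 1);	/* one syscall per byte */
#else
	write(STDOUT_FILENO, buf, COUNT);		/* one syscall total */
#endif
	return 0;
}
```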
Buffering avoids this inefficiency. These examples certainly cherry-pick the best and worst conditions for buffering; as a practical matter unbuffered output may not be terrible provided the amount of data (say, a log record or a JSON blob) is not too small, especially where one aims to minimize data loss should a program or system fail. Hence standard error being unbuffered by default: debugging would not be assisted by an error message lost in a buffer.
tags #unix