💾 Archived View for thrig.me › blog › 2024 › 01 › 06 › clidev.gmi captured on 2024-05-26 at 15:12:41. Gemini links have been rewritten to link to archived content


CLI Dev

gemini://lark.gay/posts/cli-opinions.gmi

https://clig.dev/

This guide doesn’t cover full-screen terminal programs like emacs and vim. Full-screen programs are niche projects—very few of us will ever be in the position to design one.

There's a gap in your toolkit if it never occurs to you to write a full-screen terminal program. This may depend on the definition: if you define one to be something chonky like vim or emacs, then yes, most programmers will probably never design one. If instead the definition is any program that takes over the terminal, then such programs are much easier to write, and there are good reasons to write them for various needs.

For example, one might write a calculator (maybe an enhanced dc(1) that shows more context), an XKCD comics metadata search program, or a wrapper around `grep -rl ...` that lets you review and fiddle with the list of files, then invoke some other program on the results.

curses widgets (menus, lists, etc) for shell scripts

Probably there are other problems where a full-screen terminal program is a good fit, especially if you haven't been asking "is a full-screen terminal program a good option here?" Others may instead reach for Tk or some other GUI toolkit, which has different benefits and drawbacks. (Godot sounds like it is popular these days, but I mostly stick with the old stuff.)

A core tenet of the original UNIX philosophy is the idea that small, simple programs with clean interfaces can be combined to build larger systems.

This can result in slow performance as all the forks and interpretations get worked through again and again and again. For quick prototyping or discovery, sure; but if it becomes something you run a lot, it may be worth making the pipeline a bit larger or more efficient. For example, I found myself writing

    ... | sort | uniq -c | sort -nr

a lot, and replaced that pipeline with tally(1), which is about 158% faster. Faster still might have been to move the logs into a database of some sort instead of poking at them with grep and whatnot, but that line can be hard to draw. (Maybe when you have a team large enough that a log aggregation and query service, and maintaining all that, makes sense.)

xkcd://1319

Another example is "how do I get the IP address of an interface?", which on Linux might involve `ip --json ... | jq ...`. That involves quite a lot of code and more forks than a small getifaddrs(3) wrapper.

    $ localaddr wg0
    192.0.2.42
    $ cloc ~/src/scripts/network/localaddr.c
           1 text file.
           1 unique file.                              
           0 files ignored.

    github.com/AlDanial/cloc v 1.93  T=0.06 s (17.5 files/s, 2356.3 lines/s)
    -------------------------------------------------------------------------------
    Language                     files          blank        comment           code
    -------------------------------------------------------------------------------
    C                                1             16              6            113
    -------------------------------------------------------------------------------
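The core of such a wrapper might look something like the following. This is a sketch only, not the actual localaddr.c: the function name, the IPv4-only behavior, and first-match-wins are all assumptions, and the real program additionally needs a main() to parse the interface argument.

```c
#include <arpa/inet.h>
#include <ifaddrs.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Copy the first IPv4 address of the named interface into buf.
 * Returns 0 on success, -1 if getifaddrs(3) fails or the
 * interface has no IPv4 address. */
int localaddr(const char *ifname, char *buf, size_t buflen)
{
    struct ifaddrs *ifap, *ifa;
    int rc = -1;
    if (getifaddrs(&ifap) == -1)
        return -1;
    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL)
            continue;
        if (ifa->ifa_addr->sa_family != AF_INET)
            continue;
        if (strcmp(ifa->ifa_name, ifname) != 0)
            continue;
        struct sockaddr_in *sin = (struct sockaddr_in *) ifa->ifa_addr;
        if (inet_ntop(AF_INET, &sin->sin_addr, buf, buflen) != NULL)
            rc = 0;
        break;
    }
    freeifaddrs(ifap);
    return rc;
}
```

No forks, no JSON parsing: the kernel already has the answer, and getifaddrs(3) hands it over directly.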