I've been running Linux in one form or another since some wacky 2.0 kernel version. I think that was around Slackware 3 something. One of the things that has come along with me, ever evolving, is my .profile. I don't think I've ever gone through and talked about its various parts and the pain that created them.
Before we start, it is important to remember that this is shell agnostic-ish. I run Debian Linux (bash 5.0), OpenBSD (ksh) and macOS (bash 3.2), and this .profile has to run on them all. That said, unlike most of my shell scripts it isn't strictly POSIX because it doesn't have to run on dash(1).
```
# mernisse's standard bash/ksh .profile
#
# John Crichton: [on a suicide mission] How come I'm not afraid?
# Ka D'Argo: Fear accompanies the possibility of death. Calm shepherds
#     its certainty.
# John Crichton: I love hanging with you, man.
#
```
Like all good shell scripts, why not start with a quote? I still remember this moment in Farscape and it still gives me chills.
```
set -o vi
```
Back in 2001 or so I was a sysadmin at a Fortune 500 and worked on big Sun boxes running Solaris 2.6. I had not yet learned vi, and one of the other admins snuck this into the global /etc/profile to screw with me. The funny thing is that once I learned it I never went back, and it is flat-out jarring when a shell doesn't respond to vi-style inputs.
```
# do this early. Otherwise bad things might happen
export HISTFILESIZE=131072
export HISTSIZE=131072
export HISTCONTROL=ignoredups
```
At my last job we had all of our admin home directories sitting on a NetApp NFS server in our main data center in NY, but the servers were spread out across North America. It is amazing what little weirdness you can run into with a large .profile, several hundred milliseconds of network latency, and a large number of shells thanks to screen[1]. We would randomly get corrupted history files about once every two weeks until we hoisted this nice and early.
```
# Set base path, will add more later.
export PATH=/usr/local/bin:/usr/local/sbin:/bin:/usr/bin:/sbin:/usr/sbin

# Mostly because I like funny sudo(8) and screen(1) messages.
export NETHACKOPTIONS=gender:male,horsename:Trigger,dogname:Cujo,catname:Fluffy,number_pad,pickup_types:$,color,role:Monk,race:Human,time,showexp,pickup_burden:Unburdened,hilite_pet,DECgraphics

_ostype="$(uname)"
```
It has been too long since I played nethack...
```
# __autoupdater - check the sha256 hash of the local .profile and the one
# available in the git repository and update if they differ.
__autoupdater() {
	local _newsum
	local _shaurl="https://ssl.ub3rgeek.net/profile.sha256"
	local _profurl="https://ssl.ub3rgeek.net/git/?p=misc.git;a=blob_plain;f=profile;hb=HEAD"

	_newsum="$(fetch_contents $_shaurl)"
	if [ -z "$_newsum" ]; then
		return
	fi

	if ! check_hash "$HOME/.profile" "$_newsum"; then
		echo "Automatically updating .profile"
		fetch_contents "$_profurl" "$HOME/.profile.new"
		if [ "$?" -gt 0 ]; then
			echo "failed."
			echo
			return
		fi

		if [ -s "$HOME/.profile.new" ] && "${0#-}" \
		    -n "$HOME/.profile.new"; then
			mv -f "$HOME/.profile.new" "$HOME/.profile"
			. "$HOME/.profile"
		fi
		echo
	fi
}
```
Here we start to get into some fun. Yep, my dotfiles have an auto-updater built right into my .profile. For the longest time I tried to keep everything on NFS so I didn't have to worry about such things, but once that became essentially impossible (macOS sucks for NFS home directories) I decided to implement this. It also got me to learn git hooks a bit, since I needed a checksum file to be regenerated whenever the profile changed.
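For flavor, a post-commit hook along these lines would keep that checksum file current. This is just a sketch under assumed paths (PROFILE_SRC and OUT_DIR are made up), not the actual hook from my repository.

```
#!/bin/sh
# Hypothetical post-commit hook: regenerate the published sha256 of the
# profile whenever it changes so __autoupdater has something to compare.
# Paths here are examples only.
PROFILE_SRC="profile"
OUT_DIR="/var/www/ssl.ub3rgeek.net"

# Hash the committed copy of the file and publish the digest.
git show "HEAD:${PROFILE_SRC}" | sha256sum | awk '{ print $1 }' \
	> "${OUT_DIR}/profile.sha256"
```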
```
# __emit_remote_hostname - Using the Xterm escape codes, set the window
# title to the hostname of the remote host (determined by looking to see
# if I am in a screen(1) or tmux(1) session).
__emit_remote_hostname() {
	if is_tmux_session; then
		return
	fi

	case $TERM in
		screen*|tmux*)
			echo -ne "\033k${HOSTNAME%%.*}\033\\"
			;;
		*rxvt*|xterm*)
			echo -ne "\033]0;${HOSTNAME%%.*}\007"
			;;
		*)
			return
			;;
	esac
}
```
This used to be much more of a thing. At my last job I was managing lots of systems all over the place and knowing which one I was on was super important. This just emits the appropriate escape codes for the various terminal emulators I was using.
```
__get_inetaddr() {
	local _af="inet"
	local _defgw
	local _dest="default"
	local _ipaddr

	if [ -n "$1" ]; then
		_dest="$1"
		case $_dest in
			[0-9]{,3}\.[0-9]{,3}\.[0-9]{,3}\.[0-9]{,3})
				: v4 addr
				;;
			[a-f0-9]{,4}:*)
				: v6 addr
				_af="inet6"
				;;
			*)
				echo "__get_inetaddr() dest arg must be an IP"
				return 1
				;;
		esac
	fi

	# Viscosity on OS X seems to add a 0/1 route instead of a default
	# route. Not 100% sure why this is. Try to detect it.
	case $_ostype in
		Darwin|OpenBSD)
			if [ "$_dest" = "default" ]; then
				_defgw="$(route -n get -$_af 0/1 2> /dev/null |\
					awk '/interface:/ { print $2 }')"
			fi

			if [ -z "$_defgw" ]; then
				_defgw="$(route -n get -$_af $_dest \
					2> /dev/null |\
					awk '/interface:/ { print $2 }')"
			fi
			;;
		Linux)
			if [ "$_dest" = "default" ]; then
				# Sometimes on newer Linuxes (Debian Jessie
				# for example) you might have more than one
				# default route.
				# So get the Metric and use the lowest one.
				_defgw="$(route -n -A $_af | awk \
					'BEGIN { METRIC=65536 }
					/^0\.0\.0\.0/ {
						if (METRIC > $6) {
							METRIC=$6
							IFACE=$8
						}
					}
					END { print IFACE }')"
			else
				_defgw="$(ip -f $_af route get $_dest | \
					awk '{ print $7 }')"
			fi
			;;
	esac

	if [ -n "$_defgw" ]; then
		# using a shellvar in an awk re, there be quoting
		# dragons here, boys.
		_ipaddr="$(ifconfig ${_defgw} | awk \
			'/'"${_af}"' / { print $2 }')"
		_ipaddr="${_ipaddr##*:}"

		if [ -n "$_ipaddr" ]; then
			echo "$_ipaddr"
		fi
	fi
}
```
This grew over the years. I need to know what IP address I am likely using on the system so I can set up a back channel. I don't use this much anymore, but it used to let me copy files to, and open applications on, whatever machine I was sitting at. For example, if someone sent me an e-mail with a PDF in it, my mail reader (mutt) was set up to call a wrapper script that would scp the attachment back to wherever I was coming from and then use ssh to call another wrapper to open the file in the appropriate app (basically a script that selected between xdg-open, gnome-open and macOS' open). I was very proud of this back in the day and it worked really well.
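A minimal sketch of that kind of attachment wrapper might look like the following. The file names and the remote "open-file" helper are hypothetical stand-ins, since the real scripts are not part of this .profile; it only assumes the dropfile written by set_display() below.

```
#!/bin/sh
# Hypothetical mutt attachment handler: copy the attachment back to the
# machine I am physically sitting at and ask it to open the file.
# "open-file" is an assumed remote helper name, for illustration only.
ATTACHMENT="$1"
DROPFILE="$HOME/.ssh_remote_host_addr"

if [ ! -f "$DROPFILE" ]; then
	echo "no remote display host recorded" 1>&2
	exit 1
fi

REMOTE="$(cat "$DROPFILE")"
scp -q "$ATTACHMENT" "${REMOTE}:/tmp/" && \
	ssh "$REMOTE" open-file "/tmp/$(basename "$ATTACHMENT")"
```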
```
# __fixup_hostname - Some shells don't set $HOSTNAME, also add $DOMAINNAME
# and $SHORTNAME for use elsewhere.
__fixup_hostname() {
	local __hostname="$(hostname)"

	if [ "$__hostname" = "${__hostname%%.*}" ]; then
		# Not everyone has hostname -f, but usually
		# if they don't return FQDN, they do.
		__hostname="$(hostname -f)"
	fi

	export SHORTNAME="${__hostname%%.*}"
	export DOMAINNAME="${__hostname#*.}"

	if [ -z "$HOSTNAME" ]; then
		export HOSTNAME="$__hostname"
	fi
}

# __set_histfile - if we have a NFS homedir, set the histfile
# to a per-machine-type histfile so we have less random histfile
# clobbering due to multiple sessions. This came from lots of pain
# at Frontier.
__set_histfile() {
	local __fstype

	# Determine fstype of $HOME -- There needs to be a more portable
	# way of doing this.
	case $_ostype in
		Linux)
			__fstype="$(stat -fc "%T" $HOME)"
			;;
		*)
			return
			;;
	esac

	if [ -n "$__fstype" ] && [ "$__fstype" = "nfs" ]; then
		_shortname="$(echo $SHORTNAME | tr -d [:digit:])"
		export HISTFILE="$HOME/.$_shortname-history"
	else
		export HISTFILE="$HOME/.bash_history"
	fi
}
```
More history file stuff; see the block at the top. This was to fight the same gremlins. We named our systems [role][##].city.state.domain.tld, so if I was connecting to one of the mail servers (say mx12) it would store the shell history in a file specific to that server role (mx). It also helped with finding that command you ran a week ago in a 100k-line history.
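For example, stripping the digits out of the short hostname is what collapses every box in a role down to a single history file (mx12 is just the example name from above):

```
# mx12.city.state.domain.tld -> SHORTNAME=mx12 -> .mx-history
SHORTNAME="mx12"
echo ".$(echo $SHORTNAME | tr -d '[:digit:]')-history"
# prints: .mx-history
```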
```
# add_smartcard - Add the PIV key to the current ssh-agent if available.
# Requires a opensc compatible smartcard and associated libs and binaries.
add_smartcard() {
	# Set to a string in the opensc-tool(1) -l output for your card.
	local _card_name="Yubikey"

	# set to the installed location of the opensc libraries.
	# on OSX with HomeBrew this is /usr/local/lib
	local _lib_dir="/usr/local/lib"

	if ! quiet_which opensc-tool; then
		return
	fi

	if [ -z "$SSH_AUTH_SOCK" ]; then
		return
	fi

	if opensc-tool -l | grep -q "$_card_name"; then
		if ssh-add -l | grep -q opensc-pkcs11; then
			return
		fi

		ssh-add -s "$_lib_dir/opensc-pkcs11.so"
		return
	fi

	# If card is no longer present, remove the key.
	if ssh-add -l | grep -q opensc-pkcs11; then
		ssh-add -e "$_lib_dir/opensc-pkcs11.so" > /dev/null
	fi
}
```
This is some plumbing to let me use my Yubikey as an SSH authentication token. Mostly it exists to work around some wackiness with macOS.
```
# add_to_path - Add directories to $PATH if they exist.
add_to_path() {
	if [ "$#" -eq 0 ]; then
		echo "usage add_to_path dir [dir] .. [dir]"
		return 127
	fi

	while [ -n "$1" ]; do
		if [ -d "$1" ]; then
			export PATH="$PATH:$1"
		fi
		shift
	done
}
```
At one point I noticed that my PATH variable was getting filled with duplicates and was out of order on some systems. It turns out that I was just doing the usual PATH=$PATH:blahblah all over the show, and alongside the auto-updater (which re-sources the .profile upon update) some real fun was happening. I was also doing a lot of conditional adds based on OS and hostname, so instead I decided that I'd just always call add_to_path and have it be smart enough to not add a directory that doesn't exist.
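It would also be easy to have it skip directories that are already on the PATH. This is just a sketch of that idea (with the usage check dropped for brevity), not what the function above actually does:

```
# Sketch: only append a directory if it exists and is not already in $PATH.
add_to_path() {
	while [ -n "$1" ]; do
		case ":$PATH:" in
			*":$1:"*)
				;;	# already present, skip it
			*)
				if [ -d "$1" ]; then
					export PATH="$PATH:$1"
				fi
				;;
		esac
		shift
	done
}
```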
```
# ssh to a host as the backdoor user.
backdoor() {
	local remote_host

	if [ -z "$1" ]; then
		echo "Usage: backdoor hostname"
		return 1
	fi

	remote_host="$1"
	shift

	if [ ! -f "$HOME/.ssh/backdoor-ssh-key" ]; then
		echo "Could not find private key file!"
		return 2
	fi

	ssh -i "$HOME/.ssh/backdoor-ssh-key" backdoor@${remote_host} $@
}
```
Things you do not do often should be made as easy as possible, because there is no way you are going to remember them. This is one of those things. I use LDAP to store account information, and in the rare case that I need to get into a system that has lost access to LDAP I have a local user with sudo(8) access that I can fall back on.
```
# check_hash - Compare a local file to a SHA-2 256 hash.
check_hash() {
	if [ -z "$1" ] || [ -z "$2" ]; then
		echo "usage: check_hash file hash"
		return 127
	fi

	local _sum
	local _file="$1"
	local _hash="$2"

	if [ ! -e "$_file" ]; then
		echo "check_hash: $_file does not exist or is unreadable."
		return 2
	fi

	if quiet_which shasum; then
		_sum="$(shasum -a 256 $_file | awk '{ print $1 }')"
	elif quiet_which sha256sum; then
		_sum="$(sha256sum $_file | awk '{ print $1 }')"
	elif quiet_which sha256; then
		_sum="$(sha256 $_file | awk '{ print $4 }')"
	else
		echo "check_hash: could not find suitable checksum program."
		return 2
	fi

	if [ "$_hash" = "$_sum" ]; then
		return 0
	fi

	return 1
}

# check_host_alive - ping(1) or ping6(1) a host and determine if it
# is alive. This can add up to 2 seconds of delay in execution per
# call.
check_host_alive() {
	local _pingopts="-q -w 1 -c 1"

	if [ -z "$1" ]; then
		echo "usage: check_host_alive host"
		return 127
	fi

	case $_ostype in
		Darwin)
			_pingopts="-q -t 1 -c 1"
			;;
	esac

	# try ping(1) before ping6(1)
	if ping $_pingopts "$1" > /dev/null 2>&1; then
		return 0
	fi

	if ping6 $_pingopts "$1" > /dev/null 2>&1; then
		return 0
	fi

	return 1
}
```
These are part of the auto updater. Always make sure you check both IPv4 and IPv6 if you are testing for connectivity *to* a dual-homed host. You never know when you will end up on some poorly configured network.
```
# check_tmux_sessions - Emit information about tmux sessions on a host.
check_tmux_sessions() {
	if is_tmux_session; then
		return
	fi

	if ! quiet_which tmux; then
		return
	fi

	local _sessions
	local _total=0
	local _attached=0

	_sessions="$(tmux list-sessions 2> /dev/null)"
	if [ -z "$_sessions" ]; then
		return
	fi

	_total="$(echo "$_sessions" | wc -l)"
	_attached="$(echo "$_sessions" | grep -c 'attached')"

	if [ -z "$_attached" ] || [ -z "$_total" ]; then
		return
	fi

	if [ "$_attached" -eq "$_total" ]; then
		return
	fi

	echo "Found $(( $_total - $_attached )) detached tmux sessions (of $_total)"
}
```
I have a bad habit of leaving tmux sessions (it used to be screen, but I switched to tmux ages ago) all over the place. This lets me know so I can attach to an existing one instead of making a new one.
```
# dotfiles - manage my dotfiles.
dotfiles() {
	# dotfiles to pull in format src:dst <whitespace>
	local _dotfiles="
		ssh_config:.ssh/config
		tmux.conf:.tmux.conf
		gitconfig:.gitconfig
		githelpers:.githelpers
	"
	local _gitpath="git/?p=dotfiles.git;a=blob_plain;hb=HEAD;f="
	local _newsum
	local _url="https://ssl.ub3rgeek.net"
	local fn
	local src

	for entry in $_dotfiles; do
		fn="${entry##*:}"
		src="${entry%%:*}"

		_newsum="$(fetch_contents ${_url}/${src}.sha256)"
		if [ -z "$_newsum" ]; then
			continue
		fi

		if ! check_hash "$HOME/$fn" "$_newsum"; then
			echo "Updating $HOME/$fn"
			fetch_contents "${_url}/${_gitpath}${src}" "$HOME/$fn"
		fi
	done
}

# fetch_contents - Emit the contents of a url to stdout, or to a file.
fetch_contents() {
	if [ -z "$1" ]; then
		echo "usage fetch_contents url [file]"
	fi

	local _fetchcmd
	local _file="$2"
	local _ret
	local _url="$1"

	if [ -n "$_file" ] && [ -e "$_file" ]; then
		mv -- "$_file" "$_file.old"
	fi

	if quiet_which curl; then
		if [ -n "$_file" ]; then
			curl --fail --silent --location --output "$_file" \
				"$_url"
		else
			curl --fail --silent --location "$_url"
		fi
	elif quiet_which wget; then
		if [ -n "$_file" ]; then
			wget --quiet --output-document="$_file" "$_url"
		else
			wget --quiet -O - "$_url"
		fi
	else
		return 2
	fi

	_ret="$?"
	if [ -n "$_file" ] && [ "$_ret" -ne 0 ] && [ -e "$_file.old" ]; then
		mv -- "$_file.old" "$_file"
	fi

	return "$_ret"
}
```
More of the auto-updater.
```
# git_branch - Emit branch info for PS1.
git_branch() {
	if ! quiet_which git; then
		return
	fi

	git status > /dev/null 2>&1
	if [ $? = 128 ]; then
		return
	fi

	branch="$(git branch | awk '/^\*/ { print $2 }')"
	if [ "$branch" = "master" ]; then
		return
	fi

	echo -ne "$branch "
}

# git_clean - Emit colors for PS1 if I am in a gitdir.
git_clean() {
	local branch=""
	local clean=""

	if ! quiet_which git; then
		echo -ne "\033[m"
		return
	fi

	git status > /dev/null 2>&1
	if [ $? = 128 ]; then
		echo -ne "\033[m"
		return
	fi

	git status | grep -qE 'working (directory|tree) clean'
	if [ $? -gt 0 ]; then
		clean="\033[1;31m"
	else
		clean="\033[1;32m"
	fi

	echo -ne "$clean"
}
```
These are run as part of my prompt; they check to see if I am in a git working copy and grab some information if I am. More on that below.
```
# Test to see if we are in a screen(1) or tmux(1) session.
is_tmux_session() {
	if [ -n "$STY" ] || [ -n "$TMUX" ]; then
		return 0
	fi

	return 1
}

# is_vm_host - test to see if this is a VM host.
is_vm_host() {
	case $SHORTNAME in
		"gypsum"|"tardis"|"virt"*)
			return 0
			;;
	esac

	return 1
}

# lab_prompt - emit color for PS1 if this is a lab system.
lab_prompt() {
	if [ -f /etc/testing ]; then
		echo -ne '\033[1;35m'
	fi
}
```
These get used later.
```
# megacli - Wrapper for megacli(1) to suppress logging.
megacli() {
	if ! quiet_which megacli; then
		return
	fi

	$(which megacli) "$@" -NoLog
}
```
Again, if you use something rarely, try not to have to remember its incantations. In this case megacli more or less loves leaving log files in whatever $PWD you were in when you ran it, and there is no need for that.
```
# prompt_magic - Emit the pretty dynamic shit on my prompt.
prompt_magic() {
	if ! is_tmux_session; then
		echo "$(vol_size)$(vm_count)"
	fi
}
```
Starting to build up the pieces for my prompt.
```
# quiet_which - Wrapper for which(1) that does not emit anything to stdout.
quiet_which() {
	if [ -z "$1" ]; then
		return 1
	fi

	which "$1" >/dev/null 2>&1
	return $?
}
```
Anything you do a million times should be a function.
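If I were writing quiet_which today it could probably lean on the shell built-in instead of forking which(1); a sketch, assuming command -v behaves the same on every shell I care about (it is POSIX):

```
# Sketch: the same test without spawning which(1).
quiet_which() {
	if [ -z "$1" ]; then
		return 1
	fi

	command -v "$1" > /dev/null 2>&1
}
```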
```
# set_display - Use the variable set by ssh_wrapper to manage the
# dropfile used by my mutt utils for remote display access.
set_display() {
	if [ -n "$X_REMOTE_HOST" ]; then
		echo "$X_REMOTE_HOST" > ~/.ssh_remote_host_addr
	else
		if ! is_tmux_session && \
		    [ -f ~/.ssh_remote_host_addr ]; then
			rm -- ~/.ssh_remote_host_addr
		fi
	fi
}

# ssh_wrapper - Wrapper for ssh(1) that tries to set the X_REMOTE_HOST
# environment variable to the IP address of the local host so that things
# running on the remote can determine where I am coming from (for other
# integration with things like X11 and mutt(1)).
ssh_wrapper() {
	local _ipaddr
	local _ssh_bin

	if ! quiet_which ssh; then
		return
	fi

	_ssh_bin="$(which ssh)"

	if [ -z "$X_REMOTE_HOST" ]; then
		_ipaddr=$(__get_inetaddr)
		if [ -n "$_ipaddr" ]; then
			export X_REMOTE_HOST="$_ipaddr"
		fi
	fi

	if [ -e "$HOME/.ssh/${SHORTNAME}-config" ]; then
		$_ssh_bin -Y -F "$HOME/.ssh/${SHORTNAME}-config" "$@"
	else
		$_ssh_bin "$@"
	fi

	if is_tmux_session; then
		tmux set-window-option automatic-rename on >/dev/null
	fi
}
```
These are the big parts of the remote display backchannel stuff. ssh_wrapper gets aliased to ssh later on, so it gets called whenever I type ssh <something> and can set the X_REMOTE_HOST environment variable. It also supports per-host ssh configuration files, which used to be a lot more important when I had to routinely connect to ancient systems. Once I have connected to a system with ssh_wrapper, the copy of this .profile on the remote end calls set_display to convert the environment variable into a dropfile. The dropfile is needed so that 1) all shells I am running on the system can see the IP address and 2) the value is updated with my latest location. Imagine I am connected to a system from a work laptop and then connect to the same system from home. I want the backchannel stuff to open up on the system that I'm actually on, not the one that I was previously on, so each new login overwrites the value from the previous ones.
```
# Innanet radio shortcuts
radio() {
	local _playcmd="mplayer -vo none -playlist"

	case $_ostype in
		Darwin)
			if [ -f /Applications/VLC.app/Contents/MacOS/VLC ]; then
				_playcmd="/Applications/VLC.app/Contents/MacOS/VLC"
			else
				_playcmd="open"
			fi
			;;
	esac

	if [ -z "$1" ]; then
		echo "Usage: radio channel"
		return 127
	fi

	case "$1" in
		# soma.fm
		"defcon")
			$_playcmd "http://somafm.com/defcon64.pls"
			;;
		"metal")
			$_playcmd "http://somafm.com/metal64.pls"
			;;
		"trance")
			$_playcmd "http://somafm.com/thetrip64.pls"
			;;
		"police")
			$_playcmd "http://somafm.com/sf103364.pls"
			;;
		"missionctl")
			$_playcmd "http://somafm.com/missioncontrol64.pls"
			;;
		"space")
			$_playcmd "http://somafm.com/spacestation64.pls"
			;;
		"doomed")
			$_playcmd "http://somafm.com/doomed64.pls"
			;;
		# Mostly Elite: Dangerous stuff...
		"lave")
			$_playcmd "http://stream.laveradio.com:8421/stream"
			;;
		"sidewinder")
			$_playcmd "http://radiosidewinder.out.airtime.pro:8000/radiosidewinder_b"
			;;
		"eds")
			$_playcmd "http://streaming.radionomy.com/EDSRadio"
			;;
		"orbital")
			$_playcmd "http://50.7.71.219:7594/"
			;;
		"bluemars")
			$_playcmd "http://streams.echoesofbluemars.org:8000/bluemars"
			;;
		"cryosleep")
			$_playcmd "http://streams.echoesofbluemars.org:8000/cryosleep"
			;;
		"trucker")
			$_playcmd "http://bluford.torontocast.com:8447/hq"
			;;
		"galnet")
			$_playcmd "http://streaming.radionomy.com/galnet"
			;;
		"list")
			cat <<-EOM
			Supported Streams:
			defcon     - somafm Defcon Radio
			metal      - somafm Metal
			trance     - somafm Trance Trip
			police     - somafm SFPD Feed
			missionctl - somafm Mission Control
			space      - somafm Space Station
			doomed     - somafm Doomed
			lave       - Lave Radio
			sidewinder - Radio Sidewinder
			eds        - Elite Dangerous Station
			orbital    - orbital.fm
			trucker    - Hutton Orbital Radio
			galnet     - GalNet Radio
			EOM
			;;
		*)
			echo "$1 is not supported, add it or try again."
			;;
	esac
}
```
Pretty simple, open an Internet radio stream.
```
# vm_capacity, from jwm
export LIBVIRT_DEFAULT_URI='qemu:///system'
vm_capacity() {
	if ! quiet_which virsh; then
		return
	fi

	_get_system_memory() {
		sed -n '
			/^MemTotal:[[:space:]]*/ {
				s/^MemTotal:[[:space:]]*\([0-9]\{1,\}\)[[:space:]]*[kK][bB][[:space:]]*$/\1/
				p
			}
		' /proc/meminfo
	}

	capacity_help() {
		cat - <<-EOF
		capacity [-ah]

		Display information about this hypervisor's memory and disk
		space capacity for VMs.

		-a	Display capacity information for all VMs, not just
			running ones.
		-h	emit this detailed help information
		EOF
	}

	local funcname=capacity
	local usage="$funcname(): usage: $funcname [-ahm]"
	local all_vms=0
	local old_ifs

	OPTIND=1
	while getopts :ahm arg; do
		case $arg in
		a)
			all_vms=1
			;;
		h)
			capacity_help
			return 0
			;;
		*)
			echo 1>&2 "$usage"
			return 2
			;;
		esac
	done

	if [ $OPTIND -gt 1 ]; then
		shift $(( $OPTIND - 1 ))
	fi

	if [ $# -gt 0 ]; then
		warn EINVAL "$funcname(): extra arguments: '$@'"
		echo 1>&2 "$usage"
		return 2
	fi

	if [ $all_vms -eq 1 ]; then
		awk='$3 == "running" || $3 " " $4 == "shut off" {print $2}'
	else
		awk='$3 == "running" {print $2}'
	fi

	allocd_memory=0
	total_du=0
	total_stat=0

	for domain in $(virsh -c qemu:///system list --all | awk "$awk"); do
		# Skip the header.
		if [ "$domain" = Name ]; then
			continue
		fi

		dom_memory=$(virsh -c qemu:///system dominfo "$domain" | sed -n '
			/Used memory:/ {
				s/.*Used memory:[[:space:]]*\([0-9]\{1,\}\)[[:space:]]*KiB[[:space:]]*$/\1/
				p
			}
		')
		if [ -z "$dom_memory" ]; then
			echo 1>&2 "$funcname(): Unable to determine amount of allocated memory for VM $domain."
			return 1
		fi

		dom_memory=$(($dom_memory / 1024))
		allocd_memory=$(($allocd_memory + $dom_memory))

		disk_images=$(virsh domblklist "$domain" | sed 1,2d | awk '{print $2}')

		old_ifs=$IFS
		IFS='
'
		for image in $disk_images; do
			image_du=$(du -k "$image" | awk '{print $1}')
			if [ -z "$image_du" ]; then
				echo 1>&2 "$funcname(): Unable to determine du(1) size of disk image $image for VM $domain."
				return 1
			fi
			total_du=$(($total_du + ($image_du / 1024)))

			image_stat=$(stat -c %s "$image")
			if [ -z "$image_stat" ]; then
				echo 1>&2 "$funcname(): Unable to determine stat(1) size of disk image $image for VM $domain."
				return 1
			fi
			total_stat=$(($total_stat + ($image_stat / 1024)))
		done
		IFS=$old_ifs
	done

	# Add half a gigabyte so we round up. It's better than
	# piping to bc(1) for floating point math. :-/
	allocd_memory=$((($allocd_memory + 512) / 1024))g
	total_du=$((total_du / 1024))g
	total_stat=$((total_stat / 1024 / 1024))g

	if ! phys_mem=$(_get_system_memory); then
		echo 1>&2 "$funcname(): Unable to determine size of system memory."
		return 1
	fi
	phys_mem=$(($phys_mem / 1024 / 1024))g

	echo "Memory: $allocd_memory of $phys_mem."
	echo "Disk: $total_du used of $total_stat allocated."
}

# vm_count - Emit the count of running and shutdown VMs.
vm_count() {
	if ! is_vm_host; then
		return
	fi

	running="$(virsh -c qemu:///system list | grep -c 'running')"
	stopped="$(virsh -c qemu:///system list --all | grep -c 'shut off')"

	echo -e "$running/$stopped "
}

# vmls - List all registered VMs.
vmls() {
	if ! is_vm_host; then
		return
	fi

	virsh -c qemu:///system list --all
}
```
Some commands used to manage the KVM-based VMs that I have.
```
# vol_size - Emit the freespace in /vol for PS1
vol_size() {
	if [ ! "$SHORTNAME" = "apollo" ]; then
		return
	fi

	for line in $(df -h /vol/media | awk '{ print $4 }'); do
		if [ "$line" = "Avail" ]; then
			continue
		fi

		echo -e "$line "
	done
}
```
This gets used in my prompt as you will see.
```
if [ $RANDOM -ge 16384 ]; then
	if check_host_alive ssl.ub3rgeek.net; then
		__autoupdater
		dotfiles
	fi
fi
```
This is where execution really starts. If I recall correctly $RANDOM is a number between 0 and 32767, so this is essentially a coin flip to decide whether to run the updater. If I can reach my git repository then we call the two parts of the update process: __autoupdater() updates .profile itself, including reloading it, and dotfiles() manages the config files, which do not need to be reloaded the way .profile does.
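A throwaway check (not part of the .profile) that shows the split is roughly even:

```
# Roughly half of the draws should be >= 16384.
hits=0
i=0
while [ $i -lt 1000 ]; do
	if [ $RANDOM -ge 16384 ]; then
		hits=$((hits + 1))
	fi
	i=$((i + 1))
done
echo "$hits of 1000 draws would have run the updater"
```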
```
__fixup_hostname
__set_histfile
```
Again, gotta get that history file stuff handled fast, otherwise it is more likely that you will end up with a truncated or empty history file.
```
export DEBFULLNAME="Matthew Ernisse"
export DEBEMAIL="mernisse@ub3rgeek.net"
export DEBSIGN_KEYID="4AE6BF32"
export GIT_AUTHOR_NAME="Matthew Ernisse"
export GIT_AUTHOR_EMAIL="matt@going-flying.com"
```
Set up identity information for various tools, git and Debian package development specifically.
```
export PS1="\[\033[0m\$(lab_prompt)\]\h\[\033[0m\]@\t \
\$(prompt_magic)\
\[\033[0;36m\]\$(git_branch)\
\[\$(git_clean)\]\W\[\033[m\] >"
```
My magical prompt. This could likely be an entire glog entry of its own. The amount of blood and tears shed to get this thing to work reasonably well most of the time is voluminous. The big thing to remember is that any time you use non-printing characters (the ISO 6429, née ANSI, color codes for example) you need to wrap them in '\[ \]', or else your shell will count them when calculating the cursor position. That means that readline support on your shell command line will be broken in very odd ways. Wildly frustrating. Also be sure you understand escaping here when calling functions as part of this.
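A stripped-down illustration of the wrapping rule, with none of my prompt machinery in it:

```
# The color escapes print nothing visible, so tell readline to ignore
# them by wrapping each escape sequence in \[ and \].
export PS1="\[\033[1;32m\]\u@\h\[\033[0m\] \W > "
```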
```
alias ssh="ssh_wrapper"
alias git_remote="git remote show origin"
alias lvirsh="virsh -c qemu:///system"
```
Shell aliases. Again, infrequently used commands get their arguments automated, and finally ssh_wrapper gets hooked up as described above.
```
case $TERM in
	urxvt*256color)
		export TERM="xterm-256color"
		;;
	urxvt*)
		export TERM="xterm-color"
		;;
esac
```
Some terminals set TERM to values that not all systems understand, so reset it. This was a much bigger problem when I was using Linux as a desktop OS.
```
check_tmux_sessions
#add_smartcard
set_display

add_to_path "$HOME/.bin" "$HOME/bin" "$HOME/stuff/scripts"

# homebrew puts bash completion things here.
if [ -d "/usr/local/etc/bash_completion.d" ]; then
	for file in $(find "/usr/local/etc/bash_completion.d" -type f); do
		. "$file"
	done
	unset file
fi
```
Start calling things that I talked about above, and make sure any homebrew packages get their completion scripts loaded.
```
# new OpenBSD uses doas instead of sudo.
if [ "$_ostype" == "OpenBSD" ] && quiet_which doas; then
	sudo() {
		doas $*
	}

	sudoedit() {
		doas vi $*
	}
fi

if ! quiet_which sudoedit; then
	sudoedit() {
		sudo -e $*
	}
fi
```
Muscle memory is golden. Protect it.
```
# Silence the zsh garbage. I have zero desire to change shells.
if [ "$_ostype" == "Darwin" ]; then
	export BASH_SILENCE_DEPRECATION_WARNING=1
fi
```
Self explanatory.
```
if ! is_tmux_session; then
	if quiet_which uptime; then
		uptime
	fi

	if quiet_which fortune; then
		echo
		fortune -s
		echo
	fi
fi
```
Only do these if we aren't running inside of tmux or screen.
```
# profile.d stuff for local customizations.
if [ -d "$HOME/.profile.d/" ]; then
	for fragment in $(find "$HOME/.profile.d/" -type f); do
		. "$fragment"
	done
	unset fragment
fi
```
At one point this file got close to 100KB in size and most of it was crap wrapped in a giant case $HOSTNAME in ... esac statement, so I decided to split things out into per-host fragments and source them. The downside is that they don't get auto-updated, but the upside is that it has kept the file down to a reasonable size.
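A fragment is just a chunk of shell that gets sourced. Something like this hypothetical one (the path, alias and host-specific bits are made up for illustration; only SHORTNAME and add_to_path come from the profile above):

```
# ~/.profile.d/apollo.sh - hypothetical per-host fragment, sourced at login.
if [ "$SHORTNAME" = "apollo" ]; then
	add_to_path "/opt/media-tools/bin"
	alias serve='python3 -m http.server 8080'
fi
```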
```
export PROMPT_COMMAND="__emit_remote_hostname"
```
Finally, emit the current hostname just before printing the prompt. This ended up being late in the file because if something earlier bombed, it would crap all over the title of my window.
Hopefully that was useful, if not interesting. It's been a while since I dug into this thing, and it was fun reliving some of the memories. Sadly the git repository only goes back to 2013 when I built the auto-updater stuff, so I don't have commit messages before that, but I have to say that the ones I do have get rather. . . colorful.
🚀 © MMXX matt@going-flying.com