💾 Archived View for mizik.eu › feed.xml captured on 2022-03-01 at 15:34:06.
<?xml version="1.0" encoding="UTF-8"?> <feed xmlns="http://www.w3.org/2005/Atom" > <generator uri="https://mizik.eu/picogen" version="0.1">Picogen</generator> <link href="gemini://mizik.eu/feed.xml" rel="self" type="application/atom+xml" /> <link href="gemini://mizik.eu/" rel="alternate" type="text/plain" /> <updated>2021-11-21T20:56:36.315313+01:00</updated> <id>gemini://mizik.eu/blog/feed.xml</id> <title>Blog</title> <entry> <title>Does my setup suck less than few years ago?</title> <link href="gemini://mizik.eu/blog/does-my-setup-suck-less-than-few-years-ago/" rel="alternate" type="text/plain" title="Does my setup suck less than few years ago?" /> <published>2021-11-21T00:00:00+01:00</published> <updated>2021-11-21T00:00:00+01:00</updated> <id>gemini://mizik.eu/blog/does-my-setup-suck-less-than-few-years-ago/</id> <content><![CDATA[I was an Xfce user for a very long time, but I have been running suckless software for several years now. Let's find out whether it has been good for me, and whether it might be good for you too. ### Xfce Back at university, I was sporting a custom alpha build of Compiz/Beryl. Mostly for fun, and of course to see everybody go nuts when I rotated my 3D cube to switch between desktops :D Later on, the workload grew and I had to optimize for performance rather than for showing off. I switched to Xfce and stayed for a very long time. As time went by, I noticed I was continuously removing things: the wallpaper, transparency, window decorations, the side panel, the login screen. People joked that soon there would be nothing left to show, but I felt better. This went hand in hand with moving more of my work into the terminal. One day I realized I could use any WM/DM, because I no longer relied on anything desktop-specific. I tried the i3 tiling WM several times and always went back. 
To be honest, I just didn't need the tiling concept: 90% of the time I have only one window in fullscreen, and what I really wanted was to bring a specific app to the foreground without cycling through alt+tab. I ended up creating shell wrappers for all the apps I use on a daily basis and bound a keyboard shortcut to each one. A wrapper looks, for example, like this:
wmctrl -xa Firefox || firefox-bin &
Basically it checks whether a window of the given app (in this case Firefox) already exists; if yes, it brings it to the foreground, if not, it starts a fresh instance. I very rarely run multiple instances of an app at the same time, so this setup was great for me. I made wrappers for 20+ apps this way and could switch to whatever I wanted instantly. ### slock Around that time I was working long shifts and sometimes needed a focus break. Once I decided to restyle my lock screen into some "h4x0r" mode, where you don't see yourself typing. I tried to style the default lock screen, but then found => https://tools.suckless.org/slock/ slock . A lock screen app written in pure C, some 300 lines long, made by a community called suckless. I instantly fell in love with it, but the break was already too long, so I didn't dive into the suckless world immediately. ### dmenu The only thing I missed every time I left Xfce for something more lightweight was the settings app: the easy way to switch monitors (I gave many presentations in those days), font sizes, plug-and-play devices and so on. So I decided to create a DM/WM-independent set of scripts for everything I used from the Xfce settings. The same pattern kept repeating: you have several choices you frequently use, and you want to pick between them. So the script should show the options and let the user choose, preferably keyboard-only and in a non-intrusive way. I found => https://tools.suckless.org/dmenu/ dmenu . Exactly what I wanted. I went through the man pages and realized it is a suckless app again, and that it is the default app launcher for dwm, the suckless window manager. You can guess what I did :) But first I finished my scripts. For example, this one switches monitors:
choices="LAPTOP\nHDMI"
chosen=$(echo -e "$choices" | dmenu -i -p 'SWITCH DISPLAY TO: ')
case "$chosen" in
LAPTOP) xrandr --output VIRTUAL1 --off --output eDP1 --primary --mode 2560x1440 --pos 0x0 --rotate normal --output DP1 --off --output HDMI2 --off --output HDMI1 --off --output DP2 --off ;;
HDMI) xrandr --output VIRTUAL1 --off --output eDP1 --off --output DP2 --off --output HDMI2 --off --output HDMI1 --primary --preferred --pos 0x0 --rotate normal ;;
esac
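The same pattern covers almost any settings task. As another illustration in the same spirit (a hypothetical example; the layouts and the dmenu prompt are mine, not from the original scripts), here is a keyboard layout switcher:

```shell
# Pick a keyboard layout via dmenu and apply it with setxkbmap.
choices="US\nSK\nDE"
chosen=$(echo -e "$choices" | dmenu -i -p 'KEYBOARD LAYOUT: ')
case "$chosen" in
	US) setxkbmap us ;;
	SK) setxkbmap sk ;;
	DE) setxkbmap de ;;
esac
```

Bind it to a keyboard shortcut like the others and the whole "settings app" becomes a handful of tiny scripts.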
Then I set dmenu as my default app launcher in Xfce and created a new to-do item: "try dwm". ### dwm It took a year or two until I marked that to-do item as "complete", mostly because => https://dwm.suckless.org/ dwm was not something I needed very much. I also didn't like the fact that it has no config file by default; you are supposed to change the configuration in config.h itself. This applies to all suckless apps, by the way, but in the case of something like a window manager, you really start to notice. So I tried it several times, but always went back at the end of the day. The real commitment to switch grew out of my constant drive to minimize. What I wanted was to be able to run my setup (OS+WM+apps) on any hardware: an old machine I could buy for little or no money, or possibly an open-source one like the => https://mntre.com MNT Reform . You cannot expect high-performance specs from such devices, but I also didn't want to compromise usability and speed. So I switched, read the complete man pages, went through all the available patches and applied the ones that provided the functionality I needed. I had everything I wanted from a WM in one weekend. It takes 11MB of RAM right now, and that's not even a fresh start; the uptime on this laptop is 46 days. After some months I also realized I don't really need to change the config at all. I have been running dwm for several years now, and I think I changed the config two or three times after it stabilized at the beginning. ### st, tabbed, scroll The default terminal emulator in dwm is => https://st.suckless.org/ st . It stands for Simple Terminal, and simple it is for sure: no support for scrolling, the Del and Backspace keys don't do what you would expect, lines in TUI apps are not continuous, no tabs, no context menu. You can imagine. I tried hard with this one. 
Finally, I settled on => https://github.com/lukesmithxyz/st Luke Smith's fork of st , which by default has scrolling support, command output copying and URL launching. I only changed his color preferences back to the default st colors, the default font size and cursor shape, and fixed the Delete key. Everything in config.h, of course. Besides that, I like it very much. The memory footprint is again ridiculously small, as you can imagine. By the way, the default ways to add scrolling support to st are either the suckless => https://tools.suckless.org/scroll/ scroll utility, which is experimental and abandoned, or the => https://st.suckless.org/patches/scrollback/ scrollback patch , which is a bit bloated and often doesn't play well with other patches. Last but not least, if you want tab support, you can use the => https://tools.suckless.org/tabbed/ tabbed utility from suckless. It is a very simple window wrapper providing general tab functionality for any app, not only st. ### surf The suckless web browser is the last one I use. I have been experimenting with => https://surf.suckless.org/ surf for a month or so, still trying to aggregate the minimal amount of functionality I need to switch over. I already have tab support through => https://tools.suckless.org/tabbed/ tabbed , => https://github.com/StevenBlack/hosts static adblocking through /etc/hosts, and inverted colors using custom CSS added to ~/.surf/styles/default.css. Keyboard shortcuts are vim-like by default. The last thing I need is tagging links with characters, so I can follow links with the keyboard only. The browser is WebKit-based, so some pages may not work, but it is definitely fine for standard web browsing. You will still probably need some mainstream browser as a backup. ### Summary My base system with a running X server, WM and all my daily apps open (web browser, terminal, RSS reader, music player, instant messenger, to-do app, email client and file manager) takes only 800 MB of RAM, give or take. 
And that's mind-blowing in today's world, where even low-end laptops need to ship with 8GB of RAM, because otherwise they won't be usable running Windows or Ubuntu with a mainstream set of GUI apps. I like my current setup very much, but 10 years ago it wouldn't have been for me yet, just as it is not for most people: too many compilations, patches, code changes and tinkering. But once set up, it is rock steady, fast, lightweight and never gets in your way.]]></content> <author> <name>Marián Mižik</name> </author> <category term="Linux, Minimalism" /> <summary> I was an Xfce user for a very long time, but I have been running suckless software for several years now. Let's find out whether it has been good for me, and whether it might be good for you too.</summary> </entry> <entry> <title>Two decades since first project delivery</title> <link href="gemini://mizik.eu/blog/two-decades-since-first-project-delivery/" rel="alternate" type="text/plain" title="Two decades since first project delivery" /> <published>2021-10-26T00:00:00+02:00</published> <updated>2021-10-26T00:00:00+02:00</updated> <id>gemini://mizik.eu/blog/two-decades-since-first-project-delivery/</id> <content><![CDATA[It was the 10th of October 2001 when I delivered my first paid order. I was a teenager in high school, and the day didn't feel very special to me. I was happy about the pocket money, but much more importantly, on the same day I started dating my first girlfriend ever, and I fell completely in love :). Surprisingly, that relationship lasted 5 years, and it almost looked like it would be my only one, but years passed, rivers flowed, and at the end of 2021 I am equally happy about both of these events. The first made me a developer and the second made me a stronger person. But this article is first and foremost a tribute to the most important people of those two decades. ### Genesis Computers were love at first sight for me. We couldn't afford one, but I got a book called "ABC about PC" and read it maybe 3-4 times. 
It explained basic architecture, the von Neumann model, PC history, and PC components and their histories. Fun fact: the author of the book was a professor at the university I later attended, and I even had him for some classes. But back to the story... I was around 12 years old, and my regular dialogues with my parents went like this... ME: I really want a PC, mom. MOM: You know we can't afford it now. ME: But mom, it doesn't have to be a new one. I'm OK with some older model, it doesn't even need to be a Pentium. A 486 would cut it. I was even promised a nice price on a DX2 at the shop downtown. It has a shitty unstable VLB bus, but I'm OK with that. MOM: No, son... So I spent my pocket money in internet cafes, where instead of browsing, I brought a floppy disk, loaded my work and continued where I had left off last time. Finally, after some months, I built my first static website. And the next year, when the first good local free web hosting provider emerged, I uploaded it and started my online presence, which I later enhanced with functionality backed by PHP. Altogether, I landed the huge number of 3 programming jobs during high school: web pages for 2 local companies and one NGO. A few months after I started my university studies, I got a permanent programming job, and it has never stopped since. ### People Apart from the book, I was mostly inspired by people. So let's tell the same story from another perspective: through relationships with programmer friends. #### Vincent The first one is Vincent, a classmate with an old computer and older brothers. Their home was the first place where I saw raw HTML and what comes of it when rendered in a browser. I must have been a very annoying visitor, always asking, always begging for more, and of course I also wanted to play some games. So no one could blame them that I wasn't invited that often :) But those couple of experiences were enough to spark my interest. I borrowed a book about HTML from a library. 
I read it in 2 days, then read it a couple more times over the next weeks, and after some months I even understood it completely. The result of this first encounter was the website I wrote about in the previous chapter. As I became more proficient in programming, it also became easier to talk with Winnie and exchange ideas and knowledge; it was no longer one-sided. Later on, we were both very enthusiastic about creating animations and games in Adobe Flash and its ActionScript, but then our paths split: I went deeper into programming and he went deeper into graphics and design. He is now a senior UX guy, and I am a code guy. #### George S. George was also a high school classmate of mine. When it came to more advanced programming, he was more eager to go down the rabbit hole than Vincent. We both started playing with PHP to bring some "magic" to our static websites, and later he introduced me to Java. George wasn't particularly helpful as a person who would join you in solving your problems; most of the time he would let you sweat blood even when he knew how to help. What he was great at, though, was opening new doors and describing what he saw behind them with utmost enthusiasm. So after getting somewhat proficient in PHP, I bought 2 Java books and started right away. One more reason why Java was such an eye-opener for me was that it was my 4th programming language, after JavaScript, ActionScript and PHP. I therefore started to grasp general programming concepts, best practices and design patterns without actually knowing their names or what they mean in a broader context. What also greatly helped was that around this time I finally had my brand-new PC: an AMD Duron 900MHz beast with 256MB RAM and a 30GB HDD :) #### Vladimir The next important person in my IT life was my university classmate Vlado. We later became roommates and then flatmates, and we were almost 30 when our paths finally split toward different cities and places. 
Ten years with one person almost every day creates a special kind of family-like bond. We coded a lot, and we built a lot together. We also crippled some services, machines and jobs along the way. Well, you can't make an omelette without breaking eggs. And god, it was fun. Learning by trying, together with someone else, mostly with no deadlines and no responsibility. We boldly went where we had never been before, again and again. That was also the reason we didn't finish some of the jobs we took. But lesson learned, and later I knew when to say yes to a new opportunity. We still work together from time to time, though exclusively on Linux administration stuff. #### Martin Z. Martin was the first person I met who actually understood programming on a fundamental level. It was natural for him to write nice, structured, best-practice code. He was my tech lead for 4 years at one company and briefly a colleague at another. Most of the senior-level programming knowledge I have today came directly or indirectly from him: from face-to-face lessons, from the code reviews he used to do for me, from me studying his code, and, later in our relationship, also from some suboptimal technical and human decisions he made in his struggle to keep the code base in the best possible shape. To this day, he would be my number one pick if I were building a coding dream team :). #### George M. A bright mind of another generation. He came fast, stayed briefly and left too soon for both of us :) We spent 2 years working together, then he left for a bigger world, but we stayed in regular contact. I tried to share all of my knowledge with him, and many times it wasn't even IT-related. He was able to grasp concepts like no one else I have ever known. Over the 4 years we have known each other, he has managed to maybe quadruple his skills. 
Although I don't think the "big world" made him happier as a person, it is always a pleasure to have a drink with him when he is around. It may look like our relationship was strongly one-sided, but that's not true. The energy he brought pushed me hard to refresh my tech stack, habits and tools, and to reconsider new ideas. He came into my life at the right time, and his departure left me struggling for quite a while to regain significant work drive. Definitely a top-3 dream team choice. #### Martin H. Last, but certainly not least, is my current colleague Martin. When he came to the job interview with me, he was still in high school. Personality-wise he is completely different from me, but he still strongly reminded me of myself at his age. It has always been a pleasure to work with him. He finished his master's degree a few years ago already, so a long time has passed. We worked together on many big projects as lead developers, and I enjoyed them, even in hard times, mostly thanks to his attitude, knowledge and great ideas. Over the years he became a strong contender for the No. 1 spot on my dream team. Who knows whether the previous Martin Z. holds the number one spot only for melancholic reasons... ### Summary So here we are, at the end of a nostalgic journey. Thanks a lot to all of you guys. It was, and still is, a pleasure. Who knows what the next 20 years will bring. I personally hope for some good stuff :)]]></content> <author> <name>Marián Mižik</name> </author> <category term="Personal" /> <summary> It was the 10th of October 2001 when I delivered my first paid order. I was a teenager in high school, and the day didn't feel very special to me. I was happy about the pocket money, but much more importantly, on the same day I started dating my first girlfriend ever, and I fell completely in love :). 
Surprisingly, that relationship lasted 5 years, and it almost looked like it would be my only one, but years passed, rivers flowed, and at the end of 2021 I am equally happy about both of these events. The first made me a developer and the second made me a stronger person. But this article is first and foremost a tribute to the most important people of those two decades.</summary> </entry> <entry> <title>Strong vs Weak data linking</title> <link href="gemini://mizik.eu/blog/strong-vs-weak-data-linking/" rel="alternate" type="text/plain" title="Strong vs Weak data linking" /> <published>2021-09-18T00:00:00+02:00</published> <updated>2021-09-18T00:00:00+02:00</updated> <id>gemini://mizik.eu/blog/strong-vs-weak-data-linking/</id> <content><![CDATA[I have been using a customized zettelkasten method for my personal knowledge database since university, but recently I deleted all strong (hard) links from the data, and I like it. Here is why... ### Zettelkasten A zettelkasten is basically a card index, like the ones you could find in most offices and doctors' practices in the past. Every card has a unique identifier and some data, and is stored in a drawer. A drawer keeps together cards that share some common semantics: a starting letter, a field of work, an address, or whatever else. On top of this, the zettelkasten method added strong links: you add the IDs of related cards to the footnote of a card. This helped people find more related content without having to go through all of it. ### Modern days Nowadays, we have digitalized paper cards and organized them into databases. Databases have the great ability to be queried based on the information their tables contain. One additional piece of data we can attach to our cards is weak links. In the world of social media, you would call them #tags. These keywords add extra semantic information to the data, and of course they can be queried too. 
### My experience I started my zettelkasten in the digital era, but before the rise of the major social networks. On the other hand, as an IT student, I was already familiar with the concept of weak linking. I also knew I would be able to query my data with them, so from the beginning I added tags as a secondary linking mechanism. I used strong links (IDs of other cards in the footnote) as the primary one, to do it the zettelkasten way, and I also thought it would be nice to have them, especially since they could be used as interactive hyperlinks (just like when reading on the web). ### Staying in sync When you have a data collection and you want to keep it up to date and relevant, you have to go through it from time to time (for me, maybe once a year) and add/remove links and keywords, to keep the maximum number of relevant links alive and without false positives. Thanks to this, you get more precise results when using the dataset. And as we know from the movie Pulp Fiction, it takes time and effort :) So I began to analyse how important it really is for me to have both strong and weak links in my data. ### Refactoring I have around 10k cards these days. That is not much, but not that little either. My empirical analysis showed that I almost exclusively use full-text search and tag search to get a subset of relevant content, and I get very accurate results. I almost never continued on to related cards via hyperlinks, especially because after the search I already have all the relevant card titles in front of me on one screen, and I can seek out what I am looking for faster this way than by following hyperlinks down rabbit holes. ### Caveats Relying on weak links only can cut you off from content that is semantically far from the subset you filtered but, in some cases, still relevant. I have had this problem from time to time: I remembered I had some other piece of information there and had to alter my query to find it. 
If I had a much bigger set of cards (e.g. 100k), or if my memory were worse, I wouldn't have found it. This problem gets worse if you have much more data than me, data with many different semantics, or if your querying is not strong enough, for example because you don't know how to create complex queries, or your software cannot query with logical operators like AND and OR or use parentheses. Luckily, I don't have these issues. I would say that even without advanced queries, most people wouldn't encounter the "missing cards" problem if their weak links are up to date and robust enough. ### Summary Currently, I am confident that a personal/work knowledge database using only weak linking is optimal for up to 15-20k cards, provided the data is homogeneous enough and well maintained. You can save a couple of days a year on manual data and link optimization. Your way of thinking about linking pieces of information will also be simplified, because you only have to consider one mechanism.]]></content> <author> <name>Marián Mižik</name> </author> <category term="Personal, Zettelkasten" /> <summary> I have been using a customized zettelkasten method for my personal knowledge database since university, but recently I deleted all strong (hard) links from the data, and I like it. Here is why...</summary> </entry> <entry> <title>Howto setup your personal XMPP server</title> <link href="gemini://mizik.eu/blog/how-to-setup-your-personal-xmpp-server/" rel="alternate" type="text/plain" title="Howto setup your personal XMPP server" /> <published>2021-08-04T00:00:00+02:00</published> <updated>2021-08-04T00:00:00+02:00</updated> <id>gemini://mizik.eu/blog/how-to-setup-your-personal-xmpp-server/</id> <content><![CDATA[There are several good reasons to have your own chat server instance. 
Some are philosophical, like the federation of the internet; some practical, like keeping your data safe and to yourself; some ethical, like providing a secure communication node for those who for any reason cannot host their own. Or maybe you would just like to learn how the basic architectural patterns of client-server and server-server communication work. So let's dive in. ## Articles of this series => /blog/how-to-setup-your-personal-xmpp-server/index.gmi Howto setup your personal XMPP server => /blog/how-to-setup-your-personal-caldav-carddav-server/index.gmi Howto setup your personal CalDAV/CardDAV server => /blog/how-to-proxy-your-self-hosted-services-using-web-server/index.gmi Howto proxy your self-hosted services using web server => /blog/how-to-setup-and-secure-web-server/index.gmi Howto setup and secure web server => /blog/what-service-you-can-host-on-your-personal-linux-vps/index.gmi Services you can self-host on your personal Linux VPS => /blog/how-to-secure-your-personal-linux-vps/index.gmi Howto secure your personal Linux VPS => /blog/how-to-setup-your-personal-linux-vps/index.gmi Howto setup your personal Linux VPS => /blog/why-setup-your-personal-linux-vps/index.gmi Why setup your personal Linux VPS ## Choosing the protocol Nowadays, there are three main open protocols with several implementations on both the client and server side: IRC, Matrix and XMPP. I am going to oversimplify here, so those of you who already know these protocols, feel free to skip this section. For those who don't know IRC: it's that 90s old-school internet chat where you log in and hang out in a chat room. It supports one-on-one chats and there are even clients for mobile phones, but you cannot share files, there is no delayed message delivery after you come back online, it does not support automated federation, and so on. Matrix, on the other hand, or rather its main implementation Element (formerly Riot.im), is much more robust and modern. 
It supports end-to-end encryption, file exchange, audio and video calls, it is HTTP-based, and messages are instantly saved and redistributed across federated servers. An open-source Slack, or Skype on steroids. But it requires a PostgreSQL server and a web server as a reverse proxy, and it is very heavy on disk and database resources, especially when you federate with other servers. And this is why I like and chose XMPP. I don't need all the bells and whistles, and I really want it to be light on system resources. I need to chat and share files; I want it simple and working. I have my own instance for my family, and I federate with people from work, other developers and friends who either have accounts somewhere or host their own instances like I do. ## Installation I use => https://prosody.im Prosody as the server implementation. Install it with your system's package manager:
apt install prosody
yum install prosody
pkg_add prosody
## Configuration => https://prosody.im/doc/configure Official documentation is great. When starting from the default config, you only need to change the VirtualHost line to your custom domain and create users with the prosodyctl command-line tool, like so:
prosodyctl adduser mranderson@cooldomain.xyz
and the server is up and running. It is not a very usable setup yet, though. So here are the steps for a good one: 1. If you have a firewall, you need to => https://prosody.im/doc/ports enable a couple of ports 2. To support SSL, you need to get certificates, for example from Let's Encrypt, and then add their paths to the main config. Don't forget to add a certbot post_hook that always copies the certs after the renewal procedure from /etc/letsencrypt/live to your chosen location:
certificates = "certs"
https_certificate = "/etc/prosody/certs/cooldomain.xyz.crt"
https_key = "/etc/prosody/certs/cooldomain.xyz.key"
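Such a hook might look like the sketch below. This is a hypothetical example, not the article's actual script: the domain, destination path and group name are assumptions to adjust for your setup.

```shell
#!/bin/sh
# Hypothetical certbot renewal hook: copy the renewed Let's Encrypt certs
# into Prosody's cert directory and reload the server.
DOMAIN="cooldomain.xyz"
DEST="/etc/prosody/certs"

cp "/etc/letsencrypt/live/$DOMAIN/fullchain.pem" "$DEST/$DOMAIN.crt"
cp "/etc/letsencrypt/live/$DOMAIN/privkey.pem" "$DEST/$DOMAIN.key"
# Make the keys readable by Prosody but nobody else.
chown root:prosody "$DEST/$DOMAIN.crt" "$DEST/$DOMAIN.key"
chmod 640 "$DEST/$DOMAIN.crt" "$DEST/$DOMAIN.key"
prosodyctl reload
```

Pointed at via certbot's post_hook, it runs after every successful renewal, so the copies never go stale.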
3. To support federation, and to make sure most clients will happily connect to your server, you need to set up some DNS records. The first two let clients and servers discover the ports you listen on. The other two are for additional XMPP components (modules) we will enable later: one for file upload/sharing, and the last one a proxy that enables file transfers from behind firewalls:
_xmpp-client._tcp 600 IN SRV 5 0 5222 cooldomain.xyz.
_xmpp-server._tcp 600 IN SRV 5 0 5269 cooldomain.xyz.
upload 600 IN A 45.77.54.222
proxy 600 IN A 45.77.54.222
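Once the records are published (the domain and IP above are the article's examples), you can check that they have propagated with dig before testing federation:

```shell
# SRV lookups should return the priority, weight, port and target set above.
dig +short SRV _xmpp-client._tcp.cooldomain.xyz
dig +short SRV _xmpp-server._tcp.cooldomain.xyz
# A lookups for the component subdomains.
dig +short A upload.cooldomain.xyz
dig +short A proxy.cooldomain.xyz
```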
4. Enable a sane default list of modules to make everything work nicely:
modules_enabled = {
"roster"; -- Allow users to have a roster
"saslauth"; -- Authentication for clients and servers
"tls"; -- Add support for secure TLS on c2s/s2s connections
"dialback"; -- s2s dialback support
"disco"; -- Automatic service discovery by clients
"carbons"; -- Deliver to all clients with the same account logged in
"pep"; -- Enables users to publish their avatar, mood, activity...
"private"; -- Private XML storage (for room bookmarks, etc.)
"blocklist"; -- Allow users to block communications with other users
"vcard4"; -- User profiles (stored in PEP)
"vcard_legacy"; -- Conversion between legacy vCard and PEP Avatar, vcard
"version"; -- Replies to server version requests
"uptime"; -- Report how long server has been running
"time"; -- Let others know the time here on this server
"ping"; -- Replies to XMPP pings with pongs
"mam"; -- Archive messages on server for delayed delivery
"csi_simple"; -- Simple Mobile optimizations
"bosh"; -- Enable BOSH clients, aka "Jabber over HTTP"
"websocket"; -- XMPP over WebSockets
"http_files"; -- Serve static files from a directory over HTTP
"http_upload"; -- enable files sharing between users
"groups"; -- Shared roster support
"smacks"; -- Keep chat alive when the network drops for a few seconds
"server_contact_info"; -- Publish contact information for this service
"proxy65"; -- Enables file transfer service for clients behind NAT
}
In my case (OpenBSD), the "http_upload" and "smacks" modules were not in the default installation, and I had to download them and copy them to the modules directory manually. If that is your case too, you can find all Prosody community modules => https://modules.prosody.im here . My modules dir is /usr/local/lib/prosody/modules/. If you don't know yours, just search the filesystem for some core module like "mod_motd.lua", for example using the "mlocate" package. 5. The last thing is to enable the two components we mentioned during the DNS setup:
Component "upload.cooldomain.xyz" "http_upload"
http_upload_file_size_limit = 20971520
http_max_content_size = 31457280
consider_bosh_secure = true
Component "proxy.cooldomain.xyz" "proxy65"
proxy65_ports = { 5000 }
## Summary And here you go! Enjoy the self-hosted communication ride :) The full config resulting from this howto can be [downloaded here](gemini://mizik.eu/download/prosody.cfg.lua)]]></content> <author> <name>Marián Mižik</name> </author> <category term="VPS, Linux, Self-host" /> <summary> There are several good reasons to have your own chat server instance. Some are philosophical, like the federation of the internet; some practical, like keeping your data safe and to yourself; some ethical, like providing a secure communication node for those who for any reason cannot host their own. Or maybe you would just like to learn how the basic architectural patterns of client-server and server-server communication work. So let's dive in.</summary> </entry> <entry> <title>Howto setup your personal CalDAV/CardDAV server</title> <link href="gemini://mizik.eu/blog/how-to-setup-your-personal-caldav-carddav-server/" rel="alternate" type="text/plain" title="Howto setup your personal CalDAV/CardDAV server" /> <published>2021-07-01T00:00:00+02:00</published> <updated>2021-07-01T00:00:00+02:00</updated> <id>gemini://mizik.eu/blog/how-to-setup-your-personal-caldav-carddav-server/</id> <content><![CDATA[Do you want to back up or share your calendar and contacts without relying on the proprietary companies and solutions built into your phones? You don't like sharing such information? You don't want to be restricted to a specific number of calendars, events or contacts? You want to be sure your provider won't shut down the service and lock you out of your data? Then it is time to self-host your own CalDAV and CardDAV service! 
## Articles of this series => /blog/how-to-setup-your-personal-xmpp-server/index.gmi Howto setup your personal XMPP server => /blog/how-to-setup-your-personal-caldav-carddav-server/index.gmi Howto setup your personal CalDAV/CardDAV server => /blog/how-to-proxy-your-self-hosted-services-using-web-server/index.gmi Howto proxy your self-hosted services using web server => /blog/how-to-setup-and-secure-web-server/index.gmi Howto setup and secure web server => /blog/what-service-you-can-host-on-your-personal-linux-vps/index.gmi Services you can self-host on your personal Linux VPS => /blog/how-to-secure-your-personal-linux-vps/index.gmi Howto secure your personal Linux VPS => /blog/how-to-setup-your-personal-linux-vps/index.gmi Howto setup your personal Linux VPS => /blog/why-setup-your-personal-linux-vps/index.gmi Why setup your personal Linux VPS ## CalDAV & CardDAV CalDAV and CardDAV are protocols specified in => https://datatracker.ietf.org/doc/html/rfc4791 RFC4791 , => https://datatracker.ietf.org/doc/html/rfc6638 RFC6638 and => https://datatracker.ietf.org/doc/html/rfc6352 RFC6352 . As the years passed, more RFCs came to fill the gaps. They are free to implement and provide the ability to synchronize calendars, events, contacts and tasks between a server and multiple clients (devices). They are supported by both Android and iOS devices, and there is plenty of software for every major OS (the BSDs, Linux, Windows, macOS) that can handle these protocols. ## Choosing the implementations I personally use => https://radicale.org/3.0.html Radicale on the server, => https://github.com/pimutils/vdirsyncer Vdirsyncer with => https://github.com/pimutils/khal khal and => https://github.com/scheibler/khard/ khard on the desktop, and => https://www.davx5.com DAVx5 on Android. Check this => https://en.wikipedia.org/wiki/Comparison_of_CalDAV_and_CardDAV_implementations Wikipedia list for a plethora of other options. ## Installation Radicale is a Python application and can be installed with pip:
python3 -m pip install --upgrade radicale
## Configuration => https://radicale.org/3.0.html#tutorials/basic-configuration Official documentation is great. It takes you step by step through all standard scenarios like running => https://radicale.org/3.0.html#tutorials/running-as-a-service as a Systemd service , running => https://radicale.org/3.0.html#tutorials/reverse-proxy behind reverse proxy , or even as a => https://radicale.org/3.0.html#tutorials/wsgi-server WSGI service , which is my case. ## Summary The Radicale instance on my OpenBSD machine, which syncs 8 clients through both CalDAV and CardDAV, with several hundred contacts and several thousand calendar events, doesn't take more than 30MB of RAM. It runs behind the web server, so I don't need to care about managing custom high ports on my firewall, or SSL certificates. Check more benefits [in my older article](gemini://mizik.eu/blog/how-to-proxy-your-self-hosted-services-using-web-server/) regarding this topic. In the several years I have used it in "production" I never had to restart or maintain it in any way. But I need to say, my scenario is very simple: one address book and one calendar for every person in my family plus one shared calendar. Give it a try and let me know if it works for you too :)]]></content> <author> <name>Marián Mižik</name> </author> <category term="VPS, Linux, Self-host" /> <summary> Do you like to backup or share your calendar and contacts, but you don't want to rely on proprietary companies and solutions built into your phones? You don't like to share such information? You don't want to be restricted to a specific number of calendars, events or contacts? You want to be sure your provider won't close the service and lock you out of your data?
Then it is time to self-host your own CalDAV and CardDAV service!</summary> </entry> <entry> <title>Stoicism in modern world</title> <link href="gemini://mizik.eu/blog/stoicism-in-modern-world/" rel="alternate" type="text/plain" title="Stoicism in modern world" /> <published>2021-06-06T00:00:00+02:00</published> <updated>2021-06-06T00:00:00+02:00</updated> <id>gemini://mizik.eu/blog/stoicism-in-modern-world/</id> <content><![CDATA[I practice Stoicism more or less successfully for more than a decade. I would like to share with you brief practical cheatsheet, that will show you what (I think) Stoicism is about in real world situations and how can it help you to be better person and live better life. ### My 9 rules Stoicism is a philosophy. A way of living. Created by ancient greeks, popularized by ancient romans and used throughout following centuries until these days. I don't want to talk history, or the academic/dogmatic stuff here, check the => https://en.wikipedia.org/wiki/Stoicism Wikipedia article for detailed overview. I just want to summarize my "real life scenario" stoic rules:
location / {
proxy_pass_header Server;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto "https";
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-By $server_addr:$server_port;
proxy_connect_timeout 300;
proxy_send_timeout 300;
send_timeout 300;
keepalive_timeout 300;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:13000;
}
Let's go over the configuration details to check what is going on:
proxy_set_header HTTP_AUTHORIZATION $http_authorization;
proxy_set_header Connection '';
chunked_transfer_encoding off;
proxy_buffering off;
proxy_cache off;
proxy_read_timeout 24h;
Most of them apply to websocket communication too. The reason behind them is that both technologies use long-lived open connections as a persistent communication channel between client and server. The web server proxy should keep them open and must not apply any alterations or caching. ### Authentication If you want to add authentication to your newly proxied self-hosted service, just add 2 more configuration options:
auth_basic "password is required";
auth_basic_user_file /etc/nginx/htpasswd-file-for-service;
Now you have enabled 'http basic' authentication. The user will have to provide a login and password to get through the proxy. The 'htpasswd-file-for-service' is a plain text file with login:password tuples in htpasswd format. Generating such a file is easy, just call:
htpasswd -c /etc/nginx/htpasswd-file-for-service peter
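If apache2-utils (which provides htpasswd) is not installed, an equivalent entry can be produced with openssl alone, a sketch with placeholder credentials:

```shell
# Create an htpasswd-format entry without apache2-utils, using
# openssl's apr1 scheme (the default scheme htpasswd uses).
# "peter" and "s3cret" are placeholders; append the printed line
# to /etc/nginx/htpasswd-file-for-service.
printf 'peter:%s\n' "$(openssl passwd -apr1 's3cret')"
```

The output looks like `peter:$apr1$...`; one such line per user.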
server {
listen 80;
server_name servicex.mizik.sk;
return 301 https://$host$request_uri;
}
server {
listen 443;
server_name servicex.mizik.sk;
charset utf-8;
ssl on;
ssl_certificate /etc/letsencrypt/live/servicex.mizik.sk/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/servicex.mizik.sk/privkey.pem;
ssl_dhparam /etc/nginx/ssl/dhparams.pem;
location / {
proxy_pass http://localhost:18000/;
proxy_pass_header Server;
proxy_set_header X-Script-Name /;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Remote-User $remote_user;
auth_basic "password for servicex is required";
auth_basic_user_file /etc/nginx/htpasswd-servicex;
}
}
Remember that you can define multiple 'location' sections; you can therefore have 'location /' for a static web page and 'location /comments' that proxies to some self-hosted commenting solution. This keeps things nice and clean and also works around cross-site and CSP issues. ### Summary Using this simple setup you get unified and standardized access to your self-hosted services. They look the same from the outside until the user gets through the authentication to the specific API of the service. Check other web server modules to find out which other features may be globally applied to your self-hosted service APIs.]]></content> <author> <name>Marián Mižik</name> </author> <category term="VPS, Linux, Self-host" /> <summary> Many services available for self-hosting provide the promised functionality, but let you take care of security and/or authentication. These are the cases when a web server comes to the rescue with its ability to create a layer between the internet and your service, providing additional features like authentication, an upgrade to https with a valid certificate, DoS prevention using fail2ban, or the ability to reach the service using a custom (sub)domain. These features were explained in the previous article.</summary> </entry> <entry> <title>Howto degoogle your Android phone</title> <link href="gemini://mizik.eu/blog/howto-degoogle-your-android-phone/" rel="alternate" type="text/plain" title="Howto degoogle your Android phone" /> <published>2021-01-18T00:00:00+01:00</published> <updated>2021-01-18T00:00:00+01:00</updated> <id>gemini://mizik.eu/blog/howto-degoogle-your-android-phone/</id> <content><![CDATA[There are many reasons to degoogle your phone and there are 2 main ways to do it: the hardcore way and a second one that, for the sake of this article, can be called the "graceful degradation way".
Both of them end with your stock Android OS replaced by a custom ROM that lacks all of the Google apps and background services. ### Why should I bother? Degoogling is a process made of multiple steps which require some effort, maybe some money, but definitely time and attention. You will have to get used to new apps and maybe mourn some nice features. You will be less compatible with mainstream people who send you links to Google services; you won't be able to log in or use a native app to handle the link. They will search for you on Messenger and/or Twitter and won't be able to get in touch with you in their default, standard ways. Being different from the masses always takes its toll. It is your call whether the benefits outweigh the problems this action causes. For me it's worth it. I like that when I tail logcat on my phone while I am not using it, it stays still, because there is almost no background stuff going on. I like that my data is not being inspected or processed. I like that I can back it up and migrate it, I like my 2+ days of battery life and I don't like being dependent on something/someone. ### Blue pill or red pill So you decided to degoogle? Then you already took the red pill once. Now it is time to make the decision again :) Android OS itself is open source, although it is developed mostly by Google, so it is made according to Google's plans. Unfortunately, there are many important APIs that are not open-sourced and a huge amount of apps rely on them: libraries like Play Services, the Google Maps API, Google Cloud Messaging, the network location provider API and so on. Taking the red pill means you replace your stock ROM with a custom one and don't install any of Google's proprietary apps. Your phone will function well and battery life will get better; you will use F-Droid or another free store as your app store.
But if you need some app that is only on Google Play, downloading and installing the apk manually won't work if that app tries to use any of those missing APIs. It will most likely stop working during startup, or it will return some kind of error message. This is often true for mobile banking apps, big companies' apps or official government apps. Thank god there is a blue pill, the "graceful degradation way". It is called MicroG. What does this set of apps do? It basically replaces Google's proprietary libraries by impersonating them. They publish the same APIs so the apps relying on them won't crash. In some cases, the MicroG alternative really does the same thing and returns meaningful data; in other cases it returns dummy data just to comply with the given API. The last thing needed to make this all work is to persuade everyone that the MicroG app really is a Google app. This means providing the Google package name with a valid signature. This can not be done by MicroG itself. It needs an OS-level patch called "signature spoofing" that allows any app to ask for permission to directly access the signing certificate. Some custom ROMs have this patch applied, some do not. The most popular and widely known custom ROM is LineageOS (formerly CyanogenMod), but unfortunately they do not have signature spoofing turned on and they rejected the PR proposed by the MicroG team. More information regarding this topic is => https://blogs.fsfe.org/larma/2016/microg-signature-spoofing-security/ here . This is why MicroG offers custom LineageOS builds with signature spoofing turned on and all MicroG apps preinstalled. What does it all mean for you? For example, let your device be the good old Google Nexus 6 (codename shamu). Go to the => https://download.lineage.microg.org/shamu/ MicroG LineageOS pages for shamu and download the ROM zip. Then go to the => https://eu.dl.twrp.me/shamu/ twrp recovery pages for shamu and download the img file. And that's it!
Now you can follow the standard => https://wiki.lineageos.org/devices/shamu/install LineageOS installation instructions for shamu. The only difference is that you won't use the img and zip linked in that manual; instead you will use the previously downloaded twrp img file during the recovery flash procedure and the MicroG LineageOS build zip during the ROM installation. Congratulations! Now you have a degoogled Android phone ready to serve you well. ### Real life scenario Now you may say: ok, I have booted up the device and have nothing besides calls, sms and camera. Let's quickly cover how to set up the device for standard use. You already have browser and photo apps in the initial installation too.
Also, there may be a situation when you need to keep some application that you would rather not have installed and you would like to block its internet access. You can do so with NetGuard. This app acts as a firewall, but it doesn't require root privileges, because it does not utilize iptables but rather acts as a VPN service. Because of that, the whole OS traffic comes through the app, which can then restrict it based on your rules. Unfortunately, this can be applied only if you don't use a real VPN, as Android won't let you run two VPNs simultaneously.
It is much easier to degoogle your phone these days than it was in the past. Up-to-date and easy-to-follow step-by-step installation manuals for LineageOS together with MicroG are the reason it is available to a much broader spectrum of people. If privacy matters to you and you're not a frequent Google Play Store downloader or mobile gamer, this setup is arguably the best you can get in the Android world. Of course, you have other options too, for example running a mobile linux distribution like Mobian, Manjaro ARM, or OpenSUSE. You will need a very specific device for these distros though (e.g. PinePhone, Librem5 or some phones from the Nexus and Pixel family) and the resulting usability is drastically behind the average Android experience.]]></content>
<author>
<name>Marián Mižik</name>
</author>
<category term="Android, Degoogle, MicroG" />
<summary>
There are many reasons to degoogle your phone and there are 2 main ways to do it: the hardcore way and a second one that, for the sake of this article, can be called the "graceful degradation way". Both of them end with your stock Android OS replaced by a custom ROM that lacks all of the Google apps and background services.</summary>
</entry>
<entry>
<title>Howto setup and secure web server</title>
<link href="gemini://mizik.eu/blog/how-to-setup-and-secure-web-server/" rel="alternate" type="text/plain" title="Howto setup and secure web server" />
<published>2021-01-08T00:00:00+01:00</published>
<updated>2021-01-08T00:00:00+01:00</updated>
<id>gemini://mizik.eu/blog/how-to-setup-and-secure-web-server/</id>
<content><![CDATA[A web server is one of the most basic services you can self-host. Very simple to install, reasonably simple to configure for basic use, and not that hard to set up for more robust usage; the hardest thing is to run it in a secure way. This is also the reason why this episode is a bit longer than usual.
Howto setup your personal XMPP server
Howto setup your personal CalDAV/CardDAV server
Howto proxy your self-hosted services using web server
Howto setup and secure web server
Services you can selfhost on your personal Linux VPS
Howto secure your personal Linux VPS
Howto setup your personal Linux VPS
Why setup your personal Linux VPS
In [episode 4](gemini://mizik.eu/blog/what-service-you-can-host-on-your-personal-linux-vps/) we talked about apache, nginx and also some other less common web server implementations. Nginx, together with apache2, takes the clear majority of the market share. But over the last months and years, apache has been losing its position while nginx is still on the rise. That's also the reason why I will focus on nginx today. All of the topics I cover in this article apply to Apache2 too, just google the exact syntax variation of the steps.
Basic installation is as simple as 'sudo apt install nginx' in case of debian/ubuntu, and comparably simple in other distros too. Your web server should now be up and running, serving its default welcome page when you type localhost into your browser of choice. Now, there are 3 important places to look at:
The first one is the /var/www/ directory, which is the default directory for the web content the server will serve. There you can find the default index.html page that was loaded in the browser. If you want to host some web, just create a new directory under /var/www and copy the site content there. Don't forget to apply the correct rights, as the web server runs under the www-data user and this user needs to be able to access those files. You don't need to stick with the default directory; the web server can serve files from any location that has the right permissions. In some cases, administrators even chroot the folders the web server hosts so that, if it gets compromised, the attacker finds himself in a sandbox. But I am not going to cover this option here.
This is the directory for virtual host configuration files. Best practice is to have a different configuration file for each web page (or web service). The web server will then run them as separate virtual servers. Check the default configuration file to see what it looks like; in most distros it has plenty of comments. Check also the /etc/nginx/sites-enabled directory. You will see a symlink to the file in sites-available. That's because nginx serves only those configs that are enabled by a symlink in this directory. When you add a new symlink or remove one, you need to reload the web server (service nginx reload), or use the old non-systemd way on some distros: /etc/init.d/nginx reload.
This is the main configuration file. It is the global configuration for the web server itself and it applies to all configuration files in sites-available too, though they can override these global settings by defining the same configuration option again in their own file. The main config imports everything in the sites-enabled directory at the end, so it is obvious what the relationship between them is, and also why only those configs from sites-available that are linked into sites-enabled are actively used.
Ok! Let's take a look at a basic virtual server configuration file now. Let's create it in /etc/nginx/sites-available/example :
server {
listen 80;
server_name example.mizik.sk;
charset utf-8;
root /var/www/example.mizik.sk;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
So what do we have here? We are defining a server that listens on port 80 (plain HTTP) and reacts only if the requested host is example.mizik.sk. The default charset will be utf-8, the index file should be called index.html (it needs to be defined because it could also be a php file or another file type) and the root directory will be /var/www/example.mizik.sk. Then, inside the server section, we can define multiple location sections. Configuration options declared inside a location section apply only to the defined location and recursively down from there. In our case, one location section is enough and we are not declaring much: only that we first try to serve the path as a file, then as a directory. If we symlink this new file into the sites-enabled directory and reload nginx, we will be able to access the page defined in the root clause using the hostname in the server_name clause (of course, there should be a valid DNS 'A' record that points example.mizik.sk to the IP of our server). Our web is now up and running!
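The symlink-and-reload step described above can be sketched as follows (run as root; 'example' is the config file created earlier):

```shell
# Enable the new virtual host by symlinking it into sites-enabled,
# then validate the configuration syntax and reload nginx.
AVAIL=/etc/nginx/sites-available
ENABLED=/etc/nginx/sites-enabled
ln -s "$AVAIL/example" "$ENABLED/example"
nginx -t && systemctl reload nginx   # or: /etc/init.d/nginx reload
```

Running 'nginx -t' before the reload means a typo in the config refuses to reload instead of taking the server down.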
Now let's talk about a bigger but equally important topic: securing the default installation and the web pages we are going to host.
All mainstream browsers now almost force web pages to have a valid ssl certificate, otherwise your page may be declared not secure. From time to time there is some debate over the topic of hosting webs only over https, which effectively blocks old computers from accessing them, as they do not support the new TLS variants or have no computing power to do so. I don't consider this a reason to host over plain HTTP. We are talking about less than 0.1%. I am voting for using things as long as possible, but this may not be that situation. Almost everyone can afford a secondhand Raspberry Pi for a single digit amount of dollars or euros, which will have no problem loading and rendering a web page using the latest mainstream browser.
So let's add the 's' to our http, shall we? Nowadays, there are multiple ssl certificate issuers that offer a free and automatic way to generate and renew a valid certificate for your domain(s). Best known is Let's Encrypt. Just follow the official instructions, install certbot, run the command and you will generate a new certificate for your domain(s) in a couple of minutes. Then you will have to enable ssl in your virtual host config file under the server section and point to the newly generated cert files like so:
ssl on;
ssl_certificate /etc/letsencrypt/live/mizik.sk/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mizik.sk/privkey.pem;
Then change the port from 80 to 443, reload the web server and try it. You should get a valid https connection in the browser. For backwards compatibility, it is good to also handle port 80 and automatically upgrade the connection to https using a redirect. Just add another server section above the one you already have:
server {
listen 80;
server_name example.mizik.sk;
return 301 https://$host$request_uri;
}
HTTPS is useless if it doesn't deliver what it promised because security issues compromise the encryption. That's why we will disable insecure versions of SSL and TLS, which by default can be used to negotiate the encrypted connection. Attackers mask themselves as clients that can connect only using an old protocol or cipher, thereby forcing the web server to use a less secure, outdated version. We can reconfigure it to fail in those cases rather than obey. In /etc/nginx/nginx.conf set 'ssl_protocols TLSv1.2 TLSv1.3;' to disable SSLv3, TLSv1.0 and TLSv1.1. All current browsers and mobile devices support v1.3, so in case you don't care about older (unsupported) versions of browsers and mobile OSes you can stick with 1.3 only, but at the time of writing this article, 1.2 is still considered a viable TLS version.
We have restricted TLS versions to only the secure ones; now we need to do the same for the ciphers used in the encryption itself. Add/replace these lines in your /etc/nginx/nginx.conf:
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
ssl_ecdh_curve secp384r1;
ssl_prefer_server_ciphers on;
Online Certificate Status Protocol (OCSP) was created as an alternative to the Certificate Revocation List (CRL) protocol. Both protocols are used to check whether an SSL Certificate has been revoked. OCSP stapling can be used to enhance the OCSP protocol by letting the webhosting site be more proactive in improving the client (browsing) experience. OCSP stapling allows the certificate presenter (i.e. web server) to query the OCSP responder directly and then cache the response. OCSP stapling addresses a privacy concern with OCSP because the CA no longer receives the revocation requests directly from the client (browser). OCSP stapling also addresses concerns about OCSP SSL negotiation delays by removing the need for a separate network connection to a CA’s responders. To turn on stapling just add/update these two lines in /etc/nginx/nginx.conf.
ssl_stapling on;
ssl_stapling_verify on;
Diffie-Hellman is a key exchange mechanism which allows two parties who have not previously met to securely establish a key which they can use to secure their communications. Don't use a pregenerated DH group, because it is only 1024 bits and is used on millions of other servers that kept the original value, which makes them an optimal target for precomputation and potential eavesdropping. We will generate a custom one with 4096 bits using openssl:
openssl dhparam -out dhparams.pem 4096
Then create a new directory 'ssl' in /etc/nginx and move the file there. Don't forget to set the correct rights; the pem file itself should be writable only by root. Then add/modify this line in /etc/nginx/nginx.conf:
ssl_dhparam /etc/nginx/ssl/dhparams.pem;
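The move-and-permissions step can be sketched like this (NGINX_DIR is /etc/nginx in production; it is a variable here only so the steps are easy to dry-run outside of root):

```shell
# Move the generated dhparams.pem into a root-owned ssl directory
# with strict permissions: the directory accessible only by its
# owner, the pem file readable/writable only by its owner.
NGINX_DIR="${NGINX_DIR:-/etc/nginx}"
mkdir -p "$NGINX_DIR/ssl"
mv dhparams.pem "$NGINX_DIR/ssl/dhparams.pem"
chmod 700 "$NGINX_DIR/ssl"
chmod 600 "$NGINX_DIR/ssl/dhparams.pem"
```

Run as root so the files end up owned by root, matching the ssl_dhparam path above.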
By using a CAA DNS record you are letting certificate authorities know who is allowed to issue your domain's SSL/TLS certificate. It prevents mis-issuance of the certificate, where an attacker would by some chance be able to get a certificate for your domain signed by a trusted certificate authority. By setting CAA you are restricting issuance to the specific CA you have used. In the case of our example, using letsencrypt as the issuer, the record would look like this:
example.mizik.sk. CAA 0 issue "letsencrypt.org"
Many DNS admin web interfaces don't provide the ability to set a CAA record yet, because it is a relatively young specification (2017), but often, if you ask the provider's support directly, they will set it for you.
Every software has bugs, and the web server by default presents itself with its name and version. Based on this version, an attacker may find out what vulnerabilities apply to it, so let's disable sending the version altogether by adding 'server_tokens off;' in /etc/nginx/nginx.conf.
There are some indications that reducing client buffer and body sizes makes it much harder to exploit any potential buffer overflow bug in the web server, simply by reducing the amount of data an attacker can send in a request. In cases when you are sending bigger data using forms, or when you are proxying some service, these values won't suffice. But the general rule is: start with the most restrictive policy and then loosen it if necessary. So let's add/modify another 4 lines in /etc/nginx/nginx.conf.
client_body_buffer_size 1K;
client_header_buffer_size 1k;
client_max_body_size 1k;
large_client_header_buffers 2 1k;
HTTP Strict Transport Security (HSTS) is a web server directive that informs browsers how to handle its connection through a response header. This sets the Strict-Transport-Security policy field parameter. It forces those connections over HTTPS encryption, disregarding any script's call to load any resource in that domain over plain HTTP. By setting add_header parameter in the server section of our web configuration in sites-available, web server will send this header for every response it will make.
add_header "Strict-Transport-Security" "max-age=31536000; includeSubDomains; preload";
The HTTP X-XSS-Protection response header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting (XSS) attacks. Although these protections are largely unnecessary in modern browsers when sites implement a strong Content-Security-Policy that disables the use of inline JavaScript ('unsafe-inline'), they can still provide protections for users of older web browsers that don't yet support CSP. But we will talk about CSP later. For now, just add another automated response header to your web page configuration like in case of HSTS:
add_header "X-XSS-Protection" "1; mode=block";
The X-Frame-Options HTTP response header can be used to indicate whether or not a browser should be allowed to render a page in a <frame>, <iframe>, <embed> or <object>. Sites can use this to avoid click-jacking attacks, by ensuring that their content is not embedded into other sites. So let's add a third automated response header:
add_header "X-Frame-Options" "DENY";
The X-Content-Type-Options response HTTP header is a marker used by the server to indicate that the MIME types advertised in the Content-Type headers should not be changed and be followed. This is a way to opt out of MIME type sniffing, or, in other words, to say that the MIME types are deliberately configured. So let's add another automated response header:
add_header "X-Content-Type-Options" "nosniff";
If you serve only a normal web page and are not proxying some kind of REST API, then it is safe to restrict which HTTP methods can be used in HTTP requests from clients. With this setting we allow only GET, HEAD and POST, cutting off methods like DELETE as possible attack vectors. Add it to the server section of your web page configuration in sites-available.
if ($request_method !~ ^(GET|HEAD|POST)$) { return 444; }
Content Security Policy (CSP) is an added layer of security that helps to detect and mitigate certain types of attacks, including Cross Site Scripting (XSS) and data injection attacks. These attacks are used for everything from data theft to site defacement to distribution of malware. The implementation is in the form of a response http header or <meta> tag. In its value we are able to define from where the browser can load specific web page resources like scripts, css, images and so on. The header below will disable all 3rd party resources and enable only those that are hosted together with the html files. It will also disable inline definitions of scripts and css, which are potentially insecure. This should be our go-to configuration. If you have some specific reason to enable some 3rd party resource, or some inline definition of css or script, you can compute a hash of that inline chunk, or define a custom nonce, and add it to the header value. For more information, check the CSP documentation. In our case, let's try to stick with full restrictions:
add_header "Content-Security-Policy" "default-src 'self'; font-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; frame-ancestors 'self'; form-action 'self'; base-uri 'none'; upgrade-insecure-requests; block-all-mixed-content;";
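If you later do need to allow a single inline script, the hash source mentioned above can be computed like this (a sketch; the script body is a placeholder, and you must hash exactly the text between the script tags):

```shell
# Compute the CSP sha256 hash for one deliberately allowed inline
# script. "console.log('hi')" is a placeholder script body.
hash=$(printf "console.log('hi')" | openssl dgst -sha256 -binary | openssl base64)
echo "script-src 'self' 'sha256-$hash'"
```

The printed source list would then replace the plain script-src 'self' part of the header.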
It is possible to prevent DoS attacks too, by joining forces with fail2ban and the firewall. The implementation consists of three steps: first, configure the web server to log all suspicious requests to a specific log file; second, have fail2ban read this file; third, let the preconfigured hook (jail) restrict the IP of the requesting party using the firewall. There are already some nice articles with example configurations on this topic, so if you are interested, look them up.
Finally, an online scanner such as Qualys SSL Labs is a really great tool to check whether our ssl setup and hardening was successful and correct. Definitely use it.
Last but not least, keeping everything up to date is also crucial, but we have that covered by the automated updates discussed in [episode 3](gemini://mizik.eu/blog/how-to-secure-your-personal-linux-vps/). So here we are, with the web server up and running, properly secured and prepared for adventures much bigger than hosting a couple of static sites. The hardening above is sufficient for most professional use cases.]]></content>
<author>
<name>Marián Mižik</name>
</author>
<category term="VPS, Linux, Self-host" />
<summary>
A web server is one of the most basic services you can self-host. Very simple to install, reasonably simple to configure for basic use, and not that hard to set up for more robust usage; the hardest thing is to run it in a secure way. This is also the reason why this episode is a bit longer than usual.</summary>
</entry>
<entry>
<title>Services you can selfhost on your personal Linux VPS</title>
<link href="gemini://mizik.eu/blog/what-service-you-can-host-on-your-personal-linux-vps/" rel="alternate" type="text/plain" title="Services you can selfhost on your personal Linux VPS" />
<published>2020-12-30T00:00:00+01:00</published>
<updated>2020-12-30T00:00:00+01:00</updated>
<id>gemini://mizik.eu/blog/what-service-you-can-host-on-your-personal-linux-vps/</id>
<content><![CDATA[The fourth article of the Linux VPS series covers some of the services you can selfhost, and what the pros and cons of selfhosting them are compared to using established cloud services from big companies.
Howto setup your personal XMPP server
Howto setup your personal CalDAV/CardDAV server
Howto proxy your self-hosted services using web server
Howto setup and secure web server
Services you can selfhost on your personal Linux VPS
Howto secure your personal Linux VPS
Howto setup your personal Linux VPS
Why setup your personal Linux VPS
One of the most basic and simple services is a web server. You can host your personal web page, a blog, or even a social network like Mastodon. The web server can also be used as a proxy, hiding other services behind it and providing additional unified security features such as ssl with a valid certificate, or simple DoS and tampering prevention with the help of fail2ban. Another nice thing is that all those proxied services' communication ports can be blocked by your firewall, as they don't need to be visible to the public. When hosting your own web services, you have complete control over your data, access, server settings, modules and so on.
This may be one of the hardest setups if done manually, but the reward is a lower-level understanding of how it works and the knowledge to make changes or fix problems if needed. A mail server setup consists of several parts: an MTA (mail transfer agent) that routes, sends and receives the mail (Postfix, qmail...) and a POP3/IMAP server that provides access to your email data using the specified protocol (e.g. Dovecot). These two are necessary for standard use. There are some others that are very important if you want to use it as your daily driver: a spam filter (spamassassin, rspamd) and an antivirus engine (clamav). These will check incoming emails and can inform you or take some action if configured that way. Last but not least, you can also set up a webmail client to provide another way to access your email besides POP3 and IMAP. None of these things need to be configured manually; there are ready-to-go packages like mailcow or iRedMail that will do most of the work for you.
It is obvious why you would like to selfhost your emails. Complete control over your data, no limit on accounts or aliases, the nifty ability to use your own domain for emails and so on. The biggest caveat is that sometimes, even if your setup is top-notch with all the bells and whistles regarding security, no open relay, and authentication mechanisms like DMARC and DKIM, big players like Gmail or Hotmail may still put your emails into the spam folder.
Some people still like to use RSS even though the current internet is strongly pushing towards social network news feeds. If you are one of those who still like to get the news over RSS but want to keep the data safe, you can selfhost it using several feature complete packages like Tiny Tiny RSS or FreshRSS. Most webpages still have an RSS feed even though it is not publicly advertised. For example, in WordPress you only need to append /feed/ after the domain. You can even get an RSS feed for your favourite YouTube channel and use it instead of the default subscription mechanism. Just use this URL:
https://www.youtube.com/feeds/videos.xml?channel_id=[YOUR_FAVOURITE_CHANNEL_ID]
Most of us have some sort of task and/or todo app. And most of us want it with synchronization between our daily used devices. There are at least two well known options: the nerdish Taskwarrior, and NextCloud/OwnCloud, which will give you much more than tasks and todos. They are complete selfhosted cloud solutions featuring virtually everything you would want in one package, which has its obvious positives but also drawbacks.
The CalDAV and CardDAV protocols will give you the ability to selfhost, store, sync and share your contacts and calendars. There are several single purpose options like DAViCal, Xandikos or Radicale, and you will get it from NextCloud/OwnCloud too.
People use a VPN for four main purposes. The first one is the creation of a private network, which is what it was meant for. Another one is to bypass blockages and firewalls when trying to access some other resource on the internet. The third one is anonymity, especially when it is shared by many other users. The last one is to fight the spying efforts of your internet/network provider. You can use your selfhosted VPN server for any of these.
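The article doesn't prescribe a particular VPN implementation; as one example, a minimal WireGuard server config (all keys and addresses below are placeholders) looks roughly like this:

```ini
# /etc/wireguard/wg0.conf on the server
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one [Peer] section per client device
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```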
When talking about spying by your internet provider, another thing you need to do to get rid of it is to not use your provider's DNS. Using your VPN to anonymize yourself while asking your provider's DNS to resolve an IP every time you are heading somewhere on the internet is not the best idea. Luckily it is not that hard to set up your own BIND instance. But if you are not into it, you can still reconfigure your devices to use one of the public DNS providers like 1.1.1.1 or 8.8.8.8.
What about moving your whole family to a selfhosted Jabber server and leaving Facebook Messenger or Skype to others? A basic Jabber chat server for text messages and file exchange is very simple to set up. Use Prosody for example: it is CPU and RAM efficient with very good documentation. But you can set up even audio and video calls and conferences, using Jitsi for example.
If you are a developer or at least work in IT, you probably know the benefits and uses of a version control system. Hosting your own git (Mercurial, Subversion...) is a nice alternative to GitHub. Setting up a personal selfhosted git repository accessed only over SSH is a piece of cake, but there are also several full featured selfhosting solutions like Gitea, Bitbucket or GitLab.
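A minimal sketch of the SSH-only variant (the repository path, user, host and port are placeholders):

```shell
# On the server: create a bare repository that will act as the remote
mkdir -p "$HOME/git"
git init --bare "$HOME/git/myproject.git"

# On your machine: clone it over SSH (example user/host/port)
# git clone "ssh://john@example.com:48277/~/git/myproject.git"
```

No daemon is needed for this; the regular SSH server from the previous article does all the transport.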
Oldschool people like me would use rsync or git. Most of the crowd would use NextCloud/OwnCloud, which supports iOS/Android too, and some would look for a single purpose but user friendly alternative like Syncthing, which also has its Android app on F-Droid.
There is so much you can selfhost to gain the additional value of self-reliance, data control and access control. It is also one of the ways to degoogle your internet presence. It will take time and effort at the beginning, but the maintenance itself doesn't take more than a couple of hours a month. If you care about degoogling your phone too, I wrote a [howto article](gemini://mizik.eu/blog/howto-degoogle-your-android-phone/) about it. The next article of this selfhosting series will be about the setup and hardening of your first service: [web service](gemini://mizik.eu/blog/how-to-setup-and-secure-web-server/).]]></content>
<author>
<name>Marián Mižik</name>
</author>
<category term="VPS, Linux, Self-host" />
<summary>
The fourth article of the Linux VPS series covers some of the services you can selfhost and the pros and cons of selfhosting them compared to using established cloud services from big companies.</summary>
</entry>
<entry>
<title>Picotui, the most understandable tui library out there</title>
<link href="gemini://mizik.eu/blog/picotui-the-most-understandable-tui-library-out-there/" rel="alternate" type="text/plain" title="Picotui, the most understandable tui library out there" />
<published>2020-10-04T00:00:00+02:00</published>
<updated>2020-10-04T00:00:00+02:00</updated>
<id>gemini://mizik.eu/blog/picotui-the-most-understandable-tui-library-out-there/</id>
<content><![CDATA[I have been using several TUI libraries: curses, urwid, npyscreen and also some non-Python ones. The story is always the same. The library is written with a catastrophic API, using obsolete paradigms and with no simple way of extending the existing code. The code is often very hard to understand. So I went on a quest to find the most understandable one that would suit my needs.
I crunched through a bunch of them and then found Picotui. It is a very small Python TUI library that does not use curses as a rendering engine, and it does not optimize screen refresh bandwidth, but it comes with a decent set of widgets and the whole code is in 9 files. There are also no tests, there is no documentation and there are very few comments in the code. I almost forgot: the author defines it as an experimental WIP project and, based on his activity on GitHub, its development and support is virtually non-existent. That being said, after I went through most of the code and after building some examples, I can say that it is still the best library out there when it comes to extendability and the ability to understand its basic architectural patterns.
Every widget extends the Screen class. Screen defines basic utilities for rendering widgets on the terminal screen. So you get the screen size, tty init and dispose, a mouse support toggle, rendering functions for several types of borders and boxes, cursor manipulation, character attribute manipulation functions, character write support and hooks for screen redraw and screen resize. That's basically it. Then you get a set of abstract widgets built on top of the Screen class: base widget, focusable widget, editable widget, choice widget and item selection widget. The base widget adds key and mouse input handling, primitive support for events and a default loop function. The rest of the mentioned widgets extend the base widget and add only very little on top. This is the common abstraction layer for all widget implementations: label, frame, button, check box, radio, list box, popup list, drop down, single and multi line entry, combo box and editor. There is also support for menus and dialogs. There are even some common dialog implementations like a confirmation dialog or a single input dialog. The most complex widget is Editor, with its extended variants EditorExt, Viewer, LineEditor, or versions with some kind of color support. Editor will let you write text that will be wrapped when you reach the end of the widget's width. It also handles the enter, delete and backspace keys for new lines and deletion, but it ends there. There is no support for the home, end or tab keys. You also cannot init Editor directly with given text; you will have to split it into lines yourself, and you are also responsible for keeping an eye on the correct line length and therefore splitting in the right place (not in the middle of a word or URL).
The basic logic is the same as in any other TUI library and that is the loop. The loop is an infinite cycle that waits for input. When input comes, it is processed by your logic, the loop iteration finishes and it waits again for another input. Every widget has its own loop, but in most cases your application will not use the native widget loop; it will instead run some kind of custom wrapping loop function that chooses which widget will consume the input. The second important fact is that in most cases you need to manage screen redrawing by yourself. Widgets do redraw themselves when using their native API functions, but in most real world usecases you will also have to redraw other widgets that rely on the one that consumed the input. Picotui also lacks any automated layout system; you will have to statically position (and reposition) every widget on your screen. The current screen size can be obtained from the Screen class.
If you are building a TUI application that will run locally, or remotely over a stable average internet connection. If you care about extendability and simplicity. If you want to understand the libraries you are using as building blocks for your project. If you don't mind learning directly from the code with no dev support available. Then this is the TUI library for you. It will take you a day or two to grasp the concepts and functionalities, but then you will be able to work with it as if it were a direct part of your project. I have customized it to my liking too; based on what it delivers out of the box, that will be necessary in most real world scenarios. I have wrapped some of the native widgets with borders, added text wrapping support to Editor, switched the controlling key bindings to vim-like bindings and rewrote two of my personal apps with it, finally understanding how and why it does what it does without hours of surfing the internet to find answers.]]></content>
<author>
<name>Marián Mižik</name>
</author>
<category term="Linux, TUI" />
<summary>
I have been using several TUI libraries: curses, urwid, npyscreen and also some non-Python ones. The story is always the same. The library is written with a catastrophic API, using obsolete paradigms and with no simple way of extending the existing code. The code is often very hard to understand. So I went on a quest to find the most understandable one that would suit my needs.</summary>
</entry>
<entry>
<title>Howto secure your personal Linux VPS</title>
<link href="gemini://mizik.eu/blog/how-to-secure-your-personal-linux-vps/" rel="alternate" type="text/plain" title="Howto secure your personal Linux VPS" />
<published>2020-09-22T00:00:00+02:00</published>
<updated>2020-09-22T00:00:00+02:00</updated>
<id>gemini://mizik.eu/blog/how-to-secure-your-personal-linux-vps/</id>
<content><![CDATA[This is the third part of a small "Linux VPS howto" series and it talks about securing a default Linux installation.
Howto setup your personal XMPP server
Howto setup your personal CalDAV/CardDAV server
Howto proxy your self-hosted services using web server
Howto setup and secure web server
Services you can selfhost on your personal Linux VPS
Howto secure your personal Linux VPS
Howto setup your personal Linux VPS
Why setup your personal Linux VPS
So our VPS is up and running. In most cases, what you have now is a minimal installation of the distribution you selected. This is good, because we want to have installed only those packages we really need and nothing more. You can check what is installed/running and remove some additional packages if possible. The probability of an exploitable vulnerability rises with every single installed package, especially if that application can be reached remotely. Any real examples in this article are for Debian/Ubuntu, as these two together are the prevalent option among personal VPSes (I failed to find an article to confirm that claim, though).
An important first step is to bring all installed packages up to date using the distro specific (package manager specific) set of commands. For example, in the case of Debian it will be: 'apt update && apt upgrade && apt dist-upgrade && apt autoremove && apt clean'. Be aware that the hardening steps in this article are valid for a VPS where all users are trusted. In the case of a multi user machine where the users cannot be trusted, you should apply many more rules and restrictions than the ones below, and those are not covered in this article.
So you are logged in as root. Change your password to something different than the generated one. Then create another, unprivileged user with the name of your choice, for example 'john'. You can set up sudo for this user or stick to the classic 'su'; it doesn't matter in this case. Now open /etc/ssh/sshd_config, we are going to harden SSH access. Make these changes:
This changes the amount of time in seconds you have to finish the login. The default is 2 minutes. Nobody needs that much.
Disable the ability to log in directly as root. It is best practice not to have your privileged user accessible via SSH.
Enable SSH server strict mode. When enabled, the system applies additional checks, such as refusing key files with overly permissive file modes.
Enable additional restrictions on unauthenticated incoming connections.
Automatically log out after a specified duration of inactivity in seconds. It is good practice to do so in case you forget to log out manually. But this setting will surely get on your nerves later :)
Disable the ability to run X applications remotely via SSH. In most cases you won't need this on your personal VPS.
Disable the ability to authenticate using a password. Best practice is to log in via public keys, as this effectively disables brute force dictionary attacks on user passwords.
Enable authentication using public keys. When this is enabled and password login is disabled, you need to generate a pair of private and public keys for every user that will log in using SSH. There are plenty of howtos on the internet.
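For example, generating an ed25519 key pair locally might look like this (the file path and comment are arbitrary; an empty passphrase is used here only to keep the example non-interactive, use a real passphrase in practice):

```shell
# Generate the key pair on your local machine
ssh-keygen -t ed25519 -N "" -f /tmp/vps_key -C "john@vps"

# Then install the public key on the server for user 'john', e.g.:
# ssh-copy-id -i /tmp/vps_key.pub john@example.com
```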
Allow only your newly created user(s) to log in using SSH.
Change the port the SSH server listens on to some high port. There is controversy over this setting. If you leave your SSH port on 22, you will in most cases be the target of a significant amount of automated dictionary attacks, which will spam your logs and monitoring outputs. If your SSH is set up correctly those attacks are not harmful, but they will trigger many false positive monitoring alerts and make your logcheck output harder to read. This can all be solved by moving SSH to a non-standard high port. Automated attacks don't scan your machine for SSH, because those scripts don't want to waste time going through a huge amount of IP addresses; they just go for port 22. The problem is that in POSIX compliant systems all ports below 1024 are privileged and require root to start listening, so you can be sure that a service listening on 22 is really your SSH server. If you move it for example to 48277, any local user can spawn a daemon listening on that port, trying to act as an SSH server and potentially read your passwords. But for that, you would have to use passwords to log in, someone would already need access to your system, and they would also have to kill the already running SSH server, which runs under the root account. I personally always go for a high port, because I like to have my monitoring set up sensitively and I read all reports coming to my email. With SSH on port 22 I would either get a huge amount of false positives, or I would have to restrict SSH monitoring. There are also reasonable arguments for keeping the default port; read both sides before deciding.
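Putting the changes above together, the relevant fragment of /etc/ssh/sshd_config might look like this. The user name and port are the examples from this article, and the exact directives for the "inactivity" and "unauthenticated traffic" items are my interpretation (ClientAliveInterval/ClientAliveCountMax and MaxStartups respectively); adjust the values to your taste:

```
LoginGraceTime 30
PermitRootLogin no
StrictModes yes
MaxStartups 3:50:10
ClientAliveInterval 300
ClientAliveCountMax 0
X11Forwarding no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers john
Port 48277
```

After editing, restart the SSH daemon (e.g. 'systemctl restart sshd'), but keep your current session open and verify from a second terminal that you can still log in before closing it.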
The default firewall in Linux for many years was netfilter/iptables. There is now also a successor available called nftables. Both of them have a similar syntax and it is hard for an unfamiliar person to master. So for the sake of this article I will use UFW, which is short for "uncomplicated firewall" and uses iptables under the hood. Basically, it is an app that simplifies iptables usage for you. First install it using the distro package manager. Then we will apply a default set of rules by executing these commands:
Deny all incoming traffic by default. It is best practice to deny everything that comes in and allow only specific things manually.
Make no restrictions on outgoing traffic, because by default we do not fear what is going out. Outgoing traffic is initiated either by a service we installed and trust, or by a user we allowed and, in the case of our setup, most likely also impersonate.
Allow incoming traffic on port 48277, which we chose as an example custom high SSH port. If you have chosen a different port, or went for the default 22, put your selected port here. This rule is crucial, otherwise we would cut ourselves off from SSH and our remote connection might be lost.
Allow incoming traffic on port 80. Allow this only if you plan to host a web server, or use a web server as a proxy for some other services. Port 80 should be enabled only for backward compatibility with browsers and other clients. Best practice is to serve everything over HTTPS, which is port 443. So if someone requests data using HTTP (port 80), the web server should first upgrade (redirect) the communication to HTTPS and then continue to serve whatever the client requested. We will take care of this in a different article covering web server setup.
Allow incoming traffic on port 443. Allow this only if you plan to host a web server.
The firewall is disabled by default after installation. This enables it and automatically applies all rules we have defined so far. The rule definitions are persistent, so they will survive both a firewall restart and an OS restart.
Print out the current firewall status with all applied rules.
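The steps described above, in order, as they would be typed (run as root; 48277 is this article's example SSH port):

```shell
ufw default deny incoming
ufw default allow outgoing
ufw allow 48277/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
ufw status verbose
```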
You can do much more with the firewall and it is important to at least know what you are able to do with it. There are many articles about UFW's capabilities, or you can use the ufw man pages. Definitely check some docs out, because sooner or later you will want to enable a new service, or remove an automatically applied IP block created by fail2ban, which we will cover later in this article.
This is another highly debated step. The world has already depleted all IPv4 segments, but IPv4 is still dominant. Unfortunately, there is a much higher rate of suspicious activity on IPv6 than on IPv4, and the main rule when trying to secure your VPS is: disable/delete/remove everything you don't need. Therefore I always turn IPv6 off by default if it is not needed. To do so, you need to modify the /etc/sysctl.conf file like so:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
net.ipv6.conf.eth1.disable_ipv6 = 1
net.ipv6.conf.ppp0.disable_ipv6 = 1
net.ipv6.conf.tun0.disable_ipv6 = 1
As you can see, we are disabling IPv6 globally and then also for every network interface present on the machine. Double check which interfaces you have present and modify the lines above according to your findings. To turn off IPv6 for UFW, you need to set 'IPV6=no' in /etc/default/ufw. You may need to turn it off manually for other services you will host. For example, in the case of the Postfix mail server, you would have to set 'inet_protocols = ipv4' in /etc/postfix/main.cf.
Fail2ban is intrusion detection and prevention software that protects your VPS from brute-force attacks. It monitors log files and takes actions according to its findings and configuration. It automatically covers many log formats and supports many standard services out there. It mainly consists of actions, filters and jails. Actions define what should be done when something happens. Filters define how to detect that something happened. Jails put actions and filters together with some additional configuration and settings. For now, we have no service installed and exposed to the public besides SSH and networking, so fail2ban should target these two by detecting port scanning and SSH connection failures. There are plenty of howtos for most types of jails, including port scan detection and SSH. Don't forget to set the correct sender email address in /etc/fail2ban/jail.d/jail.local.
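A minimal /etc/fail2ban/jail.d/jail.local enabling the SSH jail could look like this (addresses, port and times are example values):

```ini
[DEFAULT]
destemail = john@example.com
sender = fail2ban@example.com
# ban the attacker and also send an email with a whois report
action = %(action_mw)s

[sshd]
enabled = true
port = 48277
maxretry = 5
bantime = 3600
```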
Logcheck is a simple utility that can read many log formats and summarize them into an email. The destination email can be configured in /etc/logcheck/logcheck.conf. The log files that logcheck will include in the final summary are defined in /etc/logcheck/logcheck.logfiles. You can define what in those log files should be ignored and left out of the summary. Make sure that all log files you want scanned are readable by logcheck. Logcheck is not executed automatically; you should take care of that by scheduling it with cron.
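The cron scheduling can be as simple as a system crontab entry like this (the hourly schedule is just an example; note that the cron.d format includes a user field):

```
# /etc/cron.d/logcheck
2 * * * * logcheck /usr/sbin/logcheck
```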
Fail2ban, logcheck and also your future monitoring utilities depend on the ability to send emails from your VPS. You don't have to set up a whole email server for that; the only thing you need is the ability to send mail. For the sake of this series I will use Postfix, because in later articles I will explain how to set up a selfhosted email server using Postfix. There are many alternatives like sendmail, Exim, qmail and others. You can check the differences and make a personal decision.
Debian, for example, has an automatic post-install curses based setup, where you only select the 'Internet site' configuration and then define your domain as the system mail name. But everything can be done manually too, using the two main Postfix configuration files: /etc/postfix/main.cf and /etc/postfix/master.cf. It is also good practice to set your mail domain in the /etc/mailname file if your distro supports it.
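For a send-only setup, the handful of main.cf settings that matter might look like this (the domain is a placeholder; 'loopback-only' keeps Postfix from accepting mail from the outside):

```
# /etc/postfix/main.cf (fragment)
myhostname = mail.example.com
mydomain = example.com
myorigin = $mydomain
inet_interfaces = loopback-only
inet_protocols = ipv4
```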
Many distros have the ability to automate package updates. A regularly updated OS is very important when it comes to security, so be sure you are updating either manually or via an automated procedure. In the case of Debian based distros, it is the package 'unattended-upgrades', with a simple configuration using config files located in /etc/apt/.
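On Debian/Ubuntu, enabling the periodic runs boils down to two lines:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```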
Some distributions allow you to check the checksum validity of installed packages. Thanks to such a mechanism you are able to detect corrupted or altered files. In the case of Debian, the package is called debsums. It can even check whether a log file has shrunk since the last check, which can point to manual log file modification. Be sure to add the debsums log file among the files summarized by logcheck.
Security-Enhanced Linux is a kernel module that can define access controls for users, applications, processes and files. So for example, if you have a database that should run under a specific user and access only data in a specified directory, you can define a set of rules that will constrain the running process to obey them. It can be configured to run in enforcing or permissive mode; in the latter case it will only log detected issues. SELinux uses users and groups in its rules, but these are not your OS users and groups; this is confusing for many people and you should remember it. SELinux is a huge topic and is covered in detail in the official documentation of every big distro out there.
If you apply everything above, your VPS is reasonably secured and you are ready to deploy some useful services. In the [next article](gemini://mizik.eu/blog/what-service-you-can-host-on-your-personal-linux-vps/) I will talk about which interesting services can be selfhosted and why.]]></content>
<author>
<name>Marián Mižik</name>
</author>
<category term="VPS, Linux, Self-host" />
<summary>
This is the third part of a small "Linux VPS howto" series and it talks about securing a default Linux installation.</summary>
</entry>
<entry>
<title>Howto setup your personal Linux VPS</title>
<link href="gemini://mizik.eu/blog/how-to-setup-your-personal-linux-vps/" rel="alternate" type="text/plain" title="Howto setup your personal Linux VPS" />
<published>2020-08-21T00:00:00+02:00</published>
<updated>2020-08-21T00:00:00+02:00</updated>
<id>gemini://mizik.eu/blog/how-to-setup-your-personal-linux-vps/</id>
<content><![CDATA[This is the second article of a small series about taking care of your own VPS. This one is about all the necessary non-admin things to think about when setting the VPS up.
Howto setup your personal XMPP server
Howto setup your personal CalDAV/CardDAV server
Howto proxy your self-hosted services using web server
Howto setup and secure web server
Services you can selfhost on your personal Linux VPS
Howto secure your personal Linux VPS
Howto setup your personal Linux VPS
Why setup your personal Linux VPS
A domain is an almost necessary part of connecting to your VPS and the services you deploy on your virtual machine. It doesn't really matter what domain you choose. If you don't care, go for the cheapest TLD; if you want something fancy, go for fancy :) Some domain registrars will also give you the ability to manage DNS. If you choose one that does not support such a feature, double check whether your VPS provider has such an option, otherwise you are stuck with setting up your own DNS server instance, or using a 3rd party service to manage DNS.
There is a plethora of VPS providers these days. Just choose the one you like. If you're choosing from worldwide providers, you can compare them on several VPS comparison portals. Be sure to check the price, the SLA, how good their VPS management website is, how good their support is, and whether they support the Linux distro you would like to have installed. I personally use a local central European provider. They are a bit on the expensive side, but they have a great admin page, online chat support, they are also a domain registrar, they provide free DNS management and a free VPS snapshot feature, their VPSes are super stable and their IP segments almost never end up on blacklists. This, together with the ability to set a reverse DNS record, is crucial if you want to host your own email server.
The price of a VPS is based on the HW parameters you choose. I personally always target the lowest possible and upgrade if necessary. Almost every provider gives an option to upgrade the HW parameters of an existing VPS. Be aware that not every one gives you the ability to downgrade. One more reason besides price to go for the lowest possible configuration is that you are then forced to care about how carefully selected and optimized the services you host are. By tweaking them and reading about them, you learn. And one great reason to host your own server besides privacy is that you learn Linux more deeply and in a different way, as in this case you are not only the user but also the administrator. My VPS never had more than 1 vCPU, 1GB RAM and 20GB of space. But it all depends on what services you would like to host. I currently run only one VPS with 512MB RAM and I host an email server, WebDAV, CardDAV, CalDAV, a task management app, an RSS reader and aggregator and a web server.
Most providers will let you choose the name, distro, HW parameters, additional features and a root password you will use to log in to the machine for the first time. Some providers will give you the option to upload a public key to use for SSH authentication. After submitting the form, you will need to wait some time for the VPS to be generated. In most cases it doesn't take longer than a couple of minutes. After that you are ready to log in to your machine.
Before you start any work on your VPS, it is a good thing to set up at least the DNS 'A' record, as you already have the VPS public IP. In some cases it takes several hours for a DNS update to propagate, so let's do it as soon as possible. The VPS IP is in most cases shown on the admin page, but if not, you can get it in the terminal using the commands 'ip a' or 'ifconfig'. If you are planning to set up reverse DNS, do it now. Reverse DNS is based on PTR records, but most providers give you a dedicated GUI form for setting it up. If you have opted for any form of monitoring, backup, snapshot generation and so on, you can configure it now if applicable.
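You can verify both records from any machine with dig (the domain and IP below are placeholders):

```shell
# Forward lookup: has the A record propagated?
dig +short A example.com

# Reverse lookup: does the PTR record point back to your domain?
dig +short -x 203.0.113.10
```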
Now you should have your VPS set up and running. It is time to secure and harden it before you start installing any services. I will cover that in the [next article](gemini://mizik.eu/blog/how-to-secure-your-personal-linux-vps/).]]></content>
<author>
<name>Marián Mižik</name>
</author>
<category term="VPS, Linux, Self-host" />
<summary>
This is the second article of a small series about taking care of your own VPS. This one is about all the necessary non-admin things to think about when setting the VPS up.</summary>
</entry>
<entry>
<title>Why setup your personal Linux VPS</title>
<link href="gemini://mizik.eu/blog/why-setup-your-personal-linux-vps/" rel="alternate" type="text/plain" title="Why setup your personal Linux VPS" />
<published>2020-07-20T00:00:00+02:00</published>
<updated>2020-07-20T00:00:00+02:00</updated>
<id>gemini://mizik.eu/blog/why-setup-your-personal-linux-vps/</id>
<content><![CDATA[This article is the first in a small series about owning your personal VPS. It covers the reasons why you should or should not host services by yourself on a personal VPS.
Howto setup your personal XMPP server
Howto setup your personal CalDAV/CardDAV server
Howto proxy your self-hosted services using web server
Howto setup and secure web server
Services you can selfhost on your personal Linux VPS
Howto secure your personal Linux VPS
Howto setup your personal Linux VPS
Why setup your personal Linux VPS
Setting up an operating system with selfhosted services requires some knowledge. Debugging problems or optimizing configurations does too. If you are interested in Linux, operating systems in general, networking, firewalls, services, security... then the best way to learn is to work with it. Your personal server / playground is a great choice for practical learning.
If you care about the privacy of your personal data, then you will have full control when the services you use are selfhosted by you, on your machine. You don't need to care about licenses, terms of service, or cyber attacks targeting online services with millions of users. There is a much smaller chance of being hacked for your data when you selfhost your services. The reason is: no one cares. It is more efficient to attack services with a huge amount of users. The personal data of one individual has no value unless someone is interested directly in you.
You have full control over your data and also over the services you host. You can replace them or make any changes you want. Since you have control over the filesystem and databases, there is also a higher chance that you can back up, convert, or migrate data between services that have no export/import compatibility.
If tinkering with the stuff mentioned above is not going to be fun for you, then you should probably not go for it, unless you have senior knowledge and you just need to get things done for some reason. If this is your first attempt to walk this rocky path, you should be enthusiastic about it, otherwise you probably won't finish, or you will revert to your current easy digital life after some time.
Taking care of a VPS takes time. Especially in the beginning, during the learning, setup and configuration phase. If you are interested in it, you will probably optimize and modify stuff along the way as you find out about better or more interesting ways of doing things. If you don't have time, better to try it later when you have some.
Most of the mainstream services are rock stable, backed by redundancy, clustering, automatic crash recovery, backups and 24/7 care by system admins. If you don't have the experience and knowledge, it will take time to fine-tune things to be stable. You need to count on that. But don't worry: after that, you will be able to get it stable enough for daily use without much trouble.
If, after reading the information above, you are still interested in setting up your own personal VPS, continue to my [next article](gemini://mizik.eu/blog/how-to-setup-your-personal-linux-vps/).]]></content>
<author>
<name>Marián Mižik</name>
</author>
<category term="VPS, Linux, Self-host" />
<summary>
This article is the first in a small series about owning your personal VPS. It covers the reasons why you should or should not host services by yourself on a personal VPS.</summary>
</entry>
<entry>
<title>My journey to become a Gentoo fan</title>
<link href="gemini://mizik.eu/blog/my-journey-to-become-gentoo-fan/" rel="alternate" type="text/plain" title="My journey to become a Gentoo fan" />
<published>2020-06-13T00:00:00+02:00</published>
<updated>2020-06-13T00:00:00+02:00</updated>
<id>gemini://mizik.eu/blog/my-journey-to-become-gentoo-fan/</id>
<content><![CDATA[I started working with Linux 17 years ago (in 2003). It was Debian Woody. The kernel version was 2.4.x and everybody was talking about making the big step to 2.6. Linux of that era was a complete disaster when it came to UX or working "out of the box", but for me it was fun, and I also liked that "underground" feeling about it. I didn't understand most of the underlying things, and to be honest, every time I got sick of it or wanted to play some games, I just rebooted to Windows XP :)
So there I was, hopping regularly between two systems based on my mood and laziness of the moment. One day I was sitting on my dorm room balcony together with my roommate. Both of us were volunteering in our university network administration club; I was a webmaster, my roommate a system administrator. He was very eager about a distro he had heard of, which had no automated installation: you had to compile the whole system locally, by yourself, from scratch, optimized for your CPU architecture, CPU features, and custom needs. You could choose between 3 types of initial library collections called stage1, stage2, and stage3, where stage3 was an almost complete minimal system and stage1 was little more than a compiler with the dependencies necessary to start. You had to unpack it, chroot into it, and start building your system from ground zero. I was thinking about making the effort to move completely to Linux, and this distro sounded intriguing. I didn't know what chroot was, but I had a gap between two semesters and plenty of time, as I only had a part-time job as a Java programmer. So we shook hands, opened some beers, and the mission started right at that moment. I formatted my main drive and smashed my Windows installation CD in half, ready to begin the journey to become a pure Linux user, to get everything I needed (and previously had only on Windows) installed, configured, and running on Linux. I knew it would take many hours to accomplish, but boy! If someone had told me that the only thing I would have after 40 hours of almost non-stop work would be a successful boot to a TTY, I probably wouldn't have taken that rocky path.
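For readers who have never seen it, the unpack-and-chroot step looks roughly like this in the modern stage3 flow (a from-memory sketch, not the full procedure; the Gentoo Handbook has the authoritative steps, and the stage1 route we took back then was considerably longer):

```
# with the target partition mounted at /mnt/gentoo
cd /mnt/gentoo
tar xpf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner

# make the live system's kernel interfaces visible inside the new root
mount --types proc /proc /mnt/gentoo/proc
mount --rbind /sys  /mnt/gentoo/sys
mount --rbind /dev  /mnt/gentoo/dev

# step inside and build the rest of the system from within
chroot /mnt/gentoo /bin/bash
source /etc/profile
```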
It took me 9 days of 12+ hours each to get most of the stuff ready, but during that time I absorbed such a huge amount of Linux knowledge that I felt like Neo from The Matrix. I had a super small, optimized kernel compiled with no modules. I had a TTY with high resolution and a framebuffer. I had an optimized, very quick system running with a much smaller memory footprint. I learned what chroot was, and also about init, runlevels, and process priorities. I fell in love with portage, overlays, ebuilds, USE flags, and all the main Gentoo concepts. I found out that I could choose my own init system, cron scheduler, system logger, boot loader, and kernel patches. I finally managed to make an installation where the KDE (Qt) and Gnome (Gtk) libraries were not messed up together by random software dependencies. I knew what software I had in my system and why. I could finally read ps aux and understand most of the lines, and I could check which ports were open and knew the reason for each. For the first time in my life I was the master of my operating system, and it felt great.
Nowadays, I can install Gentoo in 3-4 hours. The official Gentoo documentation now tells you to start with stage3, to spare you the agony of trying to grow your system from stage1 (AFAIK stage1 is not even available anymore). My machine can compile a kernel in under 1 minute, which is a huge blast. Imagine waiting almost 1 hour after every kernel config change, then rebooting only to get a kernel panic during startup :) In other words, it is much easier to go with Gentoo these days, but the benefits are the same as they were in the past. The last time I checked, the official installation documentation was still superb and up to date.
Since then I have had to use many other distros because of school or work, and I found out that even when I knew how things should work, some distros fought against me. It seemed that the more user-friendly a Linux distro tries to be, the more it fights the power user who wants to do something manually or in a custom way. But not on Gentoo's watch. Over more than 12 years, every time I didn't like something, Gentoo gave me an elegant way to accomplish the change I was about to make. I wanted a plain old simple init system, and it never forced me to use systemd. I hated the instability of PulseAudio in its early versions, and Gentoo to this day gives me the ability to run on plain ALSA. If I had issues with new hardware and needed a bleeding-edge kernel: piece of cake in Gentoo. You need a different version of some software in the repo? Most of the time you just pinpoint the preferred version in your portage config and run the install. Want to migrate your system from openssl to libressl for security reasons? Just change the USE flag and rebuild the affected packages. I tried other advanced "do it yourself" distros like Slackware or Arch, but for me personally, Gentoo is the one. Especially nowadays, when recompiling something is not a matter of minutes but rather seconds.
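To give a concrete flavour of the kind of tweaks I mean (the package name and flags below are placeholders for illustration, not a recipe), pinning a version or flipping a USE flag is just a line in a text file under /etc/portage, followed by a rebuild of whatever is affected:

```
# /etc/portage/package.accept_keywords/example
# accept one specific testing version of a package:
=app-misc/example-1.2.3 ~amd64

# /etc/portage/package.use/example
# toggle USE flags for a single package:
app-misc/example -systemd

# then let portage rebuild anything whose USE flags changed:
emerge --ask --changed-use --deep @world
```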
By the way, those 2 weeks without all the luxury of a graphical user interface gave me one more great experience. I learned how to use terminal alternatives to GUI applications. I had to use lynx for browsing, finch for IM, Midnight Commander for file management, vim for editing, htop for process management, mutt for mail, and so on, because I wasn't able to run X for quite some time :D The funny thing is that after I got the GUI back, I found out that in many cases I am faster and more satisfied with the TUI solutions than the GUI ones, and I use them to this day. But that is a completely different story to write about...]]></content>
<author>
<name>Marián Mižik</name>
</author>
<category term="Gentoo, Linux, Personal" />
<summary>
I started working with Linux 17 years ago (in 2003). It was Debian Woody. The kernel version was 2.4.x and everybody was talking about making the big step to 2.6. Linux of that era was a complete disaster when it came to UX or working "out of the box", but for me it was fun, and I also liked that "underground" feeling about it. I didn't understand most of the underlying things, and to be honest, every time I got sick of it or wanted to play some games, I just rebooted to Windows XP :)</summary>
</entry>
</feed>