<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Solene'%</title>
<description></description>
<link>gemini://perso.pw/blog/</link>
<atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
<item>
<title>Getting started to write firewall rules</title>
<description>
<![CDATA[
<pre># Introduction
This blog post is about designing firewall rules, not focusing on a specific operating system.
The idea came after I made a mistake on my test network: I exposed LAN services to the Internet after setting up a VPN with a static IPv4 address on it, because my firewall rules were too simplistic. While discussing this topic on Mastodon, some people mentioned they never know where to start when writing firewall rules.
# Firewall rules ordering
Firewall rules are evaluated one by one, and the evaluation order matters.
Some firewalls use a "first match" policy, where the first rule matching a packet is the one applied. Other firewalls use "last match", where the last matching rule wins.
# Block everything
The first step when writing firewall rules is to block all incoming and outgoing traffic.
There is no other way to correctly configure a firewall: if you only block the services you want to restrict and let a default allow rule do the rest, you are doing it wrong.
# Identify flows to open
As all flows should be blocked by default, you have to list what should go through the firewall, inbound and outbound.
In most cases, you will want to allow all outbound traffic, unless you have a specific environment in which outgoing traffic should only be allowed to certain IPs / ports.
For inbound traffic, if you do not host any services, there is nothing to open. Otherwise, make a list of the TCP, UDP, or other ports that should be reachable, and of who should be allowed to reach them.
# Write the rules
When writing your rules, whether they are inbound or outbound, be explicit whenever possible about this:
- restrict to a network interface
- restrict the source addresses (maybe a peer, a LAN, or anyone?)
- restrict to required ports only
In some situations, you may also want to filter by source and destination port at the same time. This is usually useful when two servers communicate over a protocol that enforces both ports.
This is actually where I failed and exposed my LAN Minecraft server to the wild. After setting up a VPN with a static IPv4 address, I only had an "allow tcp/25565" rule on my firewall, as I was relying on my ISP router not forwarding traffic. This rule was no longer safe once traffic arrived from the VPN, whereas it would have been filtered if the rule had been restricted to a given network interface or source network.
If you want to restrict access to a critical service to a few users (one or more) who do not have a static IP address, consider putting this service behind a VPN and restricting access to the VPN interface only.
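As an illustration, here is a minimal sketch of what explicit rules could look like with nftables on Linux; the table/chain layout, interface names, addresses and ports are placeholders to adapt to your own setup (it assumes an inet table named "filter" with an "input" chain already exists):
# SSH allowed only from the LAN, on the LAN interface
nft add rule inet filter input iifname "eth0" ip saddr 192.168.1.0/24 tcp dport 22 accept
# game server allowed only from the VPN interface and the VPN subnet
nft add rule inet filter input iifname "wg0" ip saddr 10.0.0.0/24 tcp dport 25565 accept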
# Write comments and keep track of changes
Firewall rules will evolve over time, so write down for your future self why you added this or that rule. Ideally, keep the firewall rules file under version control, so you can easily revert changes or dig into the history to understand one.
# Do not lock yourself out
When applying firewall rules for the first time, you may have made a mistake; if this happens on remote equipment with no (or complicated) physical access, it is important to prepare an escape hatch.
There are different methods; the simplest is to run a command in a second terminal that sleeps for 30 seconds and then resets the firewall to a known state, started just before loading the new rules. If you lock yourself out when applying the rules, just wait 30 seconds and fix the rules.
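For example, on Linux with nftables, a minimal sketch of such an escape hatch could be the following (the path to the known-good ruleset is an assumption); start it just before loading the new rules, and kill the background job once you are sure the new rules work:
sh -c 'sleep 30 && nft -f /etc/nftables.conf.known-good' &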
# Add statistics and logging
If you want to monitor your firewall, consider adding counters to rules: they will tell you how many times a rule was evaluated/matched and how many packets and bytes went through it. With nftables on Linux they are named "counters", whereas OpenBSD Packet Filter uses "labels" for this.
It is also possible to log packets matching a rule; this can be useful to debug an issue with the firewall, or to receive alerts in your logs when a rule is triggered.
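As a hedged nftables example (the chain layout and port are placeholders), a single rule can carry both a counter and a log statement, and `nft list ruleset` then shows the packet/byte counts:
nft add rule inet filter input tcp dport 22 counter log prefix "ssh-in " accept
nft list ruleset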
# Conclusion
Writing firewall rules is not a hard task once you have identified all the flows.
While companies have to maintain flow tables, I do not think they are useful for a personal network (your mileage may vary).
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/writing-firewall-rules.gmi</guid>
<link>gemini://perso.pw/blog//articles/writing-firewall-rules.gmi</link>
<pubDate>Wed, 11 Dec 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Why I stopped using OpenBSD</title>
<description>
<![CDATA[
<pre># Introduction
Last month, I decided to leave the OpenBSD team as I have not been using OpenBSD myself for a while. A lot of people asked me why I stopped using OpenBSD, although I have been advocating it for a while. Let me share my thoughts.
First, I like OpenBSD, it has values, and it is important that it exists. It just does not fit all needs, it does not fit mine anymore.
# Issues
Here is a short list of problems that, while bearable when taken individually, added up to the point where I had to move away from OpenBSD.
## Hardware compatibility
- no Bluetooth support
- limited gamepad support (not supported by all programs, and not every gamepad will work)
- battery life / heat / power usage (OpenBSD draws more power than alternatives, by a good margin)
## Software compatibility
As part of staying relevant on the DevOps market, I need to experiment with and learn a lot of things, including OCI containers, but also machine learning and some weird technologies. Running virtual machines on OpenBSD is really limited: running guests headless with one core and poor performance is not a good incentive to stay sharp.
As part of my consultancy work, I occasionally need to run proprietary crap; this is not an issue when running it in a VM, but I cannot do that on OpenBSD without a huge headache and very bad performance.
## Reliability
I have grievances against the OpenBSD file system. Every time OpenBSD crashes, and it happens very often for me when using it as a desktop, it ends with corrupted or lost files. This is just not something I can accept.
Of course, it may be a hardware compatibility issue; I never had issues on an old ThinkPad T400, but I got various lock-ups, freezes or kernel panics on the following machines:
- ThinkPad X395
- ThinkPad T470
- ThinkPad T480
- Ryzen 5600X + AMD GPU (desktop)
Would you like to keep using an operating system that eats your data daily? I don't. Maybe I am doing something weird, I don't know; I have never been able to pinpoint why I get so many crashes, although everyone else seems to have a stable experience with OpenBSD.
# Moving to Linux
I moved from OpenBSD to Qubes OS for almost everything (except playing video games), on which I run Fedora virtual machines (approximately 20 VMs running simultaneously on average). This provides better security than OpenBSD could give me, as I am able to separate every context into different spaces; this is absolutely hardcore for most users, but I just cannot go back to a traditional system after this.
=> https://dataswamp.org/~solene/2023-06-17-qubes-os-why.html Earlier blog post: Why one would use Qubes OS?
In addition, I have learned the following Linux features and became really happy with them:
- namespaces: being able to reduce the scope of a process is incredibly powerful; it has existed in Linux for a very long time, it is the foundation for running containers, and it is way better than chroots (a short sketch of namespaces and cgroups follows this list).
- cgroups: this is the kernel subsystem responsible for resource accounting; with it, it is possible to get accurate and reliable monitoring and to know how much network, I/O, CPU or memory has been used by a process. From an operator's point of view, it is really valuable to know exactly what is consuming resources when looking at the metrics. Where on OpenBSD you can only notice a CPU spike at some timestamp, on Linux you would be able to know which user used the CPU.
- systemd: journald, timers and scripting possibilities. I need to write a blog post about this, systemd is clearly disruptive, but it provides many good features. I understand it can make some people angry as they have to learn how to use it. The man pages are good though.
- swap compression: this feature allows me to push my hardware to its limits; with the lz4 compression algorithm, it is easy to get **extremely** fast swap paid for with some memory. The compression ratio is usually 3:1 or 4:1, which is pretty good.
- modern storage backends: between LVM, btrfs and ZFS, there are super nice things to achieve depending on the hardware, for maximum performance, reliability and scalability. I love transparent compression as I can just store more data on my hardware (when it is compressible, of course).
- flatpak: I really like software distribution done with flatpak; packages all run in their own namespace, they cannot access the whole file system, you can roll back to a previous version, and do some interesting stuff.
- auditd: this is a must-have for secure environments; it allows logging all accesses matching some rules (like when an arbitrary file was accessed or modified, etc.). This does not even exist in OpenBSD (maybe if you could run ktrace on PID 1 you could do something?). This kind of feature is a basic requirement for many qualified secure environments.
- SELinux: although many people disable it immediately after the first time it gets in their way (without digging further), this is a very powerful security mechanism that mitigates entire classes of vulnerabilities.
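A quick sketch of what namespaces and cgroups look like in practice, using standard util-linux and systemd tools run as root; "some-command" is a placeholder:
# run a shell in its own network namespace: only a loopback interface is visible
unshare --net --map-root-user sh -c 'ip addr'
# run a command in its own cgroup with a memory cap, then watch per-unit accounting
systemd-run --scope -p MemoryMax=512M -- some-command
systemd-cgtop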
When using a desktop for gaming, I found Fedora Silverblue to be a very solid system with reliable upgrades, good quality and a lot of software choice.
# Conclusion
I had too many issues with OpenBSD. I wanted to come back to it twice this year, but I just lost two days of my life due to all the crashes eating data. And when it was working fine, I was really frustrated by the performance and by not being able to do the work I needed to do.
But as I said, I am glad there are happy OpenBSD users who enjoy it and have a reliable system with it. From the various talks I had with users, the most common (by far) point in OpenBSD's favor is that users can understand what is going on. This is certainly a quality that can only be found in OpenBSD (maybe NetBSD too?).
I will continue to advocate OpenBSD in situations where I think it is relevant, and I will continue to verify OpenBSD compatibility when contributing to open source software (the most recent being Peergos). This is something that matters a lot to me, in case I go back to OpenBSD :-)
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/why-i-stopped-using-openbsd.gmi</guid>
<link>gemini://perso.pw/blog//articles/why-i-stopped-using-openbsd.gmi</link>
<pubDate>Mon, 18 Nov 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Self-hosted web browser bookmarks syncing</title>
<description>
<![CDATA[
<pre># Introduction
This blog post is about Floccus, a self-hostable web browser bookmark and tab syncing software.
What is cool with Floccus is that it works on all major web browsers (Chromium, Google Chrome, Mozilla Firefox, Opera, Brave, Vivaldi and Microsoft Edge), allowing you to share bookmarks/tabs without depending on the browser's integrated feature; it also supports multiple backends and allows the sync file to be encrypted.
=> https://floccus.org/ Floccus official project website
The project is actively developed and maintained.
=> https://github.com/floccusaddon/floccus Floccus GitHub repository
If you want to share a bookmark folder with other people (relatives, a team at work), do not forget to make a dedicated account on the backend as the credentials will be shared.
# Features
- can sync bookmarks or tabs
- sync over WebDAV, Nextcloud, git, linkwarden and Google Drive
- (optional) encrypted file on the shared storage with WebDAV and Google Drive backends
- (optional) security to not sync if more than 50% of the bookmarks changed
- can sync a single bookmark directory
- sync one-way or two-ways
- non-HTTP URLs can be saved when using the WebDAV or Google Drive backends (ftp:, javascript:, data:, chrome:)
- getting rid of Floccus is easy: it has an export feature, and you can also export your bookmarks directly from the web browser
# Setup
There is not much to set up, but the process looks like this:
1. install the web browser extension (it is published on Chrome, Mozilla and Edge stores)
2. click on the Floccus icon and click on "Add a profile"
3. choose the backend
4. type credentials for the backend
5. configure the sync options you want
6. enjoy!
After you are done, repeat the process on another web browser if you want to enable sync, otherwise Floccus will "only" serve as a bookmark backup solution.
# Conclusion
It is the first bookmark sync solution I am happy with, it just works, supports end-to-end encryption, and does not force you to use the same web browser across all your devices.
Before this, I tried the integrated web browser sync solutions, but self-hosting them was not always possible (or a terrible experience). I gave "bookmark managers" a try (linkding, buku, shiori), but whether on the command line or with a web UI, I did not really like them, as I found them rather impractical for daily use. I just wanted to have my bookmarks stored in the browser, and to be able to easily search/open them. Floccus does the job.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/selfhosted-bookmark-sync.gmi</guid>
<link>gemini://perso.pw/blog//articles/selfhosted-bookmark-sync.gmi</link>
<pubDate>Tue, 05 Nov 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Using a dedicated administration workstation for my infrastructure</title>
<description>
<![CDATA[
<pre># Introduction
As I moved my infrastructure to a whole new architecture, I decided to only expose critical accesses to dedicated administration systems (I have just one). That workstation is dedicated to my infrastructure administration; it can only connect to my servers over a VPN and cannot reach the Internet.
This blog post explains why I am doing this and gives a high-level overview of the setup. The implementation details are not fascinating, as it only requires basic firewall, HTTP proxy and VPN configuration.
# The need
I wanted my regular computer to not be able to handle any administration task, so I have a computer "like a regular person": no SSH keys, no VPN, and a password manager that does not mix personal credentials with administration credentials... To limit credential leaks or malware risks, it makes sense to decouple the admin role from the "everything else" role. So far, I have been using Qubes OS, which helped me do so at the software level, but I wanted to go further.
# Setup
This is a rather quick and simple explanation of what you have to do in order to set up a dedicated system for administration tasks.
## Workstation
The admin workstation I use is an old laptop; it only needs a web browser (except if you have no internal web services), an SSH client, and the ability to connect to a VPN. Almost any OS can do it, just pick the one you are the most comfortable with, especially with regard to the firewall configuration.
The workstation has its own SSH key that is deployed on the servers. It also has its own VPN to the infrastructure core. And its own password manager.
Its firewall is configured to block all in and out traffic except the following (a minimal sketch of the outbound rules follows the list):
- UDP traffic to allow WireGuard
- HTTP proxy address:port through WireGuard interface
- SSH through WireGuard
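Assuming a Linux workstation with nftables (interface names, proxy address and ports are placeholders), the outbound side could look like this sketch; the inbound side is handled in the same spirit:
nft add table inet fw
nft add chain inet fw output '{ type filter hook output priority 0 ; policy drop ; }'
# WireGuard to the VPN endpoint
nft add rule inet fw output oifname "eth0" udp dport 51820 accept
# HTTP proxy and SSH, only through the WireGuard interface
nft add rule inet fw output oifname "wg0" ip daddr 10.0.0.10 tcp dport 8082 accept
nft add rule inet fw output oifname "wg0" tcp dport 22 accept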
The HTTP proxy exposed on the infrastructure has a whitelist to allow only a few FQDNs. I actually want to use the admin workstation for some tasks, like managing my domains through my registrar's web console. Keeping the list as small as possible is important; you do not want to start using this workstation for browsing the web or reading emails.
On this machine, make sure to configure the system to use the HTTP proxy for updates and for installing packages. The difficulty of doing so varies from one operating system to another. While Debian only required a single file in `/etc/apt/apt.conf.d/` to make apt use the HTTP proxy, OpenBSD needed both `http_proxy` and `https_proxy` environment variables, and some scripts needed to be patched as they do not use these variables; I had to check that fw_update, pkg_add, sysupgrade and syspatch were all working.
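As a hedged example (the proxy address and the apt configuration file name are placeholders):
# Debian: make apt use the HTTP proxy
echo 'Acquire::http::Proxy "http://10.0.0.10:8082/";' > /etc/apt/apt.conf.d/80proxy
echo 'Acquire::https::Proxy "http://10.0.0.10:8082/";' >> /etc/apt/apt.conf.d/80proxy
# OpenBSD: export the variables for the tools that honor them
export http_proxy="http://10.0.0.10:8082/"
export https_proxy="http://10.0.0.10:8082/"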
Ideally, if you can afford it, configure remote logging of this workstation's logs to a central log server. When available, `auditd` monitoring access to and changes of important files in `/etc` could give precious information.
## Servers
My SSH servers are only reachable through a VPN; I do not expose SSH publicly anymore. I also do IP filtering over the VPN, so only the VPN clients that need to connect over SSH are allowed to do so.
The web interfaces of services like Minio, Pi-hole and the monitoring dashboard are all restricted to the admin workstation only. Sometimes, you have the opportunity to separate the admin part by adding an HTTP filter on an `/admin/` URI, or when the service uses a different port for administration than for the service itself (like Minio). When enabling a new service, think about everything you can restrict to the admin workstations only.
Depending on your infrastructure size and locations, you may want to use dedicated systems for SSH/VPN/HTTP proxy entry points, it is better if it is not shared with important services.
## File exchange
You will need to exchange data with the admin workstation (rarely the other way around); I found nncp to be a good tool for that. You can imagine a lot of different setups, but I recommend picking one that:
- does not require a daemon on the admin workstation: this does not increase the workstation attack surface
- allows encryption at rest: so you can easily use any deposit system for the data exchange
- is asynchronous: as a synchronous connection could be potentially dangerous because it establishes a link directly between the sender and the receiver
=> https://dataswamp.org/~solene/2024-10-04-secure-file-transfer-with-nncp.html Previous blog post: Secure file transfer with NNCP
# Conclusion
I learned about this method while reading ANSSI (French cybersecurity national agency) papers. While it may sound extreme, it is a good practice I endorse. This gives a use to old second hand hardware I own, and it improves my infrastructure security while giving me peace of mind.
=> https://cyber.gouv.fr/ ANSSI website (in French)
In addition, if you want to allow some people to work on your infrastructure (maybe you want to set up some infra for an association?), you already have the framework to restrict their scope and trace what they do.
Of course, the amount of complexity and resources you can throw at this is up to you, you could totally have a single server and lock most of its services behind a VPN and call it a day, or have multiple servers worldwide and use dedicated servers to enter their software defined network.
Last thing, make sure that you can bootstrap into your infrastructure if the only admin workstation is lost/destroyed. Most of the time, you will have a physical/console access that is enough (make sure the password manager is reachable from the outside for this case).
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/my-admin-workstation.gmi</guid>
<link>gemini://perso.pw/blog//articles/my-admin-workstation.gmi</link>
<pubDate>Wed, 23 Oct 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Securing backups using S3 storage</title>
<description>
<![CDATA[
<pre># Introduction
In this blog post, you will learn how to make secure backups using Restic and a S3 compatible object storage.
Backups are incredibly important: you may lose important files that only existed on your computer, or lose access to some encrypted accounts or drives. When you need backups, you need them to be reliable and secure.
There are two methods to handle backups:
- pull backups: a central server connects to the system and pulls data to store it locally, this is how rsnapshot, backuppc or bacula work
- push backups: each system runs the backup software locally to store the data on the backup repository (either local or remote), this is how most backup tools work
Both workflows have pros and cons. Pull backups are not encrypted, and a single central server has access to everything, which is rather bad from a security point of view. Push backups handle encryption and only access the system they run on, but an attacker could destroy the backups using the backup tool.
I will explain how to leverage S3 features to protect your backups from an attacker.
# Quick intro to object storage
S3 is the name of an AWS service used for Object Storage. Basically, it is a huge key-value store in which you can put data and retrieve it; there is very little metadata associated with an object. Objects are all stored in a "bucket", they have a path, and you can organize the bucket with directories and subdirectories.
Buckets can be encrypted, which is an important feature if you do not want your S3 provider to be able to access your data, however most backup tools already encrypt their repository, so it is not really useful to add encryption to the bucket. I will not explain how to use encryption in the bucket in this guide, although you can enable it if you want. Using encryption requires more secrets to store outside of the backup system if you want to restore, and it does not provide real benefits because the repository is already encrypted.
S3 was designed to be highly efficient for storing / retrieving data, but it is not a competitor to POSIX file systems. A bucket can be public or private; you can host your website in a public bucket (and it is rather common!). A bucket has permissions associated with it: you certainly do not want to allow random people to put files into your public bucket (or list the files), but you yourself need to be able to do so.
The protocol designed around S3 was reused for what we call "S3-compatible" services on which you can directly plug any "S3-compatible" client, so you are not stuck with AWS.
This blog post exists because I wanted to share a cool S3 feature (not really S3 specific, as almost every implementation has it) that goes well with backups: a bucket can be versioned, so every change happening in a bucket can be reverted. Now, think about an attacker escalating to root privileges: they can access the backup repository, delete all the files there, then destroy the server. With a backup on versioned S3 storage, you could revert your bucket to just before the deletion happened and recover your backup. To prevent this, the attacker would also need access to the S3 storage account credentials, which are different from the credentials required to use the bucket.
Finally, restic supports S3 as a backend, and this is what we want.
## Open source S3-compatible storage implementations
Here is a list of open source and free S3-compatible storage implementations. I played with them all; they have different goals and purposes, and they all worked well enough for me:
=> https://github.com/seaweedfs/seaweedfs Seaweedfs GitHub project page
=> https://garagehq.deuxfleurs.fr/ Garage official project page
=> https://min.io/ Minio official project page
A quick note about those:
- I consider seaweedfs to be the Swiss army knife of storage, you can mix multiple storage backends and expose them over different protocols (like S3, HTTP, WebDAV), it can also replicate data over remote instances. You can do tiering (based on last access time or speed) as well.
- Garage is a relatively new project, it is quite bare-bones in terms of features, but it works fine and supports high availability with multiple instances; it only offers S3.
- Minio is the big player, it has a paid version (which is extremely expensive) although the free version should be good enough for most users.
# Configure your S3
You need to pick an S3 provider; you can self-host it or use a paid service, it is up to you. I like Backblaze as it is super cheap, at $6/TB/month, but I also have a local Minio instance for some needs.
Create a bucket, enable versioning on it and define the data retention; for the current scenario, I think a few days is enough.
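If your provider or self-hosted instance exposes the S3 API for this, enabling versioning can also be done from the command line; a sketch with the AWS CLI and Minio's mc client, where the bucket name and alias are placeholders:
aws s3api put-bucket-versioning --bucket my-backups --versioning-configuration Status=Enabled
mc version enable myminio/my-backups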
Create an application key for your restic client with the following permissions: "GetObject", "PutObject", "DeleteObject", "GetBucketLocation", "ListBucket". The names can vary, but the key needs to be able to put/delete/list data in the bucket (and only this bucket!). Once this process is done, you will get a pair of values: an identifier and a secret key.
Now, you will have to provide the following environment variables to restic when it runs:
- `AWS_DEFAULT_REGION` which contains the region of the S3 storage, this information is given when you configure the bucket.
- `AWS_ACCESS_KEY_ID` which contains the access key identifier generated when you created the application key.
- `AWS_SECRET_ACCESS_KEY` which contains the secret key generated when you created the application key.
- `RESTIC_REPOSITORY` which will look like `s3:https://$ENDPOINT/$BUCKET` with $ENDPOINT being the bucket endpoint address and $BUCKET the bucket name.
- `RESTIC_PASSWORD` which contains your backup repository passphrase to encrypt it, make sure to write it down somewhere else because you need it to recover the backup.
If you want a simple script to back up some directories and remove old data with a retention of 5 hourly, 2 daily, 2 weekly and 2 monthly backups:
restic backup -x /home /etc /root /var
restic forget --prune -H 5 -d 2 -w 2 -m 2
Do not forget to run `restic init` the first time, to initialize the restic repository.
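Putting the pieces together, a backup script could look like the sketch below; all values are placeholders, and the `restic snapshots` check is just one way to initialize the repository only once:
#!/bin/sh
export AWS_DEFAULT_REGION="eu-central-003"
export AWS_ACCESS_KEY_ID="my-application-key-id"
export AWS_SECRET_ACCESS_KEY="my-application-key-secret"
export RESTIC_REPOSITORY="s3:https://s3.eu-central-003.backblazeb2.com/my-backups"
export RESTIC_PASSWORD="my-repository-passphrase"

# initialize the repository only if it does not exist yet
restic snapshots > /dev/null 2>&1 || restic init
restic backup -x /home /etc /root /var
restic forget --prune -H 5 -d 2 -w 2 -m 2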
# Conclusion
I really like this backup system as it is cheap, very efficient and provides a fallback in case of a problem with the repository (mistakes happen, you do not always need an attacker to lose data ^_^').
If you do not want to use an S3 backend, you should know that Borg backup and Restic both support an "append-only" mode, which prevents an attacker from doing damage or even reading the backups; however, I always found it hard to use, and you need another system to do the prune/cleanup on a regular basis.
# Going further
This approach could work on any backend supporting snapshots, like btrfs or ZFS. If you can recover the backup repository to a previous point in time, you will be able to get back to a working backup repository.
You could also do a backup of the backup repository, on the backend side, but you would waste a lot of disk space.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/secure-backups-with-s3.gmi</guid>
<link>gemini://perso.pw/blog//articles/secure-backups-with-s3.gmi</link>
<pubDate>Tue, 22 Oct 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Snap integration in Qubes OS templates</title>
<description>
<![CDATA[
<pre># Introduction
The Snap package format is interesting; while it used to have a bad reputation, I wanted to form my own opinion about it. After reading its design and usage documentation, I find it quite good, and I have had a good experience using some programs installed with snap.
=> https://snapcraft.io/ Snapcraft official website (store / documentation)
Snap programs can be packaged either as "strict" or "classic"; when strict, some confinement is at work, which can be inspected on an installed snap using `snap connections $appname`, while a "classic" snap has no sandboxing at all. Snap programs are completely decoupled from the host operating system where snap is running, so you can have old or new versions of a snap-packaged program without having to handle shared library versions.
The following setup explains how to install snap programs in a template to run them from AppVMs, not how to install snap programs in AppVMs as a user; if you need the latter, please use the Qubes OS guide linked below.
Qubes OS documentation explains how to setup snap in a template, but with a helper to allow AppVMs to install snap programs in the user directory.
=> https://www.qubes-os.org/doc/how-to-install-software/#installing-snap-packages Qubes OS official documentation: install snap packages in AppVMs
In a previous blog post, I explained how to configure a Qubes OS template to install flatpak programs in it, and how to integrate it to the template.
=> https://dataswamp.org/~solene/2023-09-15-flatpak-on-qubesos.html Previous blog post: Installing flatpak programs in a Qubes OS template
# Setup on Fedora
All commands are meant to be run as root.
## Snap installation
=> https://snapcraft.io/docs/installing-snap-on-fedora Snapcraft official documentation: Installing snap on Fedora
Installing snap is easy, run the following command:
dnf install snapd
To allow "classic" snaps to work, you need to run the following command:
sudo ln -s /var/lib/snapd/snap /snap
## Proxy configuration
Now, you have to configure snap to use the HTTP proxy of the template; these commands can take some time because snap times out as it tries to use the network when invoked...
snap set system proxy.http="http://127.0.0.1:8082/"
snap set system proxy.https="http://127.0.0.1:8082/"
## Run updates on template update
You need to prevent snap from searching for updates on its own as you will run updates when the template is updated:
snap refresh --hold
To automatically update snap programs when the template is updating (or doing any dnf operation), create the file `/etc/qubes/post-install.d/05-snap-update.sh` with the following content and make it executable:
#!/bin/sh
if [ "$(qubesdb-read /type)" = "TemplateVM" ]
then
snap refresh
fi
## Qube settings menu integration
To add the menu entry of each snap program in the qube settings when you install/remove snaps, create the file `/usr/local/sbin/sync-snap.sh` with the following content and make it executable:
#!/bin/sh

# when a desktop file is created/removed:
# - link snap .desktop files into /usr/share/applications
# - remove outdated entries of programs that were removed
# - sync the menu with dom0
inotifywait -m -r \
-e create,delete,close_write \
/var/lib/snapd/desktop/applications/ |
while IFS=':' read event
do
find /var/lib/snapd/desktop/applications/ -type l -name "*.desktop" | while read line
do
ln -s "$line" /usr/share/applications/
done
find /usr/share/applications/ -xtype l -delete
/etc/qubes/post-install.d/10-qubes-core-agent-appmenus.sh
done
Install the package `inotify-tools` to make the script above work, and add this line to `/rw/config/rc.local` to run it at boot:
/usr/local/sbin/sync-snap.sh &
You can run the script now with `/usr/local/sbin/sync-snap.sh &` if you plan to install snap programs right away.
## Snap store GUI
If you want to browse and install snap programs using a nice interface, you can install the snap store.
snap install snap-store
You can run the store with `snap run snap-store` or configure your template settings to add the snap store into the applications list, and run it from your Qubes OS menu.
# Debian
The setup on Debian is pretty similar, you can reuse the Fedora guide except you need to replace `dnf` by `apt`.
=> https://snapcraft.io/docs/installing-snap-on-debian Snapcraft official documentation: Installing snap on Debian
# Conclusion
Having more options to install programs is always good, especially when they come with features like quotas or sandboxing. Qubes OS gives you the flexibility to use multiple templates in parallel, and a new source of packages can be useful for some users.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/snap-on-qubesos.gmi</guid>
<link>gemini://perso.pw/blog//articles/snap-on-qubesos.gmi</link>
<pubDate>Sat, 19 Oct 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Asynchronous secure file transfer with nncp</title>
<description>
<![CDATA[
<pre># Introduction
nncp (node to node copy) is a piece of software to securely exchange data between peers. It is command line only, written in Go, and compiles on Linux and BSD systems (although, among the BSDs, it is only packaged for FreeBSD).
The website will do a better job than me to talk about the numerous features, but I will do my best to explain what you can do with it and how to use it.
=> http://www.nncpgo.org/ nncp official project website
# Explanations
nncp is a suite of tools to asynchronously exchange data between peers, using zero-knowledge encryption. Once peers have exchanged their public keys, they are able to encrypt data to send to each other; this is nothing really new to be honest, but there is a twist.
- a peer can directly connect to another using TCP, you can even configure different addresses like a tor onion or I2P host and use the one you want
- a peer can connect to another using ssh
- a peer can generate plain files that will be carried over USB, network storage, synchronization software or whatever, to be consumed by another peer. Files can be split into chunks of arbitrary size in order to prevent anyone snooping from figuring out how many files are exchanged or their names (hence zero knowledge).
- a peer can generate data to burn on a CD or tape (it is working as a stream of data instead of plain files)
- a peer can be reachable through another relay peer
- when a peer receives files, nncp generates ACK files (acknowledgement) that will tell you they correctly received it
- a peer can request files and/or trigger pre-configured commands you expose to this peer
- a peer can send emails with nncp (requires a specific setup on the email server)
- data transfer can be interrupted and resumed
What is cool with nncp is that files you receive are unpacked in a given directory and their integrity is verified. This is sometimes more practical than a network share in which you are never sure when you can move / rename / modify / delete the file that was transferred to you.
I identified a few "realistic" use cases with nncp:
- exchange files between air gap environments (I tried to exchange files over sound or QR codes, I found no reliable open source solution)
- secure file exchange over physical medium with delivery notification (the medium needs to do a round-trip for the notification)
- start a torrent download remotely, prepare the file to send back once downloaded, retrieve the file at your own pace
- reliable data transfer over poor connections (although I am not sure if it beats kermit at this task :D )
- "simple" file exchange between computers / people over network
This leaves a lot of room for other imaginative use cases.
# Real world example: Syncthing gateway
My preferred workflow with nncp, the one I am currently using, relies on a group of three Syncthing servers.
Each syncthing server is running on a different computer, the location does not really matter. There is a single share between these syncthing instances.
The servers where Syncthing is running have incoming and outgoing directories exposed over an NFS / SMB share, with a directory named after each peer in both of them. Depositing a file in the "outgoing" directory of a peer makes nncp prepare the file for this peer and put it into the Syncthing share for synchronization; the file is consumed in the process.
In the same vein, new files are unpacked into the incoming directory of the emitting peer on the receiving server running Syncthing.
Why is it cool? You can just drop a file in the directory of the peer you want to send it to; it disappears locally and magically appears on the remote side. If something goes wrong, thanks to ACKs, you can verify whether the file was delivered and unpacked. With three shares, you can almost have two connected at the same time.
It is a pretty good file deposit that requires no knowledge to use.
This could be implemented with pure syncthing, however you would have to:
- for each peer, configure a one-way directory share in syncthing for each other peer to upload data to
- for each peer, configure a one-way directory share in syncthing for each other peer to receive data from
- for each peer, configure an encrypted share to relay all one way share from other peers
This does not scale well.
Side note, I am using syncthing because it is fun and requires no infrastructure. But actually, a webdav filesystem, a Nextcloud drive or anything to share data over the network would work just fine.
# Setup
## Configuration file and private keys
On each peer, you have to generate a configuration file with its private keys. The default path for the configuration file is `/etc/nncp.hjson`, but nothing prevents you from storing this file anywhere else; in that case, you will have to use the parameter `-cfg /path/to/config`.
Generate the file like this:
nncp-cfgnew > /etc/nncp.hjson
The file contains comments, this is helpful if you want to see how the file is structured and existing options. Never share the private keys of this file!
I recommend checking the spool and log paths, and decide which user should use nncp. For instance, you can use `/var/spool/nncp` to store nncp data (waiting to be delivered or unpacked) and the log file, and make your user the owner of this directory.
## Public keys
Now, generate the public keys (they are just derived from the private keys generated earlier) to share with your peers, there is a command for this that will read the private keys and output the public keys in a format ready to put in the nncp.hjson file of recipients.
nncp-cfgmin > my-peer-name.pub
You can share the generated file with anyone, this will allow them to send you files. The peer name of your system is "self", you can rename it, it is just an identifier.
## Import public keys
When importing public keys, you just need to add the content generated by a peer's `nncp-cfgmin` command to your nncp configuration file.
Just copy / paste the content into the `neigh` structure within the configuration file, and make sure to rename "self" to the identifier you want to give to this peer.
If you want to receive data from this peer, make sure to add an attribute line `incoming: "/path/to/incoming/data"` for that peer, otherwise you will not be able to unpack received files.
# Usage
Now you have peers who exchanged keys, they are able to send data to each other. nncp is a collection of tools, let's see the most common and what they do:
- nncp-file: add a file in the spool to deliver to a peer
- nncp-toss: unpack incoming data (files, commands, file request, emails) and generate ack
- nncp-reass: reassemble files that were split in smaller parts
- nncp-exec: trigger a pre-configured command on the remote peer, stdin data will be passed as the command parameters. Let's say a peer offers a "wget" service, you can use `echo "https://some-domain/uri/" | nncp-exec peername wget` to trigger a remote wget.
If you use the client / server model over TCP, you will also use:
- nncp-daemon: the daemon waiting for connections
- nncp-caller: a daemon occasionally triggering client connections (it works like a crontab)
- nncp-call: trigger a client connection to a peer
If you use asynchronous file transfers, you will use:
- nncp-xfer: generates to / consumes files from a directory for async transfer
# Workflow (how to use)
## Sending files
For sending files, just use `nncp-file file-path peername:`; the original file name will be used when unpacked, but you can also specify the name you want the file to have once unpacked.
A directory could be used as a parameter instead of a file, it will be stored automatically in a .tar file for delivery.
Finally, you can send a stream of data using nncp-file stdin, but you have to give a name to the resulting file.
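A few hedged examples, where `mypeer` and the file names are placeholders:
# send a file, keeping its name on the remote side
nncp-file ./report.pdf mypeer:
# send a file under a different name
nncp-file ./report.pdf mypeer:report-2024.pdf
# send a stream from stdin, a destination name is then mandatory
tar czf - ~/project | nncp-file - mypeer:project.tar.gz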
## Sync and file unpacking
This was not really clear from the documentation, so here is how to best use nncp when exchanging data as plain files; the destination is `/mnt/nncp` in my examples (it can be an external drive, a Syncthing share, an NFS mount...):
When you want to sync, always use this scheme:
1. `nncp-xfer -rx /mnt/nncp`
2. `nncp-toss -gen-ack`
3. `nncp-xfer -keep -tx -mkdir /mnt/nncp`
4. `nncp-rm -all -ack`
This receives files using `nncp-xfer -rx`; the files are stored in the nncp spool directory. Then, with `nncp-toss -gen-ack`, the files are unpacked into the "incoming" directory of each peer who sent files, and ACKs are generated (older versions of `nncp-toss` do not handle ACKs: you need to generate the ACKs beforehand and remove them after tx, with `nncp-ack -all 4>acks` and `nncp-rm -all -pkt < acks`).
`nncp-xfer -tx` will put into the directory the data you want to send to peers, and also the ACK files generated by the rx step which happened before. The `-keep` flag is crucial here if you want to make use of ACKs: with `-keep`, the sent data are kept in the spool until you receive the ACKs for them, otherwise the data are removed from the spool and cannot be retransmitted if the files were not received. Finally, `nncp-rm` will delete all ACK files so you will not transmit them again.
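Wrapped in a small script, one sync round could look like this sketch (the mount point is the same placeholder as above):
#!/bin/sh
set -e
MEDIUM=/mnt/nncp
nncp-xfer -rx "$MEDIUM"
nncp-toss -gen-ack
nncp-xfer -keep -tx -mkdir "$MEDIUM"
nncp-rm -all -ack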
# Explanations about ACK
From my experience and documentation reading, there are three cases with the spool and ACK:
- the shared drive is missing the files you sent (that are still in pool), and you received no ACK, the next time you run `nncp-xfer`, the files will be transmitted again
- when you receive ACK files for files in spool, they are deleted from the spool
- when you do not use `-keep` when sending files with `nncp-xfer`, the files will not be stored in the spool so you will not be able to know what to retransmit if ACK are missing
ACKs do not clean up themselves, you need to use `nncp-rm`. It took me a while to figure this out; my nodes were sending ACKs to each other repeatedly.
# Conclusion
I really like nncp as it allows me to securely transfer files between my computers without having to care if they are online. Rsync is not always possible because both the sender and receiver need to be up at the same time (and reachable correctly).
The way files are delivered is also practical for me: as I shared above, files are unpacked into a per-peer directory, instead of me having to remember that I moved something into a shared drive. This removes the doubts about files sitting in a shared drive: why is it there? Why did I put it there? What was its destination?
I played with various S3 storage to exchange nncp data, but this is for another blog post :-)
# Going further
There are more features in nncp, I did not play with all of them.
You can define "areas" in parallel of using peers, you can use emails notifications when a remote receives data from you to have a confirmation, requesting remote files etc... It is all in the documentation.
I have the idea to use nncp on a SMTP server to store encrypted incoming emails until I retrieve them (I am still working at improving the security of email storage), stay tuned :)
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/secure-file-transfer-with-nncp.gmi</guid>
<link>gemini://perso.pw/blog//articles/secure-file-transfer-with-nncp.gmi</link>
<pubDate>Sun, 06 Oct 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>I moved my emails to Proton Mail</title>
<description>
<![CDATA[
<pre># Introduction
I recently took a very hard decision: I moved my emails to Proton Mail.
This is certainly a shock for people who have been following this blog for a long time; it was a shock for me as well! It was actually pretty difficult to think about this topic objectively, so I would like to explain how I came to this decision.
I have been self-hosting my own email server since I bought my first domain name, back in 2009. The server has been migrated multiple times, from one hosting company to another, regularly changing the underlying operating system for fun. It has been running on: Slackware, NetBSD, FreeBSD, NixOS and Guix.
# My needs
First, I need to explain my previous self-hosted setup, and what I do with my emails.
I have two accounts:
- one for my regular emails, mailing lists, friends, family
- one for my company to reach client, send quotes and invoices
Ideally, all the emails would be retrieved locally and not stored on my server. But I am using a lot of devices (most are disposable), and having everything on a single computer does not work for me.
Because my emails are stored remotely and contain a lot of private information, I have never been really happy with how email works at all. My Dovecot server has access to all my emails, unencrypted, and a single password is enough to connect to it. Adding a VPN helps to protect Dovecot if it is not exposed publicly, but the server could still be compromised by other means. OpenBSD's smtpd server had critical vulnerabilities patched a few years ago, basically allowing root access; since then I have never been really comfortable with my email setup.
I have been looking for ways to secure my emails, which is how I came up with a setup encrypting incoming emails with GPG. It is far from ideal, and I stopped using it quickly: it breaks searches, requires a lot of CPU on the server, and does not even encrypt all the information.
=> https://dataswamp.org/~solene/2024-08-14-automatic-emails-gpg-encryption-at-rest.html Emails encryption at rest on OpenBSD using dovecot and GPG
Someone showed me a Dovecot plugin to encrypt emails completely; however, my understanding of this plugin's encryption is that the IMAP client must authenticate the user with a plain text password, which Dovecot uses to unlock an asymmetric encryption key. The security model is questionable: if the Dovecot server is compromised, users' passwords are available to the attacker, who can then decrypt all the emails. It would still be better than nothing, except if the attacker has root access.
=> https://0xacab.org/liberate/trees Dovecot encryption plugin: TREES
One thing I need from my emails is for them to reach their recipients. My emails were almost always flagged as spam by the big email providers (Gmail, Microsoft); this has been an issue for me for years, but recently it became a real problem for my business. My email servers were always perfectly configured with everything required to be considered as legit as possible, but it never fully worked.
# Proton Mail
Why did I choose Proton Mail over another email provider? There are a few reasons for it, I evaluated a few providers before deciding.
Proton Mail is a paid service, and this is actually an argument in itself: I would not trust a good service to be free, as this would be too good to be true, so it would be a scam (or making money on my data, who knows).
They offer zero-knowledge encryption and MFA, which is exactly what I wanted. Only I should be able to read my emails, even if the provider is compromised; adding MFA on top is just perfect because it requires two secrets to access the data. Their zero-knowledge security could be criticized for a few things; ultimately there is no guarantee they do it as advertised.
Long story short, when you create your account, Proton Mail generates an encryption key on their servers that is protected with your account password. When you use the service and log in, the encrypted key is sent to you so that all crypto operations happen locally, but there is no way to verify whether they kept your private key unencrypted at the beginning, or whether they modified their web apps to log the typed password. The applications are less vulnerable to the second problem, as it would impact many users and would leave evidence. I do trust them to do things right, although I have no proof.
I did not choose Proton Mail for end-to-end encryption, I only use GPG occasionally and I could use it before.
IMAP is possible with Proton Mail when you have a paid account, but you need to use their bridge: a client that connects to Proton with your credentials, downloads all encrypted emails locally, then exposes an IMAP and SMTP server on localhost with dedicated credentials. All emails are saved locally and it syncs continuously; it works great, but it is not lightweight. There is a custom implementation named hydroxide, but it did not work for me. The bridge does not support CalDAV and CardDAV, which is not great, but not really an issue for me anyway.
=> https://github.com/emersion/hydroxide GitHub project page: hydroxide
Before migrating, I verified that reversibility was possible, aka being able to migrate my emails away from Proton Mail. In case they stop providing their export tool, I would still have a local copy of all my IMAP emails, which is exactly what I would need to move it somewhere else.
There are certainly better alternatives than Proton with regard to privacy, but Proton is not _that_ bad on this topic, it is acceptable enough for me.
## Benefits
Since I moved my emails, I do not have deliverability issues. Even people on Microsoft received my emails on the first try! A great success for me here.
The anti-spam is more efficient than my spamd trained with years of spam.
Multiple factor authentication is required to access my account.
## Interesting features
I did not know I would appreciate scheduled email sending, but it is a thing, and I do not need to keep the computer on.
It is possible to generate aliases (10 or unlimited depending on the subscription). What is great is that it takes a couple of seconds to generate a unique alias, and replying to an email received on an alias automatically uses this alias as the From address (a webmail feature). On my server, I had been using a lot of different addresses with a "+" in the local part, but it was rarely recognized, so I switched to a dot, although these are not real aliases. Then I started managing smtpd aliases through Ansible, and it was really painful to add a new alias every time I needed one. Did I mention I like this alias feature? :D
If I want to send an end-to-end encrypted email without GPG, there is an option to protect the content with a password: the email actually sends the recipient a link leading to a Proton Mail interface that asks for the password to decrypt the content and allows that person to reply. I have no idea if I will ever use it, but at least it is a more user-friendly end-to-end encryption method. Tuta offers the same feature, but it is their only e2e method.
Proton offers logs of login attempts on my account, which surprised me.
There is an onion access to their web services in case you prefer to connect using tor.
The web interface is open source, one should be able to build it locally to connect to Proton servers, I guess it should work?
=> https://github.com/ProtonMail/WebClients GitHub project page: ProtonMail webclients
## Shortcomings
Proton Mail cannot be used as an SMTP relay by my servers, except through the open source bridge hydroxide.
The calendar only works on the website and in the smartphone app. It does not integrate with the phone's calendar, although in practice I did not find this to be an issue; everything works fine. Contact support is less good on Android: contacts are confined to the Mail app, and I still have my CardDAV server.
The web app is the first-class citizen, but at least it is good.
Nothing prevents Proton Mail from catching your incoming and outgoing emails, you need to use end-to-end encryption if you REALLY need to protect your emails from that.
I was using two accounts, which would require the more expensive "Duo" subscription on Proton Mail. I solved this by creating two identities, plus label and filter rules to separate the emails of my two "accounts" (personal and professional). I do not really like that, although it is not really an issue at the moment, as one of them has relatively low traffic.
The price is certainly high, the "Mail plus" plan is 4€ / month (48€ / year) if you subscribe for 12 months, but is limited to 1 domain, 10 aliases and 15 GB of storage. The "Proton Unlimited" plan is 10€ / month (120€ / year) but comes with the kitchen sink: infinite aliases, 3 domains, 500 GB storage, and access to all Proton services (that you may not need...) like VPN, Drive and Pass. In comparison, hosting your email service on a cheap server should not cost you more than 70€ / year, and you can self-host a nextcloud / seafile (equivalent to Drive, although it is stored encrypted there), a VPN and a vaultwarden instance (equivalent to Pass) in addition to the emails.
Emails are limited to 25 MB, which is low given that I always configured my own server to allow 100 MB attachments; that said, large attachments created delivery issues on most recipient servers, so it is not a _real_ issue, but I prefer being able to decide on this kind of limitation myself.
## Alternatives
I evaluated Tuta too, but for the following reasons I dropped the idea quickly:
- they do not support email import (it has been "coming soon" for years on their website)
- you can only use their app or website
- there is no way to use IMAP
- there is no way to use GPG because their client does not support it, and you cannot connect using SMTP with your own client
Their service is cool though, but not for me.
# My ideal email setup
If I was to self-host again (which may be soon! Who knows), I would do it differently to improve the security:
- one front server with the SMTP server, cheap and disposable
- one server for IMAP
- one server to receive and analyze the logs
Only the SMTP server would be publicly available; all other ports would be closed on all servers, the servers would communicate with each other through a VPN, and they would export their logs to a machine used only for forensics and detecting security breaches.
Such a setup would be an improvement if I were self-hosting my emails again, but the cost and time to operate it are non-negligible. It is also ecological nonsense to need three servers for a single person's emails.
# Conclusion
I started this blog post with the fact that the decision was hard, so hard that I was not able to decide up to a day before renewing my email server for one year. I wanted to give Proton a chance for a month to evaluate it completely, and I have to admit I like the service much more than I expected...
My Unix hacker heart hurts terribly on this one. I would like to go back to self-hosting, but I know I cannot reach the level of security I was looking for, simply because email sucks in the first place. A solution would be to get rid of this huge archive burden I am carrying, but I regularly search information into this archive and I have not found any usable "mail archive system" that could digest everything and serve it locally.
## Update 2024-09-14
I wrote this blog post two days ago, and I cannot stop thinking about this topic since the migration.
The real problem certainly lies in my use case, not having my emails on the remote server would solve my problems. I need to figure how to handle it. Stay tuned :-)
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/email-selfhost-to-protonmail.gmi</guid>
<link>gemini://perso.pw/blog//articles/email-selfhost-to-protonmail.gmi</link>
<pubDate>Sun, 15 Sep 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Self-hosting at home and privacy</title>
<description>
<![CDATA[
<pre># Introduction
You may self-host services at home, but you need to think about the potential drawbacks for your privacy.
Let's explore what kind of information could be extracted from self-hosting, especially when you use a domain name.
# Public information
## Domain WHOIS
A domain name must expose some information through WHOIS queries, basically who is the registrar responsible for it, and who could be contacted for technical or administration matters.
Almost every registrar offers a feature to hide your personal information; you certainly do not want your full name, full address and phone number exposed by a single WHOIS request.
You can perform a WHOIS request on the link below, directly managed by ICANN.
=> https://lookup.icann.org/en ICANN Lookup
## TLS certificates using ACME
If you use TLS certificates for your services, and ACME (Let's Encrypt or alternatives), all the domains for which a certificate was issued can easily be queried.
You can visit the following website and type a domain name, and you will immediately get a list of related domain names found in certificates.
=> https://crt.sh/ crt.sh Certificate Search
In such a situation, if you planned to keep a domain hidden by simply not sharing it with anyone, you got it wrong.
## Domain name
If you use a custom domain for your email, it is highly likely that you have some IT knowledge and that you are the only user of your email server.
Starting from this assumption (IT person + sole domain user), someone who knows your email address can quickly search for anything related to your domain and figure out it belongs to you.
## Public IP
Anywhere you connect, your public IP is known to the remote servers.
Some bored sysadmin could take a look at the IPs in their logs and check whether public services are running on them: probing for TLS services (HTTPS, IMAPS, SMTPS) will immediately reveal the domain names associated with that IP through the certificates, and from there they could search even further.
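As an illustration, this is all it takes to read the domain names out of the certificate served on a given IP (a generic sketch; 203.0.113.10 is a documentation-range placeholder address):
# print the names contained in the TLS certificate served on an arbitrary IP
echo | openssl s_client -connect 203.0.113.10:443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"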
# Mitigations
There are not many solutions to prevent this, unfortunately.
The public IP situation could be mitigated either by continuing to host at home while renting a cheap server with a public IP, establishing a VPN between the two and exposing your services through the server's public IP, or by moving your services to such a remote server entirely. This is an extra cost of course. When possible, you could also expose the service as a Tor hidden service or over I2P if that works for your use case; you would not need to rent a server for this.
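For the first option, the forwarding on the rented server can be as simple as a single packet filter rule; a minimal sketch assuming OpenBSD pf on the server and 10.0.0.2 as the hypothetical VPN address of the home machine:
# on the rented server: redirect public HTTPS traffic to the home machine over the VPN
pass in on egress proto tcp to port 443 rdr-to 10.0.0.2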
The TLS certificate names being public can easily be avoided by generating self-signed certificates locally and dealing with the consequences. Depending on your services, it may be just fine, but if strangers use the services, having to trust the certificate on first use (TOFU) may appear dangerous. Some software fails to connect to servers using self-signed certificates and does not offer a bypass...
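Generating such a certificate is a one-liner; a minimal sketch with hypothetical file paths and hostname:
# self-signed certificate valid 10 years for a private hostname
openssl req -x509 -newkey rsa:4096 -days 3650 -nodes \
  -keyout /etc/ssl/private/myhost.key -out /etc/ssl/myhost.crt \
  -subj "/CN=myhost.home.example"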
# Conclusion
Self-hosting at home can be practical for various reasons: reusing old hardware, better local throughput, high performance for cheap... but you need to be aware of potential privacy issues that could come with it.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/self-hosting-at-home-privacy-issues.gmi</guid>
<link>gemini://perso.pw/blog//articles/self-hosting-at-home-privacy-issues.gmi</link>
<pubDate>Thu, 12 Sep 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>How to use Proton VPN port forwarding</title>
<description>
<![CDATA[
<pre># Introduction
If you use Proton VPN with the paid plan, you have access to their port forwarding feature. It allows you to expose a TCP and/or UDP port of your machine on the public IP of your current VPN connection.
This can be useful for multiple use cases; let's see how to use it on Linux and OpenBSD.
=> https://protonvpn.com/support/port-forwarding-manual-setup/ Proton VPN documentation: port forwarding setup
If you do not have a privacy requirement for the service you want to expose to the Internet, renting a cheap VPS is a better solution: lower price, stable public IP, no weird port forwarding script, standard ports allowed, reverse DNS, etc...
# Feature explanation
Proton VPN's port forwarding feature is not really practical, at least not as practical as port forwarding on your local router. The NAT is done using the NAT-PMP protocol (an alternative to UPnP): you are given a random port number with a 60-second lease. The random port number is the same for TCP and UDP.
=> https://en.wikipedia.org/wiki/NAT_Port_Mapping_Protocol Wikipedia page about NAT Port Mapping Protocol
There is a NAT-PMP client named `natpmpc` (available almost everywhere as a package) that needs to run in an infinite loop to renew the port lease before it expires.
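For reference, a single lease request against Proton VPN's internal gateway (10.2.0.1, the same invocations used in the scripts later in this post) looks like this; the assigned port is printed on the line starting with "Mapped public":
natpmpc -a 1 0 udp 60 -g 10.2.0.1
natpmpc -a 1 0 tcp 60 -g 10.2.0.1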
This is rather impractical for multiple reasons:
- you get a random port assigned, so you must configure your daemon every time
- the lease renewal script must run continuously
- if something goes wrong (script failure, short network outage) and the lease is not renewed, you will get a new random port
Although it has shortcomings, it is a useful feature that was dropped by other VPN providers because of abuses.
# Setup
Let me share a script I am using on Linux and OpenBSD that does the following:
- get the port number
- reconfigure the daemon using the port forwarding feature
- infinite loop renewing the lease
You can run the script from supervisord (a process manager) to restart it upon failure.
=> http://supervisord.org/ Supervisor official project website
In the example, the Java daemon I2P will be used to demonstrate the configuration update using sed after being assigned the port number.
## OpenBSD
Install the package `natpmpd` to get the NAT-PMP client.
Create a script with the following content, and make it executable:
#!/bin/sh
PORT=$(natpmpc -a 1 0 udp 60 -g 10.2.0.1 | awk '/Mapped public/ { print $4 }')
# check if the current port is correct
grep "$PORT" /var/i2p/router.config || /etc/rc.d/i2p stop
# update the port in I2P config
sed -i -E "s,(^i2np.udp.port).*,\1=$PORT, ; s,(^i2np.udp.internalPort).*,\1=$PORT," /var/i2p/router.config
# make sure i2p is started (in case it was stopped just before)
/etc/rc.d/i2p start
while true
do
date # use for debug only
natpmpc -a 1 0 udp 60 -g 10.2.0.1 && natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || { echo "error Failure natpmpc $(date)"; break ; }
sleep 45
done
The script searches for the assigned port number in the I2P configuration and stops the service if the port is not found there. Then the port lines are updated with sed (in all cases, it is harmless if they are already correct). Finally, i2p is started: this only does something if i2p was stopped just before, otherwise nothing happens.
Then, in an infinite loop iterating every 45 seconds, the TCP and UDP port forwardings are renewed. If anything goes wrong, the script exits.
### Using supervisord
If you want to use supervisord to start the script at boot and maintain it running, install the package `supervisor` and create the file `/etc/supervisord.d/nat.ini` with the following content:
[program:natvpn]
command=/etc/supervisord.d/continue_nat.sh ; choose the path of your script
autorestart=unexpected ; when to restart if exited after running (def: unexpected)
Enable supervisord at boot, start it and verify it started (a configuration error prevents it from starting):
rcctl enable supervisord
rcctl start supervisord
rcctl check supervisord
### Without supervisord
Open a shell as root, execute the script, and keep the terminal open, or run it in a tmux session.
## Linux
The setup is exactly the same as for OpenBSD, just make sure the package providing `natpmpc` is installed.
Depending on your distribution, if you want to automate running and restarting the script, you can run it from a systemd service with automatic restart on failure (a sketch is shown below), or use supervisord as explained above.
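A minimal systemd unit for this could look like the following (the unit name and the script path are hypothetical, adapt them to your setup):
# /etc/systemd/system/natvpn.service
[Unit]
Description=Proton VPN NAT-PMP lease renewal
After=network-online.target
Wants=network-online.target
[Service]
ExecStart=/usr/local/bin/continue_nat.sh
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
Then enable and start it with `systemctl enable --now natvpn.service`.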
If you run the VPN in a dedicated network namespace, just make sure to prefix the commands that must go through the VPN with `ip netns exec vpn`.
Here is the same example as above, but using a network namespace named "vpn" to start the i2p service and perform the NAT queries.
#!/bin/sh
PORT=$(ip netns exec vpn natpmpc -a 1 0 udp 60 -g 10.2.0.1 | awk '/Mapped public/ { print $4 }')
FILE=/var/i2p/.i2p/router.config
grep "$PORT" $FILE || sudo -u i2p /var/i2p/i2prouter stop
sed -i -E "s,(^i2np.udp.port).*,\1=$PORT, ; s,(^i2np.udp.internalPort).*,\1=$PORT," $FILE
ip netns exec vpn sudo -u i2p /var/i2p/i2prouter start
while true
do
date
ip netns exec vpn natpmpc -a 1 0 udp 60 -g 10.2.0.1 && ip netns exec vpn natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || { echo "error Failure natpmpc $(date)"; break ; }
sleep 45
done
# Conclusion
Proton VPN's port forwarding feature is useful when you need to expose a local network service on a public IP. Automating it is required to make it work reliably due to the unusual implementation.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/protonvpn-port-forwarding.gmi</guid>
<link>gemini://perso.pw/blog//articles/protonvpn-port-forwarding.gmi</link>
<pubDate>Tue, 03 Sep 2024 00:00:00 GMT</pubDate>
</item>
</channel>
</rss>