Sunday, August 10, 2025

Self-hosted open-source multi-user multi-platform secret management

TLDR: Syncthing, KeePassXC, Keepass2Android, AuthPass, and WebDAV via Apache HTTP Server allow for self-hosted open-source multi-user multi-platform secret management.

This article describes the secrets management setup used by me and my family. This is not a tutorial, but rather an overview of the possibilities and what works for us.

The setup:

  • is fully open source with Open Source Initiative-approved licenses
  • is multi-platform: it supports macOS, Linux, Windows, iPhone, and Android
  • is multi-user: you can share secrets
  • is self-hosted with low maintenance
  • supports passwords, TOTP, SSH keys and any other secret
  • has browser support
  • does not require tech-savvy family members (one is enough)

The tools

KeePassXC, Keepass2Android and AuthPass

These are three nice and complete apps that all support secret databases in the KeePass format. Although some variations exist, I have never experienced interoperability issues with these tools.

To use KeePassXC in the browser, you need a browser add-on. Many browsers are supported. Keepass2Android and AuthPass integrate well with the Android and iOS environments and don't require additional software.

Bonus features

Bonus feature 1: KeePassXC can also be used as an SSH agent. This allows you to use SSH keys as long as the KeePass database is unlocked. The SSH keys are synced along with all the other secrets. No more private key files on disk!
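
For example, with the SSH Agent integration enabled in KeePassXC's settings and a key attached to an entry, a quick check from the shell (a sketch, nothing KeePassXC-specific) shows that the key is only available while the database is unlocked:

# With the database unlocked, the key from the KeePass entry is listed.
ssh-add -l

# Lock the database in KeePassXC and run it again: the key is gone.
ssh-add -l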

Bonus feature 2: if you ever lost a phone with Google Authenticator installed, you know how painful it is to set up 2FA with TOTP again. Configure TOTP in these apps instead, and that worry is gone.

Syncthing

Syncthing is an amazing program. It just continues to work with very little maintenance. It can synchronize multiple folders between multiple devices. Once a device is connected to another device, they automatically find each other over the internet.

Each person stores their main secrets database in a 'small' folder containing only the files they want to sync to their phone. This small folder is not shared between people. Then there are 'larger' folders that are optionally shared between multiple people. These larger folders are only synchronized between desktops and laptops and are a good place to store shared KeePass databases.
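
As an illustration (the folder names are made up), the folders on one laptop could look like this:

~/Sync/erik-secrets      'small': Erik's main KeePass database, also synced to his phone
~/Sync/family-shared     'larger': shared KeePass databases, desktops and laptops only
~/Sync/erik-documents    'larger': Erik's documents, desktops and laptops only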

To ensure that all devices with Syncthing always stay in sync, it is a good idea to share all folders with a machine that is always on. Ideally, that machine's Syncthing port (22000) is exposed directly to the internet. This reduces sync conflicts because it is more likely that all devices see the changes from the other devices.
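
If that machine runs a firewall, the port needs to be opened. For example with ufw (an assumption about your firewall, adjust as needed):

# Syncthing's sync protocol listens on port 22000, over both TCP and QUIC (UDP).
sudo ufw allow 22000/tcp
sudo ufw allow 22000/udp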

Since you're going to create many folders, think about a naming convention. Our folders start with the name of the owner. The Syncthing folder name can be different from the directory in the file system. For example, the Syncthing folder could be named erik-documents while on my device the directory is called documents.

Even though there is a very nice Android application, Google has made it maddeningly difficult to publish to the Play store. So difficult even, that the maintainers have given up. Fortunately, you can still get a maintained fork via F-Droid or use one of the other install options.

Bonus features and tips

Bonus feature 1: Store all your documents in a Syncthing folder so that you can access them from multiple devices.

Bonus feature 2: Configure a Syncthing instance to keep older file versions. Now you have a backup too!

Bonus feature 3: Sync the camera folder on your Android phone.

Tip 1: Using Homebrew? The most convenient way to install Syncthing is with the command brew install --cask syncthing-app.

Tip 2: When starting a new Syncthing device, remove the default folder shared from directory ~/Sync. Instead, put the folders you're sharing below the ~/Sync directory.

Tip 3: Before you create any folder in Syncthing, change the folder default configuration to have the following ignore patterns. This is especially important when you use Apple devices.

(?d)**/.DS_Store
(?d).DS_Store
#include .syncthing-patterns.txt

Tip 4: All the GUIs of the locally running Syncthing instances have the same 'localhost' URL. Since the URL is the same, you should also use the same password. Otherwise, storing the password in KeePassXC becomes difficult.

Support iPhone with Apache HTTP Server and WebDAV

Due to limitations imposed by iOS (no background threads unless you are paid by Apple), Syncthing does not run on iPhones. Fortunately, we found AuthPass which supports reading from and writing to a WebDAV folder. AuthPass does this really well; if you make changes while being offline, it automatically merges those changes into the latest version of the remote database once you go online again!

Fortunately, we already have a Linux server running Apache HTTP Server that is always on. (The websites there are also synced with Syncthing.) By configuring a WebDAV folder in Apache HTTP Server (protected by a username/password), we can share a Syncthing folder with AuthPass. Each person with an iPhone will need their own WebDAV folder.
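
As an illustration, a minimal setup on a Debian-style Apache install could look like the sketch below. The paths, user name, and WebDAV location are assumptions, not our exact configuration; the Apache user needs read/write access to the Syncthing folder, and the site should already be served over HTTPS since this uses basic authentication.

# Enable the WebDAV modules (Debian already configures the DAV lock database).
sudo a2enmod dav dav_fs

# Create a password file with one WebDAV user.
sudo htpasswd -c /etc/apache2/webdav.passwd alice

# Expose one Syncthing folder over WebDAV, protected by basic authentication.
sudo tee /etc/apache2/conf-available/webdav-alice.conf > /dev/null <<'EOF'
Alias /webdav/alice /srv/syncthing/alice-iphone
<Directory /srv/syncthing/alice-iphone>
    Dav On
    AuthType Basic
    AuthName "WebDAV"
    AuthUserFile /etc/apache2/webdav.passwd
    Require valid-user
</Directory>
EOF

sudo a2enconf webdav-alice
sudo systemctl reload apache2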

Sharing secrets with KeeShare

KeeShare is a KeePassXC feature that allows you to synchronize a shared KeePass database with your main database. Since the main database contains a complete copy of the shared database, you only need to set up KeeShare on one device. Other devices, including your mobile phone, do not require direct access to the shared database.

Since KeeShare is only supported in KeePassXC, you must periodically open KeePassXC. Otherwise, you will miss changes in the shared databases. Shared databases won't sync if you only use the KeePass database on your mobile phone.

Tip: Sharing secrets is limited; you can only share entire databases. Therefore, plan ahead and decide how you want to organize secrets. We settled on a shared database for the whole family, and another shared database for just me and my partner.

Since each shared KeePass database is password-protected, you can store them all in the same shared Syncthing folder. However, if you are sharing other things as well, you may want to create multiple Syncthing folders.

Maintenance

Sometimes two offline devices modify the same KeePass file. Later, Syncthing detects the conflict and stores multiple files, one for each conflicting edit. You can merge all the conflict files using the database merge feature in KeePassXC. After merging them into the main database, you can delete the conflict files. Unfortunately, there is no default way to detect the presence of these conflict files. I manually check the synced folders once every few months (or when I miss a secret!). If you build a detection script, please share!
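
If you want a head start, a minimal sketch (assuming your folders live below ~/Sync, as in Tip 2 above) could be:

# List KeePass conflict copies in all synced folders.
# Syncthing names them like: secrets.sync-conflict-20250810-101530-ABC1234.kdbx
find ~/Sync -name '*.sync-conflict-*.kdbx' -print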

Since Syncthing runs so quietly in the background, you won't notice when things go wrong. To prevent this, check the Syncthing UI every few months.

Not explored options

KeePassXC has a command-line interface. This could be useful for Linux servers or scripts.
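
For example (a sketch we have not tried in anger; the database path and entry name are made up), fetching a secret from a script could look like this:

# List the entries in a database (keepassxc-cli prompts for the password).
keepassxc-cli ls ~/Sync/erik-secrets/erik.kdbx

# Show one entry, including the protected password field.
keepassxc-cli show --show-protected ~/Sync/erik-secrets/erik.kdbx "backup-server"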

Conclusion

We have used this setup for over four years and have found it to be user-friendly and low-maintenance. Even my teenage kids are fervent users. Despite losing several devices, our secrets have never been lost.

Tuesday, July 8, 2025

Shutting down Version 99 does not exist

In July 2007, I was so fed up with Maven's inability to exclude commons-logging that I wrote a virtual Maven repository to fake it. A few months later this became Version 99 does not exist. The virtual repository has been running on my home server ever since.

In the early years, minor changes were made to increase compatibility with more tools.

Unfortunately, in 2011, the virtual repository lost its hostname.

Some time later, I reinstated a proper DNS name for the service: version99.grons.nl. For some unknown reason, I never blogged about this!

In 2013 the original version (90 lines of Ruby with Camping) was replaced by a Go implementation written by Frank Schröder. This version (with super minor changes) has been in place ever since.

In the meantime, commons-logging has been replaced by SLF4J and tools have become better at excluding dependencies. Therefore, after almost 18 years, I am shutting down Version 99 does not exist. Version 99, it was fun having you, but luckily we no longer need you.

Saturday, June 21, 2025

Installing a theme for Launch Drupal CMS

Drupal CMS trial for Desktop is a wonderful way to try Drupal CMS. Unfortunately, you can't install new themes from the admin UI in the browser. Once you have selected a theme, for example Corporate Clean, there is an instruction on using a tool called composer. It's funny how there are so many pockets of developers where it is just assumed you have some particular tool installed.

As I found out, composer is a package manager for PHP, and it is installed inside the ~/Documents/drupal directory. This is the directory that the launcher creates on the Mac; the location may differ per OS. We also need a PHP runtime; I found one in the Launcher application bundle.
Bringing this all together, these are the commands to install a theme from the command line:

cd ~/Documents/drupal
/Applications/Launch\ Drupal\ CMS.app/Contents/Resources/bin/php \
  ./vendor/bin/composer require 'drupal/corporateclean:^1.0'

Sunday, June 8, 2025

Running Wallabag with PostgreSQL and Docker Compose

Now that Pocket is going away, it is time to host a read-it-later app myself. After looking at a few options, my eyes fell on Wallabag. It's not all that smooth, but it works reasonably well.

I run several services with docker compose for its ease of upgrading, and, more importantly, for the ease with which you can get rid of a service once you no longer need it.

Since it didn't work out of the box, here is how I installed Wallabag with PostgreSQL, using Docker Compose.

Installation

Create the directory /srv/wallabag. This is where wallabag will store the article images and its database.
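
Based on the volume mounts in the compose file below, that comes down to:

# Directories mounted into the wallabag and db containers.
sudo mkdir -p /srv/wallabag/images /srv/wallabag/data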

Prepare the docker-compose.yaml file with:

services:
  wallabag:
    image: wallabag/wallabag
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=***_random_string_1_***
      - POSTGRES_USER=postgres
      - SYMFONY__ENV__DATABASE_DRIVER=pdo_pgsql
      - SYMFONY__ENV__DATABASE_HOST=db
      - SYMFONY__ENV__DATABASE_PORT=5432
      - SYMFONY__ENV__DATABASE_NAME=wallabag
      - SYMFONY__ENV__DATABASE_USER=wallabag
      - SYMFONY__ENV__DATABASE_PASSWORD=***_random_string_2_***
      - SYMFONY__ENV__DOMAIN_NAME=https://wallabag.domain.com
      - SYMFONY__ENV__SERVER_NAME="Wallabag"
      - SYMFONY__ENV__LOCALE=en
    ports:
      - "8000:80"
    volumes:
      - /srv/wallabag/images:/var/www/wallabag/web/assets/images
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost/api/info"]
      interval: 1m
      timeout: 3s
    depends_on:
      - db
      - redis

  db:
    image: postgres:17
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=***_random_string_1_***
      - POSTGRES_USER=postgres
    volumes:
      - /srv/wallabag/data:/var/lib/postgresql/data
    healthcheck:
      test:
        - CMD-SHELL
        - 'pg_isready -U postgres'
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:alpine
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 20s
      timeout: 3s

Replace the two secrets, change your DNS domain, and add more environment variables as desired (see wallabag on Docker Hub for more information). Make sure you read the entire file.

Wallabag's auto-initialization code doesn't really support PostgreSQL that well. However, with the following commands you should get it to work:

docker compose pull
docker compose up -d
docker compose exec db psql --user=postgres \
  -c "GRANT ALL ON SCHEMA public TO wallabag; \
      ALTER DATABASE wallabag OWNER TO wallabag;"
sleep 30
docker compose exec --no-TTY wallabag \
  /var/www/wallabag/bin/console doctrine:migrations:migrate \
  --env=prod --no-interaction
docker compose restart

What did we get?

You should now have a running Wallabag on port 8000. Go configure Caddy, Nginx, or whatever as a proxy with HTTPS termination.
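
For example, with Caddy (v2) a single site block is enough. A minimal sketch (the hostname and Caddyfile location are placeholders, not our exact setup):

# Append a reverse-proxy site block to the Caddyfile;
# Caddy obtains and renews the HTTPS certificate automatically.
sudo tee -a /etc/caddy/Caddyfile > /dev/null <<'EOF'
wallabag.domain.com {
    reverse_proxy localhost:8000
}
EOF

sudo systemctl reload caddy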

Create a user

What you don't have yet is a way to log in. For this you need to create a user. You can do this with the following command:

docker compose exec -ti wallabag \
  /var/www/wallabag/bin/console fos:user:create --env=prod

More commands are documented on Wallabag console commands. Do not forget the mandatory --env=prod argument.

start.sh

To make upgrades a bit easier, you can use the following script:

#!/bin/bash
set -euo pipefail
IFS=$'\n\t'

docker compose pull
docker compose up -d
sleep 2
docker compose exec --no-TTY wallabag \
  /var/www/wallabag/bin/console doctrine:migrations:migrate \
  --env=prod --no-interaction
docker image prune

Future stuff

Once I have figured out how, I will update this article with:

  • Backup
  • Fail2ban integration

Wednesday, January 1, 2025

Using the TransIP API from bash

TLDR: Signing API requests from Bash is tricky, but doable with a temporary file.

Every couple of months I rotate the DKIM keys of my email server, after which I publish them on my website. This article on publishing dkim keys gives a good overview of why this is a good idea.

Initially this was all done manually, but over time I automated more and more. The toughest part was finding a good DNS registrar (DKIM keys are published in DNS) that has a proper API, and then using that API from Bash. The DNS registrar I am using is TransIP.

Here is how I did it.

Before we can use any other endpoint of the API, we need to get a token. We get the token by sending an API request with your username. The request must be signed with the private key that you uploaded/obtained from the API part of the TransIP console.

Using Bash variables to hold the request makes signing very tricky: before you know it a newline is added or removed, invalidating the signature; the transmitted request must be byte-for-byte the same as what was signed. Instead, we sidestep all Bash idiosyncrasies by storing the request in a temporary file. Here we go:

# Configure your TransIP username and location of the private key.
TRANSIP_USERNAME=your-username
TRANSIP_PRIVATE_KEY=/path/to/your-transip-private-key.pem

# The temporary file that holds the request.
TOKEN_REQUEST_BODY=$(mktemp)

# Create the request from your username.
# We're going to write DNS entries so 'read_only' must be 'false'.
# The request also needs a random nonce.
# The token is only needed for a short time, 30 seconds is enough
# in a Bash script.
# I vaguely remember that the label must be unique, so some randomness
# is added there as well.
cat <<EOF > "$TOKEN_REQUEST_BODY"
{
  "login": "$TRANSIP_USERNAME",
  "nonce": "$(openssl rand -base64 15)",
  "read_only": false,
  "expiration_time": "30 seconds",
  "label": "Add dkim dns entry $RANDOM",
  "global_key": true
}
EOF

# Sign the request with openssl and encode the signature in base64.
SIGNATURE=$(
  cat "$TOKEN_REQUEST_BODY" \
    | openssl dgst -sha512 -sign "$TRANSIP_PRIVATE_KEY" \
    | base64 --wrap=0
)

# Send the request with curl.
# Note how we use the '--data-binary' option to make sure curl transmits
# the request byte-for-byte as it was generated.
TOKEN_JSON=$(
  curl \
    --silent \
    --show-error \
    -X POST \
    -H "Content-Type: application/json" \
    -H "SIGNATURE: $SIGNATURE" \
    --data-binary "@$TOKEN_REQUEST_BODY" \
    https://api.transip.nl/v6/auth
)
rm -f "$TOKEN_REQUEST_BODY"

# Extract the TOKEN from the response using jq.
TOKEN=$(echo "$TOKEN_JSON" | jq --raw-output .token)
if [[ "$TOKEN" == "null" ]]; then
  echo "Failed to get token"
  echo "$TOKEN_JSON"
  exit 1
fi

Now we can collect the data to write a DNS entry:

DNS_DOMAIN="your-domain.com" DNS_NAME="unique-dkim-key-name._domainkey" DNS_VALUE="v=DKIM1; h=sha256; t=s; p=MIIBIjANBgkqhkiG9....DAQAB"

I am using amavisd for DKIM, so the values can be fetched with some grep/awk trickery:

DNS_DOMAIN="my-domain.com" DNS_NAME=$(amavisd showkeys | grep -o '^[^.]*._domainkey') DNS_VALUE=$(amavisd showkeys | awk -F'"' '$2 != "" {VALUE=VALUE $2}; END {print VALUE}')

Now we can create the DNS entry for the DKIM key:

# Create the request.
DNS_REQUEST_BODY=$(
  cat <<EOF
{"dnsEntry":{"name":"${DNS_NAME}","expire":86400,"type":"TXT","content":"${DNS_VALUE}"}}
EOF
)

# Send the request with curl.
REGISTER_RESULT="$(
  curl \
    --silent \
    --show-error \
    -X POST \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $TOKEN" \
    -d "$DNS_REQUEST_BODY" \
    "https://api.transip.nl/v6/domains/${DNS_DOMAIN}/dns"
)"

if [[ "$REGISTER_RESULT" != "[]" ]]; then
  echo "Failed to register new DKIM DNS entry"
  echo "$REGISTER_RESULT"
  exit 1
fi

Note that this time the request is stored in a Bash variable.

Update 2024-01-50: Constructed variable DNS_REQUEST_BODY with cat instead of read because the latter exits with a non-zero exit code, causing the script to exit.