Drupal CMS trial for Desktop is a wonderful way to try Drupal CMS.
Unfortunately, you can't install new themes from the admin interface in the browser.
Once you have selected a theme, for example Corporate Clean,
there is an instruction on using a tool called Composer.
It's funny how there are so many pockets of developers where it is just assumed you have some particular tool installed.
As I found out, Composer is a package manager for PHP, and it is installed inside the ~/Documents/drupal
directory. This is the directory that the launcher creates on the Mac; the location may differ per OS.
We also need a PHP runtime; I found one in the Launcher application bundle.
Bringing this all together, these are the commands to install a theme from the command line:
cd ~/Documents/drupal
/Applications/Launch\ Drupal\ CMS.app/Contents/Resources/bin/php \
./vendor/bin/composer require 'drupal/corporateclean:^1.0'
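This only downloads the theme; it still has to be enabled on the Appearance page of the admin UI. If Drush happens to be installed in the same vendor directory (an assumption, I did not check whether the trial ships it, and assuming corporateclean is also the theme's machine name), enabling should work from the command line as well:
/Applications/Launch\ Drupal\ CMS.app/Contents/Resources/bin/php \
./vendor/bin/drush theme:enable corporateclean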
Now that Pocket is going away, it is time to host a read-it-later app myself. After looking at a few options, my eyes fell on Wallabag. It's not all that smooth, but it works reasonably well.
I run several services with docker compose for its ease of upgrading, and, more importantly, for the ease with which you can get rid of a service once you no longer need it.
Since it didn't work out of the box, here is how I installed Wallabag with PostgreSQL, using Docker Compose.
Installation
Create the directory /srv/wallabag. This is where wallabag will store the article images and its database.
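Then create a docker-compose.yml. A minimal version can look something like the sketch below. Treat it as an illustration rather than my exact file: the service names match the commands further down, but the image tags and the exact set of environment variables (taken from the wallabag image documentation on Docker Hub) are assumptions you should verify.
services:
  wallabag:
    image: wallabag/wallabag
    ports:
      - "8000:80"
    environment:
      - SYMFONY__ENV__DATABASE_DRIVER=pdo_pgsql
      - SYMFONY__ENV__DATABASE_HOST=db
      - SYMFONY__ENV__DATABASE_PORT=5432
      - SYMFONY__ENV__DATABASE_NAME=wallabag
      - SYMFONY__ENV__DATABASE_USER=wallabag
      - SYMFONY__ENV__DATABASE_PASSWORD=replace-with-db-secret
      - SYMFONY__ENV__SECRET=replace-with-app-secret
      - SYMFONY__ENV__DOMAIN_NAME=https://wallabag.example.com
      # Per the image documentation, the wallabag startup scripts use the
      # PostgreSQL superuser credentials to create the database and user.
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=replace-with-db-secret
    volumes:
      - /srv/wallabag/images:/var/www/wallabag/web/assets/images
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=replace-with-db-secret
    volumes:
      - /srv/wallabag/db:/var/lib/postgresql/data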
Replace the two secrets, change your DNS domain, and add more environment variables as desired (see wallabag on Docker Hub for more information). Make sure you read the entire file.
Wallabag's auto-initialization code doesn't really support PostgreSQL that well. However, with the following commands you should get it to work:
docker compose pull
docker compose up -d
docker compose exec db psql --user=postgres \
-c "GRANT ALL ON SCHEMA public TO wallabag; \
ALTER DATABASE wallabag OWNER TO wallabag;"
sleep 30
docker compose exec --no-TTY wallabag \
/var/www/wallabag/bin/console doctrine:migrations:migrate \
--env=prod --no-interaction
docker compose restart
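If one of these steps fails, the container logs are the first place to look; for example:
docker compose logs db
docker compose logs wallabag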
What did we get?
You should now have a running Wallabag on port 8000. Go configure Caddy, Nginx, or whatever as a proxy with HTTPS termination.
Create a user
What you don't have yet is a way to log in. For this you need to create a user. You can do this with the following command:
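The exact console command may differ per Wallabag version; the sketch below follows the Wallabag documentation and reuses the console path from the migration step above:
docker compose exec wallabag /var/www/wallabag/bin/console \
fos:user:create --env=prod
The console asks for a username, an email address, and a password. There is also a fos:user:promote command for giving a user admin rights.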
TLDR: Signing API requests from Bash is tricky, but doable with a temporary file.
Every couple of months I rotate the DKIM keys of my email server, after which I publish them on my website. This article on publishing DKIM keys gives a good overview of why this is a good idea.
Initially this was all done manually, but over time I automated more and more.
The toughest part was finding a good DNS registrar (DKIM keys are published in DNS) that has a proper API, and then using that API from Bash.
The DNS registrar I am using is TransIP.
Here is how I did it.
Before we can use any other endpoint of the API, we need to get a token.
We get the token by sending an API request with our username.
The request must be signed with the private key that you uploaded to, or obtained from, the API part of the TransIP console.
Using Bash variables to hold the request makes signing very tricky: before you know it a newline is added or removed, invalidating the signature; the transmitted request must be byte-for-byte the same as what was signed.
Instead, we sidestep all Bash idiosyncrasies by storing the request in a temporary file.
Here we go:
# Configure your TransIP username and location of the private key.
TRANSIP_USERNAME=your-username
TRANSIP_PRIVATE_KEY=/path/to/your-transip-private-key.pem
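# The DNS entry that the script adds further down. These variables are
# not set anywhere else in this post, so placeholders are shown here for
# illustration; replace them with your own domain, DKIM selector, and
# DKIM public key.
DNS_DOMAIN=example.com
DNS_NAME=202401._domainkey
DNS_VALUE="v=DKIM1; k=rsa; p=..."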
# The temporary file that holds the request.
TOKEN_REQUEST_BODY=$(mktemp)
# Create the request from your username.
# We're going to write DNS entries so 'read_only' must be 'false'.
# The request also needs a random nonce.
# The token is only needed for a short time, 30 seconds is enough
# in a Bash script.
# I vaguely remember that the label must be unique, so some randomness
# is added there as well.
cat <<EOF > "$TOKEN_REQUEST_BODY"
{
"login": "$TRANSIP_USERNAME",
"nonce": "$(openssl rand -base64 15)",
"read_only": false,
"expiration_time": "30 seconds",
"label": "Add dkim dns entry $RANDOM",
"global_key": true
}
EOF
# Sign the request with openssl and encode the signature in base64.
SIGNATURE=$(
cat "$TOKEN_REQUEST_BODY" |
openssl dgst -sha512 -sign "$TRANSIP_PRIVATE_KEY" |
base64 --wrap=0
)
# Send the request with curl.
# Note how we use the '--data-binary' option to make sure curl transmits
# the request byte-for-byte as it was generated.
TOKEN_JSON=$(
curl \
--silent \
--show-error \
-X POST \
-H "Content-Type: application/json" \
-H "SIGNATURE: $SIGNATURE" \
--data-binary "@$TOKEN_REQUEST_BODY" \
https://api.transip.nl/v6/auth
)
rm -f "$TOKEN_REQUEST_BODY"
# Extract the TOKEN from the response using jq.
TOKEN=$(echo "$TOKEN_JSON" | jq --raw-output .token)
if [[ "$TOKEN" == "null" ]]; then
echo "Failed to get token"
echo "$TOKEN_JSON"
exit 1
fi
# Create the request.
DNS_REQUEST_BODY=$(
cat <<EOF
{"dnsEntry":{"name":"${DNS_NAME}","expire":86400,"type":"TXT","content":"${DNS_VALUE}"}}
EOF
)
# Send the request with curl.
REGISTER_RESULT="$(
curl \
--silent \
--show-error \
-X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d "$DNS_REQUEST_BODY" \
"https://api.transip.nl/v6/domains/${DNS_DOMAIN}/dns"
)"
if [[ "$REGISTER_RESULT" != "[]" ]]; then
echo "Failed to register new DKIM DNS entry"
echo "$REGISTER_RESULT"
exit 1
fi
Note that this time the request is stored in a Bash variable. That is fine here: this request is not signed, so it does not have to be byte-for-byte identical to anything; the token in the Authorization header is all that is needed.
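Not part of the script, but useful as a manual check afterwards: once the registrar has published the record (give it some time to propagate), dig can show it. This assumes DNS_NAME is relative to the domain, as the API expects.
dig +short TXT "${DNS_NAME}.${DNS_DOMAIN}"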
Update 2024-01-50: Constructed variable DNS_REQUEST_BODY with cat instead of read, because the latter exits with a non-zero exit code, causing the script to exit.
TLDR: Concurrency and pre-fetching give zio-kafka a higher consumer throughput than the default Java Kafka client for most workloads.
Zio-kafka is an asynchronous Kafka client based on the ZIO async runtime. It allows you to consume from Kafka with a ZStream per partition. ZStreams are streams on steroids; they have all the stream operators known from e.g. RxJava and Monix, and then some. ZStreams are backed by ZIO's super efficient runtime.
So the claim is that zio-kafka consumes faster than the regular Java client. But zio-kafka wraps the default Java Kafka client, so how can it be faster? The trick is that with zio-kafka, processing (meaning your code) runs in parallel with the code that pulls records from the Kafka broker.
Obviously, there is some overhead in distributing the received records to the streams that need to process them. So the question is: when is the extra overhead of zio-kafka less than the gain from parallel processing? Can we estimate this? It turns out we can! For this estimate we use the benchmarks that zio-kafka runs on GitHub. In particular, we look at these 2 benchmarks with runs from 2024-11-30:
the throughput benchmark, which uses zio-kafka and takes around 592 ms
the kafkaClients benchmark, which uses the Java Kafka client directly and takes around 544 ms
Both benchmarks:
run using JMH on a 4-core GitHub runner with 16 GB RAM
consume 50k records of ~512 bytes from 6 partitions
process the records by counting the number of incoming records, batch by batch
use the same consumer settings, in particular max.poll.records is set to 1000
run the broker in the same JVM as the consumer, so there is almost no networking overhead
do not commit offsets (but note that committing does not change the outcome of this article)
First we calculate the overhead of java-kafka. For this we assume that counting the number of records takes no time at all. This is reasonable: a count operation is just a few CPU instructions, nothing compared to fetching something over the network, even if it is on the same machine. Therefore, in the figure, the 'processing' rectangle collapses to a thick vertical line.
Polling and processing as blocks of execution on a thread/fiber.
We also assume that every poll returns the same amount of records, and takes the same amount of time. This is not how it works in practice, but when the records are already available we're probably not that far off. As the program consumes 50k records, and each poll returns a batch of 1k records, there are 50 polls. Therefore, the overhead of java-kafka is 544/50 ≈ 11 ms per poll.
Now we can calculate the overhead of zio-kafka. Again we assume that processing takes no time at all, so that every millisecond that the zio-kafka benchmark takes longer can be attributed to zio-kafka's overhead. The zio-kafka benchmark runs 592 − 544 = 48 ms longer. Therefore zio-kafka's overhead is 48/50 ≈ 1 ms per poll.
Now let's look at a more realistic scenario where processing does take time.
Polling and processing as blocks of execution on a thread/fiber.
As you can see, a java-kafka program alternates between polling and processing, while the zio-kafka program processes the records in parallel, distributed over 6 streams (1 stream per partition, each stream runs independently on its own ZIO fiber). In the figure we assume that the work is evenly distributed, but unless the load is really totally skewed, it won't matter that much in practice due to zio-kafka's per-partition pre-fetching. In this scenario the zio-kafka program is much faster. But is this a realistic example? When do zio-kafka programs become faster than java-kafka programs?
Observe that how long polling takes is not important for this question, because polling time is the same for both programs. So we will only look at the time between polls. We use p for the processing time per batch in the java-kafka program; for that program the time between polls is simply p. For the zio-kafka program the time between polls is equal to zio-kafka's overhead (o) plus the processing time per batch divided by the number of partitions (n). So we want to know for which p:
p ≥ o + p/n
This solves to:
p ≥ o · n / (n − 1)
For the benchmark we fill in the zio-kafka overhead (o = 1 ms) and the number of partitions (n = 6) and we get: p ≥ 1.2 ms. (Remember, this is processing time per batch of 1000 records.)
In the previous paragraph we assumed that processing is IO intensive. When the processing is compute intensive, ZIO cannot actually run more fibers in parallel than the number of cores available. In our example we have 4 cores; using n = 4 gives p ≥ 1.3 ms, which is still a very low tipping point.
Conclusion
For IO loads, or with enough cores available, even quite low processing times per batch make zio-kafka consume faster than the plain Java Kafka library. The plain Java consumer is only faster for trivial processing tasks like counting.
If you'd like to know what this looks like in practice, you can take a look at this real-world example application: Kafka Big-Query Express. This is a zio-kafka application recently open sourced by Adevinta. Here is the Kafka consumer code. (Disclaimer: I designed and partly implemented this application.)
MavenGate claims that some Maven namespaces (for example nl.grons, the namespace I control) are vulnerable to hijacking. If I understand it correctly, the idea is that hackers can place a package with the existing or newer Maven coordinates in the same, or different Maven repository, thereby luring users into using a hacked version of your package. Sounds serious, and it probably is.
However, they then went on to create a list of Maven namespaces that are vulnerable. Unfortunately, they do not say what criteria were used to put namespaces on this list. Is it because the associated DNS domain expired? Because the DNS domain moved to a different owner, or only to another DNS registrar? Is it because the PGP key used to sign packages is not on a known server? Or something else entirely? For some reason my namespace ended up on the list, even though I never lost control of the DNS domain and strictly follow all their recommendations.
Even more unfortunately, this is not even the right way to look at the problem. It is not the namespaces that are vulnerable, it is the Maven repositories themselves! It is the Maven repositories that are responsible for checking the namespace against ownership of the associated DNS domain and link that to a PGP key. Once the key is linked to the namespace, packages signed with a different PGP key should not be accepted. Any exceptions to this rule should be considered very carefully.
Now to my second point: how does this hurt open source? Since my Maven Central account was blocked after MavenGate, I contacted Sonatype, the owners of Maven Central. Luckily, I use Keybase and was therefore easily able to assert that I am still the owner of the DNS domain and the PGP key that has been used to sign packages. But then Sonatype also wrote this:
It is important to note that, even if we are able to verify your publisher authorization, security software may flag components published under this namespace. It may be worth considering registering a separate, new namespace with a clean-slate reputation.
I am just an individual publishing open source packages in my free time. IMHO it is totally unreasonable to ask people to switch to another domain because some random company on the internet suspects you might be vulnerable! Switching to a new DNS domain is a lot of work and in addition, not everyone is willing or able to bear the costs. I suspect that many people, including me, will give up rather than join a race against 'security software'.
To summarize:
MavenGate declares Maven namespaces to be vulnerable based on unclear and probably wrong criteria.
If this is taken seriously, the bar to publishing open source becomes so high that many will give up instead.
Note: I have tried to contact the MavenGate authors, but unfortunately have not received a reply yet.
My talk "Making ZIO-Kafka Safer And Faster" at Functional Scala 2023 went online!
Explore Erik van Oosten's presentation on improving ZIO-Kafka for better safety and performance. Learn about the modifications introduced in 2023, get insights into the library's internal workings, and uncover useful ZIO techniques and Kafka's lesser-known challenges.
Contents in the video:
2:07 Improvements
9:06 Results
10:29 Rebalances
18:10 The Future