Sunday, November 26, 2023

Discovering scala-cli while fixing my digital photo archive

Over the years I built up a nice digital photo library with my family. It is a messy process. Here are some of the things that can go wrong:

  • Digital cameras that add incompatible exif metadata.
  • Some files have exif tag CreateDate, others DateTimeOriginal.
  • Images shared via WhatsApp or Signal do not have an exif date tag at all.
  • Wrong rotation.
  • Fuzzy, yet memorable JPEG images which take 15 MB because of their resolution and high quality settings.
  • Badly supported ancient movie formats like 3gp and RIFF AVI.
  • Old movie formats that need 3 times more disk space than h.265.
  • Losing almost all your photos because you thought you could copy an iPhoto library using tar and cp (hint: you can’t). (It took a low-level hard disk scan and months of manual de-duplication work to recover the photos.)
  • Another low-level scan of an SD card to find accidentally deleted photos.
  • Date in image file name corresponds to import date, not creation date.
  • Weird file names that order the files differently than from creation date.
  • Images from 2015 are stored in the folder for 2009.
  • etc.

I wrote countless bash scripts to mold the collection into order, unfortunately with varying success. However, now that I am ready to import the library into Immich (please do sponsor them, they are building a very nice product!), I decided to start cleaning up everything.

So there I was, writing yet another bash script, struggling with parsing a date response from exiftool. And then I remembered the recent articles about scala-cli and decided to try it out.

The experience was amazing! Even without proper IDE support, I was able to crank out scripts that did more, did it more accurately, and did it faster than I could ever have accomplished in bash.

Here are some of the things that I learned:

  • Take the time to learn os-lib.
  • If the Scala code gets harder to write, open a proper IDE and use code completion. Then copy the code over to your .sc file.
  • One well-placed .par (using scala-parallel-collections) can more than quadruple the performance of your script!
  • You will still spend a lot of time parsing the output of other programs (like exiftool).
  • Scala-cli scripts also run very well from GitHub Actions.
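To make these points concrete, here is a stripped-down sketch of the kind of script this workflow produces. It is not one of my actual scripts: the folder layout, the dependency versions and the tag fallback order are assumptions for illustration.

```scala
//> using dep "com.lihaoyi::os-lib:0.9.1"
//> using dep "org.scala-lang.modules::scala-parallel-collections:1.0.4"

import scala.collection.parallel.CollectionConverters._

// Prefer DateTimeOriginal over CreateDate (pure, so it is easy to test).
def pickDate(tags: Map[String, String]): Option[String] =
  List("DateTimeOriginal", "CreateDate").flatMap(tags.get).headOption

// Ask exiftool for one tag; "-s3" makes it print the bare value only.
def exifTag(file: os.Path, tag: String): Option[String] = {
  val value = os.proc("exiftool", "-s3", s"-$tag", file)
    .call(check = false).out.trim()
  Option(value).filter(_.nonEmpty)
}

def exifDate(file: os.Path): Option[String] = {
  val tags = List("DateTimeOriginal", "CreateDate")
    .flatMap(tag => exifTag(file, tag).map(tag -> _)).toMap
  pickDate(tags)
}

// One well-placed .par: the exiftool child processes now run concurrently.
def datedPhotos(root: os.Path): List[(os.Path, Option[String])] =
  os.walk(root).filter(_.ext.equalsIgnoreCase("jpg")).par
    .map(photo => photo -> exifDate(photo)).seq.toList

// Example usage: datedPhotos(os.pwd / "photos")
```

Because most of the time is spent waiting on exiftool child processes, the single .par is where the easy speedup comes from.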

Conclusions

Next time you open your editor to write a bash file, think again. Perhaps you should really write some Scala instead.

Sunday, October 8, 2023

Dependabot, Gradle and Scala

Due to a series of unfortunate circumstances, we have to deal with a couple of projects at work that use Gradle as the build tool. For these projects we wanted automatic PR generation for dependency updates. Since we use GitHub Enterprise, using Dependabot seemed logical. However, this turned out to be not very straightforward. This article documents one way that works for us.
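For completeness: the Dependabot side of the setup is a standard .github/dependabot.yml; all the hard-won rules below are on the Gradle side. The directory and schedule here are just examples, not our actual configuration.

```yaml
version: 2
updates:
  - package-ecosystem: "gradle"
    directory: "/"          # where build.gradle lives
    schedule:
      interval: "weekly"
```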

As we were experimenting with Dependabot, we discovered the following rules:

  1. The Scala version in the artifact name must not be a variable.
  2. A variable for the artifact version is fine, but it must be declared in an ext block in the same file.
  3. Versions should follow the Semver specification.
  4. You must not use Gradle’s + version range syntax anywhere; Maven’s version range syntax is fine.

In our projects the Scala version comes from a plugin. In addition, we sometimes need to cross-build for different Scala versions, which is very much at odds with rule no. 1. We solved this with a switch statement.

With these rules and constraints we discovered that the following structure works for us and Dependabot:

ext {
    jacksonVersion = '2.15.2'
    scalaTestVersion = '3.0.8'
}

dependencies {
    switch (scalaMainVersion) {
        case "2.12":
            implementation "com.fasterxml.jackson.module:jackson-module-scala_2.12:$jacksonVersion"
            testImplementation "org.scalatest:scalatest_2.12:$scalaTestVersion"
            break
        case "2.13":
            implementation "com.fasterxml.jackson.module:jackson-module-scala_2.13:$jacksonVersion"
            testImplementation "org.scalatest:scalatest_2.13:$scalaTestVersion"
            break
        default:
            break
    }

    // implementation 'com.example:library:0.8+'   // Don't do this
    implementation 'com.example:library:[0.8,1.0['   // This is fine
}

It took 3 people a month to slowly discover this solution (thank you!). I hope that you, dear reader, will spend your time more productively.

Thursday, April 20, 2023

Zio-kafka hacking day

Not long ago I contacted Steven (a committer of the zio-kafka library) to get a better understanding of how the library works. Not more than 2 months later, on April 12, I was a committer myself, sitting in a room together with Steven, Jules Ivanic (another committer) and wildcard Pierangelo Cecchetto (contributor), hacking on zio-kafka.

The meeting was Jules’s idea; he was ‘in the neighborhood’ while traveling from Australia for his company (Conduktor). We were able to get a nice room in the Amsterdam office of my employer (Adevinta); Amsterdam turned out to be a nice middle ground for Steven, me and Pierangelo. (Special thanks to Go Data Driven, who also had room for us.)

In the morning we spoke about current and new ideas for improving the library. We also shared detailed knowledge of ZIO and of what Kafka expects from its users. After lunch we started hacking. Having someone nearby to start an ad hoc discussion with turned out to be very productive; we were able to move some tough issues forward.

Here are some highlights.

PR #788 — Wait for stream end in rebalance listener is important to prevent duplicates during a rebalance. This PR had been mostly finished for quite some time, but many details made the extensive test suite fail. We were able to solve many of these issues.

In the area of performance, we implemented an idea to replace buffering (pre-fetching a fixed number of polls) with pre-fetching based on the stream’s queue size. This resulted in PR #803 — Alternative backpressure mechanism.

We also laid the seeds for another performance improvement implementation: PR #908 — Optimistically resume partitions early.

These last two PRs showed great performance improvements, bringing us much closer to the performance of using the Java Kafka client directly. All 3 PRs are now in review.

All in all it was a lot of fun to meet fellow enthusiasts and hack on the complex machinery that is inside zio-kafka.

Sunday, January 29, 2023

Kafka is good for transport, not for system boundaries

Over the last few years I have learned that you should not use Kafka as a system boundary. A system boundary, in this article, is the place where messages are passed from one autonomy domain to another.

Now why is that? Let’s look at two classes of problems: connecting to Kafka, and the long feedback loop. To prove my points, I am going to bore you with long stories from my personal experience. You may be in a different situation; YMMV!

Problem 1: Connecting to Kafka is hard

Compared to calling an HTTP endpoint, sending messages to Kafka is much, much harder.

Don’t agree? Watch out for observation bias! During our holidays we often have long highway drives through unknown countries. After looking at a highway for several hours non-stop, you might be inclined to believe that the entire country is covered by a dense highway network. In reality though, the next highway might be 200 km away. A similar thing can happen at work. My part of the company offers Kafka as a service. We also run several services that invariably use Kafka in some way. We have deep knowledge and experience, so it would be easy to think that Kafka is simple for everyone. However, for the rest of the company this Kafka thing is just another faraway system that they have to integrate with, and their knowledge will be spotty and incomplete.

Let’s look at some of the problems that you have to deal with.

Partitioning is hard

It is easier to deal with partitioning problems when you control both the producer and the broker. We once had a problem where our systems could not keep up with the inflow of Kafka messages from one of the producers. The weird thing was that most of the machines were just idling. The problem grew slowly, so it took us some time to realize it was caused by a few partitions receiving most of the traffic. Producers of Kafka events do not always realize the effect of a wrongly chosen key: when many messages have the same key, they all end up in the same partition. It took some time before we got across that they needed to change the message key.

When you run an HTTP endpoint, spreading traffic (and thereby partitioning) is handled by the load balancer, and is therefore under the control of the receiver, not the sender.
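The effect of a hot key is easy to simulate. Two hedges: Kafka’s real default partitioner murmur2-hashes the key bytes (String.hashCode below is only a stand-in), and the key names are invented.

```scala
// Toy model of Kafka's key -> partition mapping. The real default partitioner
// murmur2-hashes the key bytes; String.hashCode is only a stand-in here.
def partitionFor(key: String, numPartitions: Int): Int =
  math.abs(key.hashCode % numPartitions)

// Count how many messages land on each partition.
def partitionCounts(keys: Seq[String], numPartitions: Int): Map[Int, Int] =
  keys.groupBy(partitionFor(_, numPartitions)).view.mapValues(_.size).toMap

// 1000 messages share one key, 1000 messages have unique keys:
val counts = partitionCounts(
  List.fill(1000)("tenant-A") ++ (1 to 1000).map(i => s"key-$i"), 12)
```

With 12 partitions, the partition holding "tenant-A" carries over half of all traffic while the other eleven idle; that is exactly the pattern we saw.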

Cross network connections are hard

Producers and the Kafka brokers need to have the same view of the network. This is because the brokers tell a producer which broker (by DNS name or IP address) it needs to connect to for each partition. This can go wrong when the producers and brokers use different DNS servers, or when they are on networks with colliding IP address ranges. Getting this right is a lot easier when you run everything in a single network you control.
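For illustration, these are the two broker settings involved (the hostnames are made up). Note that the initial bootstrap connection can succeed while the follow-up connections to the advertised addresses fail.

```properties
# The address the broker binds to.
listeners=PLAINTEXT://0.0.0.0:9092
# The address the broker hands out to clients. If this name does not resolve
# (or resolves to a colliding address) from the producer's network, the
# producer cannot reach the partition leaders.
advertised.listeners=PLAINTEXT://kafka-1.internal.example.com:9092
```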

This is not a problem with HTTP endpoints. Producers only need one hostname and, optionally, an HTTP proxy.

And we didn’t talk about authentication and encryption yet. Kafka is very flexible; it has many knobs and settings in this area, and the producers have to be configured exactly right or else it just won’t work. And don’t expect good error messages. Good documentation and cooperation are required to make this work across different teams.
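As an illustration of how many things must line up, here is a typical SASL-over-TLS producer configuration. The mechanism, paths and credentials are examples only; a mismatch in any of these usually surfaces as an opaque error rather than a clear message.

```properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="producer" password="example-secret";
ssl.truststore.location=/etc/kafka/truststore.jks
ssl.truststore.password=example-secret
```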

With HTTP endpoints, encryption is very well supported through HTTPS. Authentication is straightforward with HTTP basic authentication.

Problems that have been solved

Just for completeness here are some problems from around 2019 that have since been solved.

Around 2019 Kafka did not support authentication and TLS out of the box. Crossing untrusted networks was quite cumbersome.

Also around that time you had to be very careful about versioning. The client and server had to be upgraded in a very controlled order. Today this looks much better; you can combine almost any client and server version.

The default partitioner used to give slow brokers more work instead of less. This was solved a few months ago.

Problem 2: Long feedback loop

When messages are handed to you via Kafka, you cannot reject them. They are send-and-forget; the producer no longer cares. Dealing with invalid messages is now your responsibility.

In one of our projects we used to set invalid messages apart and send Slack alerts so that the producers knew they had to look at the validation errors. Unfortunately, it didn’t work well. The feedback loop was simply too long, and the number of invalid messages stayed high.

Later we introduced an HTTP endpoint that rejects invalid messages with a 400 response. This simple change was nothing less than a miracle. For every producer that switched, the vast majority of invalid messages disappeared. The number of invalid messages has remained very low since then.

Because we were able to reject invalid messages the feedback loop shortened and became much more effective.

Conclusions

Kafka within your own autonomy domain can be a great solution for message transport. However, Kafka as a boundary between autonomy domains will hurt.

Footnotes

  1. Though at high enough volume, HTTP is not easy either; you’ll need proper connection pooling and an endpoint that accepts batches or else deploy a huge server park.
  2. Many load balancers offer sticky sessions, which are a weak form of partitioning.
  3. We suffered both.
  4. When your authentication settings are wrong, the Kafka command line tools tell you that by showing an OutOfMemoryError. My head still hurts from this one.
  5. Though unfortunately, many architects will make this complex by using OAuth or other such systems.
  6. Most invalid messages could be fixed with a few minutes of coding time.