Monday, January 10, 2022

From AdoptOpenJDK to Temurin on a Mac using Homebrew

AdoptOpenJDK joined the Eclipse Foundation and renamed its JDK to Temurin. Here are instructions on how to migrate on a Mac with Homebrew.

The following commands remove any AdoptOpenJDK JDKs you may still have:

brew remove adoptopenjdk/openjdk/adoptopenjdk8
brew remove adoptopenjdk/openjdk/adoptopenjdk11
brew untap AdoptOpenJDK/openjdk
brew remove adoptopenjdk8
brew remove adoptopenjdk11
brew remove adoptopenjdk

Use /usr/libexec/java_home -V to get an overview of any other JDKs you may still have. Just delete what you no longer need.
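On macOS, JDKs live under /Library/Java/JavaVirtualMachines, so removing a leftover one boils down to deleting its directory. The directory name below is just an example; use whatever java_home reported:

sudo rm -rf /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk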

Then install Temurin 8, 11 and 17. The first command (brew tap …) is only needed if you want Temurin 8 or 11:

brew tap homebrew/cask-versions
brew install --cask temurin8
brew install --cask temurin11
brew install --cask temurin

Bonus: execute the following to define aliases that let you easily switch between Java versions:

cat <<-EOF >> ~/.zshrc
# Aliases for switching java version
alias java17="export JAVA_HOME=\$(/usr/libexec/java_home -v 17)"
alias java11="export JAVA_HOME=\$(/usr/libexec/java_home -v 11)"
alias java8="export JAVA_HOME=\$(/usr/libexec/java_home -v 1.8)"
java11
EOF
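After opening a new shell (or sourcing ~/.zshrc), switching then looks like this:

source ~/.zshrc
java17         # points JAVA_HOME at the Temurin 17 JDK
java -version  # verify which JDK is active (the macOS java stub honours JAVA_HOME)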

Are you looking for more power? Do you, for example, need to test against many more JDKs? Then maybe SDKMAN! is something for you.
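For example, SDKMAN! can install and switch between many JDK builds with a couple of commands. The version identifier below is only an illustration; run sdk list java to see what is currently available:

curl -s "https://get.sdkman.io" | bash
# open a new terminal, or: source ~/.sdkman/bin/sdkman-init.sh
sdk list java
sdk install java 17.0.1-tem
sdk use java 17.0.1-tem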

Sunday, December 19, 2021

Customizing the Jitsi Meet UI in a Docker deployment

I manage a Jitsi instance for a small for-benefit organization. I wanted to make some changes to the UI to make it visually belong to the organization. Unfortunately, Jitsi doesn't make it easy to do this. Upon every upgrade your changes are gone. This post describes a workaround for Jitsi deployments that use Docker.

Although the details can be hairy, the idea is quite simple. We are going to put another layer over the provided Docker image called 'web'. The additional layer contains all the changes we need. When Jitsi publishes an update, we just apply the changes again as part of the deployment process.

Our starting point is the docker-compose.yaml provided by Jitsi. Make all the changes as instructed. However, before making any changes to the UI, you should make sure your Jitsi instance is working.

Is it working? Congratulations! Start with a small change to your docker-compose.yaml.
Replace:

web:
  image: jitsi/web:latest

with:

web:
  build: ./jitsi-web

This sets you up for building your own Docker image.

Create the jitsi-web directory, and put all the artwork you want to override in it. You should end up with a directory structure like this (more details follow):

(Image: directory structure)
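For reference, here is a sketch of the layout I ended up with (the files under overrides/images are the ones referenced by the Dockerfile below; your artwork will differ):

jitsi-web/
├── Dockerfile
└── overrides/
    └── images/
        ├── logo-deep-linking.png
        └── welcome-background.jpg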

The Dockerfile in the jitsi-web directory initially has just this one line:

FROM jitsi/web

Build the image and deploy it with:

docker-compose build --pull
docker-compose up -d

Make sure that Jitsi is still working.

Now it is your turn to get creative. With some RUN instructions you can change any file in the base image.

To get you started, I'll show what is in my Dockerfile. Details are discussed directly below:

FROM jitsi/web

# Add wasm mime type
# https://community.jitsi.org/t/loading-wasm-webassembly-file-on-jitsimeetjs/68071/3
RUN sed -i '/}/i \ application/wasm wasm;' /etc/nginx/mime.types

# Replace/add some images
COPY --chown=root:root overrides /usr/share/jitsi-meet/

RUN sed -i "s|\(// \)\?defaultLanguage:.*|defaultLanguage: 'nl',|" /defaults/config.js; \
    sed -e 's/welcome-background.png/welcome-background.jpg/' \
        -e 's|.deep-linking-mobile .header{width:100%;height:70px;background-color:#f1f2f5;text-align:center}|.deep-linking-mobile .header{width:100%;height:70px;background-color:#003867;text-align:center}|' \
        -e 's|.deep-linking-mobile .header .logo{margin-top:15px;margin-left:auto;margin-right:auto;height:40px}|.deep-linking-mobile .header .logo{margin-top:10px;margin-left:auto;margin-right:auto;height:50px}|' \
        -i /usr/share/jitsi-meet/css/all.css; \
    sed -e 's|"headerTitle": "Jitsi Meet"|"headerTitle": "Mijn Organisatie"|' \
        -e 's|"headerSubtitle": "Veilige vergaderingen van hoge kwaliteit"|"headerSubtitle": "Wij vergaderen online!"|' \
        -i /usr/share/jitsi-meet/lang/main-nl.json; \
    sed -e 's|C().createElement("h1",{className:"header-text-title"},t("welcomepage.headerTitle"))|C().createElement("h1",{className:"header-text-title"},C().createElement("img",{src:"images/logo-deep-linking.png",alt:"Mijn Organisatie",height:100}))|' \
        -e 's|"headerTitle":"Jitsi Meet"|"headerTitle":"Mijn Organisatie"|' \
        -e 's|"headerSubtitle":"Secure and high quality meetings"|"headerSubtitle":"Wij vergaderen online!"|' \
        -i /usr/share/jitsi-meet/libs/app.bundle.min.js; \
    sed -e "s|\bAPP_NAME: .*|APP_NAME: 'Mijn Organisatie Jitsi',|" \
        -e "s|\bPROVIDER_NAME: .*|PROVIDER_NAME: 'Mijn Organisatie Cloud',|" \
        -e "s|\bDEFAULT_REMOTE_DISPLAY_NAME: .*|DEFAULT_REMOTE_DISPLAY_NAME: 'Gespreksgenoot',|" \
        -e "s|\bDEFAULT_LOCAL_DISPLAY_NAME: .*|DEFAULT_LOCAL_DISPLAY_NAME: 'Ik',|" \
        -e "s|\bGENERATE_ROOMNAMES_ON_WELCOME_PAGE: .*|GENERATE_ROOMNAMES_ON_WELCOME_PAGE: false,|" \
        -i /defaults/interface_config.js

The first RUN instruction adds a line to the Nginx configuration which enables clients to download WebAssembly (Wasm) files. Unfortunately, this is not yet fixed in the Jitsi Docker image itself (checked in December 2021, but YMMV).

The COPY instruction copies your images over the existing stuff. Feel free to add more as needed.

The second RUN instruction is where the magic happens. This changes existing files. Let's go through them one by one.

The first file that gets changed is /defaults/config.js, where we set the default language to Dutch.

The next file that gets changed is /usr/share/jitsi-meet/css/all.css. Normally Jitsi uses a PNG background image on the welcome page, but I needed to use a JPG image. The first line takes care of that. Note that there is no welcome-background.jpg image in the base image; I added it in the overrides/images directory.
The next two changes to this file are small color and layout tweaks to the welcome page for mobile browsers.

The next file that gets changed is /usr/share/jitsi-meet/lang/main-nl.json. There are many more files in this directory, one for each language.

The next file, /usr/share/jitsi-meet/libs/app.bundle.min.js, is tricky. It contains a fully compiled React application in minified JavaScript. The first change you see here replaces the header text with a header image. The next two lines replace the default titles with their Dutch versions. For some reason Jitsi initially renders the page in English and then re-renders it in the correct locale. On slow devices this can take quite some time. I found this quite jarring, especially for the texts that make the first impression. By changing some default texts, most of my users (who are Dutch) will see less flickering text.
This is the file that is most sensitive to changes in the base image. Make sure your tweaks still work after an upgrade.

Finally, in /defaults/interface_config.js some more settings are tweaked.

Some more tips

Don't worry if you break something. Just fix your changes and re-deploy. A re-deploy is very quick.

Finding out what to change can be pretty hard. Sometimes it helps to extract a file from the image to see what it contains. First, find the file you want to change by opening a shell in the running web container:

docker-compose exec web bash

Extract the file for more detailed inspection with something like this:

docker-compose exec web cat /usr/share/jitsi-meet/libs/app.bundle.min.js > app.bundle.min.js
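After an upgrade it also helps to extract the file from the old and the new image and compare them, to check whether your sed patterns still match. A sketch (the file names are only an illustration):

# Before upgrading, keep a copy of the current file:
docker-compose exec web cat /usr/share/jitsi-meet/libs/app.bundle.min.js > app.bundle.min.js.old
# After rebuilding with the new base image:
docker-compose exec web cat /usr/share/jitsi-meet/libs/app.bundle.min.js > app.bundle.min.js.new
diff app.bundle.min.js.old app.bundle.min.js.new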

Jitsi updates

When you see new images appear for Jitsi on Docker Hub, you can deploy them as follows:

# Pulls the images that we're not changing (e.g. prosody, jicofo and jvb):
docker-compose pull
# Rebuild the 'web' image, checking for a new base image:
docker-compose build --pull
# Deploy changes:
docker-compose up -d
# Remove old images:
docker image prune

Most of the things tweaked here have been pretty stable over the last few years, but I advise you to check anyway.

That's it, get creative!

Wednesday, December 8, 2021

Akka graceful shutdown - continued

Some time ago I wrote about how to gracefully shut down Akka HTTP servers, which is crucial to prevent aborted requests during re-deployments or in elastic (cloud) environments where instances come and go. See the previous post for more details on how graceful shutdown works and some common caveats in setting it up.

This post refreshes how this works for newer Akka versions, and it gives some tips on how to speed up a shutdown.

Coordinated shutdown

Newer Akka versions have more extensive support for graceful shutdown in the form of coordinated shutdown. Here we show an example that uses coordinated shutdown to configure a graceful shutdown.

import akka.actor.{ActorSystem, CoordinatedShutdown}
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Route
import scala.concurrent.duration._
import scala.util.{Failure, Success}

implicit val system: ActorSystem = ???
import system.dispatcher // execution context for map/foreach/onComplete

val logger = ???
val routes: Route = ???
val interface: String = "0.0.0.0"
val port: Int = 80
val shutdownDeadline: FiniteDuration = 5.seconds

Http()
  .newServerAt(interface, port)
  .bind(routes)
  .map(_.addToCoordinatedShutdown(shutdownDeadline)) // ← that simple!
  .foreach { server =>
    logger.info(s"HTTP service listening on: ${server.localAddress}")
    server.whenTerminationSignalIssued.onComplete { _ =>
      logger.info("Shutdown of HTTP service initiated")
    }
    server.whenTerminated.onComplete {
      case Success(_) => logger.info("Shutdown of HTTP endpoint completed")
      case Failure(_) => logger.error("Shutdown of HTTP endpoint failed")
    }
  }

The important line is where we use addToCoordinatedShutdown. What follows is just logging so we know what's going on.

Shutting down more components

You probably have more parts that would benefit from a proper shutdown, e.g. a database connection pool. Here is an example on how to hook into the coordinated shutdown:

import akka.Done
import scala.concurrent.Future

// Add this code _before_ construction of the HTTP server
CoordinatedShutdown(system).addTask(
  CoordinatedShutdown.PhaseBeforeClusterShutdown,
  "database connection pool shutdown"
) { () =>
  val dbPoolShutdown: Future[Done] = shutdownDatabase()
  dbPoolShutdown.onComplete {
    case Success(_) => logger.info("Shutdown of database connection pool completed")
    case Failure(_) => logger.error("Shutdown of database connection pool failed")
  }
  logger.info("Shutdown of database connection pool was initiated")
  dbPoolShutdown
}

Coordinated shutdown consists of multiple phases. Which phases exist is configurable but the default phases are fine for this post.

Our code runs in the phase called before-cluster-shutdown. This phase runs after phase service-unbind in which the HTTP service shuts down.
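If a phase needs more time than the default allows, its timeout can be raised in configuration. A sketch for application.conf, assuming the default phase names (check Akka's reference.conf for the exact keys):

akka.coordinated-shutdown {
  default-phase-timeout = 10 s
  phases {
    service-requests-done {
      timeout = 30 s
    }
  }
}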

Tips to speed up shutdown

The default phases need to complete in 10 seconds. If this is challenging for your system, here are 2 tips that might help.

First of all, you need to make sure that any blocking/synchronous code is wrapped in a blocking construct. This signals the Akka execution context that it may need to extend its thread pool. This is especially relevant if you have many shutdown tasks, but it is good practice anyway. For example:

import akka.Done
import scala.concurrent.{blocking, Future}

def shutdownDatabase(): Future[Done] = {
  Future {
    blocking {
      database.close() // blocking code
      logger.info("Database connection closed")
    }
    Done
  }
}

The second thing you could do is shut down some components on a best-effort basis only. Closing the connection to a read-only system is probably not essential.
The trick is to let coordinated shutdown initiate the work, but then immediately report completion. For example:

import akka.Done
import scala.concurrent.{blocking, Future}

def bestEffortShutdownDatabase(): Future[Done] = {
  // Starts db close in a Future, but ignore the result
  Future {
    blocking {
      database.close()
    }
  }
  Future.successful(Done)
}

Now, when closing the database takes too long, Akka won't complain about it in the logs.

For more details see Akka's coordinated shutdown documentation.

Tuesday, July 7, 2020

Avoiding Scala's Option.fold

Scala's Option.fold is a bit nasty to use sometimes because type inference is not always great. Here is a stupid toy example:

val someValue: Option[String] = ???
val processed = someValue.fold[Either[Error, UserName]](
  Left(Error("No user name given"))
)(
  value => Right(UserName(value))
)

The [Either[Error, UserName]] type annotation on fold is necessary, otherwise the Scala compiler cannot infer the type.

Here is a really small trick to avoid Option.fold when you need to convert an Option to an Either:

val someValue: Option[String] = ???
val processed = someValue
  .toRight(left = Error("No user name given"))
  .map(value => UserName(value))

Much nicer!

Wednesday, April 15, 2020

Traefik v2 enable HSTS, Docker and nextcloud

It took me days to figure out how to configure this in Traefik v2. Here it is for posterity.

This is a docker-compose.yaml fragment to append to a service section:

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.service.rule=Host(`www.example.com`)"
  - "traefik.http.routers.service.entrypoints=websecure"
  - "traefik.http.routers.service.tls.certresolver=myresolver"
  - "traefik.http.middlewares.servicests.headers.stsincludesubdomains=false"
  - "traefik.http.middlewares.servicests.headers.stspreload=true"
  - "traefik.http.middlewares.servicests.headers.stsseconds=31536000"
  - "traefik.http.middlewares.servicests.headers.isdevelopment=false"
  - "traefik.http.routers.service.middlewares=servicests"

It will:

  • tell Traefik to direct traffic for www.example.com to this container,
  • on the websecure entrypoint (this is configured statically),
  • using the myresolver certificate resolver (for ACME; also configured statically, see the sketch after this list),
  • configure middleware to add HSTS headers,
  • enable the middleware.
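For completeness, the statically configured parts referenced above (the websecure entrypoint and the myresolver ACME resolver) look roughly like this in Traefik's static configuration; the e-mail address and storage path are placeholders:

entryPoints:
  websecure:
    address: ":443"

certificatesResolvers:
  myresolver:
    acme:
      email: admin@example.com
      storage: /letsencrypt/acme.json
      tlsChallenge: {}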

Nextcloud

Here is a slightly more complex example for a Nextcloud deployment, which includes the recommended redirects.

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.nextcloud.rule=Host(`nextcloud.example.com`)"
  - "traefik.http.routers.nextcloud.entrypoints=websecure"
  - "traefik.http.routers.nextcloud.tls.certresolver=myresolver"
  - "traefik.http.middlewares.nextcloudredir.redirectregex.permanent=true"
  - "traefik.http.middlewares.nextcloudredir.redirectregex.regex=https://(.*)/.well-known/(card|cal)dav"
  - "traefik.http.middlewares.nextcloudredir.redirectregex.replacement=https://$$1/remote.php/dav/"
  - "traefik.http.middlewares.nextcloudsts.headers.stsincludesubdomains=false"
  - "traefik.http.middlewares.nextcloudsts.headers.stspreload=true"
  - "traefik.http.middlewares.nextcloudsts.headers.stsseconds=31536000"
  - "traefik.http.middlewares.nextcloudsts.headers.isdevelopment=false"
  - "traefik.http.routers.nextcloud.middlewares=nextcloudredir,nextcloudsts"

Friday, April 10, 2020

Akka-http graceful shutdown

Why?

By default, when you restart a service, the old instance is simply killed. This means that all current requests are aborted; the caller will be left with a read timeout. We can do better!

What?

A graceful shutdown looks as follows:

  1. The scheduler (Kubernetes, Nomad, etc.) sends a signal (usually SIGINT) to the service.
  2. The service receives the signal and closes all server ports; it can no longer accept new requests. The load balancer picks this up very quickly and stops sending new requests to this instance.
  3. All requests-in-progress complete one by one.
  4. When all requests are completed, or on a timeout, the service terminates.
Caveats

Getting the signal to your service is unfortunately not always trivial. I have seen the following problems:

  • The Nomad scheduler by default does not send a SIGINT signal to the service. You will have to configure this.
  • When the service runs in a Docker container, by default the init process (with PID 1) will ignore the signal. Back when every Unix installation had control over the entire computer this made lots of sense; in a container, not so much. This may be fixed in newer Docker versions. Otherwise you will have to use a special init process such as tini. A sketch of both fixes follows below.
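For illustration, a sketch of both fixes; the Nomad fragment goes in your job file, the Docker Compose fragment in docker-compose.yaml (the service names are placeholders):

# Nomad: explicitly configure the signal sent on shutdown
task "my-service" {
  kill_signal = "SIGINT"
}

# Docker Compose: run a minimal init process as PID 1 so signals are forwarded
services:
  my-service:
    image: my-service:latest
    init: true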
Akka-HTTP

Akka-http has excellent support for graceful shutdown. Unfortunately, the documentation is not very clear about it. Here follows an example which can be used as a template:

Update 2021-12-08: For newer Akka versions, please use the template in the follow-up article.

Just for reference, here is the old template:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server._
import scala.concurrent.duration._
import scala.util.Failure

implicit val system: ActorSystem = ???
import system.dispatcher

val logger = ???
val route: Route = ???
val interface: String = "0.0.0.0"
val port: Int = 80
val shutdownDeadline: FiniteDuration = 30.seconds

// Don't use this, see follow-up article instead!
Http()
  .bindAndHandle(route, interface, port)
  .map { binding =>
    logger.info(
      "HTTP service listening on: " +
        s"http://${binding.localAddress.getHostName}:${binding.localAddress.getPort}/"
    )
    sys.addShutdownHook {
      binding
        .terminate(hardDeadline = shutdownDeadline)
        .onComplete { _ =>
          system.terminate()
          logger.info("Termination completed")
        }
      logger.info("Received termination signal")
    }
  }
  .onComplete {
    case Failure(ex) =>
      logger.error("server binding error:", ex)
      system.terminate()
      sys.exit(1)
    case _ =>
  }

Tuesday, March 3, 2020

Push Gauges

A colleague was complaining to me that Micrometer gauges didn't work the way he expected. This led to some interesting work.

What is a gauge?

In science a gauge is a device for making measurements. In computer systems a gauge is very similar: a 'metric' which tracks something in your system over time. For example, you could track the number of items in a job queue. Libraries like Micrometer and Dropwizard metrics make it easy to define gauges. Since the measurement in itself is not useful, those libraries also make it easy to send the measurements to a metric system such as Graphite or Prometheus. These systems are used for visualization and generating alerts.

Gauges are typically defined with a callback function that does the measurement. For example, using metrics-scala, the Scala API for Dropwizard metrics, it looks like this:

class JobQueue extends DefaultInstrumented {
  private val waitingJobsQueue = ???

  // Defines a gauge
  metrics.gauge("queue.size") {
    // This code block is the callback which does a 'measurement'.
    waitingJobsQueue.size
  }
}

Please note that the metrics library determines when the callback function is invoked, for example once every minute.
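For example, with Dropwizard metrics a reporter polls all registered gauges on a fixed schedule. A sketch with a Graphite reporter (the host name and prefix are placeholders):

import java.net.InetSocketAddress
import java.util.concurrent.TimeUnit
import com.codahale.metrics.MetricRegistry
import com.codahale.metrics.graphite.{Graphite, GraphiteReporter}

val registry = new MetricRegistry()
val graphite = new Graphite(new InetSocketAddress("graphite.example.com", 2003))

val reporter = GraphiteReporter
  .forRegistry(registry)
  .prefixedWith("my-service")
  .build(graphite)

// Once per minute the reporter reads every gauge (invoking its callback) and ships the value.
reporter.start(1, TimeUnit.MINUTES)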

What is a push gauge?

My colleague had something else in mind. He didn't have access to the value all the time, but only when something was being processed. More like this:

class ExternalCacheUpdater extends DefaultInstrumented {
  def updateExternalCache(): Unit = {
    val items = fetchItemsFromDatabase()
    pushItemsToExternalCache(items)
    gauge.push(items.size) // Pushes a new measurement to the gauge.
  }
}

In the example the application becomes responsible for pushing new measurements. The push gauge simply keeps track of the last value and reports that whenever the metrics library needs it. So under the covers the push gauge behaves like a normal gauge.
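To illustrate that last point, here is a minimal sketch of the idea (this is not the actual metrics-scala implementation): the application pushes a value, and the metrics library reads the last pushed value through the regular Gauge interface.

import java.util.concurrent.atomic.AtomicReference
import com.codahale.metrics.Gauge

// Minimal sketch: remember the last pushed value and hand it out
// whenever the metrics library polls the gauge.
class SimplePushGauge[T](startValue: T) extends Gauge[T] {
  private val lastValue = new AtomicReference[T](startValue)

  def push(value: T): Unit = lastValue.set(value)

  override def getValue: T = lastValue.get()
}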

Push gauges like the one in this example are now made possible by this pull request for metrics-scala. The only thing missing from the example above is the definition of the push gauge:

class ExternalCacheUpdater extends DefaultInstrumented {
  // Defines a push gauge
  private val gauge = metrics.pushGauge[Int]("cached.items", 0)

  def updateExternalCache(): Unit = // as above
}
Push gauge with timeout

In some situations it may be misleading to report a very old measurement as the 'current' value. If the external cache in our example evicts items after 10 minutes, then the push gauge should not report measurements from more than 10 minutes ago. This is solved with a push gauge with timeout:

class ExternalCache extends DefaultInstrumented {
  // Defines a push gauge with timeout
  private val gauge = metrics.pushGaugeWithTimeout[Int]("cached.items", 0, 10.minutes)

  def updateExternalCache(): Unit = // as above
}
Feedback wanted!

I have not seen this concept before in any metric library in the JVM ecosystem. Therefore I would like to collect as much feedback as possible before shipping this as a new feature of metrics-scala. If you have any ideas, comments or whatever, please leave a comment on the push-gauges pull-request or drop me an email!

Update 2020-03-05: The code examples have been updated to reflect changes in the pull request.