TIL #3: Using xbar to build ArgoCD deployment monitor

This week I was going over the latest edition (Volume 27) of the Thoughtworks Technology Radar and noticed the addition of xbar in the Tools section. xbar lets you put the output of any script/program in your macOS menu bar. I first wrote about it in October 2021, when I showed how you can use it to show WordPress page view analytics in the menu bar.

From the Thoughtworks Radar entry on xbar:

On remote teams, we sorely lack having a dedicated build monitor in the room; unfortunately, newer continuous integration (CI) tools lack support for the old CCTray format. The result is that broken builds aren’t always picked up as quickly as we’d like. To solve this problem, many of our teams have started using xbar for build monitoring. With xbar, one can execute a script to poll build status, displaying it on the menu bar.

Continue reading “TIL #3: Using xbar to build ArgoCD deployment monitor”

TIL #2: Kafka poison pill message and CommitFailedException

Yesterday I was working with a team that was facing an issue with their Kafka-related code. The Kafka consumer was failing with the following exception:

[] ERROR [2022-11-22 08:32:52,853] com.abc.MyKakfaConsumer: Exception while processing events
! java.lang.NullPointerException: Cannot invoke "org.apache.kafka.common.header.Header.value()" because the return value of "org.apache.kafka.common.header.Headers.lastHeader(String)" is null
! at com.abc.MyKakfaConsumer.run(MyKakfaConsumer.java:83)
! at java.base/java.lang.Thread.run(Thread.java:833)

The consumer code looked as shown below.
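A minimal sketch of that kind of consumer loop, reconstructed from the stack trace above, looks roughly like this (class name, topic, and header key are assumptions, not the team's actual code). A record that arrives without the expected header becomes a poison pill, because lastHeader(...) returns null and calling value() on it throws the NullPointerException:

import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.List;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.header.Header;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyKafkaConsumer implements Runnable {

    private final Logger logger = LoggerFactory.getLogger(MyKafkaConsumer.class);
    private final KafkaConsumer<String, String> consumer;

    public MyKafkaConsumer(KafkaConsumer<String, String> consumer) {
        this.consumer = consumer;
    }

    @Override
    public void run() {
        try {
            consumer.subscribe(List.of("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // lastHeader(...) returns null when the header is missing, so calling
                    // value() on the result throws the NullPointerException shown above
                    Header eventTypeHeader = record.headers().lastHeader("eventType");
                    String eventType = new String(eventTypeHeader.value(), StandardCharsets.UTF_8);
                    process(eventType, record.value());
                }
                consumer.commitSync();
            }
        } catch (Exception e) {
            logger.error("Exception while processing events", e);
        }
    }

    private void process(String eventType, String payload) {
        // business logic elided
    }
}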

Continue reading “TIL #2: Kafka poison pill message and CommitFailedException”

TIL #1: Using liquibase validCheckSum to solve a deployment issue

Taking inspiration from Simon Willison[1], I will start posting TIL (Today I Learned) posts on something new/interesting I learn while building software. Today, I was working with a colleague on a problem where our database migration script was working in the dev environment but failing in the staging environment. The customer's platform team has mandated that we can't access the database directly and that the only way to fix things is via Liquibase scripts. In this post I will not discuss whether I agree with them or not. That's a rant for another day.

In our staging environment we were getting the following exception:

changelog-main.xml::2::author1 was: 8:a67c8ccae76190339d0fe7211ffa8d98 but is now: 8:d76c3d3a528a73a083836cb6fd6e5654
changelog-main.xml::3::author2 was: 8:0f90fb0771052231b1ax45c1x8bdffax but is now: 8:a25ca918b2eb27a2b453d6e3bf56ff77

If you have worked with Liquibase or any similar database migration tool, you will know that this happens when a developer changes an existing changeset. The change alters the changeset's checksum, so the next time Liquibase tries to apply the changeset it fails with a validation error.

A developer should never change an existing changeset, and this is one thing we make sure we don't miss during our code reviews.
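The fix the title refers to is Liquibase's validCheckSum tag, which tells Liquibase to accept a new checksum for the modified changeset. A minimal sketch in the changelog XML, reusing the changeset id, author, and new checksum from the error above (the body of the changeset itself is omitted and stays unchanged):

<changeSet id="2" author="author1">
    <!-- accept the checksum the modified changeset now produces -->
    <validCheckSum>8:d76c3d3a528a73a083836cb6fd6e5654</validCheckSum>
    <!-- original change definition remains here -->
</changeSet>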

Continue reading “TIL #1: Using liquibase validCheckSum to solve a deployment issue”

Building a simple JSON processor using Java 17 and GraalVM

This week I finally decided to play with GraalVM by building a simple command-line JSON processor based on JsonPath. I find jq's syntax too complex for my taste, so I decided to build the processor around JsonPath instead. Since I wanted to release it as a native executable, GraalVM seemed like a good solution.

GraalVM is a relatively new JVM and JDK implementation built using Java itself. It supports additional programming languages and execution modes, like ahead-of-time compilation of Java applications for fast startup and low memory footprint.
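The full post walks through the actual tool; the snippet below is only a minimal sketch of the core idea, assuming the Jayway JsonPath library (the class name and command-line contract are illustrative):

import java.io.InputStream;

import com.jayway.jsonpath.JsonPath;

public class JsonProcessorCli {

    // Usage (illustrative): echo '{"name":"shekhar"}' | jp '$.name'
    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.err.println("usage: jp <json-path-expression>");
            System.exit(1);
        }
        try (InputStream in = System.in) {
            // apply the JsonPath expression to the JSON document read from stdin
            Object result = JsonPath.parse(in).read(args[0]);
            System.out.println(result);
        }
    }
}

A jar built around a class like this can then be compiled into a self-contained binary with GraalVM's native-image tool, which is what gives the fast startup you want from a CLI.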

Continue reading “Building a simple JSON processor using Java 17 and GraalVM”

Enforcing Logging best practices by writing ArchUnit tests

We have the following three logging best practices:

  1. All loggers should be final variables. So, we prefer
   private final Logger logger = LoggerFactory.getLogger(MyService.class); // good

instead of

   private static final Logger LOGGER = LoggerFactory.getLogger(MyService.class); // bad

The constant-based syntax makes the code look ugly and requires developers to use the shift key to type the upper-case variable name. This breaks the flow, so we prefer the field variable naming.

  2. All logs should have a description and context. So, we prefer
   logger.info("Event is already processed so not processing it again [eventId={}, eventDbId={}, eventType={}]", eventId, event.getId(), eventType); // good

instead of

   logger.info("Event is already processed so not processing"); // bad

We want the log statement to have enough context that we can debug problems. The bad logging statement does not help you understand which event it was logged for. All such statements look the same.

  3. All error logs should have the exception in the context. So, we prefer
   logger.error("Exception while processing event [eventId={}]", eventId, exception); // good

instead of

   logger.error("Exception while processing event [eventId={}]", eventId); // bad

To help developers catch these issues before raising their pull requests, we have written ArchUnit tests that enforce these practices, so the local build fails if they violate them. You can read my earlier post on ArchUnit[1] if you are new to it. A sketch of such a test is shown below.
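This is a minimal sketch assuming a recent ArchUnit version with JUnit 5 support and SLF4J loggers; the package and rule names are illustrative, and rules 2 and 3 require custom conditions that inspect the logging calls, which are omitted here:

import org.slf4j.Logger;

import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.fields;

@AnalyzeClasses(packages = "com.abc")
class LoggingBestPracticesTest {

    // Best practice 1: loggers are private final instance fields, not static constants
    @ArchTest
    static final ArchRule loggersShouldBeFinalInstanceFields =
            fields().that().haveRawType(Logger.class)
                    .should().bePrivate()
                    .andShould().beFinal()
                    .andShould().notBeStatic()
                    .because("we prefer 'private final Logger logger' over constant-style loggers");
}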

Continue reading “Enforcing Logging best practices by writing ArchUnit tests”

Why an architecture with three Availability Zones is better than the one with two Availability Zones?

A couple of months back a customer asked why we were proposing a three Availability Zone (AZ for short) architecture instead of two. Their main question was which failure modes three AZs guard against that two AZs can't. We gave the following two reasons:

  • We proposed three AZs for improved availability. With services and instances spread across three AZs, losing one AZ costs you a third of your capacity; with two AZs you lose half of it.
  • If there are services where we need to manage a quorum (for example, running your own Cassandra cluster), it is better to have three AZs, because a majority of replicas can still be reached when a single AZ fails.

They were not very convinced, so we agreed to start with the two-AZ solution.

Continue reading “Why an architecture with three Availability Zones is better than the one with two Availability Zones?”

The case for frameworks over libraries (Spring Boot vs Dropwizard)

I am working with a customer that decided to go with Dropwizard instead of Spring Boot (or the wider Spring ecosystem). I initially respected their decision and decided to give Dropwizard a fair chance. Now, after spending a couple of months building a system that uses Dropwizard, I don't recommend it. I wrote about my initial thoughts here.

There are three main advantages to a battle-tested and widely used framework like Spring:

  • Good APIs
  • Solution to common problems
  • Good searchable documentation

Let me help you understand that by taking the example of integrating Kafka into a Dropwizard application. The official Dropwizard organization provides a Kafka bundle[1], so we decided to use it to add Kafka support. I found the following issues with it:

Poor API: When you create the KafkaConsumerBundle in the *Application class, you are forced to provide an implementation of ConsumerRebalanceListener. KafkaConsumerBundle does not do anything with it, but it forces you to provide it [2]. If you read the Kafka documentation, a ConsumerRebalanceListener is provided not when the consumer is created but when you subscribe to topics. ConsumerRebalanceListener is used to commit offsets when partitions are rebalanced. There is also an open issue [3] in the GitHub repo about this, without any answer from the maintainers.

Incomplete configuration: The Dropwizard Kafka bundle does not support all the Kafka producer and consumer configuration properties. For example, it is often recommended to set the producer's number of retries to Integer.MAX_VALUE and rely on delivery.timeout.ms to bound the time spent retrying. Since this is not a configuration option in the Dropwizard bundle, you have to hardcode it during bundle creation.
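The hedged sketch below shows the producer settings in question; the class and method names are made up for this example and are not part of the bundle's API:

import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;

class ProducerTuning {

    // retries is effectively unbounded; delivery.timeout.ms bounds the total time
    // spent retrying a record before the send is failed
    static Properties recommendedRetrySettings() {
        Properties props = new Properties();
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000);
        return props;
    }
}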

Missing solutions to common problems: Any real-world Kafka application needs to solve the following three common problems. Spring Kafka, part of the Spring ecosystem, provides solutions to all of them (see the sketch after this list):

  1. The bundle does not provide a serializer/deserializer for JSON or any other format. You have to write one yourself or find a library that implements it.
  2. Handling of poison pill messages, which Spring Kafka solves with ErrorHandlingDeserializer.
  3. Publishing of poison pill messages to a dead letter topic.
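The following is a rough sketch of how Spring Kafka covers these three points. The classes and property keys (ErrorHandlingDeserializer, JsonDeserializer, DeadLetterPublishingRecoverer, DefaultErrorHandler) come from Spring Kafka; the wiring shown here is illustrative rather than a drop-in configuration:

import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.support.serializer.ErrorHandlingDeserializer;
import org.springframework.kafka.support.serializer.JsonDeserializer;

@Configuration
class KafkaErrorHandlingConfig {

    // Problems 1 and 2: JSON deserialization wrapped in ErrorHandlingDeserializer,
    // so a record that cannot be deserialized (a poison pill) does not crash the consumer
    Map<String, Object> consumerDeserializationProps() {
        return Map.of(
                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class,
                ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class);
    }

    // Problem 3: records that keep failing are published to a dead letter topic
    // (<topic>.DLT by default)
    @Bean
    DefaultErrorHandler errorHandler(KafkaTemplate<Object, Object> template) {
        return new DefaultErrorHandler(new DeadLetterPublishingRecoverer(template));
    }
}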

Conclusion

Yes, you can write your own bundle that fixes all these issues. But then you are doing undifferentiated work; your team could spend that time writing business logic rather than writing and maintaining this code. There is no right or wrong answer here, but there is a cost to be paid when you make these decisions, and you should keep that in mind. There is no free lunch.

Resources

  1. https://github.com/dropwizard/dropwizard-kafka
  2. https://github.com/dropwizard/dropwizard-kafka/blob/master/src/main/java/io/dropwizard/kafka/KafkaConsumerBundle.java#L33
  3. https://github.com/dropwizard/dropwizard-kafka/issues/179

How does FerretDB work?

In recent weeks, I have come across FerretDB on multiple occasions, and I thought, why not take a closer look. I took a particular interest in FerretDB because it is a MongoDB implementation on top of my favourite database, PostgreSQL.

While I do have high-level thoughts on how I would go about building MongoDB on top of Postgres, I wanted to confirm, validate, and learn how the FerretDB team has been doing it.

FerretDB (previously MangoDB) was founded to become the de-facto open-source substitute to MongoDB. FerretDB is an open-source proxy, converting the MongoDB 5.0+ wire protocol queries to SQL – using PostgreSQL as a database engine.

At a high level, FerretDB is a proxy that implements the MongoDB wire protocol that MongoDB clients speak. After a client establishes a connection, FerretDB translates the queries the client sends into SQL queries that Postgres understands.

With the recent release (0.5.0) of FerretDB, it is also possible to use it as a library rather than as a proxy. Using FerretDB as a library removes one network hop, which leads to better performance. This is only possible for applications written in Go, since FerretDB itself is implemented in Go.

Continue reading “How does FerretDB work?”

My Notes on GitLab Postgres Schema Design

I spent some time going over the Postgres schema of GitLab. GitLab is an alternative to GitHub that you can self-host, since it is an open-source DevOps platform.

My motivation for studying the schema of a big project like GitLab was to compare it against schemas I am designing and to learn some best practices from their schema definition. I can surely say I learnt a lot.

I am aware that best practices are sometimes context-dependent, so you should not apply them blindly.

The GitLab schema file structure.sql [1] is more than 34,000 lines long. GitLab is a monolithic Ruby on Rails application, and the popular way to manage schema migrations in Rails is the schema.rb file. The reason the GitLab team decided to adopt structure.sql instead is mentioned in one of the issues [2] in their issue tracker:

Now what keeps us from using those features is the use of schema.rb. This can only contain standard migrations (using the Rails DSL), which aim to keep the schema file database system neutral and abstract away from specific SQL. This in turn means we are not able to use extended PostgreSQL features that are reflected in schema. Some examples include triggers, postgres partitioning, materialized views and many other great features.

In order to leverage those features, we should consider using a plain SQL schema file (structure.sql) instead of a ruby/rails standard schema schema.rb.

The change would entail switching config.active_record.schema_format = :sql and regenerate the schema in SQL. Possibly, some build steps would have to be adjusted, too.

Now, let’s go over the things I learnt from the GitLab Postgres schema.

Continue reading “My Notes on GitLab Postgres Schema Design”

Improve Git Monorepo Performance

Today, I was exploring the source code of the GitLab project and experienced poor performance of the git status command. GitLab is an open-source alternative to GitHub.

Below is the output of the git status command:

 time git status
On branch master
Your branch is ahead of 'origin/master' by 1 commit.
  (use "git push" to publish your local commits)

nothing to commit, working tree clean
git status  0.20s user 1.13s system 88% cpu 1.502 total

The total here is the number of seconds it took for the command to complete.

The same was the case for the git add command.

time git add .
git add .  0.21s user 1.11s system 115% cpu 1.146 total

So both commands took more than a second to finish.

These commands are slow because they need to search the entire worktree looking for changes. When the worktree is very large, Git needs to do a lot of work.

Continue reading “Improve Git Monorepo Performance”