Enforcing Logging best practices by writing ArchUnit tests

We have the following three logging best practices:

  1. All loggers should be private final instance fields (not static constants). So, we prefer

   private final Logger logger = LoggerFactory.getLogger(MyService.class); // good

Instead of

   private static final Logger LOGGER = LoggerFactory.getLogger(MyService.class); // bad

The constant-style syntax makes the code look noisy and requires developers to hold the Shift key to type the upper-case variable name. This breaks the flow, so we prefer regular field naming.

  2. All logs should have a description and context. So, we prefer

   logger.info("Event is already processed so not processing it again [eventId={}, eventDbId={}]", event.getId(), event.getDbId()); // good

instead of

   logger.info("Event is already processed so not processing"); // bad

We want log statements to have enough context that we can debug problems. The bad logging statement does not tell you which event it was logged for; all such statements look identical.

  3. All error logs should have the exception in the context. So, we prefer

   logger.error("Exception while processing event [eventId={}]", eventId, exception); // good

instead of

   logger.error("Exception while processing event [eventId={}]", eventId); // bad

To help developers discover violations before raising their pull requests, we have written ArchUnit tests that enforce these practices, so the local build fails when a rule is broken. You can read my earlier post on ArchUnit[1] in case you are new to it.
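A minimal sketch of what such a test could look like for the first rule, assuming ArchUnit and SLF4J are on the classpath (the package name com.example and the field name "logger" are assumptions, adjust them to your codebase):

```java
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.fields;

import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;

public class LoggingRulesTest {

    // Rule 1: every SLF4J logger is a private final instance field named "logger"
    static final ArchRule LOGGERS_ARE_PRIVATE_FINAL_INSTANCE_FIELDS =
        fields().that().haveRawType("org.slf4j.Logger")
                .should().bePrivate()
                .andShould().beFinal()
                .andShould().notBeStatic()
                .andShould().haveName("logger");

    public static void main(String[] args) {
        // Import production classes and fail the build on any violation
        JavaClasses classes = new ClassFileImporter()
            .importPackages("com.example"); // assumption: your root package
        LOGGERS_ARE_PRIVATE_FINAL_INSTANCE_FIELDS.check(classes);
    }
}
```

In a real project this would live in a JUnit test (e.g. with `@AnalyzeClasses` and `@ArchTest`) so it runs as part of the normal test phase.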


Why an architecture with three Availability Zones is better than the one with two Availability Zones?

A couple of months back a customer asked why we were proposing a three Availability Zone (AZ for short) architecture instead of two. Their main question was which failure modes three AZs guard against that two AZs cannot. We gave the following two reasons:

  • We proposed 3 AZs for improved availability. With services and instances deployed across three AZs, losing one AZ costs you a third of your capacity; with two AZs you can lose half.
  • If there are services where you need to manage a quorum (for example, running your own Cassandra cluster), it is better to have three AZs.

They were not fully convinced, so we agreed to start with the two-AZ solution.
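The quorum argument can be made concrete with a little arithmetic. A quorum system needs a majority of its n voting members (n/2 + 1) to agree, so it tolerates n minus that majority in failures. The sketch below works this out for two and three AZs:

```java
// Quorum math for a cluster with one voting member per AZ:
// a majority (n/2 + 1) must be reachable, so the cluster
// survives n - majority simultaneous AZ failures.
public class QuorumMath {

    public static int majority(int n) {
        return n / 2 + 1;
    }

    public static int tolerableFailures(int n) {
        return n - majority(n);
    }

    public static void main(String[] args) {
        for (int n : new int[] {2, 3}) {
            System.out.println(n + " AZs: majority=" + majority(n)
                + ", tolerates " + tolerableFailures(n) + " AZ failure(s)");
        }
    }
}
```

With two AZs the majority is two, so losing a single AZ breaks the quorum; with three AZs the majority is still two, so one full AZ can fail and the cluster keeps serving.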


The case for frameworks over libraries (Spring Boot vs Dropwizard)

I am working with a customer who decided to go with Dropwizard instead of Spring Boot (or the Spring ecosystem). I initially respected their decision and decided to give Dropwizard a fair chance. Now, after spending a couple of months building a system that uses Dropwizard, I don't recommend it. I wrote about my initial thoughts here.

There are three main advantages of a battle-tested and widely used framework like Spring:

  • Good APIs
  • Solution to common problems
  • Good searchable documentation

Let me help you understand that by taking the example of integrating Kafka into a Dropwizard application. The Dropwizard official organization provides a Kafka bundle[1], so we decided to use it to add Kafka support. I found the following issues with it:

Poor API: When you create the KafkaConsumerBundle in the *Application class you are forced to provide an implementation of ConsumerRebalanceListener. KafkaConsumerBundle does nothing with it, but it forces you to provide it [2]. Per the Kafka documentation, a ConsumerRebalanceListener should be provided not at construction time but when you subscribe; it is typically used to commit offsets when partitions are revoked during a rebalance. There is also an open issue [3] in the GitHub repo about this, without any answer from the maintainers.
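For reference, this is roughly how the plain Kafka client expects the listener to be wired, at subscribe time (the broker address, group id, and topic name are assumptions for illustration):

```java
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceAwareConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.setProperty("group.id", "event-processor");          // assumption
        props.setProperty("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // The listener is passed to subscribe(), not to the consumer's
        // constructor -- that is the hook the Kafka client actually uses.
        consumer.subscribe(Collections.singletonList("events"),
            new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // commit processed offsets before the partitions move away
                    consumer.commitSync();
                }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // nothing to do on assignment in this sketch
                }
            });

        consumer.poll(Duration.ofMillis(100));
        consumer.close();
    }
}
```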

Incomplete configuration: The Dropwizard Kafka bundle does not support all the Kafka producer and consumer configuration properties. For example, it is often recommended to set the producer's number of retries to Integer.MAX_VALUE and rely on delivery.timeout.ms to bound the retrying. Since this is not a configuration option in the Dropwizard bundle, you have to hardcode it during bundle creation.
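The recommended producer settings look like this when expressed as plain Kafka config properties (the property keys are Kafka's own producer config names; the broker address and the two-minute timeout are assumptions for illustration):

```java
import java.util.Properties;

public class ProducerRetryConfig {

    // Retry "forever" and let delivery.timeout.ms bound the total time
    // a send may take, instead of counting individual retries.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.setProperty("retries", String.valueOf(Integer.MAX_VALUE));
        props.setProperty("delivery.timeout.ms", "120000"); // give up after 2 minutes total
        props.setProperty("acks", "all"); // wait for all in-sync replicas
        return props;
    }
}
```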

Missing solutions to common problems: Any real-world Kafka application needs to solve these three common problems. Spring Kafka, part of the Spring ecosystem, provides a solution to each of them; the Dropwizard bundle provides none.

  1. Serializing/deserializing messages as JSON (or any other format). The bundle provides no serializer/deserializer, so you have to write one yourself or find a library that implements it.
  2. Handling poison pill messages, e.g. with Spring Kafka's ErrorHandlingDeserializer.
  3. Publishing poison pill messages to a dead letter topic.
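As a sketch of how little code the last two problems take in Spring Kafka: ErrorHandlingDeserializer wraps your real deserializer and turns deserialization failures into handleable errors, and a DeadLetterPublishingRecoverer (which by default republishes failed records to a "<topic>.DLT" topic) can be plugged into the container's error handler. The retry counts and back-off below are assumptions, not recommendations:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

public class KafkaErrorHandlingConfig {

    // Error handler for a Spring Kafka listener container: retry a failed
    // record twice with a 1s pause, then publish it to the dead letter topic.
    public static DefaultErrorHandler errorHandler(KafkaTemplate<Object, Object> template) {
        DeadLetterPublishingRecoverer recoverer =
            new DeadLetterPublishingRecoverer(template);
        return new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2));
    }
}
```

Combined with spring-kafka's JsonSerializer/JsonDeserializer, all three problems are covered by configuration rather than hand-written code.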


Yes, you can write your own bundle that fixes all these issues. But then you are doing undifferentiated work; your team could spend that time writing business logic rather than writing and maintaining this code. There is no right or wrong answer here, but there is a cost to be paid when you take these decisions, and you should keep that in mind. There is no free lunch.


  1. https://github.com/dropwizard/dropwizard-kafka
  2. https://github.com/dropwizard/dropwizard-kafka/blob/master/src/main/java/io/dropwizard/kafka/KafkaConsumerBundle.java#L33
  3. https://github.com/dropwizard/dropwizard-kafka/issues/179