My take on libraries over frameworks (Spring Boot vs Dropwizard)

I am unable to sleep because of a fever, so I thought I would write this post. Maybe it is not the best time to write it, but who cares. A couple of weeks back I got into a discussion with a customer’s CTO about preferring libraries over frameworks. This customer wants us to prefer libraries over frameworks, mainly in the context of the Spring Boot vs Dropwizard debate.

The best definition I have read on the web is that a framework calls your code, whereas your code calls a library’s API. A framework does much more and has strong opinions. Libraries are focussed, solve one problem, and are swappable (not entirely true without proper abstractions).
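
To make the distinction concrete, here is a tiny Java sketch of my own (the Spring annotations are just one familiar example of framework-style inversion of control):

```java
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Library style: your code calls the API and stays in control of the flow.
class LibraryStyle {
    List<String> sortNames(List<String> names) {
        return names.stream().sorted().toList();
    }
}

// Framework style: you register code and the framework decides when to call it
// (inversion of control). Spring MVC invokes hello(), not you.
@RestController
class FrameworkStyle {
    @GetMapping("/hello")
    String hello() {
        return "hello";
    }
}
```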

The customer wanted us to use Dropwizard instead of Spring Boot for the following reasons:

  • Spring Boot does too much auto-magic. With something like Dropwizard you have control over how things bind together. You can use manual dependency injection instead of an IoC container doing that magic for you. To be fair, you can disable auto-configuration in Spring Boot, and you can also wire beans by hand if you want. But I agree the default way is to rely on auto-configuration.
  • Spring Boot’s vulnerability surface area is higher because it is too easy to add starter jars and pull in all their transitive dependencies. I think this will be mostly true of any approach to building software in Java unless 1) you go down the stackless route (I don’t think the Java platform is there yet) or 2) you have good governance over what gets into your dependencies.
  • Spring Boot executable size is much higher. I compared bare-bones Spring Boot (spring-boot-starter-web with Tomcat) and Dropwizard (default Maven archetype) executable sizes. As it turned out, Spring Boot was 17M and Dropwizard was 19M.
  • Spring Boot startup time is higher. This depends a lot on what you are doing at startup. The bare-bones Spring Boot app started in 1.64 seconds whereas the bare-bones Dropwizard app took 1.526 seconds.
  • Spring Boot consumes much more memory. This was true. Spring Boot loaded 7591 classes whereas Dropwizard loaded 6255 classes. Also, Spring Boot’s heap consumption was twice that of Dropwizard (see the sketch after this list for one way to capture such numbers).
  • Spring Boot apps are difficult to debug. I agree exception stack traces are too long at times, and it can take a minute or two to reach your calling code. But I personally never had much trouble debugging Spring Boot apps. I mostly rely on good tests and logging to debug stuff.
  • Lastly, they wanted us to follow a general principle – prefer libraries over frameworks.
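
For reference, here is one way to capture the class-count and heap numbers mentioned above. This is a sketch of my own, not necessarily how the original measurements were taken: it simply queries the JVM’s standard MX beans once the application has finished starting up.

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Prints the number of currently loaded classes and the used heap.
// Call this after the application has finished starting up.
public class StartupFootprint {

    public static void logFootprint() {
        ClassLoadingMXBean classLoading = ManagementFactory.getClassLoadingMXBean();
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        System.out.printf("Loaded classes: %d%n", classLoading.getLoadedClassCount());
        System.out.printf("Heap used: %d MB%n",
                memory.getHeapMemoryUsage().getUsed() / (1024 * 1024));
    }
}
```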

The funny part is that Spring Boot does not call itself a framework, while the Dropwizard documentation states that Dropwizard straddles the line between a library and a framework.

We went with Dropwizard :). I respect their decision and I think their reasons have merit. I have myself seen too many badly architected/built Spring Boot apps, so I am open to trying out a new, simpler, and better alternative.

I have read Brandon Smith’s post – Write Libraries, not Frameworks. I also think libraries over frameworks is a good architecture principle that we should strive for. The only problem is when we apply principles blindly, without understanding the context.

I think principles like favouring libraries over frameworks are fundamental. For these principles to work, you need the right context and the right environment. I think it will work for you when:

  • You have a good engineering team that understands the cost of adding libraries. There is no free lunch. It does not work in a typical bottom-heavy pyramid team structure where teams are treated as feature factories. Such teams will add any library under the sun to deliver features if their mindset is not aligned with the principle.
  • You have good governance. It is enforced by healthy code reviews, automation (aka fitness functions), architecture knowledge sharing sessions, Microservices production readiness reviews, and architects with skin in the game.
  • You spend effort and resources on developer experience, building tooling that makes it easy to scaffold new Microservices with your opinions and choices baked in. I am not sure how this is any different from a pure framework approach; you will end up building your own Microservices framework out of your library choices.
  • You train your software engineers to buy into this methodology. They will have to unlearn the existing way and learn the new way to build software.
  • You understand productivity might take a hit till developers understand the new way to build software. Frameworks give you a productivity boost by helping you get started faster and solving common problems for you.
  • You use automated checks to continuously prove that the software is not deviating from the principle. You can write a build task that fails the build if the executable size crosses a threshold. You can write tests that fail the build if people use certain libraries, as in the sketch after this list. All of this is possible. You just need to invest the time of the right engineers to make it happen.
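
As an example of such a check, here is a minimal sketch of a fitness-function test using ArchUnit. The com.example base package and Guava as the banned library are assumptions for illustration; the rule fails the build whenever production code depends on the banned package.

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

class LibraryGovernanceTest {

    @Test
    void productionCodeDoesNotUseBannedLibraries() {
        // com.example is an assumed base package; adjust to your own.
        JavaClasses classes = new ClassFileImporter().importPackages("com.example");

        // Hypothetical rule: keep Guava out of the codebase.
        ArchRule bannedLibraries = noClasses()
                .should().dependOnClassesThat()
                .resideInAPackage("com.google.common..");

        bannedLibraries.check(classes);
    }
}
```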

I don’t know whether this will work in our environment or context. It will depend on whether we can walk the talk. I have seen all the good things get thrown under the bus too many times when the business puts pressure on for features.

Looks like the medicine (or writing this post) has done its job. I started writing at 2:28am and now, at 3:39am, my fever is down and I am feeling better.

Why naming stuff is hard?

I have spent a lot of time doing code reviews over the last few months. During the code review exercise I also pair with developers to refactor and improve the quality of their pull requests (PRs). I care about two things in code reviews – correctness and understandability. In this post I will not focus on correctness (I might write a future post on that). Today, I want to focus on the most important aspect of making code easier to understand – good names. Most of the time I spend in code reviews goes into coming up with intention revealing names for classes, methods, interfaces, variables, packages, modules, and Microservices. I find most developers (irrespective of experience level) struggle to come up with good names.

There are only two hard things in Computer Science: cache invalidation and naming things. 

Phil Karlton

In this post I will list three reasons I think developers struggle to come up with good names.
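
To make “intention revealing” concrete, here is a tiny hypothetical example of the kind of rename that eats up review time:

```java
public class NamingExample {

    // Before: the reader has to reverse engineer the meaning from a comment.
    int d; // elapsed time in days

    // After: the name itself reveals the intention, no comment needed.
    int elapsedTimeInDays;
}
```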

Continue reading “Why naming stuff is hard?”

Structuring Spring Boot Microservices Configuration

All software systems we build use some sort of configuration files. These configuration files change depending on the environment in which your service/system is deployed. They allow us to build a single deployable unit that can be deployed in multiple environments without any code change. We just change the configuration file depending on the environment and point our service to the external location where the configuration file lives. The service then uses the configuration file to bootstrap itself.

Configuration files become unwieldy if not managed well. Incorrect configuration values are one of the major causes of system downtime. Most teams don’t write tests for their configuration, so a lot of bugs are discovered only in higher environments.

I was seeing the same problem in one of my projects. There was a lack of clarity on which configuration properties change between environments and which remain the same. Also, in local and lower environments I don’t mind database credentials in my configuration files, but in higher environments I don’t want them present in the code.
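
As a sketch of one way to make the environment-specific part explicit, here is a minimal Spring Boot binding class. The app.datasource prefix and the field names are assumptions for illustration; the point is that everything that varies per environment lives in one documented type, and secrets can be supplied externally (for example via --spring.config.additional-location) in higher environments.

```java
import org.springframework.boot.context.properties.ConfigurationProperties;

// Groups all environment-specific database settings in one place so it is
// obvious what changes between environments.
@ConfigurationProperties(prefix = "app.datasource")
public class AppDataSourceProperties {

    private String url;      // changes per environment
    private String username; // in higher environments, injected from a secret store
    private String password; // never committed to the repository for higher environments

    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }
    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }
}
```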

Continue reading “Structuring Spring Boot Microservices Configuration”

Factors to consider when architecting systems that use third party systems

Most of us are building systems that sit on top of other third party systems. This is common in the FinTech ecosystem that I am currently involved with. Most neo-banks are built on modern core banking systems like Mambu, Thought Machine, etc. Core banking systems are not the only third party systems you need to build a modern bank. You also need a CRM, a lending management system, payment switches, an AML/fraud prevention system, an engagement platform, a CMS, KYC, and a few others. Once you have selected all the ecosystem partners, you have to do custom software development to build new and innovative customer journeys and integrate these systems into a working neo-bank. In this post, I will talk about important factors you should consider when architecting systems that are powered by third party systems.

Continue reading “Factors to consider when architecting systems that use third party systems”

Notes on Gradle Microservices Monorepo setup

The product I am building/architecting at work these days uses a Monorepo[1] for all our Microservices. Our Microservices are primarily built using Java 17 and Spring Boot 2.6.x. For frontend and platform code (Terraform, Helm charts, configuration files, etc.) we have separate Git repositories. We use Gradle 7.3 as our build tool. We also make use of shared libraries for code reuse. I know people suggest you should avoid shared libraries in Microservices, but as I discussed in an earlier post[2], I think there are valid reasons to use them.

I prefer Monorepo for three main reasons:

  • Better visibility and control.
  • Atomic code refactoring across Microservices. This is common in the initial phase of development.
  • Easy code sharing between Microservices.

Continue reading “Notes on Gradle Microservices Monorepo setup”

Building a Spell Checker in Clojure using IntelliJ IDEA

Today, I decided to play with a new language so I gave Clojure a try. I have previously played with Lisp so I expected some fun with brackets.

I decided to build a simple spell checker.

Setup

Install the Cursive plugin in IntelliJ. Refer to the docs.

Once installed start by creating a new project.

Next, choose the location on the file system and press Finish.

It will install Clojure and Leiningen. Once the project is set up, you will see the structure shown below.

Continue reading “Building a Spell Checker in Clojure using IntelliJ IDEA”

What happens when teams integrate their work?

Integration is the real thing. It is where the rubber meets the road. Your teams can do their work in isolation, but it has limited value until it is integrated into your application. The whole DevOps (CI/CD) movement is about integration. You want to integrate often and deliver value faster to the customer. You can’t deliver value if you don’t integrate.

It is sad that in 2022 we are still struggling to integrate our work. We have all the tools and processes that promote integration, yet I work with so many teams struggling to integrate their work.

In this post I will cover the things that get uncovered when you integrate. I am not giving any advice on how to integrate. The only advice on the how is that you have to make it your top priority and do it. The sooner you do, the better.

Let’s come back to the main topic of this post. Following is a list of things that I see happen when teams integrate their work.

Continue reading “What happens when teams integrate their work?”

Useful Stuff I Read This Week

Here are 10 posts I thought were worth sharing this week.

Service mesh has evolved over the years. We started with a library based approach, then moved on to sidecar containers, and eventually service mesh capabilities will become part of Linux via eBPF. We use Istio at work. I was aware that Istio has some overhead since a proxy runs alongside each workload. As per this post, sidecar based service meshes add 3-4x latency. This is a huge cost. For a 500 node cluster where each node runs 30 pods, the sidecar proxies add up to roughly 1TB of memory (500 nodes × 30 pods × 70MB per proxy). eBPF is a technology to watch. It looks like the technology that will make service meshes efficient and performant. It is still early days, so we will have to wait before it becomes mainstream.

I have also seen this work. Most of the time teams don’t know or understand how to go about achieving a large goal. Then you, as a leader, have to break the large goal down into small, manageable goals the team can aim to achieve. I prefer goals to be realistic. They don’t have to be easy, but they shouldn’t be too difficult either. You have to build the team’s confidence, and that is built when they achieve realistic goals. This requires a leader to have clarity, and they should be good at decomposing problems.

Continue reading “Useful Stuff I Read This Week”

Working with Configuration Masters in Microservices Architecture

These days I am working on building a next generation mobile banking platform. One of the solutions I was designing this week was around how to handle configuration masters in Microservices. I am not talking about Microservices configuration properties here. I have not seen much written about this in the context of Microservices, so I thought I would document the solution I am going forward with. But before we do that, let’s define what configuration masters are.

In my terminology, configuration masters are those entities of the system that are static yet configurable in nature. Examples include IFSC codes for banks, error messages, banks and their icons, account types, status types, etc. In a reasonably big application like mobile banking there will be anywhere between 50-100 configuration master entities. These configuration master entities have three characteristics:

  • They don’t change often. This means they can be cached.
  • They don’t change often, but you still want the flexibility to update existing items or add new ones if required. Typically, they are modified either through database scripts or through APIs that some form of admin portal (used by IT operations people) calls to add new entries or modify existing ones.
  • The number of rows per configuration master entity is not more than 1000. This makes them suitable for local in-memory caching (see the sketch after this list).
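
Below is a minimal sketch of such a local cache using Caffeine. The AccountType entity, its repository, and the 15 minute refresh interval are assumptions for illustration:

```java
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import java.time.Duration;
import java.util.List;

// Local in-memory cache for one configuration master entity. The table is
// small (< 1000 rows) and changes rarely, so a periodic refresh is enough to
// pick up changes made through the admin portal or database scripts.
public class AccountTypeCache {

    private final LoadingCache<String, List<AccountType>> cache;

    public AccountTypeCache(AccountTypeRepository repository) {
        this.cache = Caffeine.newBuilder()
                .refreshAfterWrite(Duration.ofMinutes(15))
                .build(key -> repository.findAll()); // single well-known key
    }

    public List<AccountType> accountTypes() {
        return cache.get("all");
    }
}

record AccountType(String code, String displayName) {}

interface AccountTypeRepository {
    List<AccountType> findAll();
}
```
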
Continue reading “Working with Configuration Masters in Microservices Architecture”

Correctly using Postgres as a queue

I am building a central notification dispatch system that is responsible for sending different kinds of notifications to end customers. It relies on multiple third party APIs for sending the actual email/SMS notifications. At a high level, the architecture of the system is shown below.

NotificationSender exposes both REST and messaging interfaces for accepting consumer requests. Consumers here are the services that need to send a notification. This is what the notification system does:

  • It accepts requests from upstream services and, after validation, stores them in the Postgres database. The notification event is written to the Postgres database in the ENQUEUED state. It returns HTTP 202 Accepted to the upstream service if the request is valid, else HTTP 400 Bad Request.
  • At a predefined frequency, a poller that is part of the NotificationDispatcher polls the Postgres database for new notification events, i.e. events in the ENQUEUED state. For now, it respects insertion time order.
  • If enqueued events are found, it processes them and sends the actual notifications using the downstream SMS and Email services.
  • After processing the events it changes their state to PROCESSED (one way to implement this polling loop safely is sketched below).
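
When multiple dispatcher instances poll the same table, a common way to keep them from picking the same rows is Postgres’s FOR UPDATE SKIP LOCKED. The sketch below is my own illustration, not necessarily this system’s implementation; the notification_event table and its columns are assumptions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Poll ENQUEUED events in insertion order. SKIP LOCKED makes concurrent
// pollers skip rows already claimed by another transaction.
public class EnqueuedEventPoller {

    private static final String POLL_SQL = """
            SELECT id, payload
              FROM notification_event
             WHERE state = 'ENQUEUED'
             ORDER BY created_at
             LIMIT 10
             FOR UPDATE SKIP LOCKED
            """;

    public void pollAndDispatch(Connection connection) throws SQLException {
        connection.setAutoCommit(false);
        try (PreparedStatement ps = connection.prepareStatement(POLL_SQL);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                dispatch(rs.getString("payload")); // downstream SMS/Email call
                markProcessed(connection, rs.getLong("id"));
            }
            connection.commit(); // releases the row locks
        } catch (SQLException e) {
            connection.rollback();
            throw e;
        }
    }

    private void dispatch(String payload) { /* call the third party API */ }

    private void markProcessed(Connection connection, long id) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(
                "UPDATE notification_event SET state = 'PROCESSED' WHERE id = ?")) {
            ps.setLong(1, id);
            ps.executeUpdate();
        }
    }
}
```
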
Continue reading “Correctly using Postgres as a queue”