Building a Spell Checker in Clojure using IntelliJ IDEA

Today, I decided to play with a new language, so I gave Clojure a try. I have previously played with Lisp, so I expected some fun with brackets.

I decided to build a simple spell checker.

Setup

Install the Cursive plugin in IntelliJ. Refer to the docs.

Once installed, start by creating a new project.

Next, choose the location on the file system and press Finish.

It will install Clojure and Leiningen. Once the project is set up, you will see the generated project structure.
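To give a flavour of where this post ends up, below is a minimal sketch of the core of a spell checker. It is a simplified take on Peter Norvig’s well-known approach that only considers candidates one edit away, and the words.txt path is a placeholder for whatever word list you use:

    (ns spellchecker.core
      (:require [clojure.string :as str]))

    ;; Placeholder path; any newline-separated word list works.
    (def dictionary
      (set (str/split-lines (slurp "words.txt"))))

    (defn edits1
      "Returns all strings one edit away from word:
       deletions, substitutions, and insertions."
      [word]
      (let [alphabet "abcdefghijklmnopqrstuvwxyz"
            splits   (for [i (range (inc (count word)))]
                       [(subs word 0 i) (subs word i)])
            deletes  (for [[a b] splits :when (seq b)]
                       (str a (subs b 1)))
            replaces (for [[a b] splits :when (seq b)
                           c alphabet]
                       (str a c (subs b 1)))
            inserts  (for [[a b] splits
                           c alphabet]
                       (str a c b))]
        (set (concat deletes replaces inserts))))

    (defn suggest
      "Returns the word itself if it is known, else known words one edit away."
      [word]
      (if (dictionary word)
        [word]
        (filter dictionary (edits1 word))))

For example, (suggest "helo") would return candidates like hello and help, assuming they are in the word list.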

Continue reading “Building a Spell Checker in Clojure using IntelliJ IDEA”

What happens when teams integrate their work?

Integration is the real thing. It is where the rubber meets the road. Your teams can do their work in isolation, but it has limited value until it is integrated into your application. The whole DevOps (CI/CD) movement is about integration. You want to integrate often and deliver value faster to the customer. You can’t deliver value if you don’t integrate.

It is sad that in 2022 we are still struggling to integrate our work. We have all the tools and processes that promote integration, yet I work with so many teams struggling to integrate their work.

In this post I will cover things that are uncovered when you integrate. I am not giving any advice on how to integrate. The only advice I have on the how is that you have to make integration your top priority and do it. The sooner you do, the better.

Let’s come back to the main topic of this post. The following is a list of things that I see happen when teams integrate their work.

Continue reading “What happens when teams integrate their work?”

Useful Stuff I Read This Week

Here are 10 posts I thought were worth sharing this week.

Service mesh has evolved over the years: we started with a library based approach, then moved on to sidecar containers, and finally service mesh capabilities will become part of Linux via eBPF. We use Istio at work, and I was aware that Istio has some overhead since you run a proxy with each workload. As per this post, sidecar container based service meshes add 3-4x latency, which is a huge cost. Assuming each proxy adds 70MB of overhead, a 500 node cluster where each node runs 30 pods spends about 1TB of memory on sidecar proxies (500 × 30 × 70MB ≈ 1.05TB). eBPF is a technology to watch. It looks like the technology that will make service meshes efficient and performant. It is still early days, so we will have to wait before it becomes mainstream.

I have also seen this work. Most of the time teams don’t know or understand how they should go about achieving a large goal. Then you, as a leader, have to break the large goal down into small, manageable goals that the team can aim to achieve. I prefer goals to be realistic. They don’t have to be easy, but they shouldn’t be too difficult either. You have to build the team’s confidence, and that is built when they achieve realistic goals. This requires a leader to have clarity and to be good at decomposing problems.

Continue reading “Useful Stuff I Read This Week”

Working with Configuration Masters in Microservices Architecture

These days I am working on building a next generation mobile banking platform. One of the solutions I was designing this week was around how to handle configuration masters in Microservices. I am not talking about Microservices configuration properties here. I have not seen much written about this in the context of Microservices, so I thought I would document the solution that I am going forward with. But before we do that, let’s define what configuration masters are.

In my terminology, configuration masters are those entities of the system that are static yet configurable in nature. Examples include IFSC codes for banks, error messages, banks and their icons, account types, status types, etc. In a reasonably big application like mobile banking there will be anywhere between 50 and 100 configuration master entities. These configuration master entities have three characteristics:

  • They don’t change often. This means they can be cached
  • They don’t change often, but you still want the flexibility to update existing items or add new ones if required. Typically, they are modified either using database scripts or through APIs that some form of admin portal (used by IT operations people) calls to add new entries or modify existing ones
  • The number of rows per configuration master entity is not more than 1000. This makes them suitable for local in-memory caching, as sketched below
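To make the caching characteristic concrete, here is a minimal sketch of a local in-memory cache with a scheduled refresh. The entity, the refresh interval, and the loader are all assumptions; the loader stands in for whatever reads the configuration master table or admin API:

    (import '[java.util.concurrent Executors TimeUnit])

    ;; Stub loader; the real one would read the error-message
    ;; configuration master from the database or an admin API.
    (defn load-error-messages []
      {"ERR-001" "Daily transfer limit exceeded"})

    (defonce error-messages (atom {}))

    ;; Refresh the local in-memory cache on a fixed schedule. These
    ;; entities change rarely, so a few minutes of staleness is fine.
    (defonce refresher
      (doto (Executors/newSingleThreadScheduledExecutor)
        (.scheduleAtFixedRate #(reset! error-messages (load-error-messages))
                              0 10 TimeUnit/MINUTES)))

    (defn error-message [code]
      (get @error-messages code))

Since each entity stays under a thousand rows, holding the whole table in an atom keeps lookups cheap while the scheduled refresh picks up changes made through scripts or the admin portal.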
Continue reading “Working with Configuration Masters in Microservices Architecture”

Correctly using Postgres as a queue

I am building a central notification dispatch system that is responsible for sending different kinds of notifications to the end customer. It relies on multiple third-party APIs for sending the actual email/SMS notifications. The high-level architecture of the system is described below.

NotificationSender exposes both REST and messaging interfaces for accepting consumer requests. Consumers here are the services that need to send notifications. This is what the notification system does:

  • It accepts requests from upstream services and, after validation, stores them in the Postgres database. The notification event is written in the ENQUEUED state. It returns HTTP 202 Accepted to the upstream service if the request is valid, else it returns HTTP 400 Bad Request.
  • At a predefined frequency, a poller that is part of the NotificationDispatcher polls the Postgres database for new notification events, i.e. events in the ENQUEUED state. For now, it respects insertion time order.
  • If enqueued events are found, it processes them and sends the actual notifications using the downstream SMS and email services.
  • After processing the events, it changes their state to PROCESSED (a polling sketch follows this list)
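The excerpt above doesn’t show the post’s actual SQL, so the following is only a sketch of one common way to make such polling safe when multiple dispatcher instances run concurrently: Postgres’s FOR UPDATE SKIP LOCKED. The table, column, and state names are assumptions, and next.jdbc is used for database access:

    (require '[next.jdbc :as jdbc])

    ;; Hypothetical datasource; the real system would read
    ;; connection details from configuration.
    (def ds (jdbc/get-datasource {:dbtype "postgresql" :dbname "notifications"}))

    (defn claim-enqueued-events!
      "Atomically claims up to n ENQUEUED events in insertion-time order.
       SKIP LOCKED lets concurrent pollers grab disjoint batches without
       blocking each other or double-sending notifications."
      [n]
      (jdbc/execute! ds
        ["UPDATE notification_event
             SET state = 'PROCESSING'
           WHERE id IN (SELECT id FROM notification_event
                         WHERE state = 'ENQUEUED'
                         ORDER BY created_at
                         LIMIT ?
                         FOR UPDATE SKIP LOCKED)
          RETURNING *" n]))

After the downstream send succeeds, a second update would move the claimed rows from PROCESSING to PROCESSED.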
Continue reading “Correctly using Postgres as a queue”

Issues with mmap

A couple of months back I watched a video by Andy Pavlo, Associate Professor of Databases at Carnegie Mellon, where he made the point that databases should not use mmap. He went on to say that if there is only one thing you should take away from his database course, it is to never use mmap when building and designing database management systems. I have not used mmap before, so I was intrigued to understand it in more detail. I was aware that MongoDB used to use an mmap based storage engine. It allowed them to achieve a faster time to market, but later they had to replace it with a new storage engine, WiredTiger, because of the issues they faced with mmap. MongoDB is not the only database to have used mmap; many others do as well, including RavenDB, Elasticsearch, LevelDB, InfluxDB, LMDB, BoltDB, moss (a key-value store from Couchbase), etc.

Given that so many databases use mmap, I wanted to understand why Andy recommends against it. In this post I will list all the reasons I could find in my research and in Andy’s video. But before we do that, let’s first understand what mmap is.
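In one line: mmap maps a file into a process’s virtual address space so the file can be read like ordinary memory, with the OS paging data in on demand. Here is a minimal sketch of the idea using the JVM’s equivalent, FileChannel.map, via Clojure interop:

    (import '[java.io RandomAccessFile]
            '[java.nio.channels FileChannel$MapMode])

    (defn mmap-first-byte
      "Maps a file into memory read-only and reads its first byte.
       Page faults, not explicit read() calls, bring the data in; that
       implicit I/O is behind several of the issues databases hit with mmap."
      [path]
      (with-open [raf (RandomAccessFile. path "r")
                  ch  (.getChannel raf)]
        (let [buf (.map ch FileChannel$MapMode/READ_ONLY 0 (.size ch))]
          (.get buf 0))))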

Continue reading “Issues with mmap”

Useful Stuff I Read This Week

Here are 10 posts I thought were worth sharing this week.

This post covers how and why Hasura switched their CI service from CircleCI to Buildkite. They started by defining the requirements for their CI, then evaluated different solutions, and finally introduced the new service into their ecosystem. Their main reason for switching was cost: they reduced it by 50%, though this required them to own some aspects of their CI operations. A couple of interesting things I learnt from this post:

  • The use of labels to trigger builds. They used them to save costs.
  • The use of dynamic configuration. They wrote their build code in a Go program, which saved them from YAML hell (see the sketch after this list). Interestingly, they use shellcheck to do static analysis of their shell scripts.
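Hasura’s generator is written in Go; purely to illustrate the dynamic-configuration idea (sketched here in Clojure instead, with made-up labels and commands), a dynamic pipeline is just a program that emits steps and pipes them to buildkite-agent pipeline upload:

    ;; Run as: clojure -M -m ci.pipeline | buildkite-agent pipeline upload
    (ns ci.pipeline)

    (defn step [label command]
      (str "  - label: \"" label "\"\n"
           "    command: \"" command "\"\n"))

    (defn -main [& _]
      (println "steps:")
      ;; In a real setup these steps would be computed, e.g. from
      ;; changed paths or pull-request labels, not hard-coded.
      (print (step "unit tests" "make test"))
      (print (step "lint shell scripts" "shellcheck scripts/*.sh"))
      (flush))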

It is all about perspective. Tech debt brings out negative emotions in people, and it becomes difficult to sell to higher management. In this post, the author suggests we reframe tech debt as tech wealth when communicating with stakeholders. Building tech wealth means getting more value out of the software we’re creating, as well as out of our efforts to develop and maintain it. The author suggests two ways we can plan for tech wealth:

  • Allocate time within each planning cycle
  • Dedicate the last few cycles in a quarter

In one of the products I worked on, we used to schedule one day per sprint (two weeks) for paying down tech debt. We had a sprint demo every alternate Thursday, and the next day, Friday, was scheduled for working on tech debt items. One problem with one day every sprint is that bigger items can’t be handled that way. For those, we used to create stories and pick them up as part of the sprint backlog.

Continue reading “Useful Stuff I Read This Week”

When to use the JSON data type in database schema design?

Today, I was doing solution design for a system when I started to think about when we should use the JSON data type for columns. Coming up with the right schema design takes multiple iterations; I consider it more art than science. All the mainstream RDBMSes have JSON data type support.

  • Postgres has had a JSON data type since version 9.2, released in September 2012
  • MySQL has had a JSON data type since version 5.7.8, released in August 2015
  • SQL Server has had built-in JSON support since SQL Server 2016, released in June 2016
  • Oracle has had JSON support since version 19c, released in February 2019

They all support efficient insertion and querying of JSON data. I will not compare their JSON capabilities today. Instead, I want to answer a design question: when should a column have a JSON data type?

I use the JSON data type in the design situations mentioned below. There could be other places as well where JSON is a suitable data type.

  1. Dump request data that will be processed later
  2. Support extra fields
  3. One-to-many relationships where the many side does not need its own identity
  4. Key Value use case
  5. Simpler EAV design

Let’s talk about each of these use cases in more detail.
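To make the second use case concrete before going through the list, here is a minimal sketch using Postgres’s jsonb (its binary JSON type) via next.jdbc; the datasource, table, and field names are made up:

    (require '[next.jdbc :as jdbc])

    (def ds (jdbc/get-datasource {:dbtype "postgresql" :dbname "bank"}))

    ;; extra_fields holds attributes that vary per account type,
    ;; without requiring a schema change for every new field.
    (jdbc/execute! ds
      ["CREATE TABLE IF NOT EXISTS account (
          id           bigserial PRIMARY KEY,
          account_type text      NOT NULL,
          extra_fields jsonb     NOT NULL DEFAULT '{}')"])

    ;; Postgres can query (and index) inside the jsonb column;
    ;; ->> extracts a field as text.
    (jdbc/execute! ds
      ["SELECT id, account_type FROM account
         WHERE extra_fields ->> 'nominee_name' = ?" "Asha"])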

Continue reading “When to use the JSON data type in database schema design?”

You can’t outsource product management

There are many reasons why software projects fail. In this post I will cover one of the main reasons I think outsourced product development fails to deliver the right product at the right time: customers outsource product management as well. They think their job is done after sharing the wireframes. These wireframes are typically created by a third-party design agency, with the customer’s product team usually working closely with the agency. They will usually call the result an MVP. The only thing they take from the whole MVP concept is that it needs to be delivered to the end customer faster; they completely ignore the minimal part. I usually see MVPs with more than 500 screens, not including failure states. I know screens are not the right measure of application complexity, but during the proposal phase that is the most you will get.

Continue reading “You can’t outsource product management”

Key Insights From the Amazon Builders’ Library

For the last couple of weeks I have been going over the articles and videos in the Amazon Builders’ Library. They cover useful patterns that Amazon uses to build and operate software. Below are the important points I captured while going over the material.

  1. Amazon systems strive to solve problems using reliable constant work patterns. These work patterns have three key features:
    • One, they don’t scale up or slow down with load or stress. 
    • Two, they don’t have modes, which means they do the same operations in all conditions. 
    • Three, if they have any variation, it’s to do less work in times of stress so they can perform better when you need them most.
  2. There are not many problems that can be efficiently solved using constant work patterns.
    • For example, if you’re running a large website that requires 100 web servers at peak, you could choose to always run 100 web servers. This certainly reduces a source of variance in the system, and is in the spirit of the constant work design pattern, but it’s also wasteful. For web servers, scaling elastically can be a better fit because the savings are large. It’s not unusual to require half as many web servers off peak as during the peak.
  3. Based on the examples given in the post, it seems that the constant work pattern is suitable for use cases where system reliability, stability, and self-healing are the primary concerns, and it is fine if the system does some wasteful work and costs more. These concerns are essential for systems that others build their own systems on; I think control plane systems fall under this category. The example of such a system mentioned in the post is one that applies configuration changes to foundational AWS components like the AWS Network Load Balancer. The solution can be designed using either a push or a pull based approach, and the pull based constant work approach lends itself to a simpler and more reliable design.
  4. Although not mentioned in the post, the constant work that the system does should be idempotent in nature, as in the sketch after this list.
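To pin down what such a loop looks like, here is a minimal sketch under my own assumptions (the function names are made up): every cycle it pulls the full desired state and applies it whether or not anything changed, so the work done is the same in the calm case and the stressed case.

    (defn constant-work-loop
      "Pull-based constant work: fetch the FULL desired configuration on a
       fixed cadence and apply it idempotently. No deltas and no modes, so
       the loop behaves identically in all conditions."
      [fetch-full-config apply-config!]
      (loop []
        (apply-config! (fetch-full-config)) ;; apply-config! must be idempotent
        (Thread/sleep 10000)                ;; fixed 10-second cadence
        (recur)))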
Continue reading “Key Insights From the Amazon Builders’ Library”