Structuring Spring Boot Microservices Configuration

All software systems we build rely on some sort of configuration files. These files change depending on the environment in which the service is deployed. They allow us to build a single deployable unit that can be deployed to multiple environments without any code change: we swap the configuration per environment, point the service at the external location where the configuration file lives, and the service uses it to bootstrap itself.

Configuration files become unwieldy if not managed well. Incorrect configuration values are one of the major causes of system downtime, and because most teams don't write tests for their configuration, bugs are often discovered only in higher environments.

I was seeing the same problem in one of my projects. There was no clarity on which configuration properties change between environments and which remain the same. Also, in local and lower environments I don't mind keeping database credentials in the configuration files, but in higher environments I don't want them present in the code.
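The full post describes the structure I ended up with. As one small building block (not necessarily the structure the post settles on), a type-safe, validated @ConfigurationProperties class can at least catch missing or malformed values at startup instead of in a higher environment. The DatabaseProperties class name and the app.database prefix below are made up for illustration.

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.validation.annotation.Validated;

import javax.validation.constraints.Min;
import javax.validation.constraints.NotBlank;

// Binds app.database.* properties and validates them when the context starts,
// so a bad or missing value fails fast with a clear message.
@Validated
@ConfigurationProperties(prefix = "app.database")
public class DatabaseProperties {

    @NotBlank
    private String url;

    @NotBlank
    private String username;

    @Min(1)
    private int maxPoolSize = 10;

    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }

    public int getMaxPoolSize() { return maxPoolSize; }
    public void setMaxPoolSize(int maxPoolSize) { this.maxPoolSize = maxPoolSize; }
}

Register the class with @EnableConfigurationProperties(DatabaseProperties.class), and keep secrets such as passwords out of the files entirely, supplying them through environment variables or a secrets manager in higher environments.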

Continue reading “Structuring Spring Boot Microservices Configuration”

Improving Spring Data JPA/Hibernate Bulk Insert Performance by more than 100 times

This week I had to work on a performance issue. Performance issues are always fun to work on: they are an opportunity to dig into the depth of the technology we use every day and to discover how much we don't know about it. These days we are too quick to think about changing the database or the underlying library when faced with a performance bottleneck, when what we really need to do is learn the technology we are already using in depth.

The performance issue I am talking about was related to bulk insertion of data into the database. We were using Spring Data JPA with SQL Server. In our use case, the end user uploaded an Excel file that the application first parsed, then processed, and finally stored in the database. The NFR was that we should be able to process and store 100,000 records in less than 5 minutes.

Before my changes we were processing 10,000 records in 47 minutes. This certainly looked bad.

After making the changes that I discuss in this post, we were able to process 10,000 records in 20 seconds, and 100,000 records in less than 4 minutes, which is well within our NFR.
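The exact changes are covered in the full post. As a rough sketch of the kind of batching that usually drives this sort of improvement with Hibernate (assuming JDBC batching is enabled via hibernate.jdbc.batch_size and an identifier strategy that does not defeat it), inserts can be flushed in fixed-size chunks instead of one entity at a time. The UploadedRecord entity and the batch size below are illustrative, not the code from the project.

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class RecordBatchWriter {

    // Should match spring.jpa.properties.hibernate.jdbc.batch_size
    private static final int BATCH_SIZE = 50;

    @PersistenceContext
    private EntityManager entityManager;

    // Persists entities in chunks, flushing and clearing the persistence context
    // periodically so Hibernate can send batched inserts and memory stays flat.
    // UploadedRecord is a hypothetical JPA entity for the parsed Excel rows.
    @Transactional
    public void saveAll(List<UploadedRecord> records) {
        for (int i = 0; i < records.size(); i++) {
            entityManager.persist(records.get(i));
            if ((i + 1) % BATCH_SIZE == 0) {
                entityManager.flush();
                entityManager.clear();
            }
        }
        entityManager.flush();
        entityManager.clear();
    }
}

In practice you also need hibernate.jdbc.batch_size and hibernate.order_inserts set in your Spring properties, and an ID strategy other than IDENTITY, since IDENTITY generation disables Hibernate's insert batching.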

Continue reading “Improving Spring Data JPA/Hibernate Bulk Insert Performance by more than 100 times”

Problem Details for HTTP APIs with Spring Boot

If you build REST APIs you will know that HTTP status codes alone are at times not sufficient to convey enough information about an error. There is a specification, RFC 7807, that defines how to return error information to the client. It works by identifying each specific type of problem with a URI.

To make this clearer, suppose you are building a todo list application with an API to move items from one list to another. If the target list has a cap on the maximum number of items it can hold and you try to move an item into a full list, the move should fail with an error.

If all you have is an HTTP status code, you can return 403 Forbidden, but that does not tell the API client why the request was forbidden. Developers have solved this in different ways by inventing application-specific error objects. The Problem Details for HTTP APIs specification solves it by defining a standard format in which API error responses should be returned.
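The full post shows the wiring end to end, possibly with a dedicated library. As a minimal hand-rolled sketch of the idea, an exception handler can translate the domain exception into an application/problem+json response. The ListFullException and the error type URI are made up for illustration; the field names mirror RFC 7807's type, title, status, and detail members.

import java.util.LinkedHashMap;
import java.util.Map;

import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class ProblemDetailsExceptionHandler {

    // ListFullException is a hypothetical domain exception thrown when the target list is full.
    @ExceptionHandler(ListFullException.class)
    public ResponseEntity<Map<String, Object>> handleListFull(ListFullException ex) {
        Map<String, Object> problem = new LinkedHashMap<>();
        problem.put("type", "https://example.com/problems/list-full");
        problem.put("title", "Target list is full");
        problem.put("status", HttpStatus.FORBIDDEN.value());
        problem.put("detail", ex.getMessage());

        return ResponseEntity.status(HttpStatus.FORBIDDEN)
                .contentType(MediaType.APPLICATION_PROBLEM_JSON)
                .body(problem);
    }
}

A client that matches on the type URI now knows exactly which rule was violated rather than just seeing a bare 403.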

Continue reading “Problem Details for HTTP APIs with Spring Boot”

How to create a custom Spring Boot FailureAnalyzer

Today, I was working with a Spring Boot application that warms a local JVM cache on server startup. The application calls a global Redis cache and stores state that rarely changes in an in-memory JVM cache, a common pattern in many applications. In our case, the application does not just warm the cache; it first processes some data and then caches the result in the local JVM cache.

Junior developers often forget to start Redis or another dependent service, and the application then fails to start on their local machine. They end up spending several minutes reading a long Java stack trace to find the problem, and it is difficult to find the needle in that haystack.

Recently, I learnt about a Spring Boot feature called FailureAnalyzer. A FailureAnalyzer intercepts exceptions that cause an application to fail at startup and lets you replace the stack trace with a more human-readable message. The best built-in example is cyclic dependencies: when bean A depends on bean B and vice versa, Spring Boot prints a concise description of the cycle instead of a raw stack trace.
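The rest of the post builds this out properly. As a minimal sketch of the idea, a custom analyzer for the cache-warming scenario could catch the Redis connection failure and print an actionable message. RedisConnectionFailureException is Spring Data Redis's exception for a broken connection; the wording of the message is of course up to you.

import org.springframework.boot.diagnostics.AbstractFailureAnalyzer;
import org.springframework.boot.diagnostics.FailureAnalysis;
import org.springframework.data.redis.RedisConnectionFailureException;

// Turns a Redis connection failure during startup into a short, actionable message
// instead of a multi-page stack trace.
public class RedisConnectionFailureAnalyzer
        extends AbstractFailureAnalyzer<RedisConnectionFailureException> {

    @Override
    protected FailureAnalysis analyze(Throwable rootFailure, RedisConnectionFailureException cause) {
        return new FailureAnalysis(
                "The application could not connect to Redis during cache warming: " + cause.getMessage(),
                "Start your local Redis instance (for example with 'docker run -p 6379:6379 redis') "
                        + "or point spring.redis.host at a running instance, then restart the application.",
                cause);
    }
}

Failure analyzers are registered in META-INF/spring.factories under the org.springframework.boot.diagnostics.FailureAnalyzer key, so Spring Boot can discover them even though the application context never finished starting.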

Continue reading “How to create a custom Spring Boot FailureAnalyzer”

Using Java Flight Recorder to Profile Spring Boot applications

A few months back I had to do performance optimisation on a low-latency application, and the tool that helped me the most was Java Flight Recorder. Today I had to do similar work and had completely forgotten how I launched the flight recorder GUI. In this short post, I will show you the process I follow to create flight recorder recordings.

From the official documentation,

Java Flight Recorder (JFR) is a tool for collecting diagnostic and profiling data about a running Java application. It is integrated into the Java Virtual Machine (JVM) and causes almost no performance overhead, so it can be used even in heavily loaded production environments. When default settings are used, both internal testing and customer feedback indicate that performance impact is less than one percent. For some applications, it can be significantly lower. However, for short-running applications (which are not the kind of applications running in production environments), relative startup and warmup times can be larger, which might impact the performance by more than one percent. JFR collects data about the JVM as well as the Java application running on it.

Continue reading “Using Java Flight Recorder to Profile Spring Boot applications”

The Ultimate Dockerfile for Spring Boot Maven and Gradle applications

For Maven users, the ultimate Dockerfile is below.

# Stage 1: build the application with the full JDK
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app

# Copy the Maven wrapper and pom first so dependency resolution is cached as its own layer
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
RUN ./mvnw dependency:go-offline

COPY src src
RUN ./mvnw package -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)

# Stage 2: run on a smaller JRE-only image with the unpacked application
FROM openjdk:8-jre-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","FULL_NAME_OF_SPRING_BOOT_MAIN_CLASS"]

Please make sure to replace FULL_NAME_OF_SPRING_BOOT_MAIN_CLASS with the fully qualified name of your Spring Boot application's main class.

For Gradle users, the ultimate Dockerfile is below.

# Stage 1: build the application with the full JDK
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app

# Copy the Gradle wrapper and build script first so dependency resolution is cached as its own layer
COPY gradlew .
COPY gradle gradle
COPY build.gradle .
RUN ./gradlew dependencies

COPY src src
RUN ./gradlew build -x test
RUN mkdir -p build/dependency && (cd build/dependency; jar -xf ../libs/*.jar)

# Stage 2: run on a smaller JRE-only image with the unpacked application
FROM openjdk:8-jre-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/build/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","FULL_NAME_OF_SPRING_BOOT_MAIN_CLASS"]

Please make sure to replace FULL_NAME_OF_SPRING_BOOT_MAIN_CLASS with the fully qualified name of your Spring Boot application's main class.

You can watch this video to learn more about optimizing Docker images for Spring Boot applications.

Solution: ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

Today, one of the teams was facing the error ORA-12514, TNS:listener does not currently know of service requested in connect descriptor. They were trying to connect to Oracle from a Spring Boot JPA application and were getting the exception at application boot-up.

The team was able to connect to Oracle successfully using SQL Developer, but the Spring Boot JPA application failed to boot up when it tried to connect.

Like most developers, we googled for answers. The popular answer, as mentioned in this Stack Overflow question, suggests that you have to update the tnsnames.ora file and add your service to it.

I knew that was not the right answer, since we were able to connect using SQL Developer.

So, I started looking into the Spring Data JPA configuration of the application. The configuration that was giving the error is shown below.

spring.datasource.driverClassName=oracle.jdbc.driver.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@//myhost:1521/efsdev
spring.datasource.username=myuser
spring.datasource.password=mypassword
spring.jpa.database-platform=org.hibernate.dialect.Oracle12cDialect

In the above configuration, there is only one property that could possibly be wrong: spring.datasource.url.

So, I googled for the correct way to specify a JDBC URL for Oracle.

I learned that there are two ways to specify the Oracle JDBC URL:

1) jdbc:oracle:thin:@[HOST][:PORT]:SID

2) jdbc:oracle:thin:@//[HOST][:PORT]/SERVICE

As you can see, we were using the second form, in which efsdev is interpreted as the service name.

The developers confirmed that efsdev is actually the SID, so we needed to use the first form.

After changing the configuration to the one below, the application was able to connect to Oracle successfully.

spring.datasource.driverClassName=oracle.jdbc.driver.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@myhost:1521:efsdev
spring.datasource.username=myuser
spring.datasource.password=mypassword
spring.jpa.database-platform=org.hibernate.dialect.Oracle12cDialect

That’s it for this post. I hope this saves someone’s day.

Enabling Https for local Spring Boot development with mkcert

Today, I discovered mkcert, a tool that generates valid, locally trusted TLS certificates. It works for any hostname or IP, including localhost. In this post, I will show you how to generate a certificate in PKCS12 format using mkcert and then use it in a Spring Boot application.

We will start by installing mkcert on our local machine. If you are using a Mac, you can use the brew package manager; for installation instructions specific to your OS, refer to the documentation.

brew install mkcert

Once mkcert is installed, use its CLI to create a local certificate authority and install it in your system trust store. To do that, run the following command.

mkcert -install

Continue reading “Enabling Https for local Spring Boot development with mkcert”

Getting started with Apache Dubbo and Spring Boot

This week I decided to play with Apache Dubbo. I follow GitHub's trending repositories daily, and for many weeks and months Apache Dubbo has been one of the popular Java repositories there, with more than 20,000 stars, so I decided to give it a shot. One of the reasons for Dubbo's popularity is that it was created by software engineers at Alibaba, the Chinese multinational conglomerate specializing in e-commerce, retail, Internet, AI, and technology.

From its website, Apache Dubbo is

A high-performance, light weight, java based RPC framework. Dubbo offers three key functionalities:

  1. Interface based remote call
  2. Fault tolerance and load balancing
  3. Automatic service registration & discovery
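The full post walks through a working setup. As a rough sketch of the interface-based remote call from the list above (assuming the dubbo-spring-boot-starter and the @DubboService/@DubboReference annotations from recent Dubbo releases; older versions use @Service/@Reference instead), a provider and a consumer might look like this. The GreetingService API is made up for illustration.

import org.apache.dubbo.config.annotation.DubboReference;
import org.apache.dubbo.config.annotation.DubboService;
import org.springframework.stereotype.Component;

// Shared API: the consumer only ever sees this interface.
public interface GreetingService {
    String sayHello(String name);
}

// Provider side: exposes the implementation over Dubbo's RPC protocol
// and registers it with the configured registry (for example, ZooKeeper).
@DubboService
class GreetingServiceImpl implements GreetingService {
    @Override
    public String sayHello(String name) {
        return "Hello, " + name;
    }
}

// Consumer side: Dubbo looks up a provider in the registry and injects a remote proxy.
@Component
class GreetingClient {

    @DubboReference
    private GreetingService greetingService;

    public String greet(String name) {
        return greetingService.sayHello(name);
    }
}

Both applications also need the Dubbo starter's registry and protocol configuration, which the full post covers.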

Continue reading “Getting started with Apache Dubbo and Spring Boot”

Configuring Spring Cache Manager with AWS ElastiCache Redis (cluster mode disabled) and Lettuce

We have a Spring Boot 2 application that uses Redis as its cache manager. We deploy the application on AWS, where we use the ElastiCache Redis service with cluster mode disabled. Our setup consists of a Redis master with two Redis slaves. The default Java Redis client pulled in by the spring-boot-starter-data-redis dependency is lettuce-core. When you work with a single Redis node and no slaves, using AWS ElastiCache Redis is as simple as setting spring.redis.url to the ElastiCache Redis instance URL. That was our setup until a month ago. As the load on the system increased, we decided to use ElastiCache Redis in a replicated setup to scale our reads. ElastiCache for Redis implements replication in two ways:

  1. With a single shard that contains all of the cluster's data in each node: Redis (cluster mode disabled)
  2. With data partitioned across up to 15 shards: Redis (cluster mode enabled)

In our case, the cached data is less than 1 GB, so it fits in the RAM of a single node. That made us choose the cluster mode disabled setup.
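The full post covers the configuration in detail. As a minimal sketch of what the connection factory can look like for a cluster-mode-disabled replication group with Lettuce (assuming Spring Data Redis 2.1+, where RedisStaticMasterReplicaConfiguration is available, and a Lettuce version that names the setting REPLICA_PREFERRED rather than the older SLAVE_PREFERRED; the endpoint names below are placeholders), reads can be routed to replicas while writes go to the master.

import io.lettuce.core.ReadFrom;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStaticMasterReplicaConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class RedisConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // Prefer replicas for reads; writes always go to the master.
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.REPLICA_PREFERRED)
                .build();

        // Placeholder endpoints: the ElastiCache primary endpoint plus the replica endpoints.
        RedisStaticMasterReplicaConfiguration serverConfig =
                new RedisStaticMasterReplicaConfiguration("primary.example.cache.amazonaws.com", 6379);
        serverConfig.addNode("replica-1.example.cache.amazonaws.com", 6379);
        serverConfig.addNode("replica-2.example.cache.amazonaws.com", 6379);

        return new LettuceConnectionFactory(serverConfig, clientConfig);
    }
}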

Continue reading “Configuring Spring Cache Manager with AWS ElastiCache Redis (cluster mode disabled) and Lettuce”