We are living in a world where every other day we see a new technical innovation. For most of us mere mortals, it is not easy to quickly figure out whether a technology is worth our time. Making sense of new technical innovations is one of the core aspects of my current role, so by taking the pain of writing this series I will help myself as well.
In this post, I will cover Dapr. Dapr stands for distributed application runtime.
What does distributed application runtime mean?
These days most of us are developing distributed systems. If you are building an application that uses a microservices architecture, then you are building a distributed system.
A distributed system is a collection of autonomous computing elements that appears to its users as a single coherent system.
A distributed application runtime provides facilities that you can use to build distributed systems. These facilities include:
State management. Most applications need to talk to a datastore to store state. Common examples are PostgreSQL, Redis, etc.
Pub/sub. For communication between different components and services.
Service-to-service communication. This also includes resiliency features like retries and circuit breaking.
Observability. To bring visibility into systems.
Secret management. For storing passwords and keys.
One important thing to note is that these are application-level concerns. A distributed application runtime does not concern itself with infrastructure- or network-level concerns.
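To make this concrete, here is a minimal sketch of saving state through the Dapr sidecar's HTTP API. It assumes the sidecar is listening on its default HTTP port (3500) and that a state store component named statestore has been configured; both names are illustrative.

$ curl -X POST http://localhost:3500/v1.0/state/statestore \
    -H "Content-Type: application/json" \
    -d '[{ "key": "order-1", "value": { "status": "paid" } }]'

The application only speaks plain HTTP to its local sidecar; whether the state lands in Redis or PostgreSQL is decided by the component configuration, not by application code.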
This is the guide I wish I had when I was starting my Kubernetes journey. Kubernetes is a complex technology with many new concepts that take time to get your head around. In this guide, we will take an incremental approach to deploying applications on Kubernetes. We will cover the what and why of Kubernetes, and then we will learn how to deploy a real-world application on Kubernetes. We will first run the application locally, then in Docker containers, and finally on Kubernetes. The guide will also cover Kubernetes architecture and important Kubernetes concepts like Pods, Services, and Deployments.
In this guide, we will cover the following topics:
What is Kubernetes?
The real reasons you need Kubernetes
Deploying a real-world application on Kubernetes
What is Kubernetes?
Kubernetes is a platform for managing application containers across multiple hosts. It abstracts away the underlying hardware infrastructure and acts as a distributed operating system for your cluster.
Kubernetes is Greek for helmsman or pilot (the person holding the ship's steering wheel).
Kubernetes plays three important roles:
Kubernetes allocates and manages access to fixed resources using built-in resource abstractions like Persistent Volume Claims, Resource Quotas, Services, etc.
Kubernetes provides an abstracted control plane for scheduling, prioritizing, and running processes.
Kubernetes provides a sandboxed environment so that applications do not interfere with each other.
Kubernetes allows users to specify memory and CPU constraints on an application, and it ensures that applications remain within their limits (the manifest sketch after this list shows how these constraints are declared).
Kubernetes provides communication mechanisms so that services can talk to each other when required.
Kubernetes gives the illusion of a single, infinite compute resource by abstracting away the hardware infrastructure.
Kubernetes provides the illusion that you need not care about the underlying infrastructure. It can run on bare metal, in a data centre, on the public cloud, or even in a hybrid cloud.
Kubernetes gives the illusion that applications need not care about where they will be running.
Kubernetes provides common abstractions like Services, Ingress, autoscaling, rolling deployments, volume management, etc.
Kubernetes comes with security primitives like Namespaces and RBAC that applications can use transparently.
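To make the resource-management role concrete, here is a minimal sketch of a Pod manifest that declares CPU and memory requests and limits. The names, image, and numbers are illustrative.

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: limited-app            # illustrative name
spec:
  containers:
  - name: app
    image: nginx               # any container image works here
    resources:
      requests:                # what the scheduler reserves for the container
        cpu: "250m"
        memory: "128Mi"
      limits:                  # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "256Mi"
EOF

If the container exceeds its memory limit, Kubernetes kills and restarts it; CPU usage beyond the limit is throttled rather than killed.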
Today, I was interested to know how Docker uses cgroups to set resource limits. In this short post, I will share with you what I learnt.
I will assume that you have a machine on which Docker is installed.
Docker allows you to pass resource limits using command-line options. Let's assume that you want to limit the IO read rate to 1 MB per second for a container. You can start a new container with the --device-read-bps option as shown below:
$ docker run -it --device-read-bps /dev/sda:1mb centos
In the above command, we are instantiating a new centos container. We specified the --device-read-bps option to limit the read rate to 1 MB per second for the /dev/sda device.
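A quick way to check that the limit is in place is to write a test file inside the container and read it back with direct IO, bypassing the page cache. This sketch assumes the container's writable layer is backed by /dev/sda and that the host uses cgroup v1.

# run inside the container: create a 10 MB file, then read it back with direct IO
$ dd if=/dev/zero of=test.out bs=1M count=10 oflag=direct
$ dd if=test.out of=/dev/null bs=1M count=10 iflag=direct

The read should report a throughput of roughly 1.0 MB/s, confirming the throttle. On a cgroup v1 host you can also inspect the value Docker wrote for the container:

$ cat /sys/fs/cgroup/blkio/docker/<container-id>/blkio.throttle.read_bps_device

This file lists the device's major:minor numbers followed by the configured limit in bytes per second.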
Today, I was working with an application that uses Oracle as the database. We decided to dockerize the application to make it easy for fellow developers to work with the beast. We found a working Oracle Docker image by sath89. The Oracle 12c Docker image is close to 5.7GB on disk, so we are not talking about lightweight containers here :). Once the image was downloaded, running it was as easy as running the following command.
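For reference, starting the sath89 Oracle 12c image typically looks something like the following; the port mappings (1521 for the database listener, 8080 for the APEX web console) reflect how this image is commonly run, so treat the exact flags as an assumption.

$ docker run -d -p 8080:8080 -p 1521:1521 sath89/oracle-12c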
In the last few posts, we looked at various Docker utilities and how XL Deploy can make it easy for enterprises to adopt and use Docker. Docker streamlines software development and testing for teams that have started embracing it. The package once, deploy anywhere (PODA) capability of Docker minimises the issue of differences between environments (like staging, quality assurance, and production).
If you use Docker for Mac or something similar, Docker Compose will be installed along with it. Docker Compose has a different release timeline than Docker for Mac, so you will not be able to try the latest version of Docker Compose until you upgrade Docker for Mac. This is limiting. You should be able to install Docker Compose independently. To achieve that, you can run the following command.
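Here is a sketch of the standalone install, following the approach documented on the Docker Compose releases page; the version number below is illustrative, so substitute whichever release you want.

$ sudo curl -L "https://github.com/docker/compose/releases/download/1.14.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
$ docker-compose --version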
Today, I watched the DockerCon 2017 talk on container performance analysis given by Brendan Gregg, Senior Performance Architect at Netflix. In his talk, he shares various Linux tools that can help you understand the performance of your container platform. It is a great talk for anyone trying to do performance analysis of containers. In one of his slides, he lists the 10 tools he would use to start an investigation.
A few days back, I discovered a new Docker feature: multi-stage builds. The multi-stage build feature helps you create thin Docker images by making it possible to divide the image build process into multiple stages. Artifacts produced in one stage can be reused by another stage. This is very beneficial for languages like Java, as multiple steps are required to build the Docker image. The main advantage of the multi-stage build feature is that it can help you create smaller images. This feature is not yet available in stable versions of Docker; it will become available in Docker 17.05. To use it today, you have to use the edge version of Docker CE.
To build a Docker image for a Java application, you first need to build the Java project. The Java build process needs the JDK and a build tool like Maven, Gradle, or Ant. Once the Java binary artifact is produced, you can package the binary in a Docker image. For running a Java binary, you only need the JRE, so you don't have to pay the cost of bundling the whole JDK.
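Here is a minimal sketch of what such a multi-stage Dockerfile could look like for a Maven-based project; the image tags, paths, and jar name are illustrative.

# Stage 1: build the jar using a full JDK plus Maven
FROM maven:3.5-jdk-8 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn package -DskipTests

# Stage 2: copy only the built artifact onto a slim JRE base image
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=build /app/target/app.jar .
CMD ["java", "-jar", "app.jar"]

Only the final stage ends up in the image you ship, so the JDK, Maven, and the build stage's dependency cache add nothing to its size.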