There are two popular adages in software development that people use when talking about software estimates.
Hofstadter’s Law states that it always takes longer than you expect, even when you take into account Hofstadter’s Law.
Parkinson’s Law states that work expands so as to fill the time available for its completion.
Hofstadter’s Law suggests that you will always underestimate, even if you add a buffer. Parkinson’s Law, on the other hand, suggests that if you give a task more time, it will take more time. In short, we are doomed either way. So, let’s not do any estimates. #noestimates FTW! The reality, however, is that almost every customer will ask for estimates and timelines before awarding you a project.
Estimates are required for:
Making a business case for the new project;
Knowing when the project will be delivered;
Allocating money or teams of people for some amount of time;
Working backwards from the end date.
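Since customers will keep asking for numbers, it helps to produce them in a structured way. One common technique (not something this post prescribes, just an illustration) is three-point a.k.a. PERT estimation, which weighs the likely case against the optimistic and pessimistic ones:

```python
def pert_estimate(optimistic: float, likely: float, pessimistic: float) -> float:
    """Three-point (PERT) estimate: a weighted mean that favours the likely case."""
    return (optimistic + 4 * likely + pessimistic) / 6

# Example: a task you think takes 5 days, could take 3, might take 12.
print(pert_estimate(3, 5, 12))  # 35/6, roughly 5.83 days
```

The buffer Hofstadter warns about is effectively baked into the pessimistic term rather than tacked on at the end.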
One of my least favorite parts of my current role is providing estimates for new software development project bids. In this post, I will not be talking about sprint estimates or release/milestone estimates. Instead, I will talk about product development projects. These usually involve uncertainty, ambiguity, and customers not sharing (or not knowing) complete details. Examples of such projects include a new mobile banking application, an omnichannel customer onboarding insurance app, a back-office customer 360-degree platform, an Oracle to PostgreSQL migration (stored procedures included), re-engineering a mainframe-based pricing engine into a modern real-time pricing engine, and many others. As a side note, I also actively participate in their architecture and solutioning.
Learn how to do text processing with sed, awk, and grep
Why you should be deploying Postgres primarily on Kubernetes – Link
Covers many reasons why you may want to run Postgres on Kubernetes, including production-ready configuration, connection pooling, automated backups, monitoring, high-availability setup, and a few others. It also talks about an interesting project called StackGres that automates all of this: with a few lines of YAML you can have your full setup automated and running on Kubernetes in less than an hour.
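To give a feel for "a few lines of YAML", a StackGres cluster is declared as a single custom resource. The sketch below is illustrative only (the name demo-db and the exact values are my own; check the StackGres documentation for the current schema):

```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: demo-db
spec:
  instances: 2            # one primary plus one replica
  postgres:
    version: '15'
  pods:
    persistentVolume:
      size: '10Gi'
```

Applying this with kubectl gives you a replicated Postgres cluster; backups, pooling, and monitoring are configured through sibling custom resources.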
I have been using Docker since late 2013, and for me and many others Docker has revolutionised the way we build, package, and deploy software. As a community we are grateful to Docker and its creators. Docker is one of the first tools I install on my dev machine. It used to be always running on my MacBook, and any time I wanted to try a new technology I preferred to install it using Docker: just do a docker run <tech> and you are good to go. But this has changed in the last couple of years. Docker for Mac is still installed, but I no longer keep it running. The main reasons are the amount of resources it consumes, the distracting fan noise, and the MacBook becoming too hot. There are many issues filed in the Docker for Mac issue tracker on GitHub where developers have shared similar experiences. Still, I kept using it, as there was no good alternative available.
A couple of weeks back I learnt that Docker has changed its monetization strategy. Docker Desktop (Docker for Mac and Docker for Windows) will soon require a subscription. From the Docker blog published on 31st August 2021, I quote:
Docker Desktop remains free for small businesses (fewer than 250 employees AND less than $10 million in annual revenue), personal use, education, and non-commercial open source projects.
It requires a paid subscription (Pro, Team or Business), starting at $5 per user per month, for professional use in larger businesses. You may directly purchase here, or share this post and our solution brief with your manager.
While the effective date of these terms is August 31, 2021, there is a grace period until January 31, 2022 for those that require a paid subscription to use Docker Desktop.
This week I had an interesting discussion with a close friend. We have known each other for more than twenty years, and whenever we meet we ask one another what we have learned since we last met. This one question leads to many other questions, and we spend many hours discussing different aspects of life and work.
This week I had a discussion with one of the developers in my organization on pass-through services. A pass-through service is a service that wraps an existing service without much logic; its job is simply to delegate to the downstream service. The existing service could be a legacy service or an external third-party API. The developer questioned the purpose of pass-through services and the development effort that goes into writing and maintaining them. In this post I share the reasons I gave him for why pass-through services are not a bad idea and can be helpful in the long run.
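In code, a pass-through service is little more than delegation plus a seam where cross-cutting concerns can later live. A minimal sketch (the names LegacyClient and PricingService are hypothetical, not from this post, and the stub prices stand in for a real downstream call):

```python
class LegacyClient:
    """Stand-in for the existing legacy service or third-party API."""

    def get_price(self, sku: str) -> float:
        # A real client would make a network call here.
        return {"A1": 9.99, "B2": 24.50}.get(sku, 0.0)


class PricingService:
    """Pass-through service: delegates to the downstream client.

    Today it adds no logic. Later it is the one place to add logging,
    input validation, response mapping, or retries without touching
    any of the callers.
    """

    def __init__(self, downstream: LegacyClient) -> None:
        self._downstream = downstream

    def get_price(self, sku: str) -> float:
        # Seam for future cross-cutting concerns.
        return self._downstream.get_price(sku)


service = PricingService(LegacyClient())
print(service.get_price("A1"))  # 9.99
```

The value is not in what the wrapper does today but in the insulation it gives you when the downstream service changes or is replaced.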
This week at the office we started work on a big enterprise project. We decided to go with Microservices architecture for the following reasons:
We are working on a business domain composed of multiple subdomains
The customer already has an application built on legacy technologies; we have to modernize the technology stack while scaling the application to twice the current load.
Since this is a big project we will have multiple teams working on it. Microservices architecture is well suited for a large and growing engineering team.
Help the organization become API-first. All the business APIs will be exposed through an enterprise gateway that third parties can integrate with.
All of us, or at least those of us who use LinkedIn, are quite used to seeing profile headlines like “Investor, Thought Leader, Digital Transformation Agent, Keynote Speaker, Builder” or “Keynote Speaker, Author, Writer, Builder”, or people in the Agile world who broadcast to the world all their hard-earned certificates. There are so many examples like these. People seem endlessly creative in coming up with such bloated, larger-than-life headlines.
We cover our vulnerabilities and insecurities by believing in a fake image that we build for ourselves. In the age of social media and instant gratification, we fall into a loop where we constantly want to show off that we are doing something new and evolving. We don’t want to accept that we are just like other people. We want to show that we are different, that we are in a different league.
I was also guilty of this 8 years back, when I was a technology evangelist. As far as I remember, my headline used to be “Technology Evangelist | Speaker | Writer | Traveller,” or something along those lines. I wanted to show off and tell the world that I had done so many things.
I soon realized that I was playing a status game, driven by constant validation, putting enormous pressure on myself to be first in a race where I was the only runner.
All status games are zero-sum.
Zero-sum is a situation in game theory in which one person’s gain is equivalent to another’s loss, so the net change in wealth or benefit is zero.
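The definition is easy to see in a toy game. In matching pennies (a standard game-theory example, used here purely to show the arithmetic), every outcome’s payoffs cancel:

```python
# Matching pennies: if the coins match, player 1 wins 1 from player 2;
# if they differ, player 2 wins 1 from player 1.
payoffs = {
    ("heads", "heads"): (+1, -1),
    ("heads", "tails"): (-1, +1),
    ("tails", "heads"): (-1, +1),
    ("tails", "tails"): (+1, -1),
}

# Zero-sum: in every cell, one player's gain is exactly the other's loss.
assert all(p1 + p2 == 0 for p1, p2 in payoffs.values())
```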
Today, while discussing this topic with a friend, I realized that we do this because most of us have not achieved much, or have not accomplished a body of work that we can be proud of. There is so much pressure to be successful, for whatever definition of success we have set for ourselves. We cover this fear, call it insecurity, by projecting a larger-than-life image. Deep inside we know that we lack creativity and that nothing of ours will outlive us.
Last week I was writing down why enterprises should use OpenShift as the foundation for building their enterprise platform, and I came up with the following points.
When building an internal Microservices platform for an organization, Kubernetes is just the foundation; you need many more tools and workflows to build a platform. OpenShift is a Kubernetes superset, combining over 200 open source projects into a fully integrated solution with a strong focus on developer experience, operational capabilities, monitoring, and management, with strong and secure defaults. The open source projects include Istio, Argo, Prometheus, Jaeger, ELK, Keycloak, etc. OpenShift ships a supported Kubernetes along with many other supported components.
OpenShift is secure by default.
It runs on Red Hat Enterprise Linux CoreOS, a container-optimized operating system that reduces the surface area for attacks.
You can define upgrade windows and schedule upgrades within them.
OpenShift is certified with over 200 ISVs. These include Finacle, Cloudera, MongoDB, SAS Viya, and many others.
OpenShift is available as a managed cloud offering on all three major clouds: Red Hat OpenShift Service on AWS, Azure Red Hat OpenShift, and Red Hat OpenShift Container Platform on GCP.
It allows you to manage multiple clusters through a single pane of glass.
In the last few months I have given a lot of thought to the minimal technical documentation that every project should have. I consider it essential for building a quick understanding of a project and for onboarding new developers quickly. These documents should live in the same version control repository as the code, and be maintained just like the code.