Issue #34: 10 Reads, A Handcrafted Weekly Newsletter For Software Developers


The time to read this newsletter is 210 minutes.

The general who wins a battle makes many calculations in his temple before the battle is fought. – Sun Tzu

  1. All the best engineering advice I stole from non-technical people. 20 mins read. The points that resonated with me:
    1. Know what people are asking you to be an expert in. This helps you avoid straying too far into other people’s territory.
    2. Thinking is also work. This is especially true when you move to management.
    3. Effective teams need trust. That’s not to say that frameworks for decision making or metrics tracking are not useful; they are critical. But replacing trust with process is called bureaucracy.
  2. Fast and flexible observability with canonical log lines. 20 mins read. Canonical logging is a simple technique: in addition to their normal log traces, requests (or some other unit of work that’s executing) also emit one long log line at the end that pulls all their key telemetry into one place. The key points for me in this post are:
    1. Use logfmt to make logs machine readable. A minimal sketch of a logfmt canonical line appears after the list.
    2. Canonical lines are an ergonomic feature. By colocating everything that’s important to us, we make it accessible through queries that are easy for people to write, even under the duress of a production incident.
  3. Why Some Platforms Thrive and Others Don’t. 25 mins read. When evaluating an opportunity involving a platform, entrepreneurs (and investors) should analyze the basic properties of the networks it will use and consider ways to strengthen network effects. It’s also critical to evaluate the feasibility of minimizing multi-homing, building global network structures, and using network bridging to increase scale while mitigating the risk of disintermediation. That exercise will illuminate the key challenges of growing and sustaining the platform and help businesspeople develop more-realistic assessments of the platform’s potential to capture value.
  4. How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh. 30 mins read. A long read that makes the case for a distributed data mesh. It applies DDD principles to the design of a data lake. A refreshing take on how to design data lakes.
  5. Re-Architecting the Video Gatekeeper. 15 mins read. This post covers how one of the Netflix tech teams used Hollow, a total high-density near cache built by Netflix, to improve the performance of their service. It explains why a near cache was suitable for their use case and walks through both the existing and the new architecture in detail. I learnt a lot while reading this article. A rough sketch of Hollow’s producer/consumer model appears after the list.
  6. Our not-so-magic journey scaling low latency, multi-region services on AWS. 20 mins read. This is another detailed post, covering how Atlassian built a low-latency service. They first tried DynamoDB alone, but it didn’t cut it for them, so they added a Caffeine-based near cache to achieve the numbers expected of their service. A minimal near-cache sketch appears after the list.
  7. Making Containers More Isolated: An Overview of Sandboxed Container Technologies. 25 mins read.
  8. Benchmarking: Do it with Transparency or don’t do it at all. 20 mins read. This is a detailed rebuttal by the Ongress team to a MongoDB blog post that dismissed the benchmark report created by Ongress.
  9. Deconstructing the Monolith: Designing Software that Maximizes Developer Productivity. 20 mins read. This article by Shopify is a must-read for anyone planning to adopt a microservices architecture. It is practical and pragmatic. The key points for me in this post are:
    1. Application architecture evolves over time. The right way to think about that evolution is Monolith -> Modular monolith -> Microservices.
    2. Monolithic architecture has many advantages.
      1. Monolithic architecture can take an application very far since it’s easy to build and allows teams to move very quickly in the beginning to get their product in front of customers earlier.
      2. You’ll only need to maintain one repository, and be able to easily search and find all functionality in one folder.
      3. It also means only having to maintain one test and deployment pipeline, which, depending on the complexity of your application, may avoid a lot of overhead.
      4. One of the most compelling benefits of choosing the monolithic architecture over multiple separate services is that you can call into different components directly, rather than needing to communicate over web service APIs.
    3. Disadvantages of Monolithic architecture
      1. As the system grows, the challenge of building and testing new features increases.
      2. High coupling and a lack of boundaries
      3. Developing in Shopify required a lot of context to make seemingly simple changes. When new Shopifolk onboarded and got to know the codebase, the amount of information they needed to take in before becoming effective was massive.
    4. Microservices architecture increases deployment and operational complexity. Tools that work great for monolithic codebases stop working with a microservices architecture.
    5. A modular monolith is a system where all of the code powers a single application and there are strictly enforced boundaries between different domains.
    6. Their approach to moving to a modular monolith:
      1. Reorganize code by real-world concepts and boundaries
      2. Ensure all tests work after the reorganization
      3. Build tools that help track the progress of each component towards its goal of isolation. Shopify developed a tool called Wedge that highlights any violations of domain boundaries (when another component is accessed through anything but its publicly defined API) and data coupling across boundaries. A hypothetical sketch of such a boundary check appears after the list.
    7. According to Martin Fowler, “almost all the cases where I’ve heard of a system that was built as a microservice system from scratch, it has ended in serious trouble… you shouldn’t start a new project with microservices, even if you’re sure your application will be big enough to make it worthwhile.”
  10. “It’s dead, Jim”: How we write an incident postmortem. 15 mins read. I believe it is a good exercise to write a postmortem even if you don’t follow SRE practices. The key points for me in this post are:
    1. A postmortem is the process by which we learn from failure, and a way to document and communicate those lessons.
    2. Why write one?
      1. It allows us to document the incident, ensuring that it won’t be forgotten.
      2. Postmortems are the most effective mechanism we can use to drive improvement in our infrastructure.
    3. You should share postmortems because your customers deserve to know why their services didn’t behave as expected.
    4. We shouldn’t be satisfied with identifying what triggered an incident (after all, there is no root cause), but should use the opportunity to investigate all the contributing factors that made it possible, and/or how our automation might have been able to prevent this from ever happening.
    5. What we want is to learn why our processes allowed that mistake to happen, and to understand if the person who made the mistake was operating under wrong assumptions.
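
To make the canonical log line idea from read #2 concrete, here is a minimal sketch in Java of emitting one logfmt-formatted line at the end of a request. The field names and values are illustrative assumptions, not taken from the article:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal sketch of a canonical log line: one logfmt-formatted line emitted
// at the end of a request, pulling all its key telemetry into one place.
public class CanonicalLogLine {

    public static void main(String[] args) {
        // Illustrative fields; a real service would collect these over the
        // lifetime of the request and flush them once at the end.
        Map<String, Object> fields = new LinkedHashMap<>();
        fields.put("canonical-log-line", true);
        fields.put("http_method", "POST");
        fields.put("http_path", "/v1/charges");
        fields.put("http_status", 200);
        fields.put("user_id", "usr_123");
        fields.put("duration_ms", 42);
        fields.put("db_queries", 3);
        System.out.println(toLogfmt(fields));
        // canonical-log-line=true http_method=POST http_path=/v1/charges ...
    }

    // logfmt: space-separated key=value pairs; quote values containing spaces.
    static String toLogfmt(Map<String, Object> fields) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            if (sb.length() > 0) sb.append(' ');
            String v = String.valueOf(e.getValue());
            sb.append(e.getKey()).append('=')
              .append(v.contains(" ") ? '"' + v + '"' : v);
        }
        return sb.toString();
    }
}
```

Because every field lives on one machine-parseable line, a single query filter can answer questions like “show me all slow POST requests for this user” without stitching together multiple trace lines.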
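For read #5, a rough sketch of Hollow’s producer/consumer model, which is what makes it a “total” near cache: every consumer pulls the entire dataset into local memory and refreshes it when a new version is announced. This follows Hollow’s getting-started documentation rather than the Gatekeeper implementation; the filesystem publisher/retriever classes come from the Hollow library, but constructor signatures vary across versions, so treat it as an assumption-laden illustration:

```java
import java.nio.file.Path;

import com.netflix.hollow.api.consumer.HollowConsumer;
import com.netflix.hollow.api.consumer.fs.HollowFilesystemAnnouncementWatcher;
import com.netflix.hollow.api.consumer.fs.HollowFilesystemBlobRetriever;
import com.netflix.hollow.api.producer.HollowProducer;
import com.netflix.hollow.api.producer.fs.HollowFilesystemAnnouncer;
import com.netflix.hollow.api.producer.fs.HollowFilesystemPublisher;

public class HollowNearCacheSketch {

    // A plain POJO; Hollow derives the data model from its fields.
    static class Movie {
        final long id;
        final String title;
        Movie(long id, String title) { this.id = id; this.title = title; }
    }

    public static void main(String[] args) {
        Path publishDir = Path.of("/tmp/hollow-publish");

        // Producer side: each cycle writes a snapshot (or delta) of the
        // whole dataset and announces the new version.
        HollowProducer producer = HollowProducer
                .withPublisher(new HollowFilesystemPublisher(publishDir))
                .withAnnouncer(new HollowFilesystemAnnouncer(publishDir))
                .build();
        producer.runCycle(state -> {
            state.add(new Movie(1L, "The Matrix"));
            state.add(new Movie(2L, "Heat"));
        });

        // Consumer side: pull the entire dataset into local memory (hence
        // "total" near cache) and refresh when a new version is announced.
        HollowConsumer consumer = HollowConsumer
                .withBlobRetriever(new HollowFilesystemBlobRetriever(publishDir))
                .withAnnouncementWatcher(new HollowFilesystemAnnouncementWatcher(publishDir))
                .build();
        consumer.triggerRefresh();

        // From here, reads are in-process memory lookups; typed accessors are
        // normally code-generated from the schema by Hollow's API generator.
        System.out.println("loaded data version: " + consumer.getCurrentVersionId());
    }
}
```

In production the filesystem publisher would typically be swapped for a blob store such as S3; the point is that once the consumer has refreshed, reads never leave its process.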
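For read #6, a minimal sketch of a Caffeine-based near cache sitting in front of a slower remote store. The tuning values and the fetchFromStore placeholder are my assumptions, not Atlassian’s actual configuration:

```java
import java.time.Duration;

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

// A minimal sketch of a near cache: hot reads are served from local memory,
// misses fall through to the remote store the cache is shielding.
public class NearCacheSketch {

    private final LoadingCache<String, String> cache = Caffeine.newBuilder()
            .maximumSize(100_000)                      // bound memory use
            .expireAfterWrite(Duration.ofMinutes(10))  // hard staleness limit
            .refreshAfterWrite(Duration.ofMinutes(1))  // async refresh keeps reads fast
            .build(this::fetchFromStore);

    public String get(String key) {
        // Hits are local-memory lookups; misses load via fetchFromStore.
        return cache.get(key);
    }

    private String fetchFromStore(String key) {
        // Placeholder for the remote read (e.g. DynamoDB) behind the cache.
        return "value-for-" + key;
    }

    public static void main(String[] args) {
        NearCacheSketch svc = new NearCacheSketch();
        System.out.println(svc.get("tenant-42")); // first call loads; later calls hit cache
    }
}
```

refreshAfterWrite is the interesting knob for latency: entries past the refresh window are served stale from memory while being reloaded asynchronously, so hot keys rarely pay the remote round trip.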
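Finally, for read #9: the Shopify post describes Wedge only at a high level, so here is a hypothetical, much-simplified sketch of the idea behind such a boundary check: flag any source file that imports another component’s internals instead of its public API. The package convention (an .internal. segment marks private code) is purely an assumption for illustration:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical, simplified take on a Wedge-style boundary check: a component's
// private code lives under an ".internal." package, and only code inside that
// component's own directory may import it.
public final class BoundaryCheck {

    public static void main(String[] args) throws IOException {
        Path root = Path.of(args.length > 0 ? args[0] : "src/main/java");
        try (var files = Files.walk(root)) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .forEach(BoundaryCheck::check);
        }
    }

    private static void check(Path file) {
        try {
            List<String> lines = Files.readAllLines(file);
            for (String line : lines) {
                if (line.startsWith("import ") && line.contains(".internal.")) {
                    String component = componentOf(line);
                    // Accessing another component through anything but its
                    // publicly defined API is a boundary violation.
                    if (component != null && !file.toString().contains("/" + component + "/")) {
                        System.out.printf("VIOLATION %s: %s%n", file, line.trim());
                    }
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // For "import com.shop.billing.internal.InvoiceService;" return "billing":
    // the package segment immediately before "internal".
    private static String componentOf(String importLine) {
        String[] parts = importLine.substring("import ".length()).split("\\.");
        for (int i = 0; i + 1 < parts.length; i++) {
            if (parts[i + 1].equals("internal")) return parts[i];
        }
        return null;
    }
}
```

Run against a source tree, it prints one VIOLATION line per cross-component import, which could be tracked over time the way the post describes Shopify tracking each component’s progress towards isolation.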
