Today I was looking at the JDK 8 `Collections.max` function declaration and noticed a weird `&` in the type declaration. Most Java developers will not remember the exact declaration, so I am writing it below.
public static <T extends Object & Comparable<? super T>> T max(Collection<? extends T> coll)
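The `&` declares an intersection type bound: `T` must be a subtype of both `Object` and `Comparable<? super T>`. The erasure of a type variable is its first bound, so spelling out `Object` first makes `max` erase to a method that returns `Object`, keeping it binary-compatible with the pre-generics version of `Collections.max`. A minimal sketch using the same bound (the null-handling here is my simplification; the real `Collections.max` throws `NoSuchElementException` on an empty collection):

```java
import java.util.Arrays;
import java.util.Collection;

public class IntersectionTypeDemo {
    // Same bound as Collections.max: the erasure of T is its FIRST bound
    // (Object), which keeps the compiled signature binary-compatible with
    // the pre-generics max(Collection) that returned Object.
    static <T extends Object & Comparable<? super T>> T max(Collection<? extends T> coll) {
        T best = null;
        for (T item : coll) {
            if (best == null || item.compareTo(best) > 0) {
                best = item;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(max(Arrays.asList(3, 1, 4, 1, 5))); // prints 5
    }
}
```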
The time to read this newsletter is 210 minutes.
The general who wins a battle makes many calculations in his temple before the battle is fought. – Sun Tzu
- All the best engineering advice I stole from non-technical people – 20 mins read. The points that resonated with me:
- Know what people are asking you to be an expert in. This helps you avoid straying too far into other people’s territory.
- Thinking is also work. This is especially true when you move to management.
- Effective teams need trust. That’s not to say that frameworks for decision making or metrics tracking are not useful, they are critical — but replacing trust with process is called bureaucracy.
- Fast and flexible observability with canonical log lines – 20 mins read. The key points for me in this post are:
- Use logfmt to make logs machine readable
- We use canonical log lines to help address this. They’re a simple idea: in addition to their normal log traces, requests (or some other unit of work that’s executing) also emit one long log line at the end that pulls all its key telemetry into one place.
- Canonical lines are an ergonomic feature. By colocating everything that’s important to us, we make it accessible through queries that are easy for people to write, even under the duress of a production incident.
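The idea above can be sketched in a few lines of Java. This is a minimal illustration, not the article's implementation; the field names are invented. Key facts are accumulated while the request runs, then emitted as one logfmt-formatted line at the end:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal canonical-log-line sketch: collect key request facts during the
// request, then emit them as ONE logfmt line (space-separated key=value
// pairs, quoting values that contain spaces).
public class CanonicalLogLine {
    private final Map<String, Object> fields = new LinkedHashMap<>();

    public CanonicalLogLine set(String key, Object value) {
        fields.put(key, value);
        return this;
    }

    public String emit() {
        StringBuilder sb = new StringBuilder("canonical-log-line");
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            String v = String.valueOf(e.getValue());
            if (v.contains(" ")) {
                v = "\"" + v + "\"";
            }
            sb.append(' ').append(e.getKey()).append('=').append(v);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String line = new CanonicalLogLine()
                .set("http_method", "POST")
                .set("http_path", "/v1/charges")
                .set("http_status", 200)
                .set("duration_ms", 42)
                .emit();
        System.out.println(line);
        // canonical-log-line http_method=POST http_path=/v1/charges http_status=200 duration_ms=42
    }
}
```

Because everything lands on one machine-readable line, a single query can slice requests by any combination of these fields during an incident.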
- Why Some Platforms Thrive and Others Don’t – 25 mins read. When evaluating an opportunity involving a platform, entrepreneurs (and investors) should analyze the basic properties of the networks it will use and consider ways to strengthen network effects. It’s also critical to evaluate the feasibility of minimizing multi-homing, building global network structures, and using network bridging to increase scale while mitigating the risk of disintermediation. That exercise will illuminate the key challenges of growing and sustaining the platform and help businesspeople develop more-realistic assessments of the platform’s potential to capture value.
- How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh – 30 mins read. A long read that makes the case for a distributed data mesh. It applies DDD principles to designing a data lake. A refreshing take on how to design data lakes.
- Re-Architecting the Video Gatekeeper – 15 mins read. This post covers how one of the Netflix tech teams used Hollow to improve the performance of their service. Hollow is a total (whole-dataset) high-density cache built by Netflix. The post covers why a near cache was suitable for their use case. This is a detailed post covering both the existing and the new architecture. I learnt a lot while reading this article.
- Our not-so-magic journey scaling low latency, multi-region services on AWS – 20 mins read. This is another detailed post covering how Atlassian built a low-latency service. They first tried DynamoDB, but that didn’t cut it for them, so they also use a Caffeine-based near cache to achieve the numbers expected from their service.
- Making Containers More Isolated: An Overview of Sandboxed Container Technologies – 25 mins read.
- Benchmarking: Do it with Transparency or don’t do it at all – 20 mins read. This is a detailed rebuttal by the OnGres team to a MongoDB blog post that dismissed the benchmark report created by OnGres.
- Deconstructing the Monolith: Designing Software that Maximizes Developer Productivity – 20 mins read. This article by Shopify is a must read for anyone planning to adopt Microservices architecture. It is practical and pragmatic. The key points for me in this post are:
- Application architecture evolves over time. The right way to think about evolution is to go from Monolith -> Modular monolith -> Microservices.
- Monolithic architecture has many advantages.
- Monolithic architecture can take an application very far since it’s easy to build and allows teams to move very quickly in the beginning to get their product in front of customers earlier.
- You’ll only need to maintain one repository, and be able to easily search and find all functionality in one folder.
- It also means only having to maintain one test and deployment pipeline, which, depending on the complexity of your application, may avoid a lot of overhead.
- One of the most compelling benefits of choosing the monolithic architecture over multiple separate services is that you can call into different components directly, rather than needing to communicate over web service APIs.
- Disadvantages of Monolithic architecture
- As the system grows, the challenge of building and testing new features increases.
- High coupling and a lack of boundaries
- Developing in Shopify required a lot of context to make seemingly simple changes. When new Shopifolk onboarded and got to know the codebase, the amount of information they needed to take in before becoming effective was massive.
- Microservices architecture increases deployment and operational complexity. The tools that work great for monolithic codebases stop working with a microservices architecture.
- A modular monolith is a system where all of the code powers a single application and there are strictly enforced boundaries between different domains.
- Approach to move to Modular monolith
- Reorganize code by real-world concepts and boundaries
- Ensure all tests work after reorganisation
- Build tools that help track progress of each component towards its goal of isolation. Shopify developed a tool called Wedge that highlights any violations of domain boundaries (when another component is accessed through anything but its publicly defined API), and data coupling across boundaries.
- According to Martin Fowler, “almost all the cases where I’ve heard of a system that was built as a microservice system from scratch, it has ended in serious trouble… you shouldn’t start a new project with microservices, even if you’re sure your application will be big enough to make it worthwhile.”
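The Wedge-style boundary check described above can be sketched as a simple rule: a cross-component reference is a violation unless the target class belongs to the target component's publicly defined API. This sketch is entirely my own illustration; Wedge is internal to Shopify, and the names here are invented:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of a Wedge-like domain-boundary check: given each
// component's declared public API, flag any cross-component class reference
// that bypasses that API.
public class BoundaryCheck {
    static boolean isViolation(String callerComponent,
                               String targetComponent,
                               String targetClass,
                               Map<String, Set<String>> publicApi) {
        if (callerComponent.equals(targetComponent)) {
            return false; // references within a component are always allowed
        }
        // Cross-component reference: allowed only via the public API.
        return !publicApi.getOrDefault(targetComponent, Set.of()).contains(targetClass);
    }
}
```

Run over the whole codebase, a check like this turns "strictly enforced boundaries" from a convention into a measurable, trackable property of each component.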
- “It’s dead, Jim”: How we write an incident postmortem – 15 mins read. I believe it is a good exercise to do a postmortem even if you don’t follow SRE practices. The key points for me in this post are:
- A postmortem is the process by which we learn from failure, and a way to document and communicate those lessons.
- Why write one?
- It allows us to document the incident, ensuring that it won’t be forgotten.
- They are the most effective mechanism we can use to drive improvement in our infrastructure.
- You should share postmortems because your customers deserve to know why their services didn’t behave as expected.
- We shouldn’t be satisfied with identifying what triggered an incident (after all, there is no root cause), but should use the opportunity to investigate all the contributing factors that made it possible, and/or how our automation might have been able to prevent this from ever happening.
- What we want is to learn why our processes allowed that mistake to happen, and to understand whether the person who made the mistake was operating under wrong assumptions.
The time to read this newsletter is 160 minutes.
We change our behavior when the pain of staying the same becomes greater than the pain of changing. Consequences give us the pain that motivates us to change. — Henry Cloud
- Advertising is a cancer on society – 20 mins read. This is a long read. The author makes many valid points on why advertising should be considered a cancer on society. Advertising is a cancer because it has the symptoms mentioned below:
- Privacy violations
- Outrage-inducing news reporting
- Decaying and ephemeral Internet services
- Some items from my “reliability list” – 15 mins read. This post makes thoughtful points that software architects should keep in mind while designing or reviewing systems.
- Can you handle rollbacks?
- Are new states forward compatible? This is related to Postel’s law:
  > Be conservative in what you do, be liberal in what you accept from others
- Do you use a strong data exchange format like Protobuf or Thrift?
- Why should you use JSON as the data exchange format between systems?
- How I built a spreadsheet app with Python to make data science easier – 15 mins read. One of the cool open source projects that I have discovered in recent times.
- Announcing PartiQL: One query language for all your data – 20 mins read. It looks like we have finally found a way to standardise on SQL for working across different data storage solutions, be it an RDBMS, NoSQL, or file-based storage. PartiQL extends SQL by adding the minimal extensions required for working with different data models. SQL won! Like it or not, SQL is still the best and most powerful query language.
- Parallelism in PostgreSQL – 15 mins read. The post covers how modern Postgres implements parallelism for sequential scans, aggregations, and B-tree scans.
- Who Actually Feels Satisfied About Money? – 20 mins read. The post makes a good point on the anxiety people have regarding money. More money does not always translate to more happiness. It’s not just how much you have — it’s what you do with it.
- Top Seven Myths of Robust Systems – 15 mins read. The number one myth we hear out in the field is that if a system is unreliable, we can fix that with redundancy. In some cases, redundant systems do happen to be more robust. In others, this is demonstrably false. It turns out that redundancy is often orthogonal to robustness, and in many cases it is absolutely a contributing factor to catastrophic failure. The problem is, you can’t really tell which of those it is until after an incident definitively proves it’s the latter.
- Safely Rewriting Mixpanel’s Highest Throughput Service in Golang – 15 mins read. This post covers how Mixpanel made use of Diffy to safely migrate a high-throughput service from Python to Golang. Diffy is a service that accepts HTTP requests and forwards them to two copies of an existing HTTP service and one copy of a candidate HTTP service.
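Diffy's core trick is the reason it needs two copies of the existing service: differences between the two identical copies reveal which parts of a response are non-deterministic noise, so only primary-vs-candidate differences that fall outside that noise count as real regressions. A minimal sketch of that comparison logic, with invented names and in-memory functions standing in for the three HTTP services:

```java
import java.util.function.Function;

// A minimal sketch of Diffy's comparison idea (names are my own, not
// Diffy's API): each request goes to two copies of the existing service
// (primary, secondary) and one candidate. Primary-vs-secondary differences
// are non-determinism ("noise"); a primary-vs-candidate difference is only
// a real regression when primary and secondary agreed on that request.
public class DiffySketch {
    public enum Verdict { MATCH, NOISE, REGRESSION }

    public static <Req, Resp> Verdict compare(
            Req request,
            Function<Req, Resp> primary,
            Function<Req, Resp> secondary,
            Function<Req, Resp> candidate) {
        Resp p = primary.apply(request);
        Resp s = secondary.apply(request);
        Resp c = candidate.apply(request);
        if (!p.equals(s)) {
            return Verdict.NOISE; // old service disagrees with itself
        }
        return p.equals(c) ? Verdict.MATCH : Verdict.REGRESSION;
    }
}
```

The real Diffy compares structured HTTP responses field by field rather than whole values, but the noise-filtering principle is the same.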
- The Business Executive’s Guide to Kubernetes – 10 mins read. A lot of useful advice on Kubernetes. The key points for me are:
- Stateful data is hard. Don’t try to reinvent AWS RDS. StatefulSets have limitations.
- Upgrading Kubernetes is hard. The advice is to run more than one Kubernetes cluster in production.
- Managed Kubernetes does not take away all the problems.
- When a rewrite isn’t: rebuilding Slack on the desktop – 15 mins read. The approach used was at once incremental and all-encompassing, rewriting a piece at a time into a gradually growing “modern” section of the application that utilized React and Redux. And the results? A 50% reduction in memory use and a 33% improvement in load time.
Video of the week