Issue #39: 10 Reads, A Handcrafted Weekly Newsletter For Software Developers

The time to read this newsletter is 145 minutes.

Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat. – Sun Tzu

  1. Using the hunger I experienced as a kid to teach mine about generosity: 10 mins read. We all become too selective and choosy when it comes to helping others; we don’t want to offer the best we have. These are some of the best words I have read in a long time:

    > When you give the best you have to someone in need, it translates into something much deeper to the receiver. It means they are worthy.
    >
    > If it’s not good enough for you, it’s not good enough for those in need either. Giving the best you have does more than feed an empty belly—it feeds the soul.

  2. Calendar Versioning: 10 mins read. CalVer is a versioning convention based on your project’s release calendar, instead of arbitrary numbers.
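
    For illustration, here is a minimal sketch of deriving a CalVer-style version string in Python. The year.month.patch layout is an assumption; every project defines its own scheme.

    ```python
    # Minimal CalVer sketch: derive the version from the release date.
    # The YYYY.MM.PATCH layout is one common choice, not a universal rule.
    from datetime import date

    def calver(patch: int = 0) -> str:
        today = date.today()
        return f"{today.year}.{today.month:02d}.{patch}"

    print(calver(1))  # e.g. "2019.11.1"
    ```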

  3. Doing a database join with CSV files: 10 mins read. xsv is a tool that you can use to join two CSV files. The author shows examples of inner join, left join, and right join. Very useful indeed.
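
    The post uses xsv, but the underlying idea is just a keyed join. Here is a rough Python sketch of an inner join over two CSV files; the file names and the user_id column are hypothetical.

    ```python
    import csv

    # Build a lookup table from the smaller file, then stream the larger one.
    with open("users.csv", newline="") as f:
        users = {row["user_id"]: row for row in csv.DictReader(f)}

    with open("orders.csv", newline="") as f:
        for order in csv.DictReader(f):
            user = users.get(order["user_id"])
            if user is not None:              # inner join: keep only matching rows
                print({**user, **order})
    ```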

  4. SQL, NoSQL, and Scale: How DynamoDB scales where relational databases don’t: 20 mins read. This post provides a good overview of why relational databases fail to scale and how DynamoDB can be used to build web-scale applications.

  5. Why databases use ordered indexes but programming uses hash tables: 15 mins read. This post explains why databases use B-trees and programs use hash tables. The main reasons shared by the author are listed below (a small sketch of the first point follows the list):

    1. Ordered data structures perform much better when n is large. With hash-based collections, collisions can degrade performance to O(n), and range queries become O(n) if implemented with hash tables.
    2. Ordering helps with indexes: one ordered index can be reused in multiple ways, whereas with hash tables we have to build separate indexes.
    3. Ordered collections achieve locality of reference.
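
    To make point 1 concrete, here is a small sketch contrasting a range query over a sorted list with the same query over a hash-based set. The data is made up; the point is the access pattern.

    ```python
    import bisect

    ids_sorted = list(range(0, 1_000_000, 3))   # plays the role of an ordered index
    ids_hashed = set(ids_sorted)                # plays the role of a hash index

    def range_query_sorted(lo, hi):
        # Binary search narrows the work to the matching slice: O(log n + k)
        i = bisect.bisect_left(ids_sorted, lo)
        j = bisect.bisect_right(ids_sorted, hi)
        return ids_sorted[i:j]

    def range_query_hashed(lo, hi):
        # A hash set has no order, so every element must be examined: O(n)
        return [x for x in ids_hashed if lo <= x <= hi]

    print(len(range_query_sorted(100, 200)))  # same answer...
    print(len(range_query_hashed(100, 200)))  # ...but this one scanned the whole set
    ```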
  6. Xor Filters: Faster and Smaller Than Bloom Filters: 15 mins read. In this post, the author talks about xor filters, which solve the problem of checking whether an item exists in a set, such as a cache. Usually we solve such problems with a hash-based collection, but xor filters work too: they take a bit longer to build, but once built they use less memory and are about 25% faster. Bloom filters and cuckoo filters are two other common approaches to this kind of problem.
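
    Xor filters answer the same question a Bloom filter does: “is this item possibly in the set?”, with no false negatives and a small false-positive rate. As a point of reference, here is a toy Bloom filter sketch; the bit-array size and hash count are arbitrary, and this is not how xor filters themselves work.

    ```python
    import hashlib

    class ToyBloomFilter:
        """Approximate membership: 'definitely not present' or 'probably present'."""

        def __init__(self, size_bits: int = 8192, num_hashes: int = 3):
            self.size = size_bits
            self.k = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: str):
            # Derive k bit positions from independent-ish hashes of the item.
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item: str):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item: str) -> bool:
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

    bf = ToyBloomFilter()
    bf.add("user:42")
    print(bf.might_contain("user:42"))  # True
    print(bf.might_contain("user:99"))  # almost certainly False
    ```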

  7. Distributed architecture concepts I learned while building a large payments system: 20 mins read. The author describes important distributed system concepts, covering consistency, durability, SLAs, and more.

  8. From 15,000 database connections to under 100: DigitalOcean’s tale of tech debt: 20 mins read. This post by DigitalOcean is a must-read for every developer. It describes how they incrementally moved their legacy database-backed message queue to one based on RabbitMQ (a rough sketch of the two models follows the list). Key points from the post are:

    1. Like GitHub, Shopify, and Airbnb, DigitalOcean began as a Rails application in 2011. The Rails application, internally known as Cloud, managed all user interactions in both the UI and public API. Aiding the Rails service were two Perl services: Scheduler and DOBE (DigitalOcean BackEnd). Scheduler scheduled and assigned Droplets to hypervisors, while DOBE was in charge of creating the actual Droplet virtual machines. While the Cloud and Scheduler ran as stand-alone services, DOBE ran on every server in the fleet.
    2. For four years, the database message queue formed the backbone of DigitalOcean’s technology stack. During this period, we adopted a microservice architecture, replaced HTTPS with gRPC for internal traffic, and ousted Perl in favor of Golang for the backend services. However, all roads still led to that MySQL database.
    3. By the start of 2016, the database had over 15,000 direct connections, each one querying for new events every one to five seconds. If that was not bad enough, the SQL query that each hypervisor used to fetch new Droplet events had also grown in complexity. It had become a colossus over 150 lines long and JOINed across 18 tables.
    4. When Event Router went live, it slashed the number of database connections from over 15,000 to less than 100.
    5. Unfortunately, removing the database’s message queue was not an easy feat. The first step was preventing services from having direct access to it. The database needed an abstraction layer.
    6. Now the real work began. Having complete control of the event system meant that Harpoon had the freedom to reinvent the Droplet workflow.
    7. Harpoon’s first task was to extract the message queue responsibilities from the database into itself. To do this, Harpoon created an internal messaging queue of its own that was made up of RabbitMQ and asynchronous workers. As of this writing in 2019, this is where the Droplet event architecture stands.
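
    To make the contrast in the list above concrete, here is a rough sketch of the two models: consumers polling a database table for events versus a producer publishing to RabbitMQ and the broker pushing to consumers. The queue name, event payload, and connection details are hypothetical; the sketch uses the pika client library.

    ```python
    import pika  # RabbitMQ client library

    # Old model (simplified): every consumer runs a query like
    #   SELECT ... FROM droplet_events WHERE hypervisor = %s AND processed = 0
    # on a 1-5 second timer, which is how 15,000 open connections accumulate.

    # New model (simplified): publish once, let the broker deliver to consumers.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="droplet-events", durable=True)

    # Producer side: emit an event instead of inserting a row to be polled.
    channel.basic_publish(exchange="", routing_key="droplet-events",
                          body='{"droplet_id": 123, "action": "create"}')

    # Consumer side: the broker pushes events as they arrive; there is no polling loop.
    def handle(ch, method, properties, body):
        print("got event:", body)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="droplet-events", on_message_callback=handle)
    channel.start_consuming()  # blocks and waits for deliveries
    ```
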
  9. Why do we need distributed systems?: 10 mins read. We build distributed systems because
    1. Distributed systems offer better availability
    2. Distributed systems offer better durability
    3. Distributed systems offer better scalability
    4. Distributed systems offer better efficiency
  10. On Kubernetes, Hybrid and Multi-cloud: 15 mins read. The key points in the post are:
    1. The first thing to consider is agility—cloud services offer significant advantages on how quickly you can spin infrastructure up and down, allowing you to concentrate on creating value on the software and data side.
    2. But the flip side of this agility is our second factor, which is cost. The agility and convenience of cloud infrastructure comes with a price premium that you pay over time, particularly for “higher level” services than raw compute and storage.
    3. The third factor is control. If you want full control over the hardware or network or security environment that your data lives in, then you will probably want to manage that on-premises.

Tools I discovered this week

  1. Broot: It is a CLI tool that you can use to get an overview of a directory, even a big one. It is written in the Rust programming language. I use it as an alternative to the tree command.
  2. xsv: It is a CLI tool for working with CSV files. It can concatenate, count, join, flatten, and much more. It is a Swiss Army knife for CSV, written in the Rust programming language.
  3. pigz: A parallel implementation of gzip.

Video of the week

Issue #38: 10 Reads, A Handcrafted Weekly Newsletter For Software Developers

The time to read this newsletter is 135 minutes.

The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge. – Stephen Hawking

  1. System design hack: Postgres is a great pub/sub & job server: 10 mins read. I have read multiple times that people are using Postgres as a job queue or as a pub/sub solution. It does require you to mess with SQL and write PL/pgSQL functions, but I think it could be a good solution if you don’t want to manage another pub/sub server.
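
    A minimal sketch of the job-queue side of this idea, using plain Postgres from Python. The jobs table, its columns, and the connection string are assumptions; FOR UPDATE SKIP LOCKED (Postgres 9.5+) is what lets several workers pull jobs concurrently without grabbing the same row.

    ```python
    import psycopg2

    conn = psycopg2.connect("dbname=app")  # connection details are an assumption

    def claim_one_job():
        """Atomically claim the oldest pending job; rows locked by other workers are skipped."""
        with conn, conn.cursor() as cur:
            cur.execute("""
                UPDATE jobs
                   SET status = 'running'
                 WHERE id = (SELECT id FROM jobs
                              WHERE status = 'pending'
                              ORDER BY created_at
                              LIMIT 1
                              FOR UPDATE SKIP LOCKED)
                RETURNING id, payload
            """)
            return cur.fetchone()  # None when the queue is empty

    job = claim_one_job()
    if job:
        print("working on", job)
    ```
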
  2. Developers mentoring other developers: practices I’ve seen work well: 20 mins read. The article covers how we can build good mentorship programs at our work.
  3. Head In The Clouds: 15 mins read. This article covers how folks at FreeAgent planned their cloud migration journey. The key points from the post are:
    1. Co-locating has been a terrific win for us over the years, providing us with a cost-effective, high performance compute platform that has allowed us to scale to over 95,000 customers with close to 5 9’s reliability.
    2. Growth often acts as a forcing function with regard to infrastructure. Headcount has doubled, and the customer count is growing quickly.
    3. Desire for new features is another forcing function. They wanted more datacenters to increase resilience. They were reaching hardware limitations. The ops team was stretched, and it was challenging to find ops engineers with the right skills. They were experimenting with ML. Serverless was becoming a go-to for production. They wanted to improve deployment. And scaling the database was a challenge.
    4. Experiments were run to research moving to AWS: Granted, any infrastructure migration would be expensive, the project complex and it would come with many challenges, but the advantages and opportunities that a full cloud migration would open up in the future were undeniable.
    5. The decision was made to migrate to AWS!
    6. Early on in the R&D phase we became customers of Gruntwork.io and have relied heavily on their Infrastructure as Code library and training to accelerate the project.
  4. We built network isolation for 1,500 services to make Monzo more secure: 20 mins read. In the Security team at Monzo, one of our goals is to move towards a completely zero trust platform. This means that in theory, we’d be able to run malicious code inside our platform with no risk – the code wouldn’t be able to interact with anything dangerous without the security team granting special access.
  5. Scaling in the presence of errors—don’t ignore them: 20 mins read. The secret to error handling at scale isn’t giving up, ignoring the problem, or even just trying again—it is structuring a program for recovery, making errors stand out, and allowing other parts of the program to make decisions. Techniques like fail-fast, crash-only software, and process supervision, but also things like clever use of version numbers, and occasionally the odd bit of statelessness or idempotence. What these all have in common is that they’re all methods of recovery. Recovery is the secret to handling errors, especially at scale. Giving up early so other things have a chance, continuing on so other things can catch up, restarting from a clean state to try again, saving progress so that things do not have to be repeated. That, or put it off for a while: buy a lot of disks, hire a few SREs, and add another graph to the dashboard.
  6. Modern Data Practice and the SQL Tradition: 15 mins read. Over the last year I have read multiple posts suggesting we should start with the relational database route. SQL is becoming the de facto language for all things data. Most developers start looking at alternatives too early, before understanding the pros and cons of the technology they are using. The key points from the post are listed below (a small sketch of the JSONB approach in point 2 follows the list):
    1. The more I work with existing NoSQL deployments however, the more I believe that their schemaless nature has become an excuse for sloppiness and unwillingness to dwell on a project’s data model beforehand.
    2. One can now model the “known” part of his data model in a typical relational manner and dump his “raw and unstructured” data into JSON columns. No need to “denormalize all the things” just because some element of the domain is “unstructured”.
    3. The good thing with this approach is that one can have a single database for both their structured and unstructured data without sacrificing ACID-compliance.
    4. SQL and relational databases have come a long way and nowadays offer almost any function a data scientist could ask for.
    5. Relational databases usually make more sense financially too. Distributed systems like MongoDB and ElasticSearch are money-hungry beasts and can kill your technology and human resources budget; unless you are absolutely certain and have run the numbers and decided that they do really make sense for your case.
    6. Performance and stability with relational databases can be better out of the box
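
    The “known columns plus a JSON column” approach from point 2 looks roughly like this with psycopg2; the table and fields are made up.

    ```python
    import psycopg2

    conn = psycopg2.connect("dbname=app")  # connection details are an assumption
    with conn, conn.cursor() as cur:
        # The structured part of the model stays relational; the raw payload goes into JSONB.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS events (
                id         bigserial PRIMARY KEY,
                user_id    bigint NOT NULL,
                created_at timestamptz NOT NULL DEFAULT now(),
                payload    jsonb NOT NULL
            )
        """)
        cur.execute("INSERT INTO events (user_id, payload) VALUES (%s, %s::jsonb)",
                    (42, '{"source": "mobile", "tags": ["beta"]}'))
        # Query into the unstructured part with JSON operators, still inside one ACID database.
        cur.execute("SELECT id FROM events WHERE payload ->> 'source' = %s", ("mobile",))
        print(cur.fetchall())
    ```
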
  7. Hash join in MySQL 8: 10 mins read. You should read this blog if you want to learn how hash joins are implemented by databases. It will give you a good, detailed understanding of the subject (a rough sketch of the build-and-probe idea follows).
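
    At its core a hash join is a build phase over the smaller input and a probe phase over the larger one. A rough Python sketch of that idea (this is the general technique, not MySQL’s implementation):

    ```python
    from collections import defaultdict

    def hash_join(small, large, key):
        """Join two lists of dicts on `key`: build a hash table on the smaller side,
        then probe it once per row of the larger side."""
        buckets = defaultdict(list)
        for row in small:                          # build phase: O(len(small))
            buckets[row[key]].append(row)
        for row in large:                          # probe phase: O(len(large))
            for match in buckets.get(row[key], []):
                yield {**match, **row}

    customers = [{"customer_id": 1, "name": "Ada"}, {"customer_id": 2, "name": "Linus"}]
    orders = [{"order_id": 10, "customer_id": 1}, {"order_id": 11, "customer_id": 2}]
    print(list(hash_join(customers, orders, "customer_id")))
    ```
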
  8. Managing a Go monorepo with Bazel: 10 mins read. I don’t think we have a clear winner yet between the monorepo and multi-repo approaches to building microservices. Big organisations like Google and Facebook prefer the monorepo approach, while organisations like Netflix recommend multiple repos. This post covers how you can manage a Go monorepo using the Bazel build tool. I have not used Bazel so far, but I am seriously considering it for my personal projects.
  9. The Value in Go’s Simplicity: 10 mins read. Go is one language that I really want to spend more time on. It is a popular language used almost everywhere these days. In this blog, the author makes the case for Go’s simplicity. As the author mentions, the Go core development team has taken simplicity to another level: to keep the language simple, they have held back many sought-after features, such as generics.
  10. When XML beats JSON: UI layouts: 5 mins read. UI layouts are represented as component trees. And XML is ideal for representing tree structures. It’s a match made in heaven! In fact, the most popular UI frameworks in the world (HTML and Android) use XML syntax to define layouts.

Video of the week

Issue #36: 10 Reads, A Handcrafted Weekly Newsletter For Software Developers

The time to read this newsletter is 200 minutes.

Religion is the opium of the masses – Karl Marx

  1. A Technical Introduction to MemSQL: 20 mins read. MemSQL is a fast, commercial, ANSI SQL compliant, highly scalable HTAP database. HTAP databases are those that support both OLTP and OLAP workloads. It supports ACID transactions just like a regular relational database, and it also supports document and geospatial data types. I have also written a quick post on MemSQL that you can read.

  2. It’s later than you think: 20 mins read. We all regret working too hard in the end. Give it a read; it is an awesome write-up of a heartbreaking story.

  3. Modern applications at AWS: 10 mins read. To succeed in using application development to increase agility and innovation speed, organizations must adopt five elements, in any order: microservices; purpose-built databases; automated software release pipelines; a serverless operational model; and automated, continuous security.

  4. 1 Year of Event Sourcing and CQRS: 30 mins read. This is a long read that covers DDD, CQRS, and Event Sourcing. In this post the author covers how they implemented this architecture style and the issues they faced (a minimal event-sourcing sketch follows).
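
    The core mechanic of event sourcing is small enough to sketch: current state is never stored directly, it is rebuilt by replaying an append-only event log. The event names and the bank-account example are made up.

    ```python
    # Minimal event-sourcing sketch: the balance is always derived from the log.
    events = []  # the event store (an append-only log)

    def apply(state, event):
        kind, amount = event
        if kind == "Deposited":
            return state + amount
        if kind == "Withdrawn":
            return state - amount
        return state

    def deposit(amount):   # commands append events; they never mutate state in place
        events.append(("Deposited", amount))

    def withdraw(amount):
        events.append(("Withdrawn", amount))

    def current_balance():
        state = 0
        for event in events:  # replaying the log rebuilds the current state
            state = apply(state, event)
        return state

    deposit(100)
    withdraw(30)
    print(current_balance())  # 70
    ```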

  5. The Single Most Important Internal Email in the History of Amazon: 20 mins read. This is a long read on how different organisations are organised. Some organisations are co-located and prefer a synchronous mode of communication, while others are distributed with an asynchronous mode of communication. An organization’s communication system can be one of the most important levers you have to make an impact on productivity. Be very intentional about it.

  6. Lessons from Design School for Software Engineers: 20 mins read. Great advice from an Engineer at Github. All the lessons resonated with me.

    1. You are not your audience
    2. Constructive, objective feedback is always better than reductive, subjective feedback
    3. You are not your designs/work
    4. Iteration is key for improvement
    5. Always critique your work
  7. A Multithreaded Fork of Redis That’s 5X Faster Than Redis: 20 mins read. This is interesting: a fork of Redis that makes use of multi-threading to make Redis 5x faster. From the post:

    > KeyDB has a different philosophy on how the codebase should evolve. We feel that ease of use, high performance, and a “batteries included” approach is the best way to create a good user experience. While we have great respect for the Redis maintainers it is our opinion that the Redis approach focusses too much on simplicity of the code base at the expense of complexity for the user. This results in the need for external components and workarounds to solve common problems.

  8. Why we decided to go for the Big Rewrite: 20 mins read. This post goes into detail about how Channable rewrote their main data processing system. It has a lot of good advice that you can apply in your own work as well.

  9. How to Write Fast Code in Ruby on Rails: 15 mins read. This post contains general advice on writing fast and performant Ruby code. Many of the lessons apply even if you use another programming language.

  10. Cascading Cache Invalidation: 25 mins read. This is an interesting article covering a flaw in one of the best practices most people use for asset caching, i.e. content hashes in filenames plus far-future expiry headers (sketched below). The author also shares three possible solutions to the problem.
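
    The practice the article questions, a content hash embedded in the filename so the asset can be cached with a far-future expiry, looks roughly like this; the asset name is hypothetical.

    ```python
    import hashlib

    def hashed_filename(path: str, content: bytes) -> str:
        """Embed a short content hash in the filename; any change to the content
        produces a new URL, so the old one can be cached forever."""
        digest = hashlib.sha256(content).hexdigest()[:8]
        stem, _, ext = path.rpartition(".")
        return f"{stem}.{digest}.{ext}"

    print(hashed_filename("app.js", b"console.log('v1');"))  # e.g. app.1a2b3c4d.js
    ```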

Video of the week

This week’s video: Intel and Rust: the Future of Systems Programming

Issue #35: 10 Reads, A Handcrafted Weekly Newsletter For Software Developers

The time to read this newsletter is 130 minutes.

A busy calendar and a busy mind will destroy your ability to create anything great. – Naval Ravikant

  1. GitHub stars won’t pay your rent: 20 mins read. The key point in the post is that you should not feel bad about charging money for your work. I think we software developers have taken it too far. Most of us feel that by making our work open source we are making the world better. But the reality is that if you lose your job and need financial support, no user of your open source project will come to help. We need to be practical and keep financial reality in mind.
  2. Building a Kubernetes platform at Pinterest: 15 mins read. There is a lot you can learn about Kubernetes from this post by the Pinterest engineering team. The key points for me are:
    1. You can use CRDs to define organisation-specific services. Look at PinterestService.
    2. CRDs can be used as an alternative to Helm.
    3. The infrastructure team has three main priorities: 1) service reliability, 2) developer productivity, 3) infra efficiency.
  3. Six Shades of Coupling: 15 mins read.
  4. When Redundancy Actually Helps: 10 mins read.
  5. The (not so) hidden cost of sharing code between iOS and Android: 10 mins read. So, we have come full circle: organisations are moving away from the code-sharing approach when building the same application for different mobile platforms. I have seen multiple organisations use C++ to write shared code. The use of C++ limits the number of developers you can find in the market and slows you down overall, and you have to build tools to support your custom setup.
  6. 3 Strategies for implementing a microservices architecture: 5 mins read. The three strategies are:
    1. The Strangler method
    2. The Lego strategy
    3. The nuclear option
  7. Microservices, Apache Kafka, and Domain-Driven Design: 20 mins read.
  8. Habits vs. Goals: A Look at the Benefits of a Systematic Approach to Life: 10 mins read.
  9. Building an analytics stack from scratch: 15 mins read.
  10. Cutting Through Indecision & Overthinking: 10 mins read. Take action. Half the battle is won if you get started.

Video of the week

Issue #34: 10 Reads, A Handcrafted Weekly Newsletter For Software Developers

The time to read this newsletter is 210 minutes.

The general who wins a battle makes many calculations in his temple before the battle is fought. – Sun Tzu

  1. All the best engineering advice I stole from non-technical people: 20 mins read. The points that resonated with me:
    1. Know what people are asking you to be an expert in. This helps you avoid straying too far into other people’s territory.
    2. Thinking is also work. This is especially true when you move to management.
    3. Effective teams need trust. That’s not to say that frameworks for decision making or metrics tracking are not useful, they are critical — but replacing trust with process is called bureaucracy.
  2. Fast and flexible observability with canonical log lines: 20 mins read. Canonical logging is a simple technique where, in addition to their normal log traces, requests also emit one long log line at the end that includes many of their key characteristics. The key points for me in this post are listed below (a small logfmt sketch follows the list):
    1. Use logfmt to make logs machine readable
    2. We use canonical log lines to help address this. They’re a simple idea: in addition to their normal log traces, requests (or some other unit of work that’s executing) also emit one long log line at the end that pulls all its key telemetry into one place.
    3. Canonical lines are an ergonomic feature. By colocating everything that’s important to us, we make it accessible through queries that are easy for people to write, even under the duress of a production incident.
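
    A minimal sketch of emitting one logfmt-style canonical line at the end of a request; the field names are made up.

    ```python
    import logging
    import time

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("canonical")

    def logfmt(**fields):
        """Render key=value pairs so the line stays machine-readable."""
        return " ".join(f"{key}={value}" for key, value in fields.items())

    def handle_request(path, user_id):
        start = time.monotonic()
        status = 200  # ... the actual request handling would happen here ...
        # One canonical line per request, pulling the key telemetry into one place.
        log.info(logfmt(event="canonical_log_line", path=path, user_id=user_id,
                        status=status,
                        duration_ms=round((time.monotonic() - start) * 1000, 2)))

    handle_request("/api/charges", 42)
    ```
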
  3. Why Some Platforms Thrive and Others Don’t: 25 mins read. When evaluating an opportunity involving a platform, entrepreneurs (and investors) should analyze the basic properties of the networks it will use and consider ways to strengthen network effects. It’s also critical to evaluate the feasibility of minimizing multi-homing, building global network structures, and using network bridging to increase scale while mitigating the risk of disintermediation. That exercise will illuminate the key challenges of growing and sustaining the platform and help businesspeople develop more-realistic assessments of the platform’s potential to capture value.
  4. How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh: 30 mins read. A long read that makes the case for a distributed data mesh. It applies DDD principles to designing a data lake. A refreshing take on how to design data lakes.
  5. Re-Architecting the Video Gatekeeper: 15 mins read. This post covers how one of the Netflix tech teams used Hollow to improve the performance of their service. Hollow is a total high-density cache built by Netflix. The post covers why a near cache was suitable for their use case. This is a detailed post covering the existing and new architectures. I learnt a lot while reading this article.
  6. Our not-so-magic journey scaling low latency, multi-region services on AWS: 20 mins read. This is another detailed post covering how Atlassian built a low-latency service. They first tried DynamoDB, but that didn’t cut it for them, so they also use a Caffeine-based near cache to achieve the numbers expected of their service.
  7. Making Containers More Isolated: An Overview of Sandboxed Container Technologies: 25 mins read.
  8. Benchmarking: Do it with Transparency or don’t do it at all: 20 mins read. This is a detailed rebuttal by the OnGres team to the MongoDB blog post that dismissed the benchmark report created by OnGres.
  9. Deconstructing the Monolith: Designing Software that Maximizes Developer Productivity: 20 mins read. This article by Shopify is a must-read for anyone planning to adopt a microservices architecture. It is practical and pragmatic. The key points for me in this post are:
    1. Application architectures evolve over time. The right way to think about the evolution is to go from monolith -> modular monolith -> microservices.
    2. Monolithic architecture has many advantages.
      1. Monolithic architecture can take an application very far since it’s easy to build and allows teams to move very quickly in the beginning to get their product in front of customers earlier.
      2. You’ll only need to maintain one repository, and be able to easily search and find all functionality in one folder.
      3. It also means only having to maintain one test and deployment pipeline, which, depending on the complexity of your application, may avoid a lot of overhead.
      4. One of the most compelling benefits of choosing the monolithic architecture over multiple separate services is that you can call into different components directly, rather than needing to communicate over web service API’s
    3. Disadvantages of Monolithic architecture
      1. As the system grows, the challenge of building and testing new features increases.
      2. High coupling and a lack of boundaries
      3. Developing in Shopify required a lot of context to make seemingly simple changes. When new Shopifolk onboarded and got to know the codebase, the amount of information they needed to take in before becoming effective was massive
    4. A microservices architecture increases deployment and operational complexity. The tools that work great for monolithic codebases stop working with a microservices architecture.
    5. A modular monolith is a system where all of the code powers a single application and there are strictly enforced boundaries between different domains.
    6. Approach to move to Modular monolith
      1. Reorganize code by real-world concepts and boundaries
      2. Ensure all tests work after reorganisation
      3. Build tools that help track progress of each component towards its goal of isolation. Shopify developed a tool called Wedge that highlights any violations of domain boundaries (when another component is accessed through anything but its publicly defined API), and data coupling across boundaries
    7. According to Martin Fowler, “almost all the cases where I’ve heard of a system that was built as a microservice system from scratch, it has ended in serious trouble… you shouldn’t start a new project with microservices, even if you’re sure your application will be big enough to make it worthwhile.”
  10. “It’s dead, Jim”: How we write an incident postmortem: 15 mins read. I believe it is a good exercise to do a postmortem even if you don’t follow SRE practices. The key points for me in this post are:
    1. A postmortem is the process by which we learn from failure, and a way to document and communicate those lessons.
    2. Why to write one?
      1. It allows us to document the incident, ensuring that it won’t be forgotten.
      2. They are the most effective mechanism we can use to drive improvement in our infrastructure.
    3. You should share postmortems because your customers deserve to know why their services didn’t behave as expected
    4. We shouldn’t be satisfied with identifying what triggered an incident (after all, there is no root cause), but should use the opportunity to investigate all the contributing factors that made it possible, and/or how our automation might have been able to prevent this from ever happening.
    5. What we want is to learn why our processes allowed for that mistake to happen, to understand if the person that made a mistake was operating under wrong assumptions.

Issue #33: 10 Reads, A Handcrafted Weekly Newsletter For Software Developers

The time to read this newsletter is 160 minutes.

We change our behavior when the pain of staying the same becomes greater than the pain of changing. Consequences give us the pain that motivates us to change. — Henry Cloud

  1. Advertising is a cancer on society: 20 mins read. This is a long read. The author makes many valid points on why advertising should be considered a cancer on society. Advertising is a cancer because it has the symptoms mentioned below:
    1. Privacy violations
    2. Outrage-inducing news reporting
    3. Influencers
    4. Decaying and ephemeral Internet services
  2. Some items from my “reliability list”: 15 mins read. This post makes thoughtful points that software architects should keep in mind while designing or reviewing systems.
    1. Can you handle rollbacks?
    2. Are new states forward compatible? This is related to Postel’s law:

      > Be conservative in what you do, be liberal in what you accept from others.

    3. Do you use a strong data exchange format like Protobuf or Thrift?
    4. Why should you use JSON as the data exchange format between systems?

  3. How I built a spreadsheet app with Python to make data science easier: 15 mins read. One of the cool open source projects that I have discovered in recent times.

  4. Announcing PartiQL: One query language for all your data: 20 mins read. It looks like we have finally found a way to standardise on SQL for working across different data storage solutions, be it RDBMS, NoSQL, or file-based. PartiQL extends SQL with the minimal extensions required for working with different data models. SQL won! Like it or not, SQL is still the best and most powerful query language.

  5. Parallelism in PostgreSQL: 15 mins read. The post covers how modern Postgres implements parallelism for sequential scans, aggregations, and B-tree scans.

  6. Who Actually Feels Satisfied About Money?: 20 mins read. The post makes a good point about the anxiety people have regarding money. More money does not always translate to more happiness. It’s not just how much you have — it’s what you do with it.

  7. Top Seven Myths of Robust Systems: 15 mins read. The number one myth we hear out in the field is that if a system is unreliable, we can fix that with redundancy. In some cases, redundant systems do happen to be more robust. In others, this is demonstrably false. It turns out that redundancy is often orthogonal to robustness, and in many cases it is absolutely a contributing factor to catastrophic failure. The problem is, you can’t really tell which of those it is until after an incident definitively proves it’s the latter.

  8. Safely Rewriting Mixpanel’s Highest Throughput Service in Golang: 15 mins read. This post covers how Mixpanel made use of Diffy to safely migrate a high-throughput service from Python to Golang. Diffy is a service that accepts HTTP requests and forwards them to two copies of an existing HTTP service and one copy of a candidate HTTP service (the mirroring idea is sketched below).
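
    The request-mirroring idea can be sketched in a few lines: send the same request to the existing service and to the candidate, and flag any difference. The URLs are placeholders, and this is not Diffy’s actual implementation (Diffy also uses a second copy of the primary to filter out non-deterministic noise).

    ```python
    import requests

    PRIMARY = "http://legacy-service.local"        # existing implementation (placeholder)
    CANDIDATE = "http://candidate-service.local"   # rewritten implementation (placeholder)

    def mirror(path, params=None):
        """Send the same request to both services and report whether they disagree."""
        old = requests.get(PRIMARY + path, params=params, timeout=5)
        new = requests.get(CANDIDATE + path, params=params, timeout=5)
        if (old.status_code, old.text) != (new.status_code, new.text):
            print(f"DIFF on {path}: {old.status_code} vs {new.status_code}")
        return old  # the primary response is still the one served to callers

    mirror("/events", params={"project_id": 1})
    ```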

  9. The Business Executive’s Guide to Kubernetes: 10 mins read. A lot of useful advice on Kubernetes. The key points for me are:

    1. Stateful data is hard. Don’t try to reinvent AWS RDS. Stateful sets have limitations.
    2. Upgrading Kubernetes is hard. The advice is to run more than one Kubernetes cluster in production.
    3. Managed Kubernetes does not take away all the problems.
  10. When a rewrite isn’t: rebuilding Slack on the desktop: 15 mins read. The approach used was at once incremental and all-encompassing, rewriting a piece at a time into a gradually growing “modern” section of the application that utilized React and Redux. And the results? A 50% reduction in memory use and a 33% improvement in load time.

Video of the week

The video for this week is What’s new in JavaScript from Google I/O 2019

Issue #32: 10 Reads, A Handcrafted Weekly Newsletter For Software Developers

The time to read this newsletter is 150 minutes.

Writing may be the skill with the highest return of all – Seth Godin

  1. Undervalued Software Engineering Skills: Writing Well: 5 mins read. I agree with the author. Being a senior engineer in my organization, this is one piece of advice I usually end up giving to people. The key points from the post are:
    1. In a large engineering organization writing is the only medium that will help you propagate your message forward.
    2. You can learn to improve your writing skills. It is a learnable skill.
    3. Writing code is not the only activity in software development.
    4. When you write things down, you build better understanding of the topic. I personally find that writing helps me think clearly about a problem.
  2. Why GitHub used Haskell for Semantic?: 20 mins read. We need more such posts from the community. These types of posts help developers understand how organizations make technical choices. The key lessons for me in this post are:
    1. The problem GitHub is trying to solve with Semantic is related to the domain of programming language theory. This domain is an active research area, and most of the researchers in it use Haskell for its brevity, power, and focus on correctness. Writing in Haskell allows them to build on top of the work of others rather than getting stuck in a cycle of reading, porting, and bug-fixing.
    2. Haskell makes it nigh-impossible to build programs that contain such bugs
    3. Haskell is used in industry at scale. Facebook open-sourced a project called Haxl that is written in Haskell.
  3. Let’s build a SQL parser in Go!: 20 mins read. I enjoyed reading this post. It shows, step by step, how to write a SQL parser; the author implemented it in Go.
  4. How I decimated Postgres response times for my SaaS: 15 mins read. There are two key points in this post:
    1. You can only fix a problem if you can reliably reproduce it in your local environment. I know this sounds like common sense, but ask yourself honestly how many times you have tried solutions without reproducing the problem locally, only to discover that your proposed solution does not work. This happened to me this week.
    2. A composite index in PostgreSQL can help you avoid an explicit sort step when your query has an ORDER BY (a short sketch follows).
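
    A sketch of point 2 with psycopg2; the orders table, its columns, and the connection string are made up. When the index columns match the WHERE clause and the ORDER BY, Postgres can walk the index in order and skip the sort.

    ```python
    import psycopg2

    conn = psycopg2.connect("dbname=app")  # connection details are an assumption
    with conn, conn.cursor() as cur:
        # Index column order mirrors the query: filter on customer_id, order by created_at.
        cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer_created "
                    "ON orders (customer_id, created_at)")
        cur.execute("""
            EXPLAIN SELECT * FROM orders
             WHERE customer_id = %s
             ORDER BY created_at DESC
             LIMIT 20
        """, (42,))
        for line, in cur.fetchall():   # the plan should show an index scan and no Sort node
            print(line)
    ```
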
  5. 13 Tips for Writing a Technical Book: 15 mins read. A lot of useful advice. I wrote a similar post when I published my first book. The key points for me in this post were:
    1. Pad the timeline
    2. Schedule regular time to write: every morning, every weekend, etc
    3. Use lots of TODOs to keep track of what’s left
    4. Getting good technical editors is hard
    5. Writing is lonely
  6. Remote working – Bringing sanity to mind & lessons worth learning: 20 mins read. This post covers the other side of remote working — anxiety and depression. The article shares tips that can help:
    1. The first thing you need to do is get out of denial mode. Mental health issues can happen to anyone. Set up a weekly wellbeing check-in with yourself.
    2. Create a schedule and stick to it. Know when to stop and detach from work.
    3. Set up a separate remote working space.
    4. Limit your digital life. Talk to people.
  7. Amdahl’s law: 10 mins read. Amdahl’s Law is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. In essence, it says you can’t speed up a program beyond its sequential part, irrespective of how many cores you add. As Gene Amdahl put it, if 50% of the execution time is sequential, the maximum speedup is 2, no matter how many cores you use (see the sketch below). A good video on Amdahl’s Law is the one by Professor Ben H. Juurlink.
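
    The formula itself is short, so here is a quick sketch of the 50%-sequential example:

    ```python
    def amdahl_speedup(sequential_fraction: float, cores: int) -> float:
        """Amdahl's law: S(n) = 1 / (s + (1 - s) / n)."""
        s = sequential_fraction
        return 1 / (s + (1 - s) / cores)

    for cores in (2, 4, 16, 1_000_000):
        print(cores, round(amdahl_speedup(0.5, cores), 3))
    # Even with a million cores, a 50%-sequential program never reaches a 2x speedup.
    ```
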
  8. Blameless PostMortems and a Just Culture: 15 mins read. Having a “blameless” Post-Mortem process means that engineers whose actions have contributed to an accident can give a detailed account of:
    1. what actions they took at what time,
    2. what effects they observed,
    3. expectations they had,
    4. assumptions they had made,
    5. and their understanding of timeline of events as they occurred.
  9. Love DevOps? Wait until you meet SRE: 10 mins read. If you have not heard about SRE, this post will help you get started. SRE, as defined by its mastermind Ben Treynor, is “what happens when a software engineer is tasked with what used to be called operations”.
  10. 3 Mindfulness Rituals That Will Make You Happy: 20 mins read. You are not your thoughts.

Video of the week