Welcome to the third post of the distributed systems series. So far in this series, we have looked at service discovery and the CAP theorem. Before we move along in our distributed systems learning journey, I thought it would be useful to refresh our memory with an understanding of ACID transactions. ACID transactions are at the heart of relational databases, and knowledge of them is useful when building distributed applications.
Understanding ACID transactions
A transaction is a sequence of operations that form a single logical unit of work. Transactions are executed on a shared database system to perform a higher-level function, such as transferring money from one account to another. A transaction represents a basic unit of change in the database: it either executes in its entirety or not at all.
ACID (Atomicity, Consistency, Isolation, and Durability) refers to a set of properties that a database transaction should guarantee even in the event of errors, power failures, etc. The canonical example of an ACID transaction is the transfer of funds from one bank account to another. In a single fund-transfer transaction, you have to check the account balance, debit one account, and credit the other account. ACID properties guarantee that either the money transfer occurs correctly and permanently, or, in case of failure, both accounts retain their initial state. It would be unacceptable if one account was debited but the other account was not credited.
Database transactions are motivated by two independent requirements:
- Concurrent database access: Multiple clients can access the system at the same time. This is achieved by the Isolation property of ACID transactions.
- Resiliency to system failures: The system remains in a consistent state in case of a failure. This is provided by the Atomicity, Consistency, and Durability properties of ACID transactions.
Let’s look at each of the ACID properties in detail. To understand these properties, we will take an example of transferring $500 from account A to account B. Let’s assume account A has $1000 and account B has $500.
Below are the actions that will happen in the transaction.
1. CHECK if A has $500
2. DEDUCT $500 from A
3. ADD $500 to B
At the end of a successful transaction, A will have $500 and B will have $1000.
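The steps above can be sketched as a single transaction. Below is a minimal sketch using Python's built-in sqlite3 module; the `accounts` table and the `transfer` function are illustrative assumptions, not part of any real banking schema.

```python
import sqlite3

# Illustrative in-memory database with the two accounts from the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 1000), ("B", 500)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        # CHECK: the source account must cover the amount
        balance = conn.execute(
            "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()[0]
        if balance < amount:
            raise ValueError("insufficient funds")
        # DEDUCT and ADD happen inside the same transaction
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()        # all three actions take effect together...
    except Exception:
        conn.rollback()      # ...or none of them do
        raise

transfer(conn, "A", "B", 500)
```

If any step fails, the `rollback` undoes the partial work, which is exactly the all-or-nothing behaviour the next section describes.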
Atomicity

The Atomicity property guarantees that either all the actions in a transaction happen, or none happen. It gives the developer the illusion that all the actions in a transaction succeed or fail as a single unit. In our example, if the transaction was not atomic and got aborted after deducting $500 from account A, then account A's balance would be reduced by $500 while account B still has $500. This is bad, and atomic systems help avoid such situations.
There are two approaches used by DBMSs to support atomicity.
Approach 1: Logging
In this approach, all modifications are written to a log file before they are applied. In most databases, this approach is called Write-Ahead Logging (WAL). Most DBMSs use this approach to implement atomicity, for example PostgreSQL, SQLite, MongoDB, and SQL Server.
This is how this approach works:
- Let’s assume a write operation is received by the DBMS. The DBMS will append it to the end of the write-ahead log file. All operations are appended to the file; appending to the end of a file is much faster than updating the file in the middle.
- The DBMS will send a success acknowledgement only after the modification has been written to the write-ahead log file.
- Once the write-ahead log file is updated, the DBMS applies the modifications to the database pages.
The above does not explain how atomicity is achieved in case of failures. Assume that power goes down while the database pages are being updated from the WAL file. This could leave the database pages partially updated. The way WAL helps achieve atomicity is that, on restart, the DBMS uses the WAL file to repair the partially written database changes.
Approach 2: Shadow paging
This approach assumes that the database is partitioned into fixed-length blocks referred to as pages. The DBMS creates a copy of the pages before executing the actions in a transaction, using a copy-on-write technique that avoids in-place updates of pages. During the life of a transaction, two copies of a page exist: the current page and the shadow page. This is what happens during the transaction:
- A db pointer is maintained on the disk that points to the current page.
- The actions in a transaction are performed on the shadow page. The shadow page gets modified if the transaction has a write action.
- If the transaction completes successfully, the db pointer is updated to point to the shadow copy and the current page is deleted.
- If the transaction fails, nothing changes, as the db pointer still points to the current page.
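The pointer-swap idea can be sketched in a few lines. In this toy version (the class and its names are made up for illustration), a "page" is just a dict, the transaction's writes go to a deep copy, and the pointer moves only on success:

```python
import copy

class ShadowPagedDB:
    """Toy shadow paging: writes go to a copy of the page, and the 'db
    pointer' is swapped only on commit, so a failed transaction leaves
    the current page untouched."""

    def __init__(self, page):
        self.current = page                    # what the db pointer references

    def run_transaction(self, actions):
        shadow = copy.deepcopy(self.current)   # copy-on-write working page
        try:
            for action in actions:
                action(shadow)                 # writes modify only the shadow page
        except Exception:
            return False                       # pointer never moved: nothing changed
        self.current = shadow                  # commit = swapping the pointer
        return True
```

A transaction that fails partway through leaves `current` exactly as it was, because no in-place update ever happened.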
Consistency

Consistency guarantees that the database moves from one consistent state at the beginning of the transaction to another consistent state at the end of the transaction. A consistent state means that data integrity constraints (primary key constraints, referential integrity, check constraints, and other SQL constraints like null checks, unique checks, etc.) are enforced.
In the example of transferring money from account A to account B, the consistency requirement could be that the sum of the A and B account balances remains unchanged by the execution of the transaction. This ensures that money is neither destroyed nor created by the transaction.
It is the responsibility of the application developer to ensure consistency for an individual transaction. This means that before the transaction ends, the application developer has to check whether the consistency requirement is met. If it is not, the application developer should roll back the transaction.
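A sketch of such an application-level check, again using sqlite3 with a made-up `accounts` table: the developer verifies the invariant (total money unchanged) before committing, and rolls back if it fails. The deliberately buggy call that credits more than it debits shows the check in action.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 1000), ("B", 500)])
conn.commit()

def transfer_checked(conn, src, dst, debit, credit):
    total_before = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
    conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                 (debit, src))
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                 (credit, dst))
    total_after = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
    if total_before != total_after:   # invariant: money is neither created nor destroyed
        conn.rollback()
        raise ValueError("consistency check failed, transaction rolled back")
    conn.commit()
```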
Isolation

The Isolation property gives the illusion that a transaction has full control over the database and is the only transaction running at the moment, even though multiple transactions are running concurrently. This property ensures that when multiple transactions execute concurrently, their operations do not interfere with each other. In other words, if two transactions T1 and T2 are running concurrently, then T1 can assume that T2 either finished before T1 started or will start after T1 has finished.
To understand isolation through our example, let’s assume we have two transactions:
- T1 : transfer $500 from A to B – A has starting balance of $1000 and B has starting balance of $500
- T2: calculate total amount in the bank assuming bank only has two accounts A and B
When T1 is executing, there will be a moment when $500 has been deducted from A but not yet added to B. At this moment, the balance of A will be $500 and the balance of B will also be $500. If at this intermediate moment T2 starts executing and reads the balances of A and B, it will read both values as $500, so the total balance in the bank will be $1000. As you can see, this is incorrect. The Isolation property guards us against such inconsistent states by executing transactions in serial order. There are only two serial orders possible for T1 and T2: either T1 executes first and T2 second, or T2 executes first and T1 later.
Isolation guarantees that when multiple transactions execute concurrently, the end system state is one that could be achieved by executing the transactions in some serial order.
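One crude way to get this serial-order guarantee is to make each transaction an indivisible unit. The sketch below does that with a single lock around in-memory balances (a deliberately simplistic stand-in for what a DBMS does): the total-balance reader (T2) can then never observe the state where $500 has left A but not yet arrived at B.

```python
import threading

balances = {"A": 1000, "B": 500}
lock = threading.Lock()   # one coarse lock serializes the "transactions"

def transfer(src, dst, amount):
    """T1: runs as an indivisible unit under the lock."""
    with lock:
        balances[src] -= amount
        balances[dst] += amount

def bank_total():
    """T2: either sees all of T1's effects or none of them."""
    with lock:
        return sum(balances.values())
```

Real databases achieve the same observable behaviour with far finer-grained mechanisms (row locks, MVCC) so that transactions can actually run concurrently.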
Most DBMSs give their clients the illusion that transactions run serially so that clients can reason about the end state, but internally they run transactions concurrently to achieve higher throughput and greater resource utilisation. This also reduces waiting time for clients, given that some transactions can be long running while others are short. Clients would be unhappy if they had to wait for a long-running transaction to finish before their own transaction executes.
A DBMS allows transactions to interleave only when the schedules generated by them lead to the result of some serial execution of the transactions. Interleaved schedules that give the same result as some serial execution of the transactions are called serializable schedules. Conflict serializability is used to find schedules that are conflict equivalent to serial schedules. Two schedules are conflict equivalent if one can be transformed into the other by a series of swaps of nonconflicting instructions. You can swap two instructions only if they do not conflict. Two operations conflict when they belong to different transactions, operate on the same data item, and at least one of them is a write.
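The standard way to test conflict serializability is to build a precedence graph and check it for cycles; a schedule is conflict serializable iff the graph is acyclic. A small sketch of that test, where each operation is a made-up `(transaction, "R" or "W", item)` tuple:

```python
def conflicts(op1, op2):
    """Two operations conflict: different txns, same item, at least one write."""
    return (op1[0] != op2[0]
            and op1[2] == op2[2]
            and "W" in (op1[1], op2[1]))

def is_conflict_serializable(schedule):
    # Edge Ti -> Tj: an op of Ti conflicts with a later op of Tj, so Ti
    # must precede Tj in any equivalent serial order.
    edges = {}
    for i, a in enumerate(schedule):
        for b in schedule[i + 1:]:
            if conflicts(a, b):
                edges.setdefault(a[0], set()).add(b[0])
    # Serializable iff the precedence graph has no cycle (DFS check).
    visited, on_stack = set(), set()
    def has_cycle(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in edges.get(node, ()):
            if nxt in on_stack or (nxt not in visited and has_cycle(nxt)):
                return True
        on_stack.discard(node)
        return False
    txns = {t for t, _, _ in schedule}
    return not any(has_cycle(t) for t in txns if t not in visited)
```

Running T1 fully before T2 yields an acyclic graph, whereas interleaving two writes to the same item in both directions creates a cycle and is rejected.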
A DBMS gives the application developer the choice to pick a weaker consistency model than the one provided by serializability. This is accomplished through different isolation levels. The SQL standard specifies four isolation levels:
- Serializable: This is the strictest isolation level, ensuring serializable execution of transactions. At this level, concurrent transactions produce the same result as if they were run in some serial order.
- Repeatable Read: At this level, multiple reads of an item within a transaction produce the same result, regardless of writes committed to that item by other transactions. Changes committed by other transactions do not become visible in the middle of a transaction.
- Read Committed: At this level, changes from other committed transactions become visible during a transaction, i.e. the value of an item can change mid-transaction if another transaction writes to it and then commits.
- Read Uncommitted: At this level, uncommitted changes to an item made by other transactions become visible immediately.
I will write another post going deeper into isolation levels and how database implements isolation.
Durability

Durability guarantees that once a transaction completes successfully, all the changes it made persist even if the system fails afterwards. This usually means that changes made by successfully completed transactions are saved to disk.
Let’s again look at the example of transferring money from account A to account B. If power goes down after the transaction was successfully committed but before the results were written to disk, all the changes are lost. This would not match the user’s expectation that once a transaction commits, its changes are persisted.
Atomicity and durability are achieved using similar approaches, as discussed in the atomicity section.