Choosing Primary Key Type in Postgres


In relational database design, one of the key decisions is choosing the right primary key type for each table. In this post I am talking about surrogate (or synthetic) primary keys, so called because they are not derived from application data. In my experience, few teams give proper thought to the choice of primary key type. For each table they go with whatever default is used in their organization, which means all tables end up with an int/bigint, uuid, or varchar key. In this post I am not making a recommendation; I am only discussing how different key types affect insertion speed and data size, and how they compare against each other. You will have to do your own analysis to choose the right type for your use case. Different tables have different needs, so make your judgement accordingly. There is no one-size-fits-all solution.

In this post I am using Postgres 13.3, running in a Docker container; you can also install it with your operating system’s package manager.

To follow along, create a sample database and run the SQL queries in it.

create database choosing_pk_type;

Connect to this database

\c choosing_pk_type

Primary Key Types Comparison Summary

Here is the TL;DR version first. The numbers below are derived from the detailed measurements in the sections that follow, all at 10 million rows.

Key type               | Insert 10M rows (ms) | Total size | Data size | Index size
-----------------------+----------------------+------------+-----------+-----------
BIGSERIAL              | 27541                | 637 MB     | 422 MB    | 214 MB
Native UUID v4         | 106122               | 798 MB     | 498 MB    | 301 MB
uuid-ossp UUID v4      | 271333               | 798 MB     | 498 MB    | 301 MB
UUID v4 as VARCHAR(36) | 234989               | 1293 MB    | 730 MB    | 563 MB
BIGSERIAL + UUID       | 111950               | 1099 MB    | 498 MB    | 601 MB
ULID as VARCHAR(26)    | 335824               | 1125 MB    | 651 MB    | 474 MB
UUID v6                | 227129               | 798 MB     | 498 MB    | 301 MB

Different Primary Key Types

Based on my research and experience, the following are the primary key types used by different organizations.

BIGSERIAL

This is the most commonly used primary key type. It is not a true type but merely a convenience for creating unique identifier columns. bigserial represents a 64-bit integer. If you don’t need 64 bits for your table, you can use SMALLSERIAL (16 bits) or SERIAL (32 bits) instead.
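
Under the hood, declaring a bigserial column is roughly equivalent to creating a sequence and wiring it up yourself. A sketch of the expansion, using the same names the next snippet will produce (shown for illustration only; the next section creates the table with bigserial directly):

-- roughly what "id bigserial primary key" expands to
create sequence exp_bs_id_seq as bigint;
create table exp_bs (
    id bigint primary key default nextval('exp_bs_id_seq'),
    n bigint not null
);
-- the sequence is dropped automatically when the column is dropped
alter sequence exp_bs_id_seq owned by exp_bs.id;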

To create a table that uses BIGSERIAL you can run the following command

create table exp_bs(id bigserial primary key, n bigint not null);

If you describe this table, you will see that the above command created a sequence exp_bs_id_seq that is used to assign ids. Also note that the type of id is bigint.

\d+ exp_bs
                                               Table "public.exp_bs"
 Column |  Type  | Collation | Nullable |              Default               | Storage | Stats target | Description
--------+--------+-----------+----------+------------------------------------+---------+--------------+-------------
 id     | bigint |           | not null | nextval('exp_bs_id_seq'::regclass) | plain   |              |
 n      | bigint |           | not null |                                    | plain   |              |
Indexes:
    "exp_bs_pkey" PRIMARY KEY, btree (id)
Access method: heap

You can insert the data into the exp_bs table by running the following command.

insert into exp_bs (n) values(1);

As you can see above, we have not specified the id value. It will be assigned by the database itself using the exp_bs_id_seq sequence.

select * from exp_bs;
 id | n
----+---
  1 | 1
(1 row)

You can view the last value of the sequence by running the following SQL query.

select last_value from exp_bs_id_seq;
 last_value
------------
          1
(1 row)

Let’s understand how this key type performs when we insert 1 million and 10 million records.

Make sure timing is on

\timing

For 1 million records

insert into exp_bs(n) select g.n from generate_series(1,1000000) as g(n);
INSERT 0 1000000
Time: 3899.654 ms (00:03.900)

We can get the total disk space used by the table using the following SQL query. This includes data, all indexes, and TOAST data.

select pg_size_pretty(pg_total_relation_size('exp_bs')) as total_table_size;
 total_table_size
----------------
 64 MB
(1 row)

You can also look at the data and index sizes individually using the following SQL queries.

select pg_size_pretty(pg_relation_size('exp_bs')) as data_size;
 data_size
-----------
 42 MB
(1 row)

Time: 1.646 ms
select pg_size_pretty(pg_indexes_size('exp_bs')) as index_size;
 index_size
------------
 21 MB
(1 row)

Time: 1.429 ms

Let’s now see how it changes when we insert 10 million records. Since we drop the table to start from a clean slate, we need to recreate it before inserting.

drop table exp_bs;
vacuum full;
create table exp_bs(id bigserial primary key, n bigint not null);
insert into exp_bs(n) select g.n from generate_series(1,10000000) as g(n);
INSERT 0 10000000
Time: 27541.747 ms (00:27.542)

Total disk size

select pg_size_pretty(pg_total_relation_size('exp_bs')) as total_table_size;
 total_table_size
------------------
 637 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_bs')) as data_size;
 data_size
-----------
 422 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_bs')) as index_size;
 index_size
------------
 214 MB
(1 row)

This is how the above stacks up:

Parameters               | 1 million | 10 million | Growth rate
-------------------------+-----------+------------+------------
Time to insert (ms)      | 3899.654  | 27541.747  | 7.06
Insertion rate (rows/ms) | 256.43    | 363.09     | 1.42
Total table size (MB)    | 64        | 637        | 9.95
Data size (MB)           | 42        | 422        | 10.05
Index size (MB)          | 21        | 214        | 10.19

Let’s now look at the pros and cons of this type.

Pros

  • Developer and human friendly
  • Scales linearly both in terms of insertion rate and size
  • Better suited for b-tree indexes because of their sequential nature
  • Space efficient. For smaller tables you can choose serial or smallserial types as well.

Cons

  • Predictable and not opaque to users. For endpoints with proper AuthN/Z this should not be an issue, but there are still open endpoints in applications where you might face it. One such example is a user onboarding journey, where you don’t yet have an authenticated user.
  • Because smallserial, serial and bigserial are implemented using sequences, there may be “holes” or gaps in the sequence of values which appears in the column, even if no rows are ever deleted. A value allocated from the sequence is still “used up” even if a row containing that value is never successfully inserted into the table column. This may happen, for example, if the inserting transaction rolls back.
  • The application needs to make an extra round trip to the database to learn the generated ID, though the RETURNING clause (shown after this list) avoids this for single statements
  • You might need a centralized ID generator like Twitter Snowflake if you need to scale writes to multiple nodes
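
On the round-trip point: Postgres can hand back the generated key as part of the insert itself, so a separate follow-up query against the sequence is usually avoidable:

insert into exp_bs (n) values (42) returning id;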

Postgres Native UUID v4

This identifier is a 128-bit quantity generated by an algorithm chosen to make it very unlikely that the same identifier will be generated by anyone else in the known universe using the same algorithm. For distributed systems, these identifiers therefore provide a better uniqueness guarantee than sequence generators, which are only unique within a single database. Postgres added native support for generating UUID v4 with gen_random_uuid() in version 13.

create table exp_uuid(id uuid primary key default gen_random_uuid(), n bigint not null);

Inserting 1 million records

insert into exp_uuid(n) select g.n from generate_series(1,1000000) as g(n);
INSERT 0 1000000
Time: 5032.132 ms (00:05.032)

Size of the table

select pg_size_pretty(pg_total_relation_size('exp_uuid')) as total_table_size;
 total_table_size
------------------
 80 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_uuid')) as data_size;
 data_size
-----------
 50 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_uuid')) as index_size;
 index_size
------------
 30 MB
(1 row)

Let’s now insert 10 million records. Truncate the table first so the measurements reflect exactly 10 million rows rather than piling on top of the earlier 1 million.

truncate table exp_uuid;
insert into exp_uuid(n) select g.n from generate_series(1,10000000) as g(n);
INSERT 0 10000000
Time: 106122.171 ms (01:46.122)

Size of the table

select pg_size_pretty(pg_total_relation_size('exp_uuid')) as total_table_size;
 total_table_size
------------------
 798 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_uuid')) as data_size;
 data_size
-----------
 498 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_uuid')) as index_size;
 index_size
------------
 301 MB
(1 row)

This is how the above stacks up:

Parameters               | 1 million | 10 million | Growth rate
-------------------------+-----------+------------+------------
Time to insert (ms)      | 5032.132  | 106122.171 | 21.09
Insertion rate (rows/ms) | 198.72    | 94.23      | 0.47
Total table size (MB)    | 80        | 798        | 9.98
Data size (MB)           | 50        | 498        | 9.96
Index size (MB)          | 30        | 301        | 10.03

Let’s now look at the pros and cons of this type.

Pros

  • Opaque to users
  • Data and index size grows linearly
  • Avoids the extra round trip since they can be generated on the client side (see the sketch after this list)
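
A minimal sketch of client-side generation, assuming the application has produced the UUID itself (the literal below is just an example value); since the client already knows the key, no follow-up query is needed:

-- the application generated this UUID before talking to the database
insert into exp_uuid (id, n)
values ('123e4567-e89b-12d3-a456-426614174000'::uuid, 42);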

Cons

  • The insertion rate drops to about half as we go from 1 million to 10 million records. The UUID insertion rate is roughly 1/4 of the bigserial insertion rate.
  • Their random nature causes write amplification: more WAL needs to be transferred during replication, which hurts write performance
  • The index takes more space; it is about 1.4 times the size of the bigserial index

Postgres uuid-ossp extension

The uuid-ossp extension provides a function uuid_generate_v4() to generate a version 4 UUID, which is derived entirely from random numbers. It is commonly used on Postgres versions below 13, since gen_random_uuid() was only added in Postgres 13. The extension also supports generating v1, v3, and v5 UUIDs.
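
For example, once the extension is created (see below), the other generators are available too:

-- version 1: based on timestamp and MAC address
select uuid_generate_v1();
-- version 5: SHA-1 name-based, here using the extension's built-in URL namespace
select uuid_generate_v5(uuid_ns_url(), 'https://example.com');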

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
create table exp_ossp_uuid(id uuid primary key default uuid_generate_v4(), n bigint not null);

Inserting 1 million records

insert into exp_ossp_uuid(n) select g.n from generate_series(1,1000000) as g(n);
INSERT 0 1000000
Time: 21088.345 ms (00:21.088)

Size of the table

select pg_size_pretty(pg_total_relation_size('exp_ossp_uuid')) as total_table_size;
 total_table_size
------------------
 80 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_ossp_uuid')) as data_size;
 data_size
-----------
 50 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_ossp_uuid')) as index_size;
 index_size
------------
 30 MB
(1 row)

Let’s now insert 10 million records, again truncating the table first so we measure a fresh load of exactly 10 million rows.

truncate table exp_ossp_uuid;
insert into exp_ossp_uuid(n) select g.n from generate_series(1,10000000) as g(n);
INSERT 0 10000000
Time: 271333.274 ms (04:31.333)

Size of the table

select pg_size_pretty(pg_total_relation_size('exp_ossp_uuid')) as total_table_size;
 total_table_size
------------------
 798 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_ossp_uuid')) as data_size;
 data_size
-----------
 498 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_ossp_uuid')) as index_size;
 index_size
------------
 301 MB
(1 row)

This is how the above stacks up:

Parameters               | 1 million | 10 million | Growth rate
-------------------------+-----------+------------+------------
Time to insert (ms)      | 21088.345 | 271333.274 | 12.87
Insertion rate (rows/ms) | 47.42     | 36.86      | 0.78
Total table size (MB)    | 80        | 798        | 9.98
Data size (MB)           | 50        | 498        | 9.96
Index size (MB)          | 30        | 301        | 10.03

The pros and cons are the same as for the Postgres native UUID v4, though uuid_generate_v4() was noticeably slower than gen_random_uuid() in these tests (roughly 4x on the 1 million row insert).

UUID v4 saved as VARCHAR(36)

This is a UUID saved as VARCHAR(36) rather than as the native UUID type. The native UUID type is stored as a 128-bit value and comparisons are done numerically.
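
You can see the storage difference directly with pg_column_size, which reports the bytes a value occupies: the native uuid is a fixed 16 bytes, while the same identifier as varchar(36) stores all 36 characters plus a length header. A quick check (the UUID literal is just an example value):

select pg_column_size('123e4567-e89b-12d3-a456-426614174000'::uuid) as uuid_bytes,
       pg_column_size('123e4567-e89b-12d3-a456-426614174000'::varchar(36)) as varchar_bytes;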

create table exp_uuid_vc(id varchar(36) primary key default gen_random_uuid(), n bigint not null);

Inserting 1 million records

insert into exp_uuid_vc(n) select g.n from generate_series(1,1000000) as g(n);
INSERT 0 1000000
Time: 7771.671 ms (00:07.772)

Size of the table

select pg_size_pretty(pg_total_relation_size('exp_uuid_vc')) as total_table_size;
 total_table_size
------------------
 147 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_uuid_vc')) as data_size;
 data_size
-----------
 73 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_uuid_vc')) as index_size;
 index_size
------------
 73 MB
(1 row)

Let’s now insert 10 million records, again truncating the table first so we measure a fresh load of exactly 10 million rows.

truncate table exp_uuid_vc;
insert into exp_uuid_vc(n) select g.n from generate_series(1,10000000) as g(n);
INSERT 0 10000000
Time: 234989.152 ms (03:54.989)

Size of the table

select pg_size_pretty(pg_total_relation_size('exp_uuid_vc')) as total_table_size;
 total_table_size
------------------
 1293 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_uuid_vc')) as data_size;
 data_size
-----------
 730 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_uuid_vc')) as index_size;
 index_size
------------
 563 MB
(1 row)

This is how the above stacks up:

Parameters               | 1 million | 10 million | Growth rate
-------------------------+-----------+------------+------------
Time to insert (ms)      | 7771.671  | 234989.152 | 30.24
Insertion rate (rows/ms) | 128.67    | 42.56      | 0.33
Total table size (MB)    | 147       | 1293       | 8.80
Data size (MB)           | 73        | 730        | 10.00
Index size (MB)          | 73        | 563        | 7.71

Let’s now look at the pros and cons of this type.

Pros

  • There are no pros that I am aware of to storing a UUID as VARCHAR(36)

Cons

  • The insertion rate drops to about 30% of its 1 million row value as we grow to 10 million records.
  • It is not space efficient. It takes about 1.6 times the space of the native UUID type.

BIGSERIAL Internal, UUID External

This combines the bigserial and UUID types. We use bigserial as the primary key of the table and a UUID for sharing externally. So rather than storing one key, we store two.

create table exp_bs_uuid(id bigserial primary key, u uuid not null default gen_random_uuid());
create unique index u_idx on exp_bs_uuid(u);
\d+ exp_bs_uuid
                                               Table "public.exp_bs_uuid"
 Column |  Type  | Collation | Nullable |                 Default                 | Storage | Stats target | Description
--------+--------+-----------+----------+-----------------------------------------+---------+--------------+-------------
 id     | bigint |           | not null | nextval('exp_bs_uuid_id_seq'::regclass) | plain   |              |
 u      | uuid   |           | not null | gen_random_uuid()                       | plain   |              |
Indexes:
    "exp_bs_uuid_pkey" PRIMARY KEY, btree (id)
    "u_idx" UNIQUE, btree (u)
Access method: heap
insert into exp_bs_uuid(id) select g.id from generate_series(1,1000000) as g(id);
INSERT 0 1000000
Time: 6222.200 ms (00:06.222)
select pg_size_pretty(pg_total_relation_size('exp_bs_uuid')) as total_table_size;
 pg_size_pretty
----------------
 101 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_bs_uuid')) as data_size;
 data_size
-----------
 50 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_bs_uuid')) as index_size;
 index_size
------------
 52 MB
(1 row)

Let’s now insert 10 million records. Here truncating first is mandatory: the new series of ids (1 to 10,000,000) would otherwise collide with the 1 million ids already in the table.

truncate table exp_bs_uuid;
insert into exp_bs_uuid(id) select g.id from generate_series(1,10000000) as g(id);
INSERT 0 10000000
Time: 111950.836 ms (01:51.951)

Size of the table

select pg_size_pretty(pg_total_relation_size('exp_bs_uuid')) as total_table_size;
 total_table_size
------------------
 1099 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_bs_uuid')) as data_size;
 data_size
-----------
 498 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_bs_uuid')) as index_size;
 index_size
------------
 601 MB
(1 row)

This is how the above stacks up:

Parameters               | 1 million | 10 million | Growth rate
-------------------------+-----------+------------+------------
Time to insert (ms)      | 6222.200  | 111950.836 | 17.99
Insertion rate (rows/ms) | 160.71    | 89.32      | 0.56
Total table size (MB)    | 101       | 1099       | 10.88
Data size (MB)           | 50        | 498        | 9.96
Index size (MB)          | 51        | 601        | 11.78

Let’s now look at the pros and cons of this type.

Pros

  • It combines the best of both worlds: you can use the efficient bigint in your queries and joins, and share the UUID with external users (see the lookup sketch after this list)
  • Table size grows linearly from 1 million to 10 million rows
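
At the system boundary you resolve the externally supplied UUID to the internal id once, then use the compact bigint everywhere else. A minimal sketch, using a hypothetical UUID value:

-- resolve an externally supplied UUID to the internal key via the u_idx index
select id from exp_bs_uuid where u = '123e4567-e89b-12d3-a456-426614174000'::uuid;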

Cons

  • Insertion rate drops to half as we grow from 1 million records to 10 million records

ULID

ULIDs (Universally Unique Lexicographically Sortable Identifiers) are an alternative to UUIDs. They are canonically encoded as a 26-character string, as opposed to the 36-character UUID, and they sort monotonically even when multiple IDs are generated within the same millisecond. The first 48 bits encode Unix time in milliseconds and the remaining 80 bits are random. You can read more in the ULID spec: https://github.com/ulid/spec

I have used the following ULID implementation: https://github.com/geckoboard/pgulid.
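
Loading that file defines a generate_ulid() function. A quick sanity check (the output differs on every call, and the first 10 characters encode the timestamp):

select generate_ulid();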

create table exp_ulid(id VARCHAR(26) primary key default generate_ulid(), n bigint not null);
insert into exp_ulid(n) select g.n from generate_series(1,1000000) as g(n);
INSERT 0 1000000
Time: 18831.113 ms (00:18.831)
select pg_size_pretty(pg_total_relation_size('exp_ulid'));
 pg_size_pretty
----------------
 113 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_ulid')) as data_size;
 data_size
-----------
 65 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_ulid')) as index_size;
 index_size
------------
 47 MB
(1 row)

Let’s now insert 10 million records, again truncating the table first so we measure a fresh load of exactly 10 million rows.

truncate table exp_ulid;
insert into exp_ulid(n) select g.n from generate_series(1,10000000) as g(n);
INSERT 0 10000000
Time: 335824.005 ms (05:35.824)

Size of the table

select pg_size_pretty(pg_total_relation_size('exp_ulid')) as total_table_size;
 total_table_size
------------------
 1125 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_ulid')) as data_size;
 data_size
-----------
 651 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_ulid')) as index_size;
 index_size
------------
 474 MB
(1 row)

This is how the above stacks up:

Parameters               | 1 million | 10 million | Growth rate
-------------------------+-----------+------------+------------
Time to insert (ms)      | 18831.113 | 335824.005 | 17.83
Insertion rate (rows/ms) | 53.10     | 29.78      | 0.56
Total table size (MB)    | 113       | 1125       | 9.96
Data size (MB)           | 65        | 651        | 10.02
Index size (MB)          | 47        | 474        | 10.09

Let’s now look at the pros and cons of this type.

Pros

  • Lexicographically sortable
  • Canonically encoded as a 26 character string, as opposed to the 36 character UUID
  • Uses Crockford’s base32 for better efficiency and readability (5 bits per character)

Cons

  • Postgres does not support ULIDs natively yet, and the implementation I used is not performant. You might want to generate them on the client side
  • Time-based identifiers like UUID v1 or ULID can leak (surprise!) timing information. If you can see them, you can infer the rate at which some resource is being created, for example the rate at which a service is creating users or business transactions. That can be valuable competitive information you don’t want to share with the world.

UUID v6

Lastly, I also experimented with UUID v6 using the fn_uuid_time_ordered() function from https://gist.github.com/fabiolimace/515a0440e3e40efeb234e12644a6a346.
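
With that function loaded, you can generate a value directly; each call embeds the current timestamp in the high bits, so successive values sort in generation order:

select fn_uuid_time_ordered();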

create table exp_uuid_v6(id uuid primary key default fn_uuid_time_ordered(), n bigint not null);
insert into exp_uuid_v6(n) select g.n from generate_series(1,1000000) as g(n);
INSERT 0 1000000
Time: 23132.748 ms (00:23.133)
select pg_size_pretty(pg_total_relation_size('exp_uuid_v6'));
 pg_size_pretty
----------------
 80 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_uuid_v6')) as data_size;
 data_size
-----------
 50 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_uuid_v6')) as index_size;
 index_size
------------
 30 MB
(1 row)

Let’s now insert 10 million records, again truncating the table first so we measure a fresh load of exactly 10 million rows.

truncate table exp_uuid_v6;
insert into exp_uuid_v6(n) select g.n from generate_series(1,10000000) as g(n);
INSERT 0 10000000
Time: 227129.634 ms (03:47.130)

Size of the table

select pg_size_pretty(pg_total_relation_size('exp_uuid_v6')) as total_table_size;
 total_table_size
------------------
 798 MB
(1 row)

Data vs index size growth

select pg_size_pretty(pg_relation_size('exp_uuid_v6')) as data_size;
 data_size
-----------
 498 MB
(1 row)
select pg_size_pretty(pg_indexes_size('exp_uuid_v6')) as index_size;
 index_size
------------
 301 MB
(1 row)

This is how the above stacks up:

Parameters               | 1 million | 10 million | Growth rate
-------------------------+-----------+------------+------------
Time to insert (ms)      | 23132.748 | 227129.634 | 9.82
Insertion rate (rows/ms) | 43.23     | 44.03      | 1.02
Total table size (MB)    | 80        | 798        | 9.98
Data size (MB)           | 50        | 498        | 9.96
Index size (MB)          | 30        | 301        | 10.03
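
Note how, unlike the random UUID variants, the insertion rate holds steady from 1 million to 10 million rows: time-ordered values always land on the rightmost b-tree leaf, much like bigserial, so here the PL/pgSQL generation function itself appears to be the bottleneck.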
