Today I had yet another conversation with a friend who wanted some help creating the schema for a service he was building. The problem he had was small, but in our conversation we came across the issue of logging, and I eventually told him about the Bark Project. He was quick to exclaim, “The primary key is an integer? Shouldn’t you be using a bigger data type, like a UUID or something?!”

The answer I gave him gives rise to this post. This post is for him; he was the third person to tell me that a BIGSERIAL primary key was short-sighted. So here is some math.

How Big is BIGINT?

BIGINT is a 64-bit, signed integer type in PostgreSQL. There is a counterpart in other databases, and any serious programming language also has a type for a signed, 64-bit integer.

BIGSERIAL is syntactic sugar that PostgreSQL has for a positive, auto-incrementing BIGINT. It is basically just a BIGINT column whose default value is taken from a sequence. If you want to learn more, search for those terms on Google. There are tons of articles (and the official PostgreSQL documentation) that explain them well enough.

So how big is BIGINT? It is 64 bits wide. But since we are talking about positive integers, only 63 of those bits are available to us.
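A quick Python check (Python, not PostgreSQL, but the arithmetic is the same) confirms the point about the sign bit:

```python
# A signed 64-bit integer spends one bit on the sign, leaving 63 bits
# for the magnitude. The largest BIGINT value therefore fits in 63 bits.
max_bigint = 2**63 - 1
print(max_bigint)               # 9223372036854775807
print(max_bigint.bit_length())  # 63
```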

How big is it in reality?

So we are talking about auto-incrementing from (typically) 1 up to 2^63 − 1 (2 raised to the power of 63, minus one). The value increments on every row inserted into the database. That upper limit is 9223372036854775807. Some people still don’t get the idea, so let me show it another way:

  • 2^63 = 2^32 * 2^31
  • 2^31 is a little over 2 Billion! (2147483648 to be exact)
  • 2^32 is a little over 4 Billion! (4294967296 to be exact)
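The factorization in the list above is easy to verify; here is a short Python sanity check (Python integers are arbitrary-precision, so there is no overflow to worry about):

```python
# Sanity-check the arithmetic behind a signed 64-bit primary key.
assert 2**63 == 2**32 * 2**31   # the factorization used above
print(2**31)  # 2147483648 -- a little over 2 billion
print(2**32)  # 4294967296 -- a little over 4 billion
print(2**63 - 1)  # 9223372036854775807 -- the BIGINT upper limit
```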

Given that math and the conditions laid out above, we have this fact (read it and take some time to absorb it): if you insert a little over 2 billion (2^31) records per second, it would take you a little over 4 billion (2^32) seconds to reach that limit. How much is 4 billion seconds, you ask?

4294967296 / (60 * 60 * 24 * 365.25) = 136.099300834

The math above calculates the number of years it would take to fill up the BIGSERIAL column (as a primary key) if you kept inserting about 2 billion records per second. And yes, it would take more than 136 years!

What if you are inserting just about a million records per second? It would take more than 292,000 years to reach that upper limit!
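Both figures can be reproduced with a couple of lines of Python (using a Julian year of 365.25 days, as in the division above):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25   # Julian year, as above

# ~2 billion (2^31) inserts per second exhausts the range in 2^32 seconds.
years_at_2b_per_sec = 2**32 / SECONDS_PER_YEAR
print(round(years_at_2b_per_sec, 1))   # 136.1

# At a million inserts per second:
years_at_1m_per_sec = (2**63 - 1) / 1_000_000 / SECONDS_PER_YEAR
print(round(years_at_1m_per_sec))      # 292271
```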

Worry about your age, not BIGINT

I am quite sure we would face other, bigger problems before that much time passes, including our own deaths. No one reading this article in the year 2025 (or even 2045) will have to worry about filling up a BIGSERIAL column.


People seriously underestimate how big a 64-bit signed integer actually is for primary keys. Please stop doing it. You might need a UUID as a primary key for any number of reasons; I assure you that “running out of 64-bit integers” is not one of them, unless you are Facebook and you store every single event taking place on the site in a single table with a BIGSERIAL primary key.

PS: Yes, I wrote this on the 1st of April, 2025. No, it is not a joke or an attempt to fool anyone.