Glossary

Unix timestamp

Seconds since the Unix epoch

A Unix timestamp (also called POSIX time, or seconds-since-epoch) is the number of seconds that have elapsed since 1970-01-01 00:00:00 UTC (the Unix epoch), not counting leap seconds. As of 2026-05-16 the current Unix timestamp is roughly 1,779,000,000.

Why this format dominates computing:

  • It’s a single integer. No timezone, calendar, locale, or DST ambiguity baked in.
  • Time arithmetic is trivial — subtract two timestamps to get a duration in seconds.
  • Sortable as numbers. Database indexes and time-series queries don’t need fancy comparators.
  • Compact in storage and over the wire.
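A minimal sketch of the first two points, using Python's stdlib and the example value 1700000000 from below (the specific numbers here are illustrative, not from the source):

```python
# Durations are plain subtraction — no calendar logic needed.
start = 1700000000
end = 1700003600
print(end - start)  # 3600 seconds, i.e. one hour

# Sorting events in time is just sorting integers.
events = [1700003600, 1700000000, 1700001800]
print(sorted(events))  # [1700000000, 1700001800, 1700003600]
```

No timezone, locale, or DST handling appears anywhere: that is the whole appeal.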

Three flavours you’ll meet:

  • Seconds. The original. 1700000000 = Nov 14 2023 22:13:20 UTC.
  • Milliseconds. JavaScript’s Date.now() returns this. Multiply the seconds value by 1000.
  • Microseconds / nanoseconds. High-resolution telemetry, distributed traces. Postgres’s timestamptz stores microseconds.
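Converting between the flavours is pure integer scaling. A quick sketch in Python (variable names are ours, not from any particular API):

```python
from datetime import datetime, timezone

ts_seconds = 1700000000
ts_millis = ts_seconds * 1000          # the scale JavaScript's Date.now() uses
ts_micros = ts_seconds * 1_000_000     # Postgres timestamptz resolution

# Going the other way: integer-divide back down to seconds.
assert ts_millis // 1000 == ts_seconds

print(datetime.fromtimestamp(ts_seconds, tz=timezone.utc))
# 2023-11-14 22:13:20+00:00
```

A classic bug is mixing the scales — treating a milliseconds value as seconds puts you tens of thousands of years in the future, so sanity-check the magnitude of incoming timestamps.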

Two well-known problems:

  • Year 2038 (Y2K38). A 32-bit signed integer of seconds overflows on 2038-01-19 at 03:14:07 UTC. Modern systems use 64-bit (good for ~292 billion years); legacy embedded systems may not.
  • Leap seconds. UTC inserts occasional leap seconds to stay in sync with Earth’s rotation. Unix time ignores them — during a leap second the timestamp repeats, so that instant is ambiguous. Most production systems (Google, AWS) instead use a “smear” that spreads the extra second across many hours of slightly lengthened seconds.
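The Y2K38 boundary is easy to demonstrate. A sketch that simulates a signed 32-bit counter wrapping (using ctypes purely to force 32-bit truncation):

```python
import ctypes
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # 2147483647

# The last instant a signed 32-bit time_t can represent:
print(datetime.fromtimestamp(INT32_MAX, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# One second later, a 32-bit counter wraps to a large negative number —
# which naive code then interprets as a date in 1901.
wrapped = ctypes.c_int32(INT32_MAX + 1).value
print(wrapped)  # -2147483648
```

A 64-bit counter has no practical limit, which is why modern kernels and databases moved to it.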

Convert Unix timestamps to readable dates (and back) by feeding them into a date library. The conversion to a wall-clock time always requires a timezone choice — the same instant is “3pm in NYC” and “midnight in Tokyo” depending on which zone you display it in. See our time zone converter.
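The timezone-choice point can be sketched with Python's stdlib zoneinfo (available since Python 3.9; the zone names are standard IANA identifiers):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ts = 1700000000  # one single instant...

utc = datetime.fromtimestamp(ts, tz=timezone.utc)
nyc = utc.astimezone(ZoneInfo("America/New_York"))
tokyo = utc.astimezone(ZoneInfo("Asia/Tokyo"))

print(utc)    # 2023-11-14 22:13:20+00:00
print(nyc)    # 2023-11-14 17:13:20-05:00  ...afternoon in New York
print(tokyo)  # 2023-11-15 07:13:20+09:00  ...next morning in Tokyo

# And back: every wall-clock view of the instant yields the same timestamp.
assert int(nyc.timestamp()) == ts
```

The timestamp itself never changes; only its rendering does.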

Published May 16, 2026