Developer · Mar 15, 2026 · 6 min read

Unix Timestamps Explained: What They Are and How to Convert Them

Open any server log, examine an API response, or look at a database record's created_at field and you will likely see a number like 1741996800. This is a Unix timestamp — a count of seconds elapsed since a specific moment in history. Understanding what that number represents, and how to convert it to a human-readable date, is one of those foundational developer skills that never becomes obsolete.

What Is a Unix Timestamp?

A Unix timestamp is the number of seconds that have elapsed since January 1, 1970, 00:00:00 UTC — a moment known as the Unix epoch. This reference point was chosen by the early Unix developers as a convenient, round number in the recent past. The timestamp increments by one every second, continuously, regardless of timezone or daylight saving time.

As of early 2026, the Unix timestamp is approximately 1.77 billion. The number grows by 86,400 every day (60 seconds × 60 minutes × 24 hours). A timestamp of 0 represents the epoch itself; timestamps before 1970 are negative.
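The arithmetic above is easy to check in code. A quick sketch (the helper name is ours, for illustration):

```javascript
// One day is 60 seconds × 60 minutes × 24 hours = 86,400 seconds.
const SECONDS_PER_DAY = 60 * 60 * 24; // 86400

// Whole days elapsed between the epoch and a given seconds timestamp.
function daysSinceEpoch(timestampSeconds) {
  return Math.floor(timestampSeconds / SECONDS_PER_DAY);
}

// Timestamp 0 is the epoch itself: January 1, 1970, 00:00:00 UTC.
console.log(new Date(0).toISOString()); // "1970-01-01T00:00:00.000Z"
```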

Why Timestamps Are Universal

The core advantage of Unix timestamps is that they are timezone-independent. When two servers in different timezones record the same event, they record the same timestamp. When you store a timestamp in a database, you don't need to worry about what timezone the data was written from — it's always UTC seconds from epoch. Timezone conversions happen at display time, not at storage time. This eliminates an entire category of date-related bugs.

Timestamps are also trivially sortable (higher number = later time), easy to compare (subtract two timestamps to get the elapsed seconds), and compact to store (a 32-bit integer holds dates through 2038; a 64-bit integer covers billions of years).
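These properties fall out of plain arithmetic. A small sketch (the event names and values are made up for illustration):

```javascript
// Timestamps sort numerically: a higher value is a later moment.
const events = [
  { name: "deploy", ts: 1741996800 },
  { name: "build",  ts: 1741993200 },
  { name: "test",   ts: 1741994400 },
];

// Chronological order is just a numeric sort on the timestamp.
events.sort((a, b) => a.ts - b.ts);

// Elapsed time is plain subtraction — no timezone logic required.
const elapsedSeconds = events[2].ts - events[0].ts; // 3600 seconds = 1 hour
```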

The Milliseconds vs Seconds Gotcha

JavaScript's Date.now() returns milliseconds since epoch, not seconds. So does the getTime() method on a Date object. Many other platforms — Java's System.currentTimeMillis(), for example — also use milliseconds. Unix itself traditionally uses seconds.

The practical implication: if you receive a timestamp of 1741996800000 (13 digits), it's in milliseconds. A timestamp of 1741996800 (10 digits) is in seconds. Mixing them up produces dates in January 1970 (if you treat milliseconds' worth of value as milliseconds but the number was seconds — i.e., you treat seconds as ms) or dates tens of thousands of years in the future (if you treat ms as seconds). Always check which unit an API returns.
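A common defensive pattern is to normalize every incoming timestamp to one unit using the digit-count heuristic above. A sketch (the helper name and the 1e12 cutoff are our choices):

```javascript
// Normalize a timestamp to seconds, guessing the unit from its magnitude.
// A value at or above 1e12 can't plausibly be seconds (1e12 seconds is
// roughly the year 33,000), so it's treated as milliseconds.
function toSeconds(timestamp) {
  return timestamp >= 1e12 ? Math.floor(timestamp / 1000) : timestamp;
}

toSeconds(1741996800000); // 1741996800 — 13 digits, treated as ms
toSeconds(1741996800);    // 1741996800 — 10 digits, already seconds
```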

Converting Timestamps in JavaScript

To get the current timestamp in seconds: Math.floor(Date.now() / 1000). To convert a seconds timestamp to a JavaScript Date: new Date(timestamp * 1000). The Date constructor expects milliseconds, so multiply by 1000. To format it as a readable string: new Date(ts * 1000).toISOString() gives ISO 8601 format; toLocaleString() formats according to the user's locale.
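Put together, the conversions above look like this:

```javascript
// Current Unix timestamp in seconds (Date.now() returns milliseconds).
const nowSeconds = Math.floor(Date.now() / 1000);

// Seconds → Date: the Date constructor expects milliseconds.
const ts = 1741996800;
const date = new Date(ts * 1000);

console.log(date.toISOString());    // "2025-03-15T00:00:00.000Z"
console.log(date.toLocaleString()); // formatted for the user's locale

// Date → seconds: getTime() returns milliseconds, so divide by 1000.
const backToSeconds = Math.floor(date.getTime() / 1000); // 1741996800
```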

Negative Timestamps and the Y2K38 Problem

Negative Unix timestamps represent dates before January 1, 1970 — useful for historical records. The Unix epoch itself isn't a meaningful historical limit; it's just when the clock starts at zero.
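For instance, a timestamp of -86400 is exactly one day before the epoch:

```javascript
// Negative seconds count backward from January 1, 1970.
const dayBefore = new Date(-86400 * 1000);
console.log(dayBefore.toISOString()); // "1969-12-31T00:00:00.000Z"
```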

The Y2K38 problem (also called the Unix epoch overflow) is a known issue: on January 19, 2038, a 32-bit signed integer used to store Unix timestamps reaches its maximum value at 03:14:07 UTC, and one second later wraps to a large negative number, representing December 13, 1901. Systems still using 32-bit timestamps — embedded devices, some legacy databases — will be affected. Modern systems use 64-bit timestamps, which won't overflow for approximately 292 billion years.
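The wraparound can be simulated in JavaScript, which can force 32-bit signed integer arithmetic with bitwise operators:

```javascript
// The largest value a 32-bit signed integer can hold: 2^31 - 1.
const MAX_INT32 = 2147483647;

// That maximum corresponds to the last representable moment.
console.log(new Date(MAX_INT32 * 1000).toISOString());
// "2038-01-19T03:14:07.000Z"

// Simulate the overflow: `| 0` truncates to 32-bit signed arithmetic,
// so adding one second wraps the counter to the minimum value.
const wrapped = (MAX_INT32 + 1) | 0; // -2147483648
console.log(new Date(wrapped * 1000).toISOString());
// "1901-12-13T20:45:52.000Z"
```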

Use our Unix Timestamp Converter to convert any timestamp to a readable date or convert a date to its timestamp — instantly in your browser, with support for both seconds and milliseconds.