In low-level programming, we usually have to specify the size of a variable. There are many types, including int32 and int64, which represent numbers in binary using 32 or 64 bits, respectively.

The number 4, stored as an int32, looks like this:

00000000 00000000 00000000 00000100

That’s it – 32 zeros or ones to represent a number.

There is a whole arithmetic system behind this concept, called binary arithmetic, which uses 0s and 1s to represent ordinary numbers. The concept is simple, and by observing how the binary form changes with each increment, you can see how it works:

Decimal   Binary
0         00000000 00000000 00000000 00000000
1         00000000 00000000 00000000 00000001
2         00000000 00000000 00000000 00000010
3         00000000 00000000 00000000 00000011
4         00000000 00000000 00000000 00000100
5         00000000 00000000 00000000 00000101
6         00000000 00000000 00000000 00000110
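
You can reproduce this table (including the representation of 4 shown earlier) with a few lines of Go. This is a minimal sketch using the %032b format verb, which zero-pads to 32 binary digits; the spaces between bytes in the table above are only cosmetic:

// binary-increments.go :

package main

import "fmt"

func main() {
	// print each number together with its 32-bit binary representation
	for i := int32(0); i <= 6; i++ {
		fmt.Printf("%d\t%032b\n", i, i)
	}
}

// result : each decimal value followed by its 32-bit binary form,
// e.g. 4 -> 00000000000000000000000000000100 (without the cosmetic spaces between bytes)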

We use this system because computers can only store 0s and 1s, representing two electrical states: charged (1) and not charged (0). A computer’s memory consists of billions of tiny cells, each of which is either electrically charged (1) or not (0).

In low-level programming, specifying the amount of memory needed means defining the maximum number of 0s and 1s (bits) used for each value.
An int32 has 32 bits and can store values from -2147483648 to 2147483647 (if it is a signed int, where the first bit indicates the sign). An int64 has 64 bits and can store values from -9223372036854775808 to 9223372036854775807 (again, if it is signed).

Type           Min (Decimal)            Max (Decimal)
Signed int32   -2147483648              2147483647
Signed int64   -9223372036854775808     9223372036854775807

Signed int32
  Min (Binary) : 10000000 00000000 00000000 00000000
  Max (Binary) : 01111111 11111111 11111111 11111111

Signed int64
  Min (Binary) : 10000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  Max (Binary) : 01111111 11111111 11111111 11111111 11111111 11111111 11111111 11111111
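
These limits are also available as constants in most languages; here is a minimal Go sketch that prints them using the standard math package:

// int-limits.go :

package main

import (
	"fmt"
	"math"
)

func main() {
	fmt.Println("int32:", int32(math.MinInt32), "to", int32(math.MaxInt32))
	fmt.Println("int64:", int64(math.MinInt64), "to", int64(math.MaxInt64))
}

// result :
// int32: -2147483648 to 2147483647
// int64: -9223372036854775808 to 9223372036854775807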

Time in computing is often saved as a timestamp, which counts the seconds from a specific moment – usually the “UNIX epoch”: 1970 Jan 1 00:00:00 UTC.

Unix Timestamp   Comment                              Date
0                                                     1970 Jan 1 00:00:00
1                                                     1970 Jan 1 00:00:01
2                                                     1970 Jan 1 00:00:02
86400            a day after the “epoch”              1970 Jan 2 00:00:00
1720080673       seconds from the “epoch” until now   2024 Jul 4 08:11:13
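
Converting a timestamp to a human-readable date (and back) is a one-liner in most languages; here is a minimal Go sketch using the standard time package:

// timestamp-to-date.go :

package main

import (
	"fmt"
	"time"
)

func main() {
	var ts int64 = 1720080673
	t := time.Unix(ts, 0).UTC()         // seconds since the UNIX epoch -> time.Time
	fmt.Println(t.Format(time.RFC3339)) // 2024-07-04T08:11:13Z
	fmt.Println(t.Unix())               // back to 1720080673
}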

This raises the question: what happens when we reach 2147483647, the maximum value of a signed int32, on systems that store timestamps in that type? This will occur on 2038 Jan 19 03:14:07 UTC, which is not far from now – just 14 years from 2024. And to understand whether this matters: do many systems store timestamps in int32? Yes, most used to, as it was the standard originating from UNIX.
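
The rollover moment is easy to compute. Here is a minimal Go sketch; the increment deliberately overflows the 32-bit counter, which in Go wraps around to the minimum value:

// year-2038.go :

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	var ts int32 = math.MaxInt32
	fmt.Println(time.Unix(int64(ts), 0).UTC().Format(time.RFC3339)) // 2038-01-19T03:14:07Z

	ts++ // one second later the 32-bit counter wraps around
	fmt.Println(time.Unix(int64(ts), 0).UTC().Format(time.RFC3339)) // 1901-12-13T20:45:52Z
}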

Many systems have already implemented a solution to this issue.

  • Linux, as a successor to Unix, used to store timestamps in 32-bit integers for 32-bit systems. With version 5.6 (2020), it switched to 64-bit. Still, individual applications, protocols, and file formats must be updated as well. [1]
  • Windows, in the 32-bit version of gmtime in the C runtime libraries, stores timestamps in 32-bit integers. Many individual applications have already addressed this issue, like Oracle’s Access Manager from version 10.1.4.3. [2]

However, some systems have not addressed this issue.

  • Many databases still store timestamps as int32 by default.
  • SAP’s S/4HANA, used for business management and the successor of the world’s most widely used ERP (SAP ERP), supports “finish” dates only up to 2048 Jan 19 (see the sketch after this list for where that date comes from).
    ( The situation is similar to the previously mentioned systems, except that the counter starts from 1980 Jan 1 instead of the UNIX epoch, 1970 Jan 1 00:00:00 )
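
To see where the 2048 limit comes from, assume a signed 32-bit seconds counter that simply starts at 1980 Jan 1 instead of 1970 Jan 1. This is an assumption made for illustration, not SAP’s documented internal format:

// epoch-1980.go :

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// hypothetical counter: signed 32 bits of seconds, counted from 1980 instead of 1970
	epoch1980 := time.Date(1980, time.January, 1, 0, 0, 0, 0, time.UTC)
	last := epoch1980.Add(time.Duration(math.MaxInt32) * time.Second)
	fmt.Println(last.Format(time.RFC3339))
}

// result : 2048-01-19T03:14:07Z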

A panacea for this issue is to store timestamps as 64-bit integers in any new system design, avoiding future migrations and fixes. Storing them in 64 bits, as Arnd Bergmann suggested for Linux [1], allows for a maximum timestamp of 9223372036854775807, or: 292277026596 Dec 4 15:30:07. This is roughly 292 billion years from now [3], far beyond our immediate concerns.

Type           Min (Date)                  Max (Date)
Signed int32   1901 Dec 13 20:45:52        2038 Jan 19 03:14:07
Signed int64   before the universe [4]     292277026596 Dec 4 15:30:07 [3]

However, this solution is not free, as storing values at double the length is not always optimal. For a large database, this can result in significant, and sometimes impractical, memory usage.
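
As a rough, back-of-the-envelope illustration (the row counts below are made-up numbers, not measurements from any real system):

// timestamp-overhead.go :

package main

import "fmt"

func main() {
	// extra storage from widening one timestamp column from int32 (4 bytes) to int64 (8 bytes)
	rows := []int64{1e6, 1e9, 1e12} // hypothetical table sizes
	for _, n := range rows {
		extraBytes := n * 4
		fmt.Printf("%d rows -> %.2f GB extra\n", n, float64(extraBytes)/1e9)
	}
}

// result :
// 1000000 rows -> 0.00 GB extra
// 1000000000 rows -> 4.00 GB extra
// 1000000000000 rows -> 4000.00 GB extra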

This journey through a man-made abstraction of time – from infinity down to plain 0s and 1s of a fixed size – leads us to a question: does storing all timestamps as int64 justify the cost, or are there better approaches for the future?

We are going to ask SAP about this:

Dear SAP Team,

I hope this message finds you well.

I’m a Computer Science undergraduate from the Hellenic Open University.

I am writing to seek your expert opinion on a matter concerning the storage of timestamps in databases. In low-level programming, specifying the length of a variable is crucial, and traditionally, many systems have used 32-bit integers (int32) for storing timestamps. However, this approach poses significant limitations, particularly with the impending 2038 problem when the maximum value for a signed int32 timestamp will be reached.

Given that some systems, including many databases, still use int32 for timestamps, I am curious about SAP's perspective on this issue. Specifically, I would like to know whether SAP recommends storing timestamps as 64-bit integers (int64) in databases for future-proofing, or if there are more efficient or practical solutions that should be considered.

While storing timestamps as int64 can provide a vast range, extending the timestamp limit significantly into the future, it also involves increased memory usage, which might not be optimal for all applications, particularly large databases.

Could you please share your insights on the following:

- Is storing timestamps as int64 a recommended practice for databases from SAP's perspective?
- Are there alternative solutions that balance future-proofing and memory efficiency more effectively?
- How does SAP handle timestamp storage in its own systems, such as S/4HANA, especially considering the 2048 limitation mentioned in your documentation?

Your expert guidance on this matter would be greatly appreciated, as it will help in making informed decisions for designing and maintaining robust database systems.

I would also like, with your permission, to post your answer on my blog. I’m writing an article about it here: https://blog.simplecode.gr/?p=1606

Thank you for your time and assistance.

Best regards,

Rantouan M. Achmet
Computer Science Undergraduate, Hellenic Open University

[1] https://lkml.org/lkml/2020/1/29/355?anz=web

[2] https://forums.oracle.com/ords/apexds/post/oracle-access-manager-5412#323523248071078005693687052982423282497

[3]

// max-int64-date.go :

package main

import (
	"fmt"
	"time"
)

func main() {
	var x int64 = 9223372036854775807
	fmt.Println(time.Unix(x, 0).Format(time.RFC3339))
}

// result : 292277026596-12-04T15:30:07Z

[4]

// minus-int64-date.go :

package main

import (
	"fmt"
	"time"
)

func main() {
	// -9223372036854775808 (the true minimum) bugs out and returns 292277026596-12-04T15:30:08Z,
	// so a smaller negative value is used instead
	var x int64 = -434700000000000000
	fmt.Println(time.Unix(x, 0).Format(time.RFC3339))
}

// result : -13775092659-02-02T00:00:00Z
// The age of the universe is estimated to be ~13.77 billion years.
