The Year 2038 problem could begin today. Like the Y2K problem, it is a date-storage limitation: many operating systems cannot handle dates after about 3:14 AM Universal Time on January 19th, 2038. If your bank is handling a 30-year mortgage starting today, funny things could happen starting now.
The Y2K problem occurred because the amount of space allocated in computer hardware and software to store the date was insufficient to handle a year greater than 1999. A huge amount of effort and money was spent in preparation for Y2K. Arguments have been made that the problem was overblown (among them the fact that countries that spent less time and money on the problem did not have extra difficulties compared to countries where huge efforts and piles of money were spent). Arguments have also been made that the efforts were not overblown. The latter arguments mostly consist of things like “Well, we don’t know if we spent too much money on Y2K, but as a result, you got a shiny new computer, so that’s good, right?”
The Y2K38 problem is similar but different. This problem is caused by the fact that Unix-like systems (and, more broadly, software written in the C programming language) tend to use a type known as time_t to store dates. The dates are counted in seconds since January 1, 1970. Older dates are sometimes represented as negative numbers.
The number that is stored in this place (time_t) is in many cases a 32 bit signed integer. This means no decimal places (integer), but it can be positive or negative (signed). A 32 bit signed integer has a maximum positive value of 2,147,483,647, which, counted as seconds since January 1, 1970, translates to January 19, 2038, 03:14:07 UTC.
That is on 32 bit systems. How many bits a system has depends partly on hardware and partly on software. Generally, desktop PCs and certain larger computers use a 32 bit architecture, but increasingly, PCs are manufactured with a 64 bit architecture, and the software that runs on them uses this architecture as well. Using time_t on a 64 bit machine yields a maximum date about 290 billion years in the future. So even if all of the computers and operating systems switch over to 64 bits, this problem will plague us once again. Eventually.
The Y2K38 problem has already surfaced once, according to a Wikipedia article:
In May 2006, reports surfaced of an early Y2038 problem in the AOLserver software. The software would specify that a database request should “never” time out by specifying a timeout date one billion seconds in the future. One billion seconds (just over 31 years 251 days and 12 hours) after 21:27:28 on 12 May 2006 is beyond the 2038 cutoff date, so after this date, the timeout calculation overflowed and calculated a timeout date that was actually in the past, causing the software to crash.
In my view, this problem belongs to a larger category of problems that arise from the link between hardware, software, and real life. Computers use binary numbers, and these binary numbers are usually stored in locations of a fixed width determined by the hardware. So, for example, an 8 bit system stores everything in physical locations that are either 8 or 16 bits wide (a single or a double word). Within that fixed width, supporting a decimal point or a sign (positive or negative) uses up some of the storage space available for the number itself.
Binary is different from decimal when it comes to certain calculations. Division does not produce exactly the same results in binary and decimal systems. So, you are working in decimal with almost everything you do, but the computer you use may be translating back and forth between decimal and binary, doing the calculations in binary, and giving you converted results. Since binary and decimal systems are different, you can get strange results. For instance, 9 divided by 100 using software that does “real” decimal division is, not surprisingly, 0.09. But software that does not emulate decimal calculations, but rather simply converts the numbers to binary, does the calculation, and converts the result back to decimal, may give you 0.08999996. Oops.
Most computer languages in use today (but not all!) emulate true decimal calculations one way or another. This has a few disadvantages. There is a loss of efficiency in storage space, a loss of speed, and the somewhat more esoteric problem that the solution is a kludge re-implemented by a range of different implementors. Thus, if you write a computer program in one language and “port” it to a different language, or in the same language to a different system, you can’t necessarily be sure that the kludge is working the same way.
It seems to me that the ability to have a number of arbitrary size, sign, and precision, and to be unambiguously correctly manipulated in a commonly used base system (like decimal) should be something that happens very close to the hardware level. That is actually true to some extent now because there are machine components that process the math this way if they are available and if they are used by the software (math processors). But the fact that this hardware may or may not exist and may or may not be used is just more of the same … a kludge.
Rather than fixing Y2K, or Y2K38, or rounding errors in calculations on an ad hoc basis, we need a decimal counting machine.
By the way, I set the scheduled posting time of this blog post to January 19th, 3:14:07 AM to see what would happen. But just to be sure, I used EST, not UTC. I don’t want to take any chances….
Year 2038 problem. (2008, January 16). In Wikipedia, The Free Encyclopedia. Retrieved 13:18, January 16, 2008, from http://en.wikipedia.org/w/index.php?title=Year_2038_problem&oldid=184617273