From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.3 required=5.0 tests=BAYES_00,INVALID_MSGID autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,92471489ebbc99c6
X-Google-Attributes: gid103376,public
From: gwinn@ma.ultranet.com (Joe Gwinn)
Subject: Re: Y2K Issues
Date: 1998/10/19
Message-ID: #1/1
X-Deja-AN: 402997824
References: <362B53A3.64E266AB@res.raytheon.com>
X-Ultra-Time: 19 Oct 1998 23:38:11 GMT
X-Complaints-To: abuse@ultra.net
Organization: Gwinn Instruments
Newsgroups: comp.lang.ada
Date: 1998-10-19T00:00:00+00:00
List-Id:

In article , stt@houdini.camb.inmet.com (Tucker Taft) wrote:

> Chances are most Unix-hosted Ada compilers have a Y2038 problem (when
> the Unix 32-bit time goes negative).
>
> : I also note that the Ada 95 LRM (RM96;6.0) Ada.Calendar package defines
> : the Year_Number as range 1901..2099.  What happens in 2099?  The world
> : ends?
>
> There is admittedly a "Y2.1K" problem in Ada, but it is pretty benign, because
> it is so straightforward to extend the definition of Calendar.Year_Number
> to be 1901..2999 (or more).  No code outside of Calendar would have to
> change.

UNIX/POSIX time rolls over in 2100 as well.  This is the same problem as
the 2038 problem, but with unsigned arithmetic wrapping around instead
of a signed count going negative.  The UNIX/POSIX time format consists
of two unsigned 32-bit integers, one being the number of seconds since
00:00 UTC 1 January 1970 AD (the "Epoch", or timescale zero), the other
being the number of nanoseconds into the current second.  (Ref: IEEE
Std 1003.1-1996, chapter 14)

> I presume the reason for the original range was to minimize the amount
> of code inside Calendar which needed to be devoted to leap year calculations.
> 1901..2099 is the longest consecutive interval which includes today for
> which the simple rule that every 4 years is a leap year applies.  Neither 1900
> nor 2100 is a leap year, due to the second-order correction in the
> leap year formula.

This is also partly the reason that UNIX/POSIX time is defined to roll
over in 2100 AD.  The other reason was to avoid the complexities of
multiprecision arithmetic, and the largest universally available
integer was and still is 32 bits.

> We saw no reason to change the Year_Number range during the
> Ada 9X revision.  We figured we wanted to leave something to the
> Ada 205X revision ;-).

By then, everybody will have at least 64-bit integer arithmetic, and
32-bit machines will be used only in toys and toasters.

Joe Gwinn
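
As a rough illustration only, the seconds-plus-nanoseconds pair described
above could be modeled in Ada along the following lines.  The package and
field names are made up for this sketch, not taken from any actual POSIX
binding.

   with Interfaces;

   package POSIX_Time_Sketch is

      --  Illustrative only: the seconds-plus-nanoseconds pair held in
      --  two unsigned 32-bit fields.  Interfaces.Unsigned_32 is a
      --  modular type, so arithmetic on Seconds_Since_Epoch wraps
      --  around silently at 2**32 instead of going negative the way
      --  a signed 32-bit seconds count does in 2038.
      type Timestamp is record
         Seconds_Since_Epoch : Interfaces.Unsigned_32;
         --  seconds since 00:00 UTC 1 January 1970 (the Epoch)
         Nanoseconds         : Interfaces.Unsigned_32;
         --  0 .. 999_999_999, nanoseconds into the current second
      end record;

   end POSIX_Time_Sketch;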
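
To make the quoted leap-year point concrete: within 1901 .. 2099 the
simple every-fourth-year rule gives the same answers as the full
Gregorian rule, and the century years 1900 and 2100 bounding that range
are exactly where the second-order correction first matters.  (Ada 95's
Ada.Calendar declares "subtype Year_Number is Integer range 1901 .. 2099;".)
A small self-contained check, with an arbitrary procedure name:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Leap_Check is

      --  Full Gregorian rule, including the century (second-order)
      --  correction: every fourth year, except century years, except
      --  every fourth century year.
      function Is_Leap (Year : Integer) return Boolean is
      begin
         return (Year mod 4 = 0 and Year mod 100 /= 0)
                or Year mod 400 = 0;
      end Is_Leap;

   begin
      --  Within 1901 .. 2099 the simple "every fourth year" rule gives
      --  the same answer as the full rule, so a Calendar limited to
      --  that range never needs the correction.  This loop prints
      --  nothing.
      for Year in 1901 .. 2099 loop
         if Is_Leap (Year) /= (Year mod 4 = 0) then
            Put_Line ("rules disagree at" & Integer'Image (Year));
         end if;
      end loop;

      --  The century years just outside that range are where the two
      --  rules differ.
      Put_Line ("1900 leap? " & Boolean'Image (Is_Leap (1900)));  --  FALSE
      Put_Line ("2000 leap? " & Boolean'Image (Is_Leap (2000)));  --  TRUE
      Put_Line ("2100 leap? " & Boolean'Image (Is_Leap (2100)));  --  FALSE
   end Leap_Check;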