From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: Overflows (lisp fixnum-bignum conversion)
Date: 1997/04/07
Message-ID: #1/1
References: <1997Apr2.202514.1843@nosc.mil> <01bc42b0$a88691c0$90f482c1@xhv46.dial.pipex.com> <1997Apr7.130018.1@eisner>
Organization: New York University
Newsgroups: comp.lang.ada

Larry said <>

First, the 10 GB drives you are used to using on small machines are by no
means the largest storage devices in use today; many large mainframe
installations approach terabyte storage capacity, and some exceed it.

Second, your calculation assumes that you are linearly mapping the disks
into memory. That's not at all the way it works; instead you would map them
into a hierarchical arrangement, allowing LOTS of extra space for expansion
in each node. The whole idea of a 64-bit address space is that you can
"waste" it lavishly to allow for growth of structures, and not run out.
A 64-bit space covers 2**64 bytes, so even if you reserved a full terabyte
(2**40 bytes) of address space for every node, you could still have 2**24
(over sixteen million) such nodes.

For instance, in an Ada environment on a 64-bit machine, it makes perfectly
good sense to give every task a gigabyte stack, even if you have thousands
of tasks, but you cannot use this approach at all on a 32-bit machine.
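
As a concrete sketch of that last point (not anything taken from the original
thread), here is a minimal Ada 95 example of asking for a large per-task
stack with the standard Storage_Size pragma. The Big_Stacks procedure, the
Worker task type, and the pool size are made-up names for illustration only;
whether the runtime really reserves a full gigabyte per task is
implementation-defined, and on a 32-bit system the elaboration would normally
fail with Storage_Error after only a handful of such tasks.

   with Ada.Text_IO;

   procedure Big_Stacks is

      --  Each task of this (hypothetical) type requests a 2**30-byte
      --  (1 GB) stack via the standard Storage_Size pragma.
      task type Worker is
         pragma Storage_Size (2 ** 30);
      end Worker;

      task body Worker is
      begin
         Ada.Text_IO.Put_Line ("Worker running");
      end Worker;

      --  On a 64-bit machine, reserving a gigabyte of address space for
      --  each of these tasks is cheap; on a 32-bit machine the total
      --  reservation would not even fit in the address space.
      Pool : array (1 .. 8) of Worker;

   begin
      null;  --  The main procedure simply waits for the workers to finish.
   end Big_Stacks;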