From mboxrd@z Thu Jan 1 00:00:00 1970
From: Robert Dewar
Subject: Re: Sequential_IO
Date: 1999/08/19
Message-ID: <7pi0jo$ko3$1@nnrp1.deja.com>
References: <0a0133f8.48529b21@usw-ex0102-014.remarq.com> <37BC277E.B17EEEFF@averstar.com> <7phfpm$78r$1@nnrp1.deja.com>
Organization: Deja.com - Share what you know. Learn what you don't.
Newsgroups: comp.lang.ada

In article <7phfpm$78r$1@nnrp1.deja.com>,
  Ted Dennison wrote:
> Not here. My vote goes for DOS's upper memory/lower
> memory/extended memory foolishness.

This is of course off-topic, but the above does need a response.

This was not an "operating system" idea, but rather a hardware characteristic that is fundamental to the early x86 architectures. I don't see how an OS could somehow completely hide the fact that you have two completely different segments of memory. The trick of adding nearly 64K of real-mode-addressable memory above the one-megabyte boundary by enabling the A20 line was indeed kludgy, but in a battle with a very difficult architecture, one can hardly blame the OS for failing to mask all the fundamental difficulties of the architecture when there really was no simple way to do so.

This kind of thing is just not type-comparable at all with the decision to make file names case sensitive in Unix, which is indeed a pure OS design decision.
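The segment:offset arithmetic and the A20 wraparound being discussed can be sketched in a few lines. This is an illustrative Python sketch of the hardware's address formation, not any real API; the function name is made up:

```python
def linear_address(segment: int, offset: int, a20_enabled: bool = False) -> int:
    """Real-mode physical address: segment * 16 + offset.

    Two 16-bit values yield a 20-bit (1 MB) address space. On the 8086
    the carry out of bit 19 is simply lost (the address wraps at 1 MB);
    on the 286 and later, enabling the A20 line preserves that bit,
    exposing roughly 64K more (the High Memory Area) to real-mode code.
    """
    addr = (segment << 4) + offset
    if not a20_enabled:
        addr &= 0xFFFFF  # 8086 behaviour: wrap at the 1 MB boundary
    return addr

# The highest address an 8086 can form, FFFF:FFFF, wraps back below 64K:
assert linear_address(0xFFFF, 0xFFFF) == 0xFFEF
# With the A20 line enabled, the same segment:offset reaches 0x10FFEF,
# just under 1 MB + 64K -- the extra real-mode memory mentioned above:
assert linear_address(0xFFFF, 0xFFFF, a20_enabled=True) == 0x10FFEF
```

Note that many segment:offset pairs alias the same physical address (e.g. 0000:0010 and 0001:0000), which is exactly what makes the wraparound at FFFF:xxxx exploitable.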
By the way, even complaining about the kludgy memory architecture of the early x86 is off base to some extent. The history is interesting (read more about it in my Microprocessors book if you want).

Intel wanted the 8086 to be a simple upgrade to the 8080 that would retain upward compatibility in the instruction set, while allowing 128K of memory by dual code/data addressing a la the PDP-11 (at the time, Intel thought that nearly all their customers could manage fine with 64K, but that some really huge applications might possibly need 128K, so, planning for the future .... :-)

Steve Morse designed the segmented architecture of the 8086 to provide up to a megabyte of address space without going to addresses longer than 16 bits, or registers longer than 16 bits. Intel was VERY concerned about code size, so they really wanted to stay with a 16-bit design and avoid 32-bit addresses. Steve's design was a clever way of meeting this requirement while avoiding the 128K restriction. Just imagine if the horrible 640K limit had instead been 80K :-)

Of course I suppose you could say that if the 8088 had been that limited, IBM would have been more likely to choose the 68K architecture for the PC. Wouldn't it have made a difference if PCs had used this much cleaner architecture from the start? It seems amazing that most computing today is still done on an architecture with the dubious distinction of having no two registers that are functionally equivalent (and only 8 of them).

Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.