comp.lang.ada
From: Robert Dewar <robert_dewar@my-deja.com>
Subject: Re: Sequential_IO
Date: 1999/08/19
Message-ID: <7pi0jo$ko3$1@nnrp1.deja.com>
In-Reply-To: 7phfpm$78r$1@nnrp1.deja.com

In article <7phfpm$78r$1@nnrp1.deja.com>,
  Ted Dennison <dennison@telepath.com> wrote:
> Not here. My vote goes for DOS's upper memory/lower
> memory/extended memory foolishness.

This is of course off-topic, but the above does need a response.
This was not an "operating system" idea, but rather a hardware
specification that is fundamental to the early x86
architectures. I don't see how an OS could somehow completely
hide the fact that you have two completely different kinds
of memory.

The trick that was used to squeeze an extra 64K of real-mode
memory out of the space just above the one-megabyte boundary by
manipulating the A20 address line was indeed kludgy, but in a
battle with a very difficult architecture, one can hardly blame
the OS for failing to mask all the fundamental difficulties in
the architecture when there really was no simple way to do so.
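
To make the arithmetic concrete, here is a rough sketch in Ada
(this being comp.lang.ada, after all) of what the A20 gate
amounts to; the names and types are purely illustrative. A
real-mode address is just segment * 16 + offset, and with the
gate masked the carry into bit 20 is simply dropped, exactly as
on the 8086.

   with Ada.Text_IO; use Ada.Text_IO;

   procedure A20_Demo is
      type Linear is mod 2**32;

      One_MB : constant Linear := 16#10_0000#;

      --  Real-mode address arithmetic: Segment * 16 + Offset.
      --  With the A20 gate masked (the original 8086 behaviour)
      --  the carry into bit 20 is lost and the address wraps
      --  back to low memory.
      function Physical
        (Segment, Offset : Linear;
         A20_Enabled     : Boolean) return Linear
      is
         Raw : constant Linear := Segment * 16 + Offset;
      begin
         if A20_Enabled then
            return Raw;
         else
            return Raw mod One_MB;
         end if;
      end Physical;

   begin
      --  FFFF:0010 lands at 16#10_0000#, the first byte of the
      --  64K "high memory area", when A20 is enabled, but wraps
      --  to address 0 when it is masked.
      Put_Line (Linear'Image (Physical (16#FFFF#, 16#0010#, True)));
      Put_Line (Linear'Image (Physical (16#FFFF#, 16#0010#, False)));
   end A20_Demo;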

This kind of thing is just not type-comparable at all with the
decision to make file names case sensitive in Unix, which is
indeed a pure OS design decision.

By the way, even complaining at the kludgy memory architecture
of the early x86 is off base to some extent. The history is
interesting (read more about it in my Microprocessors book if
you want).

Intel wanted the 8086 to be a simple upgrade to the 8080 that
would retain upwards compatibility in the instruction set,
while allowing 128K of memory by dual code/data addressing
a la PDP11 (at the time, Intel thought that nearly all their
customers could manage fine with 64K, but that some really
huge applications might possibly need 128K, so, planning for
the future .... :-)

Steve Morse designed the segmented architecture of the 8086 to
provide up to a megabyte of address space without making the
program-visible addresses, or the registers, any longer than
16 bits. Intel was VERY concerned about code size, so they really
wanted to stay with a 16-bit design and avoid 32-bit addresses.
Steve's design was a clever way of meeting this requirement
while avoiding the 128K restriction. Just imagine if the
horrible 640K limit had instead been 80K :-)
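
To put numbers on it: a 16-bit segment value is shifted left
four bits and a 16-bit offset is added, so the largest address
a real-mode program can form is 16#FFFF# * 16 + 16#FFFF#, about
one megabyte plus 64K, where a single flat 16-bit address would
have stopped at 64K. A throwaway snippet (names mine, just for
illustration):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Real_Mode_Limit is
      type Address is range 0 .. 16#20_0000#;

      --  Segment:offset arithmetic: physical = segment * 16 + offset.
      Max_Address : constant Address := 16#FFFF# * 16 + 16#FFFF#;
      --  = 16#10FFEF#: one megabyte plus just under 64K.
   begin
      Put_Line ("Top of real-mode memory =" & Address'Image (Max_Address));
   end Real_Mode_Limit;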

Of course I suppose you could say that if the 8088 had been
that limited, IBM would have been more likely to choose the
68K architecture for the PC. Wouldn't it have made a difference
if PCs had used that much cleaner architecture from the start?

It seems amazing that most computing today is still done on an
architecture with the dubious distinction of having no two
registers that are functionally equivalent (and only 8 of
them).


Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.





Thread overview: 13+ messages
1999-08-18  0:00 Sequential_IO Shawn Barber
1999-08-18  0:00 ` Sequential_IO Marin David Condic
1999-08-18  0:00 ` Sequential_IO Ted Dennison
1999-08-18  0:00 ` Sequential_IO Larry Kilgallen
1999-08-19  0:00 ` Sequential_IO Shawn Barber
1999-08-19  0:00   ` Sequential_IO David C. Hoos, Sr.
1999-08-19  0:00   ` Sequential_IO Marin David Condic
1999-08-19  0:00   ` Sequential_IO Ted Dennison
1999-08-19  0:00 ` Sequential_IO Tucker Taft
1999-08-19  0:00   ` Sequential_IO Ted Dennison
1999-08-19  0:00     ` Robert Dewar [this message]
1999-08-21  0:00     ` Sequential_IO Simon Wright
1999-08-19  0:00   ` Sequential_IO Marin David Condic