From: Brian Drummond
Newsgroups: comp.lang.ada
Subject: Re: Verified compilers?
Date: Fri, 9 Mar 2012 21:51:50 +0000 (UTC)

On Fri, 09 Mar 2012 06:43:03 -0800, Shark8 wrote:

> On Friday, March 9, 2012 7:56:34 AM UTC-6, Brian Drummond wrote:
>>
>> There was a lot of interesting hardware at the time. I was intrigued by
>> the similarity between Transputer and Lilith instruction formats, which
>> might be worth re-examining where code density (or making it fit in
>> cache) is important.
>>
>> - Brian
>
> Which others of those interesting technologies (or the idea behind them)
> do you suppose was a missed opportunity?

Well, the Linn Rekursiv of course!

Realistically - I don't know. At the time I postulated that RISC (as it
was then) was a flash in the pan, taking advantage of short-term
opportunities (limited transistor count - wasting some on complex decoders
or features really could hurt performance) - and I still do. I knew that
longer term, some form of CISC would win out - but I imagined a cleaner
design with hindsight to inform it, not an 8086 with layers upon layers of
add-ons!

And like others here, I regretted that the 80286 segment debacle got in
the way of exploiting segments (or Rekursiv objects) to provide so much of
the protection we need.

(Aside: I remember a campfire conversation 20 years ago in Wyoming where -
quite unexpectedly - I met a guy from ARM, when I argued that RISC would
always be at a disadvantage from fixed-length instructions: you want
something like Huffman encoding, so that the commonest symbols are the
shortest - like push/pop and the other 1-byte instructions on x86.
Fixed-length instructions play to early RISC's limited transistor count;
longer term, a few thousand gates for decoding aren't even worth
counting.)

So I think the biggest lost opportunity was that the Hennessy/Patterson
orthodoxy became so dominant. They're not wrong: you can optimise what you
have by measuring any feature's benefit and cost. However...

One trouble is, their measure of benefit tends to be performance (because
it is easy to measure) and not safety, ease of use, security, bug count,
or ease of debugging (all of which are harder).

Another is, you can't measure a design you haven't built yet. ("Built" can
include simulation for this purpose.) And you can't build a new design if
you can't prove it will be better.
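The Huffman-coding aside above can be made concrete with a small sketch.
The opcode frequencies below are invented for illustration (not
measurements from any real ISA), and the comparison covers only the opcode
field - real encodings also need operand fields and byte alignment - but
it shows why a frequency-weighted code beats a fixed-width one:

```python
import heapq
import math

# Invented dynamic opcode frequencies, for illustration only - a few
# opcodes dominate, much as push/pop and short moves do on x86.
freq = {"mov": 0.25, "push": 0.22, "pop": 0.18, "add": 0.12,
        "cmp": 0.08, "jcc": 0.08, "call": 0.04, "other": 0.03}

def huffman_lengths(freq):
    """Return the code length in bits for each symbol of a Huffman code."""
    # Heap entries: (weight, tiebreak, {symbol: depth so far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

lengths = huffman_lengths(freq)
avg_bits = sum(freq[s] * lengths[s] for s in freq)
fixed_bits = math.ceil(math.log2(len(freq)))  # fixed-width field for 8 opcodes

print(f"fixed-width opcode field : {fixed_bits} bits")
print(f"Huffman opcode field     : {avg_bits:.2f} bits on average")
```

The gap widens as the frequency distribution gets more skewed, and the
same effect compounds across whole instruction formats - one-byte
push/pop on x86 against uniform 32-bit words on a classic RISC.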
(And even if you build it, all the benchmarks are in C, which is too
low-level to exploit more sophisticated hardware.)

And you can't prove it will be better if you can't measure its
performance...

So I think it has funnelled design to some local peak on the landscape,
but made it impossible to jump across the valleys to other mountain
ranges.

In the language of the time, the big problem (or "problem") addressed by
the Lisp machines and the Rekursiv - and the Transputer in its own way -
was "bridging the semantic gap" between high-level languages and the
primitive machine operations. Alas, we have eliminated the semantic gap
... by programming in C.

So the Transputer, for example, provided means for parallel programming -
directly supported in hardware, by simple interconnections between
multiple chips. (Oddly enough, the fact that a single T414 in 1985 ran at
20 MHz when other CPUs struggled to reach 10 MHz seems to have been lost.)
I believe it was also the first CPU to divorce the core clock (20 MHz)
from the motherboard clock (5 MHz), so that faster chips need only change
the clock multiplier. But I digress.

Coming back on topic: if the orthodoxy had embraced that, we would
certainly be a lot further ahead now that transistor budgets are screaming
for multicore - and the market would need a language that really
understood multiprocessing...

I'll leave the Rekursiv for another time. It demonstrated a lot of
unorthodox ideas, many of which have potential value. But it was a
prototype, developed by a tiny team working fast, focused on the new ideas
rather than ultimate performance. (So a lot of basic ideas, like
pipelining or caches, were ignored - unless you could implement them in
microcode.)

Two things it taught me: (1) you can't beat hardware support when you need
performance; (2) if the hardware support isn't what you wanted, you lose
that advantage.

Putting those together in 1990, and revisiting the FPGA, was quite an
eye-opener. Previously, I thought that an ASIC that forgot what it was
when you switched the power off was a REALLY DUMB idea...

> I remember reading about the Lisp-Machines that came out, and some of
> the development tools they had. The "step into / modify [executing]
> code" debugging seems worlds ahead of anything I've seen on the *nix
> front (admittedly only a cursory glance) and substantially ahead of the
> Windows debuggers. (MS's .NET debugging w/ VS doesn't allow for
> on-the-fly modification; Borland's Delphi debugger allowed you to modify
> variables / registers on-the-fly, IIRC.)

Today you would have to sandbox such a debugger, in such a way that you
could guarantee its execution of modified code wasn't exploited by bad
people.

- Brian