From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: Your wish list for Ada 202X
Date: Thu, 10 Apr 2014 16:52:15 -0500

"Georg Bauhaus" wrote in message
news:53465969$0$6708$9b4e6d93@newsspool3.arcor-online.net...
> On 10/04/14 05:28, Randy Brukardt wrote:
...
>>> > I can build a perfect hash at bind/link time when all involved tags
>>> > become known.
>>
>> "can" means "possible", not "practical". That might be possible if the
>> number of tags is small, but that's not the case in a lot of OO systems.
>> (A Claw program starts with somewhere around 80.) Finding a perfect
>> hash gets very time consuming if the number of items is large - I don't
>> think people will be very happy with binding that takes 12 hours.
>
> Are you certain about the cost of computing perfect hashes?

Certain? Only an idiot is certain about anything. But my understanding is
that calculating a perfect hash is an NP-complete problem. It's possible to
speed up the algorithm (maybe), but it is always going to be at least
quadratic in the number of items. So while handling 100 items might take
only a second or two, ten times as many items must take at least 100 times
longer.

The problem with that in a compilation system is that the first customer
with a program containing a lot of tags would find the performance
unacceptable. (What "a lot" is is debatable.)

Two stories about this:

(1) A friend of mine was working for a company developing a computer
typesetting system. They had been testing prototypes for several weeks, and
all was going well. A demonstration for company executives was arranged. An
executive asked for a 72-point headline to be produced. The machine took
several minutes to produce each letter. The algorithm used was quadratic in
the type size, and all of the testing had been done on small text. Within a
week, the product was canceled and the entire team fired. My friend was out
of work.

(2) Early in the development of Janus/Ada, we created and debugged an
expression resolution algorithm. An algorithm as complex as Ada resolution
hadn't really been used in compilers before, and the few articles on the
topic didn't make much sense, so we built our own. It worked great for
several years. Then a new ACVC version came out, and the compiler hung on
one of the new tests. We were unable to find the infinite loop causing the
problem. Eventually we figured out that that was because the loop wasn't
infinite; it was coming from resolving an expression with 50 concats in a
row.
Experiments showed that the compiler was working just fine, but the
expected compilation time for that expression was roughly 20.5 years. The
problem was that a lot of "&" operations were being tested, and each level
tested those operations for all possible combinations of types. So
something like 8**50 tests were being performed, and that was a problem.
Obviously, we needed to compile the ACVC (now ACATS) a lot quicker than
that.

I found a quick fix by testing the operands in the reverse order. Doing it
left to right caused the most ambiguous operations to be tested first,
which maximized the runtime. You could still trigger the problem with lots
of parens, but just writing A & B & C & ... wouldn't cause trouble. (I
don't know if we ever actually fixed the problem, or if it just never
appeared again.)

Moral of both of these stories: one does not want to knowingly depend on
quadratic (or worse) operations, because as soon as someone sticks in a
larger-than-anticipated input, you will have trouble.

                                Randy.
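P.S. The brute-force flavor of the perfect-hash problem is easy to play
with. Here's a toy Python sketch (mine, nothing to do with any real binder;
the function name and parameters are made up for illustration): it randomly
searches for a single multiplier making h(k) = (a * k) mod slots
collision-free over a key set.

```python
import random

def find_perfect_hash(keys, slots, max_trials=1_000_000):
    """Randomly search for a multiplier 'a' such that
    h(k) = (a * k) % slots is collision-free over 'keys'.
    Returns (a, trials_used), or (None, max_trials) on failure."""
    n = len(keys)
    for trial in range(1, max_trials + 1):
        a = random.randrange(1, 2**31)
        # the hash is perfect iff all n keys land in distinct slots
        if len({(a * k) % slots for k in keys}) == n:
            return a, trial
    return None, max_trials

random.seed(2014)                       # deterministic demo
keys = random.sample(range(10**9), 20)  # 20 distinct "tags"
slots = len(keys) ** 2                  # roomy table: easy to satisfy
a, trials = find_perfect_hash(keys, slots)
```

With slots well above the birthday bound, as here, this succeeds after a
handful of trials. Squeeze slots down toward len(keys) - a *minimal*
perfect hash - and the expected trial count for this naive search grows
roughly like e**n, which is where the bind-time blowup would live.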
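P.P.S. The 8**50 arithmetic from story (2) is easy to reproduce. A
back-of-envelope sketch (mine, not the actual Janus/Ada resolver): if a
naive resolver independently retries every visible "&" overload at every
level of the chain, the test count is exponential in the chain length,
while a resolver that carries a *set* of feasible types up the chain in one
pass stays linear.

```python
def naive_tests(chain_len, overloads):
    """Worst case for a resolver that retries every overload at every
    level of an '&' chain independently: overloads ** chain_len."""
    return overloads ** chain_len

def set_based_tests(chain_len, overloads):
    """A resolver that propagates the set of feasible types up the
    chain examines each overload once per node: overloads * chain_len."""
    return overloads * chain_len

# growth comparison, using the 8 candidate "&" operations from the post
for n in (5, 10, 50):
    print(n, naive_tests(n, 8), set_based_tests(n, 8))
```

At n = 50 the naive count is 8**50, about 1.4e45 tests - no wonder it
looked exactly like an infinite loop.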