From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level: 
X-Spam-Status: No, score=-1.3 required=5.0 tests=BAYES_00,INVALID_MSGID autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,d1df6bc3799debed
X-Google-Attributes: gid103376,public
From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: Not intended for use in medical,
Date: 1997/05/14
Message-ID: #1/1
X-Deja-AN: 241480218
References: <3.0.32.19970423164855.00746db8@mail.4dcomm.com> <5kmek2$9re@bcrkh13.bnr.ca>
Organization: New York University
Newsgroups: comp.lang.ada
Date: 1997-05-14T00:00:00+00:00
List-Id: 

Matthew said <>

A good example of reading other people's ill-informed assessments. If you see someone say this about Algol-68, it means they do not know much about Algol-68 or, more likely, that in an attempt to impress their friends and neighbours they are quoting opinions they do not understand. I am willing to bet that Matthew is quite unaware of what type coercions mean in Algol-68. For example,

   a := b;

where a and b are integer variables, is considered a type conversion (mode coercion) in Algol-68, since the right side must be converted from ref int to int by dereferencing. Gosh, isn't it awful for a language to have automatic type conversions like that? (Note that Bliss actually agrees in this case, but it is unusual in its opinion.)

Do NOT assume that famous people know what they are talking about when it comes to programming languages. Many well-known people in the field break my rule about not criticizing languages without having written a substantial amount of code -- and they break it spectacularly, often criticizing features that just don't exist.

A few years ago, I reviewed a proposal from a Canadian professor which dismissed Algol-68 on the silly grounds that Matthew mentions above (too many type coercions), and gave as an example that, where a is ref ref int,

   a := 4;

is terrible, because it automatically dereferences a and therefore you don't know which variable is being assigned. There is only one problem with this: Algol-68 has NEVER allowed automatic coercion on the left side of an assignment, so the above is illegal (you have to write an explicit conversion, analogous to the use of .all in Ada, or * in C).
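To make the contrast concrete, here is a minimal Ada 95 sketch (the type and variable names are my own invention, purely for illustration):

   declare
      type Int_Access is access all Integer;
      X : aliased Integer := 0;
      A : Int_Access := X'Access;  -- in Algol-68 terms, A has mode ref ref int
      B : Integer;
   begin
      B := A.all;   -- reading through A requires an explicit dereference
      A.all := 4;   -- so does assigning through it, like *a = 4 in C;
                    -- plain "A := 4;" is rejected, just as in Algol-68
   end;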
An irony is that I know exactly where he got this from: there is a well-known paper by a very well-known author, who is considered an expert in programming languages, which presents EXACTLY this wrong example. Needless to say, I gave the proposal a very low rating; anyone who simply adopts other people's opinions uncritically as their own is unlikely to make a good scientific investigator!

<>

As I noted previously, writing any compiler is hard. Writing a good compiler for C is difficult. If you think this means that a language is bad, then all languages are bad. Yes, it is true that modern languages like C++, Fortran-90, OO COBOL, and Ada 95 are harder to compile, but so what? That's what computers are for -- doing work that we would otherwise have to do ourselves. These comprehensive languages do a lot for us that we would have to do for ourselves.

<>

Of course I disagree; Les Hatton does not know what he is talking about. In fact, the extent to which the standard has needed interpreting is very small. Almost all of the AIs for Ada 83, all of which are resolved for Ada 95, concern marginal issues that affect few or no programmers. Furthermore, other languages if anything have MORE interpretations that have to be issued (I wonder whether Les has, for example, read the JOD for COBOL).

<>

This is not necessarily Tony Hoare's assessment today, and if you read his Turing award lecture carefully, you will find it is not so absolute. In particular, he specifically notes that a subset of Ada *would* be a suitable vehicle. Since all high-reliability systems using Ada do in fact use a well-chosen subset, I see no conflict here. Indeed, when I talked with Tony some years ago, he expressed frustration that his lecture had been taken as condemning Ada out of hand. He was simply using it as an example of some trends in language design to be worried about, and he felt that people had misread the talk if they took it as wholly negative toward Ada.
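As a concrete illustration of what a well-chosen subset can look like -- this is only a sketch, and the particular restrictions are picked for illustration, not as a recommendation -- Ada 95's Safety and Security annex lets a project declare and enforce a subset with configuration pragmas:

   --  Hypothetical subset declaration, using restriction identifiers
   --  from Annex H of the Ada 95 reference manual.
   pragma Restrictions (No_Allocators);  --  no dynamic allocation
   pragma Restrictions (No_Dispatch);    --  no dynamic dispatching
   pragma Restrictions (No_Recursion);   --  stack usage statically boundable

The compiler then rejects any unit that strays outside the declared subset.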