From: Georg Bauhaus
Newsgroups: comp.lang.ada
Subject: Re: Some exciting new trends in concurrency and software design
Date: Fri, 24 Jun 2011 12:32:50 +0200

On 6/24/11 3:27 AM, Yannick Duchêne (Hibou57) wrote:
> On Fri, 24 Jun 2011 00:17:23 +0200, Georg Bauhaus wrote:
>> Ignoring things is a pattern in programming
> Funny wording :)
>
>> whose consequences, while known to turn out expensive, are
>> consistently ignored, too.  A pattern, incidentally, that seems to
>> have been strikingly absent when the HOLWG collected a catalog of
>> language requirements more than 30 years ago.
> Please, tell more

That's Ada history. Look for "Requirements for High Order Computer
Programming Languages", the HOLWG document series that led to Steelman.

> I was not talking about FPLs in the large, as I know only one, which
> is SML (the others are too different). And this one was not designed
> to be cute, but to be sound. It was specified prior to its first
> implementation (as you know, I guess), just like Ada was, and this
> specification was not based on The Great Cute Levels Catalog, but
> made of coherence proofs instead. Unfortunately, you made this error
> so early that a large part of what you wrote after it is unclear due
> to that assumption. But you are still OK, as it's my sole fault that
> I was too imprecise about the meaning of "forget" (to be clarified a
> bit later).

My topic is the pattern of deliberately ignoring things, and how it
relates to forgetting, simplifying, and refining. ML has one feature
whose characteristics and effects were essentially ignored. It may be
that, when the feature was designed, something essential about writing
programs had simply been forgotten; but the feature was certainly not
meant to be refined later. On the contrary. Things that were merely
forgotten tend to be added later, or at least their absence is conceded
to be a bug. That can be fixed.

Refinement is possible only where something had been assumed that was
meant to be refined. Ignoring is different from refining.

The pattern of ignoring things is present in ML syntax. Andrew Appel
expounds on it in his "A Critique of Standard ML". In one paragraph he
mentions that ML syntax does not get him into trouble any more. But in
later paragraphs he explains at some length where and how ML syntax was
*not* given the same attention as the semantics of Standard ML, and why
this is not the best of things.

I believe that ML syntax wasn't forgotten. Just ignored. It wasn't to
be refined later, either. It wasn't simplified for modeling reasons.
It was just ignored.

With SML out the door, we programmers get our load of condescension
when we complain about the inscrutable error messages that appear when
some expression gulps the rest of the compilation unit: "Why didn't you
put the optional semicolon there, STUPID!? If your expression awaits
another function, then the compiler is quite right to consider the next
fun definition in the file to be that function!" Or similar.

(Do you recognize the familiar argument? Joe Bloggs, the programmer,
writes something that looks obviously correct, but isn't; a language
"feature" has caught Joe; then someone says that Joe is stupid, and
shouldn't be programming, because Joe doesn't know the language
"feature".)
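To make the pitfall concrete, here is a rough analogue in OCaml, a
cousin of SML. This is a minimal sketch of my own, not the exact case
Appel discusses, and the file name is made up:

   (* gulp.ml: without ";;" after the first item, the parser does not
      start a new top-level item at the second line.  It reads

        let greet = print_string "hello " print_string "world\n"

      as ONE application, print_string applied to three arguments, and
      reports a type error at the second line, far away from the real
      cause: the missing separator. *)

   let greet = print_string "hello ";;   (* ";;" stops the gulping *)
   print_string "world\n";;

   (* Alternative fix: make every top-level item start with "let", so
      that a trailing expression cannot swallow what follows. *)
   let () = print_string "done\n"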
Next they address us in managerial style, invoking the Practical
Programmer who just accepts things the way they are. Anything but
having the makers of ML admit that they made a serious mistake when
they ignored the very manifestation of human-computer interaction: the
syntax. And once such an example is out, others will copy it.

Syntax is the means by which humans structure what they wish to
express, in our case a program. How can this be forgotten? How long
will industry accept the time lost in training awareness of
idiosyncrasies and in tackling the effects of things ignored?

>> So they are seen as having potential.
> Not really, as people complain that the crowd did not adopt it (true
> for Haskell too).

F#, Scala, R, ...

>> - when the scripting language is used to process international text
>> in a concurrent environment and fails, since char issues and
>> concurrency have been ignored.
> Not a language failure, but an application design error.

When a language is basically not thread-safe, I'll call that a
language issue.

>> - when the program's data model turns out to be relational, but
>> the object-as-a-table magic does not quite work.
> Wrong analysis. No language can avoid this, as it is often decided
> before there is enough text in any language, except human languages.

Let a problem P have a relational solution R. Demand that R be
implemented using a non-relational, but acclaimed, language O. Ignore
the programmers who mention that the admittedly fashionable choice O
might turn out wrong, and costly in the long run. Another instance of
the pattern of ignoring things at work.

The motive here is the relative weight of two things: the immediately
felt advantages of going with the crowd on the one hand, and concerns
about long-term technical consequences on the other. If the crowd is
the current source of profit, the choice seems clear.

>> The puzzling issue is why functional programmers don't just copy
>> the solution from Lisp, such as CMUCL's. Or from Python. Are they
>> ignoring even "their own relatives"?
>>
>> * (let ((i 4000000000))
>>     (* i i))
>>
>> 16000000000000000000
>> *
>>
>> >>> i = 4000000000
>> >>> i * i
>> 16000000000000000000L
> I could not understand that point.

This is the crucial bit.
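The two sessions quoted above show CMUCL and Python promoting the
product to an arbitrary-precision integer, silently and correctly.
The sketch below, in OCaml, lines up the behaviours under discussion;
it is my own illustration, and mul_checked is a hypothetical helper,
not a standard function:

   (* int_semantics.ml: 4_000_000_000 squared is
      16_000_000_000_000_000_000, well beyond max_int, which is
      2^62 - 1 on a 64-bit OCaml. *)

   let i = 4_000_000_000

   (* C-style, and OCaml's native behaviour: the product silently
      wraps around.  No trap, no error, just a wrong number. *)
   let () = Printf.printf "wrapped: %d\n" (i * i)

   (* SML-style: detect the overflow and raise, in place of
      erroneous execution. *)
   exception Overflow

   let mul_checked a b =
     let p = a * b in
     if a <> 0 && p / a <> b then raise Overflow else p

   let () =
     try Printf.printf "checked: %d\n" (mul_checked i i)
     with Overflow -> print_endline "checked: raises Overflow"

   (* Lisp/Python-style: promote to a bignum and return
      16000000000000000000, as the quoted sessions do.  In OCaml
      that takes an arbitrary-precision library such as Zarith. *)

A new language gets to pick one of these behaviours; which one it
picks is the point.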
The goal is a language based on ML, like ATS. But, following the
pattern, the designers of ATS seem to ignore both ML's int (which
raises Overflow in place of erroneous execution) and the alternative
"functional ints". Instead, for "practical reasons" I guess, they
chose C int behavior.

Assume that the new language is out the door and being used on
commercial projects. Will it continue to offer only C int and its
associated semantics, or will it move towards a better-defined int?
Will sacred backwards compatibility stand in the way of rectifying int
towards ML's, for example?

>> How can a safe ATS pointer be safe in the presence of int ?
> What is the question precisely ?

I am not sure the question came across. Assume, for the sake of
generalization, that a pointer points safely to a sum of addresses
computed from int offsets.
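Then the safety of the access rests on that sum behaving like
mathematical addition. A constructed example of mine, again in OCaml,
not ATS code, in which only the upper bound of the index is checked:

   (* offset.ml: a "safe" access whose index is a sum computed from
      int offsets.  With wrapping int arithmetic, the sum can wrap to
      a negative number and slip past an upper-bound check. *)

   let a = Array.make 10 0

   let element base step count =
     let idx = base + (step * count) in   (* may wrap silently *)
     if idx < Array.length a              (* upper bound holds... *)
     then Some a.(idx)                    (* ...but idx may be negative *)
     else None

   let () =
     match element 0 max_int 2 with       (* max_int * 2 wraps to -2 *)
     | Some _ -> print_endline "read something"
     | None -> print_endline "rejected"
     | exception Invalid_argument _ ->
         print_endline "wrapped negative, caught only at run time"

OCaml's bounds check turns the wrap into a run-time exception. With C
int semantics and an unchecked access, the same computation simply
reads from the wrong address, and whatever was proved about the safe
pointer assumed arithmetic that does not wrap.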