From: Robert Dewar
Newsgroups: comp.lang.ada
Subject: Re: Parameter Modes, In In Out and Out
Date: Fri, 12 Jan 2001 13:55:40 GMT
Organization: Deja.com
Message-ID: <93n2co$alq$1@nnrp1.deja.com>

In article <93modu$36k$1@nnrp1.deja.com>, dmitry6243@my-deja.com wrote:

> Sorry, but this is an incorrect analogy. There are three
> classes of computers: micro, mini and mainframes. The PDP-11
> was a minicomputer, that's right. Which means that the
> present-day analog would be a workstation, not the Palm Pilot.

Well, perhaps you were not around in those days, but your view is not an
accurate reflection of the situation then. In those days, minicomputers
were just that: tiny computers with, by the standards of the day, small
memories and limited capabilities. Today, workstations typically have
more processing power (at least in CPU terms, if not in I/O bandwidth)
than mainframes, and they often have very large memories (the notebook I
am typing on has half a gigabyte of physical memory, and many more
gigabytes of virtual memory). So it is not the case today that
workstations are somehow small computers compared to mainframes, at
least in terms of the important parameter we are discussing, which is
memory size.

In the days of the PDP-11, the remarkable thing was that people could
write small programs with impressive functionality. For example, Unix
was a very small fraction of the size of mainframe OS's. So I looked for
an analogy today, and the closest I could find was the Palm Pilot,
where again people can write small programs with impressive
functionality (of course everything is relative: a typical Palm Pilot
these days has 8 megs of memory, whereas even the fanciest PDP-11's had
only 128K bytes :-)

In any case, if you want to talk about typical sizes of large programs,
you looked at mainframes THEN, and NOW you can look at ordinary PC's,
since PC's have plenty of memory these days.

> In which units do you measure complexity and functionality?
> Which value of Balance = Complexity / Functionality is the
> goal? 2.5 "balance units"? (:-))

It would be nice if these critical measures could be quantified so
easily, but they can't.
That makes it all the harder, but it does not mean these criteria can be
ignored (surely you don't go around an art gallery limiting yourself
only to objective criteria for judging quality). The analogy is not at
all unreasonable, since part of language design is precisely about human
and aesthetic issues, as you would certainly be painfully aware if you
had ever been involved in formal language design efforts.

> This is not a scientific issue. It is about
> feeling and belief.

If science to you is strictly restricted to things that can be
objectively measured (not all scientists would accept this limitation
(*)), then that's absolutely right. But it is probably more accurate to
replace "feeling and belief" with expert judgment. Going back to my
earlier analogy, the excellence of certain paintings is more than just
some individual's feeling and belief; it is the result of a consensus
of expert judgment.

> From your experience you feel that MD would be
> too expensive [= useless (:-))]. I respect your opinion, for
> I know your qualifications.

You missed my point in earlier messages. If it were just me who felt
that way, you would be quite justified in your reaction. But in fact
this is an area in which there seems to be a clear consensus. If I find
that my opinion is not shared by anyone else, then I conclude that
either

a) I am arguing my case incompetently, or

b) I am wrong.

I never end up being the 1 in an N-1 vote on language design issues,
since neither a) nor b) justifies such a position.

> But it is not enough for me to change my feeling
> that MD will be universally adopted in the near future

But an individual "feeling" is not very convincing if it is not backed
up by good arguments, especially when you are in a small technical
minority, and so far you really have not presented any arguments. Once
again, the basic argument against MD is that the added functionality
does not justify the additional complexity.

One viewpoint I have found useful in language design is to realize that
adding *ANY* new feature to a language damages it by increasing
complexity. So the burden is to show that the gain outweighs this
damage. I do NOT accept the "PL/1 style" argument that says "never
mind, if you don't want to use this feature, you don't have to, so it
can't harm you".

To make your case, why not do the following:

a) propose, in rough form (no need to tie up all the details), an MD
addition to Ada;

b) show one example where this MD addition really adds to expressive
power (a sketch of the kind of situation MD targets appears below).

I think that's a reasonable request; the burden of proof is definitely
on the side proposing new features. What is interesting then is to
contrast the best possible solution without MD with the example you
show. This is how a lot of the design work on Ada (and all other
programming languages) is conducted when it comes to looking at new
features. If you just sit around the table at a standards committee
meeting and say "I feel we should add feature XXX", without any
supporting technical argument, you won't get far. Saying that something
should be done just for the sake of uniformity is not good enough. Yes,
that is one argument, since uniformity may reduce complexity (**), but
it is not enough!

> I used this word instead of "complexity". Because it seems
> that you have reserved "complexity" for compiler
> implementation (:-)).

Well, see the (**) below, but actually I was talking here about the
complexity of the semantic definition, not of the implementation.
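To make the kind of example concrete, here is a minimal Ada 95 sketch of
the single-dispatch rule at issue. The package, the types, and the
geometry are invented purely for illustration; nothing here is a
proposal.

   package Shapes is
      type Shape is abstract tagged record
         X, Y : Float;  --  center position
      end record;

      --  Both A and B are controlling operands.  Ada 95 dispatches
      --  on a single tag, so a dispatching call is legal only when
      --  both actuals carry the same tag; mixed tags raise
      --  Constraint_Error (RM 3.9.2).  There is no way to declare a
      --  body for the mixed Circle/Square case: that is exactly what
      --  multiple dispatch would add.
      function Intersects (A, B : Shape) return Boolean is abstract;

      type Circle is new Shape with record
         Radius : Float;
      end record;
      function Intersects (A, B : Circle) return Boolean;

      type Square is new Shape with record
         Side : Float;  --  axis-aligned, edge length
      end record;
      function Intersects (A, B : Square) return Boolean;
   end Shapes;

   package body Shapes is

      function Intersects (A, B : Circle) return Boolean is
      begin
         --  Centers closer than the sum of the radii.
         return (A.X - B.X) ** 2 + (A.Y - B.Y) ** 2
                  < (A.Radius + B.Radius) ** 2;
      end Intersects;

      function Intersects (A, B : Square) return Boolean is
      begin
         --  Overlap test for two axis-aligned squares.
         return abs (A.X - B.X) < (A.Side + B.Side) / 2.0
           and then abs (A.Y - B.Y) < (A.Side + B.Side) / 2.0;
      end Intersects;

   end Shapes;

Given C : Circle and S : Square, the dispatching call
Intersects (Shape'Class (C), Shape'Class (S)) compiles but raises
Constraint_Error at run time, because the two tags differ. Without MD
the mixed cases must be handled by a class-wide operation full of
explicit tag tests; with MD each pair of types would get its own body.
The question is whether that gain justifies the extra semantic
machinery.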
> Restrictions like "a type can be derived from only one base",
> or "an operation shall dispatch on parameters of the same type
> having actual values of the same type", or "not all types may
> have dispatching subroutines", or "instances of packages
> having some interesting types shall be global (instantiated
> at the library level)", or "a derived type contains a
> representation of the base" are viewed (at least by some
> language users) as irregularities. I would consider such
> things as problems, no matter whether I or somebody else
> knows how to solve these problems.

Yes, that's right, any non-uniformity is a problem, but that does not
mean there is an acceptable solution. Also, a casual approach to the
semantics often covers up complex details: consider, for example, the
mess multiple inheritance gets into when there are common base classes.

> Did I say that MD is simple?

No, you didn't, and of course that's a missing part of your argument. I
was hoping as I scrolled down your message to find a specific technical
example, but there is still none. I think the ONLY way you can convince
people to add a new feature (which apparently you understand to be "not
simple") is to show convincing examples.

(*) The issue of whether science should be about purely what can be
objectively observed and reasoned about is a very old one. It is
actually relevant to "computer science", because there is an ongoing
debate about whether it is indeed a science. My own background is in
chemistry (my PhD was in crystallography -- though to be fair, it was
highly computer-related: I did the first fully computer-automated
solution of a crystal structure, and amazingly some of my ancient
Fortran code is still in use over thirty years later). I personally
prefer to reserve the word science for fields with a strong objective
empirical component, where the game is to construct theories that match
observations and to test those theories by experiment. By this
definition, language design (and most of the rest of so-called computer
science) is NOT a science (and for sure the social sciences are not
sciences either -- someone once said that any field that feels compelled
to call itself a science isn't one :-) Certainly language design is not
a science in the empirical, objective sense; we definitely do NOT have
objective metrics which can be used to determine that one language
design is better than another.

(**) One useful observation I made during the Ada 95 language design
effort was that everyone is in favor of simplicity and opposed to
complexity, but in fact these terms refer to several different things:

1. Simplicity of implementation
2. Simplicity of the formal semantic description
3. Simplicity of learning for non-experts
4. Simplicity of the resulting code in the language
5. Simplicity of the informal (textbook) description

The trouble is that these are five different criteria, which are often
in direct opposition. Here is an example.

Records are a simple concept.
Arrays are a simple concept.
It is useful for records to have fields of any type.
It is useful to have arrays of dynamic length.

BUT what about records that have fields that are dynamic-length arrays?
Of course you want to allow that from the point of view of uniformity
of description (and indeed Ada takes that position). It also aids ease
of use and makes programs simpler. HOWEVER, it is a very nasty glitch
in the implementation, and it certainly makes the implementation more
complex. Why? Because the simple paradigm of record implementation
(fixed offsets from a base address) breaks.
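To make the glitch concrete, here is a small sketch in plain Ada 95
(the names are invented for illustration): a record with a field whose
length depends on a discriminant.

   procedure Demo is
      subtype Line_Length is Natural range 0 .. 120;

      --  The length of Text depends on the discriminant, so in a
      --  naive layout any component placed after Text no longer
      --  sits at a fixed offset from the record's base address.
      type Line (Length : Line_Length := 0) is record
         Text  : String (1 .. Length);
         Dirty : Boolean;
      end record;

      A : Line := (Length => 5, Text => "hello", Dirty => False);
      B : Line;  --  unconstrained: may hold any length up to 120
   begin
      B := (Length => 2, Text => "hi", Dirty => True);
      B := A;    --  sizes differ, yet assignment still works
   end Demo;

The compiler must either compute component offsets at run time,
introduce indirection, or allocate the maximum possible size: that is
the implementation complexity accepted in exchange for uniformity and
ease of use.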
Now, of course, we have taken what I think is the right decision here
in Ada: to settle for added complexity of implementation in order to
simplify the use of the language. But you can see from this example
that simplicity is not such a simple concept :-)

Robert Dewar

Sent via Deja.com
http://www.deja.com/