Newsgroups: comp.lang.ada
Subject: Re: How come Ada isn't more popular?
From: Markus E Leypold
Date: Wed, 31 Jan 2007 16:16:20 +0100
Message-ID: <4zwt33xm4b.fsf@hod.lan.m-e-leypold.de>
References: <1169531612.200010.153120@38g2000cwa.googlegroups.com>
 <1mahvxskejxe1$.tx7bjdqyo2oj$.dlg@40tude.net>
 <2tfy9vgph3.fsf@hod.lan.m-e-leypold.de>
 <1g7m33bys8v4p.6p9cpsh3k031$.dlg@40tude.net>
 <14hm72xd3b0bq$.axktv523vay8$.dlg@40tude.net>

"Dmitry A. Kazakov" writes:

> On Mon, 29 Jan 2007 16:50:16 +0100, Markus E Leypold wrote:
>
>> "Dmitry A. Kazakov" writes:
>>
>>> On Sun, 28 Jan 2007 16:06:48 +0100, Markus E Leypold wrote:
>>>
>>>> "Dmitry A. Kazakov" writes:
>>>>
>>>>> On Sun, 28 Jan 2007 00:24:27 +0100, Markus E Leypold wrote:
>>>
>>>>> Generics is a wrong answer and always was. As well as the built-in
>>>>> lists you are praising, because what about trees of strings, trees
>>>>> of lists etc. You cannot build every and each type of containers in.
>>>>
>>>> You missed my point. :-). A language with a Hindley-Milner type
>>>> system has a 'tree of something' type where something can be
>>>> anything in every given instance of usage.
>>>
>>> You have X of Y. Granted, you can play with Y, but what about X?
>>
>> I don't understand the question ...
>
> X = tree, Y = something. What about X = something, Y = character.
>
> Example: fixed size strings, unbounded strings, suffix tree. Here the
> container (X) varies, the element does not. It is a quite common
> situation when you wished to change the containers on the fly.

This situation is, in my experience, much less common. But if you have
it, then at the place where you do the manipulation you still need a
common "access protocol" for all those containers, that is, something
like

   for K in keys of container loop
      do something with container.item (K);
   end loop;

That is what classes are good for (as opposed to parametrized types).
The types of K could well be different here with a Hindley-Milner type
system.

If I consider your challenge in the context of a functional programming
language, the situation you address might not arise as often, since
you'd abstract over the iteration as well and probably construct the
item sequence at the place where the iteration is called:

   let l = list_of_x ... ;;
   let t = tree_of_x ... ;;

   let do_something = ... ;;                 (* single iteration step *)
   let fooify_items = fold do_something u ;; (* define the iteration  *)

   ...

   ... fooify_items l ...
   fooify_items (Tree.fringe t)   (* extract the list of items from t
                                     and iterate *)
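
To make that sketch concrete, here is a minimal, compilable OCaml
version (the names -- Tree.fringe, fooify_items, do_something -- are of
course just invented for the example):

   (* a rough, self-contained version of the sketch above *)
   module Tree = struct
     type 'a t = Leaf | Node of 'a t * 'a * 'a t

     (* the list of items of a tree, left to right *)
     let rec fringe = function
       | Leaf -> []
       | Node (l, x, r) -> fringe l @ (x :: fringe r)
   end

   (* one iteration step: here it just accumulates a sum *)
   let do_something acc x = acc + x

   (* the iteration, abstracted from the container: it only ever
      sees a list of items *)
   let fooify_items items = List.fold_left do_something 0 items

   let l = [ 1; 2; 3 ]
   let t = Tree.Node (Tree.Leaf, 4, Tree.Node (Tree.Leaf, 5, Tree.Leaf))

   let _ = fooify_items l
   let _ = fooify_items (Tree.fringe t)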
Efficiency concerns are usually countered (a) by a really good garbage
collector and (b) by laziness.

>>> The point is, the language should not have either X or Y built-in.
>>> What
>>
>> Pursuing this argument further, a language also shouldn't have
>> strings built in etc. :-).
>
> Yes, sure. The standard library can provide them, but there must be no
> magic in.

Here I disagree a bit: separate syntax is a good thing for the most
common constructs. Otherwise you could just as well argue: "A language
can provide constructs for conditional processing or loops, but there
must be no magic" -- meaning that you would want only GOTOs, and
structured programming would exist only in the mind of the user (I've
been programming like this on micros twenty years ago ...). And if I
have extra syntax I can also provide special optimization rules.

> The user should be able to describe the string type in language terms
> without loss of either performance or compatibility.

No. I think strings and lists are so common that their _semantics_ must
be expressible in the core language, but the compiler should have the
license to provide special constructs and optimizations.

>> What I think is that having at least lists in the "standard library"
>> -- and by that I mean functional lists, not arrays, not "containers"
>> -- helps tremendously, because that is such a common use case.
>>
>>> should be built-in is "of". The type system should be able to handle
>>> parallel type hierarchies of X's and Y's bound by the relation "of".
>>
>> Yes. And exactly that is what a Hindley-Milner type system has built
>> in. The lists in Haskell and OCaml are just in the standard library
>> (conceptually -- knowing that they are there makes it easy to compile
>> them by special rules and get a more efficient implementation, or to
>> have special instructions in the underlying VM or reduction
>> mechanism).
>
> I doubt it can. I am too lazy to check. (:-)) But the uncomfortable

If it can't, then there is no need to forbid it, is there :-).

> questions you (the type system) should answer are like:
>
> Y1 <: Y2  =>  X of Y1 <: X of Y2
> is container of subtypes a subtype?

No. Never.

> X1 <: X2  =>  X1 of Y <: X2 of Y
> is sub-container a subtype?

Sub-container is a difficult concept. If it doesn't have binary
methods ... then the answer (off the top of my head) is yes.

> There is no universal answer. Sometimes yes, sometimes not. Consider
> examples:

Have a look at the Hindley-Milner type system and OCaml's OO
extensions. IMHO the answer doesn't vary at all: the only criterion is
whether the type safety (in the sense given in the Cardelli tutorial)
stays intact.
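
To illustrate what "type safety stays intact" means as the criterion,
here is a small OCaml sketch (shape and circle are invented names):

   type shape  = < area : float >
   type circle = < area : float ; radius : float >

   let c : circle = object
     method area   = 3.14
     method radius = 1.0
   end

   (* immutable lists are covariant, so here "container of subtypes
      is a subtype" is accepted: *)
   let shapes : shape list = ([ c ] : circle list :> shape list)

   (* mutable arrays are invariant; the analogous coercion
        ( [| c |] : circle array :> shape array )
      is rejected, because afterwards one could store a plain shape
      into the array and read it back as a circle. *)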
There are trade >>>> offs, of course (like that you probably can't design such a language >>>> without GC and without loosing a certain degree of control of the >>>> representation of types), >>> >>> It is not obvious. I saw to good arguments for GC, so far. (it tend to >>> repeat in c.l.a and comp.object, a lot of foam around mouths... (:-)) >> >> Foam, yes :-). But you have to admit, that allowing a GC without ever >> having an actual implementation (execpt Martin Krischik's adaption of >> the B�hm collector) just too much teasing: You could have it, but no, >> it's not there. > > It was left open for those who might need it. It seems that nobody came... > (:-)) My impression was that implementing a full Ada 95 compiler was diifficult enough. Then you don't want GC in embedded programming and Ada has more or less drawn back from general programming in the last whatever years. The absence of a GC implementation almost everywhere is a clear indication of that in my eyes. > No seriously, I don't consider GC as a problem. Just give me "X of Y." Then > I would take X = access. An do whatever memory collection I want! See? No? > There is absolutely no need to have built-in GC, if you have abstract > referential (access) interfaces. Note additional advantage: you can have > different GC's handling objects of same types in the same application! But there is. Only under very restricted circumstances the necessity of deallocation can be decided locally. With reference sharing (representation sharing) between values (think Lisp-like lists -- and I think you need that for efficience reasons sooner or later -- you're either back to a "manual" reference counting scheme (inefficient) or you really need GC. Let me repeat my mantra: Manual allocation and deallocation hinders modularity. Not much, because you can survive without it in much the same fashion as the old FORTRAN programmers survived with GOTOs only, but ... the question is not wether I need it, but wether I want it in 2007. For, mind you, PC/workstation application programming (as opposed to embedded in this case). Ada could have been a nice language straddling a wide range of application areas. But where other languages have developed, mutated and where necessary spawned bastards (e.g. C#), with Ada I notice the majority opinion seems to be that if I don't absolutely need it in embedded programming, it should not be in the language. Well -- what was the question again: "How come Ada isn't more popular?". Me thinks the answer is: Because it don't want any more. >>> To me GC is about indeterminable scopes, upward closures and >>> other things I don't want to have... >> >> If yout have only downward scopes for "closures" and memory allocation >> this will, finally, interact badly with the fact that "implicit >> (i.e. static type safe) casting" of classes is also only possible >> downwards. My impression is, that all these things together rule out >> some useful designs, that would otherwise possible. Or to say it >> differenty: Object orientation w/o indeterminable scopes, upward >> closures and GC doesn't work well. Some abstractions cannot be >> implemented. > Umm, I cannot tell. I think I can tell, but the discussion on this topic (what is functional programming good for) does still rage on c.l.f and the relevant mailing lists. I notice, that nobody that actually has tried FP doubts the superiority of the style in general (they are bitching about efficiency, sometimes, and availability of libraries, mor often). > It is an interesting point. 
> It is an interesting point. I am not sure if it is true, because we
> already can return T'Class, and surely we should develop the language
> towards making X of Y'Class possible.

Yes, all languages in question are Turing complete, so you can always
write a Lisp interpreter and embed it etc. That, though, is not the
point. What counts is not whether I can do things "in principle" but
how I can do them and how well the applied abstractions can be seen at
a glance by somebody reading the code later. BTW -- another argument
_for_ a built-in list syntax.

> As a simple intermediate stage we could allow X (T : Tag) of Y'Class
> (T). In Ada syntax:
>
>    type Coherent_Array (Element_Tag : Tag) is
>       array (Integer range <>) of
>          Element'Class (Element_Tag);
>
> Here the discriminant of the array determines the specific types of
> all its elements.
>
>> This, of course, is just a gut feeling. I do not know about research
>> or empirical studies that examine the influence that these various
>> restrictions have on each other and how they act together.
>
> My feeling is that upward closures destroy the idea of type. However,

So how come OCaml and Haskell have them? Those languages have a static
type system. Without even RTTI ... :-)

> somehow we surely need type objects in some form, for making
> distributed systems, for instance (how to marshal non-objects?)

Alice ML does it well. Usually they're just marshalling a cluster of
blocks on the heap, but some languages (Alice ML) are more
sophisticated.

>>> Let types-1 be values, make them values. Then they will need no
>>> declarations, just literals. Soon you will discover some types-2
>>> which still require declarations. Make them values too. And so on.
>>> At some level n you will have to stop. Now, rename type-n = "type"
>>> and type-k = "value". See? You are where you have started.
>>
>> I do not understand your argument here. Care to give some example,
>> and I'll try to write down how it is done in e.g. OCaml? Perhaps
>> we're talking at cross purposes too, since I'm not sure I really
>> desire the thing you insist I want. :-)
>
> You have the following hierarchy:
>
> values
> type-1 = types (sets of values + operations)

type = types? Either you have a type (IMHO) or types. I still fail to
follow you here.

> type-2 = types of types (sets of types + operations to compose types)
> type-3 = types of types of types (...)
> ...
> type-n
> ...
> type-oo
>
> [ value-k = type-k-1 ]
>
> As an example of type-2 you can consider parametrized types. Instances
> of them are type-1 (= value-2). Types declared in generic Ada packages
> are type-1. All of them considered together ("generic type") is a
> type-2. Another example of type-2 in Ada is a formal generic type:
>
>    generic
>       type T is range <>;
>
> "range <>" is type-2. The actual T is type-1 (= value-2).

Perhaps the problem is that with parametrized types you try to express
a restriction of the kind "List of something, where something is in the
following family of types ...". With parametrized types in a
Hindley-Milner type system things are much simpler: there is a
'general' 'anything' type and there are concrete types. So we have

 - List of anything (really anything), which becomes List of int, List
   of float, List of List of Tree of Int at any given point of use.

Of course there are also type systems as you seem to describe them, but
I don't know much about them. Hindley-Milner is still simple enough to
be decidable and understandable, and it buys you a good deal of things.
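
For illustration, in OCaml -- one polymorphic definition, no
instantiation step anywhere; the compiler infers the concrete instance
at each point of use:

   type 'a tree = Leaf | Node of 'a tree * 'a * 'a tree

   (* size : 'a tree -> int, for trees of anything *)
   let rec size = function
     | Leaf           -> 0
     | Node (l, _, r) -> size l + 1 + size r

   let _ = size (Node (Leaf, 42, Leaf))               (* int tree           *)
   let _ = size (Node (Leaf, "foo", Leaf))            (* string tree        *)
   let _ = size (Node (Leaf, [ [1]; [2; 3] ], Leaf))  (* int list list tree *)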
If, on the other hand, you want to express structured types (i.e. types
with associated operations), the things to use in ML are modules and
functors, which are not values but a "static", still typed, language
for constructing modules. I fail to see how that brings me all the way
back to the starting point. Are we still talking about the same topic?

>>>> Languages of the Algol-FORTRAN-C-Pascal-Ada group are all far from
>>>> that ideal. Since a lot of programming these days is general list
>>>> manipulation, everyday jobs become painful.
>>>
>>> There always was Lisp, for those who prefer declarations spelt in
>>> the form of bracket structures...
>>
>> I'm not talking about syntax. I'm talking about types. Which --
>> strictly speaking -- Lisp hasn't. There is only one (static) type in
>> a dynamic type system, which is one large union (discriminant) type.
>
> I think one could argue that types in Lisp are (), (()), ((())), ...
> but I don't believe it deserves much attention. (:-))

No. () and ((())) are given values of the same (static) type in Lisp,
because there is no static type checking that keeps you from passing ()
where you passed a ((())) just two lines before. See?

>>>> Has anybody here ever wondered about the popularity of "scripting",
>>>> like with Perl, PHP, Python and so on?
>>>
>>> I did.
>>
>> And? What was the result of your considerations? :-)
>
> The Original Sin is the reason. They are sent upon us as a
> punishment... (:-))

I agree, at least as far as PHP is concerned. Perl pleads for some
mitigating circumstances: it did not start as a Web language but as an
awk substitute, and it has striven hard to mitigate the problems it
causes, with safe mode and the like. The jury is still out on that. But
PHP certainly: a punishment.

>>>>> In my view there are three great innovations Ada made, which
>>>>> weren't explored at full:
>>>>>
>>>>> 1. Constrained subtypes (discriminants)
>>>>> 2. Implementation inheritance without values (type N is new T;)
>>>>> 3. Typed classes (T /= T'Class)
>>>>
>>>> Here 2 things are missing:
>>>>
>>>> - parameterized types (see Java generics and the ML type system,
>>>>   the first is just borrowing from the latter).
>>>
>>> See pos.1. The constraint is the parameter of. In rare cases you
>>> wanted different values of parameters producing isolated types you
>>> would apply 2 to 1.
>>
>> Again I do not know what you're denying here...
>
> I deny them as types-2. The huge advantage of pos.1 is that the result
> is type-1. The consequence: you can have common values. With type-2,
> values (value-1) of different values (type-1) of type-2 are of
> different types => you cannot put them into one container.

We must be talking at cross purposes. I admittedly do not understand
most of the terminology you're using here and certainly cannot apply it
here: how come the Hindley-Milner type systems have parametrized types
and don't seem to labor under that kind of problem?

>>>> - Separation of implementation and type, or to put it differently,
>>>>   of inheritance and subtyping compatibility. See the OCaml type
>>>>   system.
>>>
>>> That should be interface inheritance from concrete types. Yes, Ada
>>> misses that.

No, no, no. Inheritance should never ever decide on a subtype
relationship. It can't. In the most general case (again, see the OCaml
manual on objects) objects of a class B derived by inheritance from a
class A cannot be subtypes of the objects of class A (their parent
class).
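
This is essentially the binary-method example from the chapter on
objects in the OCaml manual, condensed:

   (* a class with a binary method: the argument has the self type *)
   class virtual comparable = object (_ : 'a)
     method virtual leq : 'a -> bool
   end

   class money (x : float) = object
     inherit comparable
     val repr = x
     method value = repr
     method leq p = repr <= p#value
   end

   (* money INHERITS from comparable, but it is not a SUBTYPE of it:
      the coercion below is rejected, because money's leq expects
      another money, not an arbitrary comparable.

        let c = (new money 1.0 :> comparable)   (* type error *)
   *)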
2 values / objects in the OCaml way >> of objects are just compatible if their type signatures (which are >> calculated by the type inference engine) agree or better one is >> contained in the other. This is a weaker contract than in Ada where at >> least a behavioural part of the contract is implied by inheriting the >> implementation, but which (except for generic packages) is useless, >> since you can destroy behaviour by overwriting a method with a >> misbehaved procedure. > I don't see difference yet. When you inherit only interface, you drop all > the implementation or its parts. This is one issue. I don't even need a implementation at the start. One point is, that the type fitting into a slot in a functor or as a paramter of a procedure might well never have been defined explicitely but is a result of type inference. Why should anyone bother with inheriting interfaces from an implementation (espcially if that wouldn't give a guarantee as far as subtyping compatibility goes). > Type inference, if you mean that, is a different one. I doubt that > inference is a right way. And I don't. I'm actually convinced it is the right way. > >>>> I admit the contracts are weaker for allowing to instante a >>>> generic with a package with the "right type signature" as >>>> parameter instead of requiring an instance of another specific >>>> generic. >>> >>> There should be no generics at all... >> >> I'm not sure. Generics provide a relatively strict contract model. I >> like this. But the instantation requirement is cumbersome if compared >> with the way parameterized types work in other langauges (and that is >> exactly what I'm pulling from generics most of the time: parameterized >> types). Parameterized types are now 1:1 substitute for generics. ML >> has functors too, for a good reason. But they are what makes everyday >> live bearable. > We don't need parametrized types (type-2). We can well do almost everything > with parametrized subtypes (type-1). That is the pos.1. Compare: "Almost". There was a time when I thought I could do everything in turbo pascal. After a while one dicovers abstracting over the elements of a container. The need for generics is borne. And so it goes on and on and on. Abstraction and the need to implement concepts in a reusable way drive the development. > type String is array (Positive range <>) of Character; > -- type-1, subtype is parametrized by the bounds > > generic > First : Positive; > Last : Positive; > package > type String is array (First..Last) of Character; > -- type-2, type is parametrized by the bounds > > I don't want the latter. Everything that can be made within type-1 must be > done there. > >>>> But what is absolutely annoying, is, that the compatibility of >>>> objects is determined by inheritance instead by the type >>>> signature. >>> >>> I see it otherwise. Because "compatibility" is undecidable (for both the >>> computer and the programmer), the language must be strongly and >>> manifestedly typed. >> >> Since the contract can be broken by new methods anyway, the only thing >> that counts from a type safety point of view, is, not to break the >> abstraction to the underlying processor / VM, that is, to be able to >> call the methods with the right number of parameters and the right >> representation of the parameters (at the machine level). So the type >> signature is enough. > This is a strange argument. Yes, you cannot verify the semantics, exactly > therefore types come into play. 
Type only names the behavior, so that it > can be checked formally *without* looking into actual behavior. But it can't: If B derives from A I have no guarantee, none that B behaves a an A. >> It's really bothersome that one cannot supply a class X which is >> compatible to another class Y by just writing out the right methods. > > This is interface inheritance + supertyping + inheritance. It works as > follows: Really cumbersome. Why not just use the type signatur to decide on the compatibility? > Given: X, Y independent types. > > Required: To use Y where the interface of X is expected. > > You create a supertype Z for Y which is a subtype of X. The compiler will > check the contract and require you to implement necessary adaptors. Done. >>>> This makes things like the factory pattern necessary >>>> and it really doesn't work in general case. (And yes, Java and C++ >>>> share this defect). >>> >>> I am not sure what you mean, but when 3 is considered as 1, then >>> dispatching on bare type tags might become possible. >> >> 3? 1? Please elaborate? Is "dispatching on bare type tags" a good or a >> bad thing? You lost me there ... (my fault probably). > You can consider type tag as a discriminant so pos.3 is a case of pos.1. I still don't see the positions -- where is the numbering? > When you make an abstract factory it is usually so that you know somehow > the type you want to create (=you have the tag of), but you don't have yet > any object of that type. I.e. you have to dispatch on the bare tag to the > factory function. Yes I know that. If you look into concrete application it rarely works (well) as a mechanism to abstract over implementation (i.e. the true base classes). > >> But my dear Dmitry -- What does your sentence "All strings have fixed >> length ..." mean than in this context, eh? > > That for any given X there exist a function X'Length. We should carefully > distinguish properties of values and types. And in C this function is? Regards -- Markus