Newsgroups: comp.lang.ada
Subject: Re: How come Ada isn't more popular?
References: <1169636785.504223.139630@j27g2000cwj.googlegroups.com> <45b8361a_5@news.bluewin.ch> <3pejpgfbki.fsf@hod.lan.m-e-leypold.de>
From: Markus E Leypold
Organization: N/A
Date: Fri, 02 Feb 2007 14:57:17 +0100
Message-ID:
User-Agent: Some cool user agent (SCUG)
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

Maciej Sobczak writes:

> Markus E Leypold wrote:
>
>>>> But from what I remember in
>>>> the 1997s to 1998s
>>> Most programming languages were terrible at that time, that's true.
>> Not Ada 95 ... :-).
>
> Ada 95 *is* terrible. It doesn't have containers nor unbounded strings
> and it cannot even return limited types from functions. Yuck! ;-)

Can't it? Wasn't there a trick with renaming somewhere? Like

   A : Limited_Type renames some_function(...);

I seem to remember something like this. I might be mistaken: I usually end up eliminating limited types from my programs against my will, since they play badly with non-limited base classes (like those found in GtkAda).

>> Oh yes, misconceptions perhaps. But I'v only been talking about
>> peoples motivations (which say a lot about their perceived problems).
>
> Everybody has full rights to use misconceptions as a basis for
> perception and motivation. That's a natural limitation of human brain.

Exactly. And we were talking about "Why isn't Ada more popular?". That is basically a question about how Ada is perceived, not about "technical truth", i.e. not about whether Ada really is as it is perceived.

>>> I've even heard that Java is better, because it has a String class and
>>> there is no need to use char* as in C++ (!). FUD can buy a lot.
>
>> As far as that goes I have seen people getting tripped up really bad
>> by string::c_str(). And I think you need it, if you don't program pure
>> C++, which at that time nobody did.
>
> Good point. It is true that people get tripped when interfacing with
> old C code.
>
> What about interfacing with old C code *from Java*?

They don't. And that is the interesting thing. The people doing the C->Java transition were a totally different crowd from those doing (at almost, though not exactly, the same time) the C->C++ transition.

The C++ adopters were often, though not necessarily always, motivated by being able to integrate their existing code base and take their C know-how with them. They felt that they weren't being moved further away from the system and still had all the interfaces and libraries available. In a sense this was a no-brainer for the applications folks: they thought they got OO for free (no downside). Of course the downside was that old-hand C programmers don't necessarily make good C++ programmers or good OO developers.
The Java adopters, on the other hand, knew that they were giving up their legacy code (if they had any), but on the upside were rewarded with a much more complete standard runtime library. Adopting Java removed you a further step from your host system, so old know-how was only partly applicable, if at all (yes, there is JNI, but it wasn't commonly used). So Java was adopted by (a) people who thought the win was worth the price they had to pay (basically starting from the beginning), (b) people who realized that their existing code base was crap anyway :-), and (c) newcomers (companies or students) who didn't have any specific know-how yet and decided to acquire it in Java.

The last point, in my eyes, accounts for the relatively high number of clueless look-what-we-have-newly-invented (old wine in new bottles) newbies in the Java sector who gave Java a bad name. I still have the reflex to groan deeply when I hear the words: "Now we are trying to re-implement this in Java". It's probably unjustified these days, but some years ago that sentence was, to me, the typical hallmark of a buzzword-chasing newbie who thought that by choosing his favourite language (probably only his 1st or 2nd one) all problems would magically go away.

(Sorry, no proofs or sources here, folks. I think the recent history of programming language development and adoption would bear some more research. You might apply at my e-mail address to sponsor this research :-)

When I look at all those transitions, I see that there was no C->Ada transition, at least no mass movement. So we come back to the initial question: why not?

I think some of the posts here have already given answers to that: historical reasons. Those transitions would have had to happen around 1995-2000, which in my eyes was a period when people were looking for new languages (GUI development in C and the like had become rather unfeasible by then). But a process of bringing the candidate languages into public awareness would have had to start earlier. Was the Ada 95 standard just a tiny bit too late (it is understandable that Ada 83 was not a serious contender for this; people were looking for OO really urgently)? Or was it the vendor situation? GCC had had C++ for some time, but did GNAT come too late?

I think this is very much a case of being "at the right place at the right time" -- when people were looking for ways out of their pain, Java and C++ were (more or less) ready to at least promise salvation. I wonder if Ada was ready ... at least it wasn't part of the public discussion then.

> What about interfacing with old C code *from Java*? Are
> there less opportunities for getting tripped, or what?

As I said: it is much less a part of the overall migration strategy usually associated with a transition to Java.

> Another misconception. Interfacing to old C is tricky from Ada as
> well.

Yes :-). I never denied that. But you're less tempted to mix Ada and C freely than you are in C++/C. So in Ada (and in Java and in every other language with a useful foreign function call interface) you get a clear interface (in the original as well as in the programming sense of the word) to C. In C++ the temptation / opportunity to make a mess is much greater.
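To illustrate what I mean by a clear interface -- a minimal sketch only, with an invented C function and invented Ada names -- a thin binding like the following is typically the *only* place where the Ada side touches C conventions at all:

   --  Hypothetical C function:  int add_event(const char *name);

   with Interfaces.C;         use Interfaces.C;
   with Interfaces.C.Strings; use Interfaces.C.Strings;

   package C_Events is

      --  Thin binding: all conversion between Ada strings and C strings
      --  (New_String / Free) stays behind this one package.
      function Add_Event (Name : chars_ptr) return int;
      pragma Import (C, Add_Event, "add_event");

   end C_Events;

Everything on the other side of that package is plain Ada; in C++ the C idioms tend to leak all over the program instead.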
> (Strangely, in "pure" C++ you have to use .c_str() even when opening a
> file stream, because fstream constructor does not understand string -
> that's real oops, but actually I've never seen anybody getting tripped
> here.)

If you just do f(s.c_str()) and f _is_ properly behaved -- that is, it only reads from the pointer or does a strdup() -- everything is fine, but, I note, not thread safe. I wouldn't exclude the possibility that the resulting race condition is hidden in a fair number of C++ programs out there.

>> s.c_str() returned a pointer
>> into some internal data structure of s which promptly changed when s
>> was modified.
>
> Yes.
>
>> The only "safe" way to use it was strdup(s.c_str())
>
> No, the only "safe" way to use it is making an immutable copy of the string:
>
> string someString = "Hello";
> const string safeCopy(someString);
> some_old_C_function(safeCopy.c_str());

Brr. Yes, that's another way to solve this problem.

> // modify someString here without influencing anything
> // ...
>
>> and
>> that is not threadsafe as anybody can see.
>
> Why? There is nothing about threads here.

Your solution is thread-safe if the string implementation is (which it wasn't in the past). My "solution" isn't, since if any other thread holds a reference to the string in question and modifies it between c_str() and strdup(), we're not merely working with unexpectedly modified data (which would be bad enough) but with pointers to invalid memory. That means the race has the potential not just to be a race, but to break type safety! Such an interaction between the presence of threads and the semantics of a program is just so bad, bad, bad.

>> I see "the need to use
>> char* in C++" rumour as the result of people having been burned
>> by similarly quirks at that time.
>
> Yes, I understand it. Still, most of the rumour comes from misconceptions.

I think the point is not whether you (or someone) _can_ handle C++ safely. It's about how likely that is to happen without having to study up on arcane knowledge, just by doing the next "reasonable" thing. And that is the area where C++ will trip people up, yes, especially newcomers.

>>> I think that at the end of the day the embedded C++ will disappear
>>> from the market as the "full" C++ gets wider compiler support on
>>> embedded platforms.
>
>> That is no question of compiler support, as I understand it, but of
>> verifiability and safety. A bit like Ravenscar, but -- of course --
>> not as highly integer (ahem ... :-).
>
> I agree with it, but restrictions for embedded C++ are not catering
> for the same objectives as those of Ravenscar. For example: EC++ does
> not have templates. Nor even namespaces (:-|). Does it have anything
> to do with schedulability or necessary runtime support? No. Result -

No -- but perhaps it has to do with trying to attach at least a feeble resemblance of semantics to the remaining language, and with avoiding -- heuristically -- the most common handling errors (namespaces + overloading + "last identifier defined wins" make a nice mess in C++).

> some people got frustrated and invented this:
>
> http://www.iar.com/p7371/p7371_eng.php

Obviously another market: minus verifiability (well, of a sort), plus the ability to compile for really small targets. Useful.

> Funny?
>
> That's why I believe that Embedded C++ will die.

That might be.

>>> Subsetting C++ would be beneficial in the sense similar to Ravenscar
>>> or by extracting some core and using it with formal methods (sort of
>>> "SPARK++"), but I doubt it will ever happen.
>
>> It already did (and perhaps died):
>> http://en.wikipedia.org/wiki/Embedded_C++
>
> Exactly. It will die, because it's just a subset.
> If it was a subset
> extended with annotations ("SPARK++") or with anything else, the
> situation would be different, because it would provide new
> possibilities instead of only limiting them.

There is some truth in that.

>>> The interesting thing is that memory management is *said* to be
>>> painful.
>
>> I disagree. The only-downward-closures style of C++ and Ada, which
>> allows only to mimic "upward closures" by using classes, heavily
>> influences the way the programmer thinks. Higher level abstractions
>> (as in functional languages) would require full closure -- and since
>> this means that memory life time cannot bound to scope any more, this
>> would be the point where manual memory management becomes painful.
>
> You can have it by refcounting function frames (and preserving some
> determinism of destructors). GC is not needed for full closures, as
> far as I perceive it (with all my misconceptions behind ;-) ).

Yes, one could do it like that. Ref-counting is rumoured to be inefficient, but if you don't have too many closures it might just work.

> On the other hand, GC is convincing with some lock-free algorithms.
> Now, *this* is a tough subject for Ada community, right? ;-)

:-).

>> Furthermore I've been convinced that manual memory management hinders
>> modularity.
>
> Whereas I say that I don't care about manual memory management in my
> programs. You can have modularity without GC.

Certainly. But you can have more with GC. George Bauhaus recently referred to "A Critique of Standard ML" by Andrew W. Appel:

   http://www.cs.princeton.edu/research/techreps/TR-364-92

I re-read that paper cursorily and noticed that there are some nice points about the desirability of GC in there (approx. one page). I suggest you read it: it says better than I could that, without GC, the responsibility for freeing/disposing of allocated storage is often a genuinely difficult question.

People who don't have GC often say that they can do anything with manual memory management. I humbly suggest that this might be because they already think about their solutions in terms compatible with manual memory management. Which means they are missing the perception of those opportunities where GC would buy a vastly simpler architecture / solution / whatever.

> (And if I think about all these funny GC-related effects like
> resurrection of objects in Java, then I'm not sure what kind of
> modularity you are referring to. ;-) )

Resurrection? You're talking about finalization in Java? Well -- the way that is designed, it's just a perversion.

>>> Reference-oriented languages have completely
>>> different ratio of "new per kLOC" so GC is not a feature there, it's a
>>> must.
>
>> I wonder, if it is really possible to do OO without being
>> reference-oriented. I somewhat doubt it.
>
> Why? OO is about encapsulation and polymorphism, these don't need
> references everywhere.

Yes, but -- you want to keep, say, a list of Shape(s). Those can be Triangle(s), Circle(s) etc., which are all derived from class Shape. How do you store this list? An array of Shape'Class is out of the question, because the descendants of Shape have different allocation requirements.
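Roughly like this -- just a minimal sketch with made-up type names, to show where the references creep back in:

   package Shapes is

      type Shape is abstract tagged null record;

      type Triangle is new Shape with record
         A, B, C : Float;   --  whatever a triangle needs
      end record;

      type Circle is new Shape with record
         Radius : Float;
      end record;

      --  type Shape_Array is array (Positive range <>) of Shape'Class;
      --  ^ illegal: Shape'Class is indefinite, its objects don't all
      --    have one size, so the list has to hold references instead:

      type Shape_Ref  is access all Shape'Class;
      type Shape_List is array (Positive range <>) of Shape_Ref;

   end Shapes;

So the moment the design goes class-wide, the references are back -- and with them the question of who frees what.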
>>> But then the question is not whether GC is better, but whether
>>> reference-oriented languages are better than value-oriented ones. Many
>>> people get seduced by GC before they even start asking such questions.
>
>> Value-oriented in my world would be functional -- languages which all
>> heavily rely on GC.
>
> What about maintainability and reasoning?

What about them? They're easy with value-oriented languages (i.e. languages that just produce new values from old ones in a non-destructive fashion). Functional languages do this, and reasoning is therefore a well developed art there. But the representations of all those values (trees, lists, ...) (a) rely heavily on representation sharing and (b) use references because of that. They need and use GC.

>> I also admit being one of the seduced, but that is not surprising
>> since my main focus is not in embedded programming and in everything
>> else it's sheer folly not to have GC.
>
> I disagree. I'm not seduced.

So stay pure. :-) This reminds me a bit of the folks in my youth who didn't want to use high-level languages like (gasp) C or Pascal and insisted on assembler, because they (a) wanted to be efficient all the time and (b) didn't trust the compiler.

I've decided that if I want to deliver any interesting functionality to the end user with my limited resources (developer time), I have to leave everything I can to automation (i.e. compilers, garbage collectors, even libraries) in order to reach my lofty goals.

>> The arguments against GC often read like arguments against virtual
>> memory, against high level languages as opposed to assembler,
>> against file systems (yes there was a time when some people thought
>> that the application would best do allocation of disc cylinders
>> itself since it knows its access patterns better than the FS).
>
> Valid points. Still,
> Blue Gene/L uses real addressing scheme in each
> node and more advanced database servers use raw disk access bypassing
> the features provided by FS. Guess why.

Yes. Certainly. The point is to know when to optimise, not to do it always. Like I said elsewhere: I advocate the use of a type-safe, garbage-collected language -- probably more in the functional sector -- together with a good foreign function call interface and a real low-level language for interfacing and, perhaps, hot-spot optimisation.

Regards -- Markus