From mboxrd@z Thu Jan 1 00:00:00 1970
Newsgroups: comp.lang.ada
Subject: Re: How come Ada isn't more popular?
References: <1169636785.504223.139630@j27g2000cwj.googlegroups.com>
 <45b8361a_5@news.bluewin.ch> <3pejpgfbki.fsf@hod.lan.m-e-leypold.de>
From: Markus E Leypold
Organization: N/A
Date: Tue, 06 Feb 2007 16:44:17 +0100
Message-ID:
User-Agent: Some cool user agent (SCUG)
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

Maciej Sobczak writes:

> Markus E Leypold wrote:
>
>>> And I stress again - GC is not the only solution for
>>> manual memory management.
>
>> OK. I accept that for the moment. I'm just not convinced you can do
>> everything you need with scope bound memory
>
> Sure. But for the sake of discussion completeness, you might wish to
> throw an example of a situation where scoped lifetime will not make it.

Model-View-Controller in GUIs, especially trying to adapt that to
GTKAda. Sorry for being so short on this, but detailing this example
would take very long and perhaps would not be very convincing after
all. At every local view of the situation one could argue that it
would just be possible to ... whatever.
But on the whole it is really hard to build a reusable toolkit this
way without reference counting or GC, and I'm convinced it is
impossible with scope-bound lifetime alone. Unfortunately, failure (or
at least tremendous difficulty) to build something in a specific
fashion is, without a (semi-)formal proof or at least the possibility
to strip it down to a minimal example, not very convincing, since you
could always assume that more research would have found a
solution. (In my case I was happy to build a specialized solution and
to note for later reference the suspicion that controlled objects
would be needed for a general one and that scope-bound lifetime
wouldn't suffice.) I always intended to look a bit deeper into this
issue, but until now other things were more important.

>>> Determinism in both timing and resource consumption?

>> Which brings me back to what I said repeatedly to other people: (1)
>> That this determinism is very often not a requirement (outside of
>> embedded programming)

> A Java programmer wrote a loop where he opened database cursors,
> releasing them in the cursor finalizer. All was working like a charm,
> until it was put into production, when in one case the loop had to
> spin many more times than he ever cared to test.

I'm not surprised. GC'ing resources that are bounded doesn't spare you
from knowing how the GC works. My suggestion would have been either to
close the cursor explicitly (since I know about the problem) or to
wrap the production of a new cursor in a module/class which also looks
at the number of already opened cursors and collects before reaching
certain limits (in effect introducing a new, additional threshold for
GC).

> GC did not clean up the abandoned
> cursor objects fast enough and the number of unnecessarily opened
> cursors hit the server limit. That was the end of this application.
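[Editorial sketch: the wrapping suggestion above might look roughly
like this in Java (the language of the anecdote). All names here are
hypothetical stand-ins, not any real database API; a real wrapper
would use the driver's own cursor type.]

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for a database cursor.
class Cursor implements AutoCloseable {
    private final Runnable onClose;
    Cursor(Runnable onClose) { this.onClose = onClose; }
    @Override public void close() { onClose.run(); }
}

// Wraps cursor creation behind an abstraction that counts open
// cursors and asks the GC to collect before the server limit is hit,
// i.e. it introduces an extra, resource-specific GC threshold.
class CursorFactory {
    private static final int SOFT_LIMIT = 100;  // kept below the server limit
    private final AtomicInteger open = new AtomicInteger();

    Cursor open() {
        if (open.get() >= SOFT_LIMIT) {
            // Collect now, so cursors that are unreachable but not
            // yet finalized get closed before we hit the hard limit.
            System.gc();
            System.runFinalization();
        }
        open.incrementAndGet();
        return new Cursor(open::decrementAndGet);
    }

    int openCount() { return open.get(); }
}
```

Cursors that are closed explicitly still decrement the count; the
GC nudge only matters for the abandoned ones.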
:-)

> The fix was easy: write explicit close/dispose/dismiss/whatever at the
> end of the loop, so that effectively there was never more than one
> open cursor. In fact, this was *manual* resource management.

Yes. As I said: GC can be made into an instrument to manage other
resources, but it has to be done right. Sometimes you're just better
off assisting this mechanism by manually disposing of external
resources at the right places.

Your approach "I want it all, and if I can't have both (memory
management AND general resource collection) I want neither" is
somewhat counterproductive. But you might well continue to believe in
your policy here. I, personally, find that it brings me a big step
nearer to salvation if I can have GC, even if I only do manual MM with
it. After all, I don't have that many other (external) resources to
care about, and when I do, it pays to have a careful look at their
structure and then wrap some abstraction around them.

> The above would be avoided altogether with scoped lifetime.

In this case, yes. See -- I do not deny the advantages of 'scoped
lifetime'. It is a useful pattern; I've given some examples myself in
my last answer. But your approach is: since somebody had problems
misusing GC in a specific case in which scoped lifetime would have
worked fine, therefore GC is useless and scoped lifetime rules.
Personally I prefer to have both approaches at hand, since they are
complementary, but I certainly wouldn't want to miss GC in some
languages.

As far as the usability of GC goes, it even helps with controlled
objects: controlled objects might be highly structured, and the usual
(i.e. Ada) approach is that you hide the details of building and later
deallocating the structure under the hood of the abstraction barrier.
Fine. That works. But with GC I don't even have to write a
tear-it-down procedure for everything a scoped object allocates under
the hood. I just make sure to close (e.g.) the file handle and leave
the rest to the GC.
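[Editorial sketch of that last point, in Java terms. ParsedFile and
its internals are made up for illustration; the point is only that
close() releases the single external resource, while everything built
under the hood is left to the collector.]

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical abstraction over a file: it holds one OS handle plus
// an arbitrarily structured in-memory index built while parsing.
class ParsedFile implements AutoCloseable {
    private final FileInputStream in;                     // external resource
    private final List<long[]> index = new ArrayList<>(); // plain memory

    ParsedFile(String path) {
        try {
            in = new FileInputStream(path);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        // ... build 'index' from the stream here ...
    }

    // The tear-down only has to release the OS handle. No code is
    // needed to dismantle 'index': the GC reclaims it once the
    // ParsedFile itself becomes unreachable.
    @Override public void close() {
        try {
            in.close();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```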
> You are right that determinism is very often not a requirement. It is
> just life that very often shows that the initial requirements were
> not complete.

>> (2) The determinism is shot anyway by using a heap, not by using
>> GC. Even better: GC can introduce determinism in space
>> consumption by compacting the heap, which naive heaps with manual
>> MM don't do because of fragmentation.

> There is nothing particular in scoped lifetime that would prohibit
> compacting heaps and there is nothing particular in GC that guarantees

No. But without having three quarters of a GC anyway, compaction is
pretty pointless.

> it. It's just the statistics based on popular implementations, not a
> rule.

Sorry, that is nonsense. There are garbage collectors that are
designed to be compacting. They move objects around. This is
absolutely deterministic, not statistical. Whereas manual allocation
and deallocation as in Ada or C will fragment the heap, and you have
NO guarantee (only statistics) about the ratio of allocated (used)
memory to presently unusable holes. None. How is that for reliability,
if you can't give space guarantees even when you know the memory your
algorithms need, since unfortunately you cannot predict the exact
sequence of allocations?

> I can perfectly imagine compacting heaps managed by scoped lifetime.

Yes, you can do that. But since you're then following pointers and
rewriting them, you might as well go the whole way and deallocate
unreachable memory while you're at it.

>> (3) What is often needed are upper limits, not determinism, and
>> those upper limits can be guaranteed with GC or with an appropriate
>> collector.

> This refers to memory consumption only, whereas I clearly stated
> deterministic *time* as a second (first, actually) goal.

This refers to both: there are real-time compatible GC algorithms.
Development didn't stop in the last 20 years.
>>>> My impression was that Ada Controlled storage is actually quite a
>>>> clean concept compared to C++ storage duration.

>>> Clean? It adds a tag to the type, which then becomes a controlling
>>> type in every primitive operation.

>>> I got bitten by this recently. Adding a destructor to a C++ class
>>> never has any side effects like this.

>> I understand. But the Ada OO way is peculiar, not unmanageable.

> OK, I accept the word "peculiar". I only oppose "quite a clean
> concept" in your previous post. :-)

The Ada OO way. 'Controlled' is just the logical consequence and, on
top of tagged types, quite clean.

>>> Apart from this, the bare existence of *two* base types Controlled
>>> and Limited_Controlled means that the concepts of controlled and
>>> limited are not really orthogonal, in the sense that adding one of
>>> these meta-properties affects the interface that is "shared" by the
>>> other aspect.

>> Still. Being able to add a Finalize means you need to have a tagged
>> type. I see no alternative.

> You might want to take a look at C++.

I know C++ rather well. :-)

>>>> But both tie allocation to program scope, synchronous with a stack.
>>>> I insist that is not always desirable: it rules out some
>>>> architectures, especially those where OO abounds.

>>> What architecture?

>> I already said in another post: that is difficult to show with a toy
>> system. It only shows in larger systems where you really can't /
>> don't want to say in any given subsystem module how long a certain
>> piece of data lives. So none of those can be burdened with
>> deallocating it.

> OK. What about refcounting with smart pointers?

(1) It ties lifetime to multiple scopes (instead of one), (2) it's not
efficient, and (3) it still doesn't work for the general case, since
there is still a place where you have to decide that you don't need
that pointer copy any more, which is unscoped.
>>>> The problem with Controlled, BTW, is that it seems to interact with
>>>> the rest of the language in such a way that GNAT didn't get it
>>>> right even after ~10 years of development. Perhaps difficult w/o a
>>>> formal semantics.

>>> You see.

>> Yes, I see. But GNAT is also a political problem (see the role of
>> AdaCore, formerly ACT), so (public) GNAT not getting things right
>> might well not indicate a problem with reading the Ada standard, but
>> rather with the release politics for the public version. My hint:
>> there is no incentive to release a high-quality public version of
>> GNAT.

> I get the message. Clear enough. :-)

Good. Whereas A. mightily profits from all the improvements in the GCC
backend, which IMHO was their prime motivation to support and actively
push reintegration into the GCC tree (otherwise they would have been
stuck with a GCC 2.8-based compiler). There is another theory, that
they did it all out of the goodness of their hearts, but I don't
subscribe to that.

>>> In other words, it's very nice that GC doesn't preclude me from
>>> doing some stuff manually, but that's not enough.

>> I'm appalled: You don't want GC, but no, it doesn't do enough for
>> you?

> Exactly. It's not enough, because it doesn't solve the problem of
> resource management in a general way.

Poor misguided friend. :-)

>> Of course, YMMV. But when I have it, it works really well for me.

> I acknowledge that there might be some applications which are strictly
> memory-oriented. They are just not the ones I usually write.

It also works for apps that are not "memory-oriented". I think you're
missing that e.g. file handles are a really simpler and differently
structured resource than memory. A file handle does not contain
references to memory or to other file handles. Memory does. That
vastly simplifies the problem of managing file handles, indeed so much
that I'm convinced that you don't need built-in support for this.
>>>> Apart from that, languages with GC often provide nice tricks to
>>>> tie external resources to their memory proxy and ditch them when
>>>> the memory proxy is unreachable

>>> These "nice tricks" are not so nice. Most of all, they provide no
>>> guarantee whatsoever, even that they will be invoked at all.

>> That's not quite true. Those tricks are building blocks to implement
>> resources that are automatically finalized when becoming
>> unreachable. But it's up to the library author to write a complete
>> implementation.

> I don't understand. If there is no guarantee that the finalizer will
> be *ever* called, then what kind of building block is it?

>>> A friend of mine spent long evenings recently hunting for database
>>> connection leaks in a big Java application. That's telling something.

>> Well -- so he was naive and should have handled / understood that
>> part of the system better.

> Sure. In other words, be prepared that with GC you have to
> handle/understand some parts of the system better. So?

>> A friend of mine spent half a month finding problems with manual
>> allocation/deallocation and sneaking heap corruption. Does that
>> prove anything? I don't think so.

> It does prove that your friend did not benefit from the language that
> provides scoped lifetime.

In that case, yes. But since there is a new operator in Ada too,
leaking would have been the same problem.

>>>> And BTW - in functional languages you can do more against resource
>>>> leaks, since you can "wrap" functions:

>>>>   (with_file "output" (with_file "out_put" copy_data))

>>>> It's not always done, but a useful micro pattern.

>>> Yes, it basically emulates something that is just natural in those
>>> languages that provide scope-based lifetime out of the box.

>> This is no emulation, but how FP does "scope based". Without the
>> necessity to add exception handling at the client side or without
>> having to introduce tagged types / classes. Isn't THAT nice?
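[Editorial sketch: the with_file micro pattern above can be imitated
in any language with closures. A Java rendering, where withReader and
withWriter are hypothetical helpers, not library methods: the helper
owns opening and closing, so every exit path -- return or exception --
releases the handle, and the caller writes no handling code.]

```java
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Reader;
import java.io.UncheckedIOException;
import java.io.Writer;

// An action over an open handle; it may do I/O, hence may throw.
interface FileAction<H, R> {
    R apply(H handle) throws IOException;
}

class WithFile {
    // Open, run the action, and close the handle on every exit path.
    static <R> R withReader(String path, FileAction<Reader, R> action) {
        try (Reader r = new FileReader(path)) {
            return action.apply(r);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static <R> R withWriter(String path, FileAction<Writer, R> action) {
        try (Writer w = new FileWriter(path)) {
            return action.apply(w);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Nesting the helpers mirrors the functional example: the inner action
gets both handles, and both are guaranteed to be closed.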
:-)

> Same thing with scoped lifetime, as implemented in C++. No need for
> exception handling (unless handling is actually meaningful), nor for
> changes in the interface. That's nice, I agree.

> The difference is that in languages with scoped lifetime the lifetime
> management is a property of the type (and so applies to all
> instances), whereas the "FP-trick" above is a property of the
> use-side. Which one is more robust and less prone to bugs?

This is, forgive me, nonsense. I might want to use a file handle in a
scoped way here and in a free-floating way there. It is still a file
handle. And no -- the FP way is not "more prone to bugs", and as with
George Bauhaus I simply refuse this kind of discussion (FUD and
contra-FUD).

> BTW - please show me an example involving 10 objects of different
> kinds. :-)

All at the same time? Well -- bad programming. You don't do everything
at the same time in FP (nor in Ada ...), and I hardly ever have
functions involving 10 parameters.

>>>> But I notice that
>>>> "Languages like C provide a more general solution (with regard to
>>>> accessing memory), which is conceptually not related to any kind
>>>> of fixed type system and can therefore implement any type and data
>>>> model"
>>>> would become a valid argument if I agreed with you.

>>> Except that it's not the point I'm making.

>> No, but the structure of the argument is basically the same. The
>> analogy should help to show why it is (IMHO) invalid.

> Ok, but please elaborate on the above first, so I'm sure that it
> relates to my point.

You refuse more automation and abstraction on the pretext of
generality and better control. That is exactly the kind of argument
that has been made against compiled languages, structured programming,
type systems, modularization, OO, etc. -- name any advance you want,
it has been opposed with arguments of exactly that kind.
What is missing from them, though, is some kind of argument that the
"loss of control" or the "loss of generality" actually is bad, or
better: that it costs more than it pays for. Your argument, I admit,
might be permissible, but it needs more groundwork.

>>> Tons of exception handling (and not only - every way to leave a
>>> scope needs to be guarded, not only by exception) are necessary in
>>> those languages that rely on GC without providing the above
>>> possibility at the same time.

>> No. I've done the same in Ada w/o controlled objects, but using a
>> generic procedure.

>>   procedure mark_data_records is new process_cache_with_lock(
>>      Operation => mark_record, ... );
>>   begin
>>      mark_data_records(...);
>>   end;

>> The client side has no burden with exception handling.

> Could you explain this example a bit?

Later. Don't hesitate to ask again. I'll just cut+paste the complete
code too, but it takes some time (which I don't have now).

>>> I agree for references/pointers in polymorphic
>>> collections. That's not even close to "almost everywhere" for me,
>>> but your application domain may differ.

>> Yes, it does, obviously. You might not be aware of it, but code
>> destined for mere consumers (as opposed to embedded code and code
>> destined as tools for other developers) has a large amount of GUI
>> code in it.

> Yes.

>>>> (b) AFAIR there are restrictions on _where_ I can define controlled
>>>> types. AFAIR that was a PITA.

>>> That's a mess. I'm sorry to repeat that.

>> Yes. But does C++ do it better? The Ada restrictions AFAIK come from
>> the necessity of separate linking and compilation (you must be able
>> to relink w/o looking at the body), and C++ trades that against the
>> ability to add finalizers everywhere.

> I don't understand. Adding a finalizer/destructor to the type that
> didn't have it before means changes in both specs and the
> body. Relinking is not enough.

I thought you talked about the restriction on where a Controlled type
can be defined.
Regards -- Markus