From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: Variable-block memory managers bad in realtime
Date: 1997/12/14
Organization: New York University
Newsgroups: comp.lang.ada

This is plain false. There are well-known algorithms that work just fine, and do not depend on statistical regularity. Basically the trade-off is that if you want guaranteed constant-time allocation, you have to be willing to waste some fixed proportion of memory in the worst case (I sketch one such scheme below).

As for proving such algorithms correct, again you need to go to the literature. This is precisely the kind of area where you don't need "domain experts". You need people who are aware of the standard work in analysis of algorithms, for which there is a large established literature. No doubt you will complain that you don't want or need "experts in memory allocation", but as your note makes clear you certainly COULD benefit from having people on your project who know this kind of material.

Once again, this is not the sort of thing you can pick up from newsgroup FAQs. The literature on memory allocation is comprehensive, and accessible to someone with basic knowledge of the area, but it is not so accessible if you don't have the general background. If I gave my graduate students in CS the task of looking up a memory allocation algorithm with "perfect" characteristics as far as constant-time (i.e. O(1)) allocation goes, I would expect them to be able to track it down. On the other hand, if I asked them to track down papers in high-energy physics or quantum chemistry, I would not expect them to have the background.

So this is another opportunity to emphasize that these days we recognize that programming is a complex task, but that there are many tools available to help, some of them quite complex, requiring a significant level of training and understanding. The best programming efforts are those where domain experts work with experts in the technology in a cooperative manner, to make sure that the best technology is used to incorporate the most accurate model of the domain, reaching for the best knowledge available in both areas.

If you look at what is happening in scientific programming these days, you see an interesting trend. The typical quantum chemistry or crystallography programs of the 60's (my thesis is in the xtalography area) did not require complex algorithms from a CS point of view, and the work was done by chemists who had picked up some general Fortran programming knowledge. But even back then, the result was very effective programming of silly algorithms in some cases.
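Coming back to constant-time allocation for a moment: here is a minimal sketch of one such fixed-block scheme, written in Ada since this is comp.lang.ada. The package name, block size, and pool size are invented for illustration, and this is only an outline of the idea, not code from any particular library. Every request is satisfied by popping one fixed-size block off a free list, and Free simply pushes the block back, so both operations take constant time regardless of the allocation history; the price is the slack wasted inside each block.

--  Sketch of a fixed-block pool allocator (illustrative names and sizes).
--  Allocation pops the free list, deallocation pushes it, so both are
--  constant time; the memory wasted is at most the unused slack inside
--  each fixed-size block.

package Fixed_Pool is

   type Block_Index is range 0 .. 1_000;
   No_Block : constant Block_Index := 0;

   function  Allocate return Block_Index;   --  O(1); No_Block if exhausted
   procedure Free (B : Block_Index);        --  O(1)

end Fixed_Pool;

package body Fixed_Pool is

   Block_Size : constant := 64;   --  every request is rounded up to this

   type Block is record
      Next : Block_Index := No_Block;       --  free-list link
      Data : String (1 .. Block_Size);      --  the caller's storage
   end record;

   Pool      : array (Block_Index range 1 .. Block_Index'Last) of Block;
   Free_Head : Block_Index := No_Block;

   function Allocate return Block_Index is
      B : constant Block_Index := Free_Head;
   begin
      if B /= No_Block then
         Free_Head := Pool (B).Next;        --  pop a block off the free list
      end if;
      return B;                             --  No_Block means pool exhausted
   end Allocate;

   procedure Free (B : Block_Index) is
   begin
      Pool (B).Next := Free_Head;           --  push the block back
      Free_Head := B;
   end Free;

begin
   --  Chain every block onto the free list once, at elaboration time.
   for I in Pool'Range loop
      Pool (I).Next := Free_Head;
      Free_Head := I;
   end loop;
end Fixed_Pool;

A real allocator would keep several such pools for a few different block sizes and round each request up to the nearest one; the constant-time bound and the bounded waste both survive that refinement.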
To give one example of those silly algorithms: I improved one data gathering program by replacing a bubble sort with treesort3 (people know it as heapsort today), and then further improved it by devising a modification of treesort3 that halved the number of compares from 2 log N to log N [already I was becoming more of a computer scientist than a chemist (*)].

These days, scientific programs tend to use highly complex algorithms, and numerical computing is an avid consumer of some of the most advanced work in algorithmic design and analysis. At the Courant Institute, computer scientists, numerical analysts, and other scientific experts work together to solve problems in such areas as secondary oil recovery, design of airframes, and design of fusion reactors.

Robert Dewar
Ada Core Technologies

(*) This optimization has never been properly published, although full details are in my thesis. The idea appeared as an exercise in Volume III of Knuth, though Knuth's answer was in fact wrong in an important detail (one of the several nice Wells Fargo checks for $1 that I got from this volume was for finding this bug :-) If you are interested in learning more about that optimization, or using it, you have easy access to the necessary information. Just look at the extensively commented body of GNAT.Heap_Sort_A or GNAT.Heap_Sort_G in files g-hesora.adb and g-hesorg.adb. In general, if you are a GNAT user, spend some time wandering through the GNAT.xxx hierarchy of useful stuff; you will find it worthwhile :-)
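For anyone who wants the flavor of that modification without digging out the thesis or the GNAT sources, here is a rough illustrative sketch (my invented names, not the GNAT body, and possibly differing in details from what g-hesora.adb actually does): the hole left by the sifted value is pushed all the way to the bottom of the heap, following the larger child at one compare per level, and the saved value is then carried back up the same path to where it belongs, which is almost always near the bottom, so the usual two compares per level become roughly one.

--  Illustrative sketch only (not the thesis code and not the GNAT body):
--  a heap sort whose sift pushes the hole to the bottom following the
--  larger child (one compare per level) and then carries the saved value
--  back up, instead of doing two compares per level on the way down.
--  Assumes a 1-based array, so the children of element I are 2*I and 2*I + 1.

with Ada.Text_IO; use Ada.Text_IO;

procedure Heap_Sort_Demo is

   type Int_Array is array (Positive range <>) of Integer;

   procedure Heap_Sort (A : in out Int_Array) is

      procedure Sift (Root : Positive; Max : Natural; V : Integer) is
         Hole  : Positive := Root;
         Child : Positive;
      begin
         --  Phase 1: let the hole fall to the bottom, always following
         --  the larger child; one compare per level.
         loop
            Child := 2 * Hole;
            exit when Child > Max;
            if Child < Max and then A (Child) < A (Child + 1) then
               Child := Child + 1;
            end if;
            A (Hole) := A (Child);
            Hole := Child;
         end loop;

         --  Phase 2: carry V back up the same path until its parent is
         --  at least as large; usually only a step or two.
         while Hole > Root and then A (Hole / 2) < V loop
            A (Hole) := A (Hole / 2);
            Hole := Hole / 2;
         end loop;
         A (Hole) := V;
      end Sift;

   begin
      --  Build the heap.
      for I in reverse 1 .. A'Last / 2 loop
         Sift (I, A'Last, A (I));
      end loop;

      --  Repeatedly move the maximum to the end and re-sift.
      for Last in reverse 2 .. A'Last loop
         declare
            V : constant Integer := A (Last);
         begin
            A (Last) := A (1);
            Sift (1, Last - 1, V);
         end;
      end loop;
   end Heap_Sort;

   Data : Int_Array (1 .. 9) := (5, 1, 9, 3, 7, 2, 8, 4, 6);

begin
   Heap_Sort (Data);
   for I in Data'Range loop
      Put (Integer'Image (Data (I)));
   end loop;
   New_Line;
end Heap_Sort_Demo;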