From: Niklas Holsti
Newsgroups: comp.arch,comp.lang.ada
Subject: Re: Have the Itanium critics all been proven wrong?
Date: Tue, 21 Aug 2012 23:48:29 +0300
Organization: Tidorum Ltd

(I'm cross-posting (I hope) this to comp.lang.ada, since there are
questions on how Ada is used.)

On 12-08-21 16:00 , Michael S wrote:
> On Aug 21, 1:35 pm, Brian Drummond wrote:
...
>> And where any dynamic allocation is not good enough, the SPARK subset
>> currently mandates pure static allocation (and fixed size stack objects)
>> to allow formal proof of storage requirements. Harsh, but I'd rather not
>> see a segfault in my car.
>> These restrictions may be relaxed if proof technologies improve, but
>> I wouldn't hold my breath...
>
> SPARK is an Ada analogue of MISRA C?

Not really, as I understand them. MISRA C is a bunch of rules that are
supposed to define a safer subset of C, and for which rule violations
can (mostly?) be detected automatically. But I don't think that the
MISRA rules automatically make a C program much more amenable to static
analysis or formal proof.

SPARK is Ada, plus annotations embedded in the Ada source, where the
goal is to allow automatic proof that the program is safe and correct
per some formal criteria expressed in the annotations and elsewhere.
For example, one can prove that all run-time checks in the Ada program
will succeed and will not raise exceptions. Limitations in the SPARK
proof tools mean that only a subset of Ada can be handled by the tools.
Moreover, the proofs often need some manual guidance.

> I guess, without reading a single line of spec, I already don't like
> it.

At present, SPARK handles a rather small subset of Ada that forbids
some major features, for example access types (pointers). I would use
SPARK only for very critical code. However, it is possible to write
parts of an application in SPARK, and mix in full Ada code for the
rest, "hiding" it from the SPARK tools. Of course, the SPARK proof then
formally covers only the SPARK part.

> So, I have more generic question about Ada state of the art.
>
> Preface:
> According to my understanding, one of the Ada objectives was creating
> two languages in one.
> The first language intended primarily for complex embedded software,
> but also suitable for general-purpose system programming.
> The second language intended primarily for big complex application-
> level programming, but is also applicable for relatively simple
> embedded software and for some system programming tasks.

I've been involved with Ada for a long time, but I don't remember such
a "two languages" thing being discussed.
Perhaps it was a topic in the early language definition stages.

> The second language, at source level, but not necessarily at pragma
> and run-time support level, is a strict subset of the first language.
> There is clear distinction line between the first and second
> languages, drawn around built-in procedures with the names starting
> with unchecked_ and with external procedure calls.
> The first language does not pretend to be "safe". And sorry, I am not
> in the mood of defining exactly what "safe" means.

One can certainly do unsafe or wrong things in Ada, even without using
any "unchecked programming", as it is called. A simple mistake like
using an uninitialized variable can make the program give unpredictable
results. But it is not possible to clobber arbitrary memory locations
in this way; for that, one must use Unchecked_Conversion to generate a
wild pointer. (OK, there are some other low-level ways to generate wild
pointers, too.)

The things called Unchecked_ in Ada are the Unchecked_Conversion
generic, which reinterprets data of one type as being of a different
type, the Unchecked_Deallocation generic, which deallocates heap memory
(just like "free" in C), and the Unchecked_Access attribute, which can
make a pointer value live longer than the object to which it points.
All can cause run-time trouble in obvious ways, but are not enough IMO
to make "two languages". Oh yes, and the Unchecked_Union pragma, for
interfacing with C union types. But mixed-language programs are more or
less completely "unchecked" anyway.

> The second language is "safe" and, supposedly, very portable.

There are many, many tales that confirm the portability, and my own
experience also agrees.

> Q.
> Do people actually use the "second" Ada language for really big and
> really complex application programs?

It is certainly possible. For example, the main SPARK tools are written
in SPARK.
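(For the curious, the Unchecked_ generics mentioned above look like
this in ordinary Ada. This is just an illustrative sketch; the type and
procedure names are made up, not from any real program.)

```ada
with Ada.Unchecked_Conversion;
with Ada.Unchecked_Deallocation;

procedure Unchecked_Demo is

   type Node;
   type Node_Ptr is access Node;
   type Node is record
      Value : Integer;
      Next  : Node_Ptr;
   end record;

   --  Reinterprets the bits of an Integer as a Float (assuming the
   --  two types have the same size on this compiler/target):
   function To_Float is new Ada.Unchecked_Conversion (Integer, Float);

   --  Explicit deallocation, the analogue of C's free():
   procedure Free is new Ada.Unchecked_Deallocation (Node, Node_Ptr);

   P : Node_Ptr := new Node'(Value => 42, Next => null);
   F : Float;
begin
   F := To_Float (P.Value);  --  bit-pattern, not numeric, conversion
   Free (P);                 --  also sets P to null
   --  Dereferencing P here would raise Constraint_Error rather than
   --  clobber memory -- the "web of checks" still applies.
end Unchecked_Demo;
```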
> I mean where covering of 95% of 'C' dynamic memory use cases with
> variable length arrays, storage pools etc is not enough and have to
> handle somehow the remaining 5%.

I think the answer here depends on the kind of data structures used in
the program. My own applications use graph structures with many links
between nodes, and therefore I do use heap memory (and some
Unchecked_Deallocation). Applications that are more array-oriented may
be able to avoid the heap.

> According to my understanding, garbage collector implemented by the
> run-time support is the only sane way to handle these last 5% while
> remaining in the envelope of the "second" Ada language.
> So what happens in practice?
> Do people that develop big apps in Ada use "second" language + runtime
> with GC?

As far as I know, the only Ada systems with automatic garbage
collection are those that compile Ada to Java byte-code and run on the
JVM. I have heard that some people have used the Boehm "conservative"
GC with Ada, but I'm not sure if it works reliably (because some Ada
compilers play tricks with array base addresses). So the answer here is
"no", I think; GC is not used much with Ada. Unfortunately, IMO.

> Or do they opt for unchecked_deallocation() thus, effectively, finding
> themselves in the "first" language, even if they don't use any other
> unchecked_xxx procedure?

Unchecked_Deallocation is used a lot. However, Ada has a class of
"controlled" types with user-defined initialization and finalization
that can be used to automate deallocation by reference counting. Of
course, Ada is not unique in this; C++ has similar means.

But although many Ada programs use some unchecked stuff, the web of
checks in Ada is so dense that even if the unchecked computation
creates an error, the error is usually detected rapidly as an
exception.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
      .      @      .