From: "Chad R. Meiners"
Newsgroups: comp.lang.ada
Subject: Re: [Spark] Arrays of Strings
Date: Fri, 11 Apr 2003 14:27:53 -0400
Organization: Michigan State University
References: <1ec946d1.0304090942.3106b4e4@posting.google.com> <1ec946d1.0304100609.52b0fac0@posting.google.com> <1049986095.779228@master.nyc.kbcfp.com> <1050064265.685430@master.nyc.kbcfp.com>

"Hyman Rosen" wrote in message news:1050064265.685430@master.nyc.kbcfp.com...

> I think you are demonstrating *my* point.

No, I am not. You are not listening to the argument, nor questioning your own opinion.

> Despite the intentions of Spark,
> the OP needs to generate and pass a pointer, and he's desperately trying
> to work around the constraints of his programming language.

As Rod Chapman and Peter Amey point out, the data type the OP is worried about needs to be modeled in a Spark-friendly way that can then be used to accomplish the actual call in the non-Spark body. As for "desperately trying to work around the constraints" of Spark, I also believe that you misunderstood the OP.
It is my opinion that he was trying to get advice on how to work with the language constraint so that he could separate the provable part of the program from the part left unproven. So the OP might have been frustrated, but learning usually involves some frustration due to misunderstandings or lack of knowledge.

> It's all good
> and well that you can verify programs written in Spark, but it's not of
> much use if programs written in Spark can't do anything!

This is a ludicrous argument. Of course you can write useful programs in Spark (it has been done, with published case studies; see www.sparkada.com). The nice point about Spark is that you can apply it only where necessary and interface it cleanly to the rest of your Ada code base (and through that you can interface to other programming languages as well).

> Do you remember how, in the days of Ada83, it would be suggested that
> people use task types and pointers to them to work around the lack of
> procedure pointers?

You are distracting from the argument. The above is irrelevant.

> Overgeneralize, my foot!

Of course you overgeneralized! The design processes and goals of Java and Spark are unrelated, and you cannot attribute perceived language faults in either to the same cause (this vague notion of language-design hubris). You also presented a strawman argument. There was no hubris involved in the creation of Spark. You should at least read the history and rationale of Spark before you pass judgment on its designers. Spark does not pretend to provide you with all the capabilities of most general-purpose programming languages. Its designers instead recognized that some code is going to have to exist outside the Spark boundary, and they provide you with a clean method to interface with that code. This is a very nice feature, since it allows you to state explicitly where you are separating your concerns. That is not hubris but a very good idea! You must also remember that Spark is a formal method.
Learning a formal methodology requires a `slight' paradigm change in your thinking about development. This is why formal methods are often misunderstood and mischaracterized by strawman arguments.
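
For anyone following along who hasn't seen the boundary mechanism we keep referring to, it looks roughly like this. This is only a sketch; the package and subprogram names are made up, and the annotation details may differ slightly between Examiner versions:

```ada
-- Spec: fully visible to the Spark Examiner, so callers of Lookup
-- can be analysed.  The data type is modeled in a Spark-friendly
-- way (a constrained string subtype, no access types).
package Name_Table is

   subtype Name_String is String (1 .. 32);

   procedure Lookup (Index : in Positive; Name : out Name_String);
   --# derives Name from Index;

end Name_Table;

-- Body: the actual call that Spark cannot express lives here,
-- hidden from analysis with the hide annotation.
package body Name_Table is

   procedure Lookup (Index : in Positive; Name : out Name_String) is
      --# hide Lookup;
   begin
      --  Full Ada is allowed in a hidden body: access types,
      --  Interfaces.C bindings, whatever the real call needs.
      Name := (others => ' ');  -- placeholder for the real work
   end Lookup;

end Name_Table;
```

The point is that the unproven part is confined to one explicitly marked body, and everything that calls through the spec stays inside the provable subset.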