* Ada vs Fortran for scientific applications @ 2006-05-22 4:54 Nasser Abbasi 2006-05-22 6:45 ` Brooks Moses ` (10 more replies) 0 siblings, 11 replies; 314+ messages in thread From: Nasser Abbasi @ 2006-05-22 4:54 UTC (permalink / raw) I'd like to discuss the technical reasons why Ada is not used as much as Fortran for scientific and number-crunching applications. To make the discussion more focused, let's assume you want to start developing a large scientific application in the domain where Fortran is commonly used. Say you want to develop a new large Finite Element Method program or large computational physics simulation system. Assume you can choose either Ada or Fortran. What are the technical, language-specific reasons why Fortran would be selected over Ada? I happen to know a little about Ada and Fortran, and from what I know, I think Ada would be an excellent choice due to its strong typing, good support for numerical types, and good math library. I also know that Fortran is supposed to be better/faster when it comes to working with large arrays (matrices), but it is not clear to me why that is, and whether it is still true with Ada 2005. Something about array aliasing, but I am not sure how that works. I am also not sure about the support for sparse matrices in both languages' libraries. It is known that Ada's strong domain is realtime and safety-critical applications. I never understood why Ada never became popular in the scientific field, in particular in areas such as computational physics or CFD or similar fields. Any thoughts? thanks, Nasser ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 4:54 Ada vs Fortran for scientific applications Nasser Abbasi @ 2006-05-22 6:45 ` Brooks Moses 2006-05-22 7:41 ` Jan Vorbrüggen ` (2 more replies) 2006-05-22 7:34 ` Dmitry A. Kazakov ` (9 subsequent siblings) 10 siblings, 3 replies; 314+ messages in thread From: Brooks Moses @ 2006-05-22 6:45 UTC (permalink / raw) Nasser Abbasi wrote: > I like to discuss the technical reasons why Ada is not used as much as > Fortran for scientific and number crunching type applications? > > To make the discussion more focused, let's assume you want to start > developing a large scientific application in the domain where Fortran is > commonly used. Say you want to develop a new large Finite Elements Methods > program or large computational physics simulation system. Assume you can > choose either Ada or Fortran. > > What are the technical language specific reasons why Fortran would be > selected over Ada? In my case? I know Fortran. I don't know Ada. I would be foolish to attempt to learn a new language while writing a large program that I needed to rely on. > I happened to know a little about Ada and Fortran, and from what I know, I > think Ada would be an excellent choice due to its strong typing, good > support for numerical types and good Math library. Quite probably. Many languages are. I'm rapidly becoming of the opinion that, once languages hit a minimum level of functionality sufficient to be capable of solving the problem at hand -- and, for scientific processing, that generally means "does it avoid doing something stupid with basic array math?" and "can it link to an appropriate parallel-processing library?" -- the relevant questions are all about the available programmers and the character of the interface between the programmer and the language. Some languages are vastly easier to program in than others -- but which one is which depends on the programmers you have and what they're trying to do. 
> I know also that Fortran is supposed to be better/faster when it comes to > working with large Arrays (Matrices), but it is not clear to me why that is, > and if it is still true with Ada 05. Something about arrays aliasing, but > not sure how that is. This may be true to a minor extent; I doubt that there is any major fundamental difference. In general, "Which language is faster?" is a meaningless question. To take a well-populated example, consider that for Fortran on Intel desktop computers there are usually significant variations (10 to 20 percent seems very common) in the speed of the exact same program when compiled on different compilers -- and that's just the mature compilers. With less-mature compilers, such as early versions of g95 and gfortran, there can be differences of a factor of two or three in speed. Now, it is highly unlikely that the Ada language has anything in it that will cause more than a few percentage points of difference in the maximum theoretically-achievable speed. Thus, in practice any speed differences are going to be almost entirely a factor of the quality of the code you write, and the quality of the compiler you compile it on. The only way the language is likely to be relevant is if you are writing the code for a platform where there aren't any high-quality compilers for the language you're using. My completely unfounded guess is that this is more likely to be true for Ada than it is for Fortran, simply because of popularity, but it could be completely the reverse for the system you're using. Thus, it's far more important to look at the compilers you have available, and do your own tests on comparable code similar to your expected application, than it is to worry about minor theoretical details about "which language is faster". > I am also not sure on the support of sparse matrices in both languages' > libraries. I'd be fairly surprised if there weren't decent sparse matrix libraries available for both languages. 
Probably lots of them, with a vast range of quality. > It is known that Ada strong domain is realtime and safety critical > applications. I never understood why Ada never became popular in the > scientific field in particular in areas such as computational physics or CFD > or such similar fields. I have no idea; by the time I reached college, that popularity contest had already been settled, and my intro-to-engineering class taught Fortran and didn't teach Ada. - Brooks, posting from comp.lang.fortran -- The "bmoses-nospam" address is valid; no unmunging needed. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 6:45 ` Brooks Moses @ 2006-05-22 7:41 ` Jan Vorbrüggen 2006-05-22 18:49 ` Brooks Moses 2006-05-22 11:48 ` Michael Metcalf 2006-05-23 8:34 ` Jon Harrop 2 siblings, 1 reply; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-22 7:41 UTC (permalink / raw) > Now, it is highly unlikely that the Ada language has anything in it that > will cause more than a few percentage points of difference in the > maximum theoretically-achievable speed. Thus, in practice any speed > differences are going to be almost entirely a factor of the quality of > the code you write, and the quality of the compiler you compile it on. That might be the case for Ada, but it's not clearly true for others. A language - any language, but in particular a computer or programming language - has both syntax and semantics. In trying to evaluate a language, people too often only look at the syntax - how can I express an operation I want executed? - instead of primarily the semantics - what in detail does "operation" mean here? -. If a language does not allow you to express certain things that you know to be true about your operation, and that could be used to optimize the program, you are already at a disadvantage compared to a different language where that is not the case. As an example, look at the semantics of a DO loop and the apparently equivalent FORALL statement in Fortran. Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
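Jan's closing example deserves unpacking: a Fortran DO loop executes its iterations in order, while FORALL requires the result to be as if every right-hand side were evaluated before any assignment. A minimal sketch of the two semantics, written in C for illustration (the function names and the use of a temporary copy are inventions of this sketch, not Fortran itself):

```c
#include <string.h>

/* DO-loop semantics: iterations run in order, so a[i] sees the value
 * just stored into a[i-1] on the previous iteration. */
static void shift_do(double *a, int n) {
    for (int i = 1; i < n; i++)
        a[i] = a[i - 1];
}

/* FORALL semantics: as if all right-hand sides were evaluated before
 * any assignment takes effect -- modeled here with a temporary copy. */
static void shift_forall(double *a, int n) {
    double tmp[n];                        /* C99 variable-length array */
    memcpy(tmp, a, (size_t)n * sizeof *a);
    for (int i = 1; i < n; i++)
        a[i] = tmp[i - 1];
}
```

Starting from {1,2,3,4,5}, the DO version yields {1,1,1,1,1} while the FORALL version yields {1,1,2,3,4}: the same source statement `a(i) = a(i-1)` denotes two different computations under the two constructs, which is exactly the kind of semantic information a compiler can exploit, or be constrained by.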
* Re: Ada vs Fortran for scientific applications 2006-05-22 7:41 ` Jan Vorbrüggen @ 2006-05-22 18:49 ` Brooks Moses 2006-05-23 5:51 ` Tim Prince 2006-05-23 8:56 ` Jan Vorbrüggen 0 siblings, 2 replies; 314+ messages in thread From: Brooks Moses @ 2006-05-22 18:49 UTC (permalink / raw) Jan Vorbrüggen wrote: >>Now, it is highly unlikely that the Ada language has anything in it that >>will cause more than a few percentage points of difference in the >>maximum theoretically-achievable speed. Thus, in practice any speed >>differences are going to be almost entirely a factor of the quality of >>the code you write, and the quality of the compiler you compile it on. > > That might be the case for Ada, but it's not clearly true for others. > > A language - any language, but in particular a computer or programming > language - has both syntax and semantics. In trying to evaluate a language, > people too often only look at the syntax - how can I express an operation > I want executed? - instead of primarily the semantics - what in detail does > "operation" mean here? -. If a language does not allow you to express certain > things that you know to be true about your operation, and that could > be used to optimize the program, you are already at a disadvantage compared > to a different language where that is not the case. As an example, look at > the semantics of a DO loop and the apparently equivalent FORALL statement > in Fortran. I agree, in principle -- and you've stated it rather more clearly than I was able to; much of that is something that I meant to say and didn't really. My contention is that, in practice, most of the languages that numerical code is commonly written in have semantics that are sufficient to do the task reasonably well. 
And that as such, once one passes a certain bar of functionality for the task, semantics have rather little impact on the decision -- things like the distinction between a DO loop and an equivalent FORALL construct rarely seem to have a significant impact. (This does, though, require that one get past that bar, and a lot of languages don't, usually because that's not what they're intended for.) I should note, though, that I'm drawing the syntax/semantics line way over on the side of semantics. For instance, I consider complex arithmetic and multidimensional arrays to be essentially syntax -- one can trivially write code longhand to produce the same functionality. It's just awfully inefficient syntax. On the other hand, I suspect that my view is colored pretty heavily from not having a lot of parallel-processing experience, and that semantics aren't anywhere near a most-of-the-top-languages-are-equivalent ceiling there. Thus, I should probably limit my above claims to single-threaded programs -- which I recognize is a significant limitation. - Brooks -- The "bmoses-nospam" address is valid; no unmunging needed. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 18:49 ` Brooks Moses @ 2006-05-23 5:51 ` Tim Prince 2006-05-23 8:56 ` Jan Vorbrüggen 1 sibling, 0 replies; 314+ messages in thread From: Tim Prince @ 2006-05-23 5:51 UTC (permalink / raw) Brooks Moses wrote: > On the other hand, I suspect that my view is colored pretty heavily from > not having a lot of parallel-processing experience, and that semantics > aren't anywhere near a most-of-the-top-languages-are-equivalent ceiling > there. Thus, I should probably limit my above claims to single-threaded > programs -- which I recognize is a significant limitation. > Since a major claim to superiority for Ada lies in its built-in support for concurrency, the parallel capabilities Fortran has acquired must be part of the comparison, given the topic opened here. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 18:49 ` Brooks Moses 2006-05-23 5:51 ` Tim Prince @ 2006-05-23 8:56 ` Jan Vorbrüggen 2006-05-23 13:28 ` Greg Lindahl 1 sibling, 1 reply; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-23 8:56 UTC (permalink / raw) > I should note, though, that I'm drawing the syntax/semantics line way > over on the side of semantics. For instance, I consider complex > arithmetic and multidimensional arrays to be essentially syntax -- one > can trivially write code longhand to produce the same functionality. > It's just awfully inefficient syntax. No, that assumption is incorrect. To take complex arithmetic as an example: If I have a native complex type, the language implementor is free to arrange the real and imaginary part in memory as he sees fit, perhaps even using different arrangements in different situations. She can easily make use of other pieces of information, for instance that one of the parts is zero or one, to simplify an arithmetic expression (think of constant propagation and elimination of common subexpressions). If you do not have the semantics available, you are forced into a particular implementation that will likely be suboptimal in all but a set of measure 0 8-). This is very similar to using a non-SEQUENCE derived type versus a C structure: The _programmer_ may assume _less_ about what is going on under the hood, has to specify fewer details, and therefore the implementor has more freedom to select a (we hope) optimal solution. > On the other hand, I suspect that my view is colored pretty heavily from > not having a lot of parallel-processing experience, and that semantics > aren't anywhere near a most-of-the-top-languages-are-equivalent ceiling > there. Thus, I should probably limit my above claims to single-threaded > programs -- which I recognize is a significant limitation. 
That is one of the clear distinctions in the semantics of a language, and I agree you'll see a large difference in such cases. Nonetheless, even for a single-threaded programme, there will be distinctions - see above. Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
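Jan's point about representation freedom can be made concrete. With an opaque native complex type, an implementation may interleave real and imaginary parts or keep them in separate planes; both layouts below compute the same element-wise product, and only code that inspects raw storage (EQUIVALENCE, C-style aliasing) can tell them apart. A sketch in C for illustration -- the type and function names are invented here:

```c
/* Interleaved ("array of structs") layout: re and im adjacent, as
 * Fortran's storage association mandates for COMPLEX. */
typedef struct { double re, im; } cplx;

static void mul_interleaved(cplx *out, const cplx *x, const cplx *y, int n) {
    for (int i = 0; i < n; i++) {
        out[i].re = x[i].re * y[i].re - x[i].im * y[i].im;
        out[i].im = x[i].re * y[i].im + x[i].im * y[i].re;
    }
}

/* Planar ("struct of arrays") layout: all real parts in one array, all
 * imaginary parts in another -- a layout a compiler might prefer for
 * vectorization if the language left the representation open. */
static void mul_split(double *ore, double *oim,
                      const double *xre, const double *xim,
                      const double *yre, const double *yim, int n) {
    for (int i = 0; i < n; i++) {
        ore[i] = xre[i] * yre[i] - xim[i] * yim[i];
        oim[i] = xre[i] * yim[i] + xim[i] * yre[i];
    }
}
```

Both routines produce identical values; the choice between them is purely a storage decision, which is exactly the freedom a pinned-down representation takes away.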
* Re: Ada vs Fortran for scientific applications 2006-05-23 8:56 ` Jan Vorbrüggen @ 2006-05-23 13:28 ` Greg Lindahl 2006-05-24 8:10 ` Jan Vorbrüggen 0 siblings, 1 reply; 314+ messages in thread From: Greg Lindahl @ 2006-05-23 13:28 UTC (permalink / raw) In article <4dg101F17acafU1@individual.net>, Jan Vorbrüggen <jvorbrueggen@not-mediasec.de> wrote: >If I have a native complex type, the language implementor is free to arrange >the real and imaginary part in memory as he sees fit, Although Fortran does restrict the implementation, so the language implementer is not fully free. -- greg ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 13:28 ` Greg Lindahl @ 2006-05-24 8:10 ` Jan Vorbrüggen 2006-05-24 15:19 ` Richard E Maine 2006-05-24 15:24 ` Dick Hendrickson 0 siblings, 2 replies; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-24 8:10 UTC (permalink / raw) >>If I have a native complex type, the language implementor is free to arrange >>the real and imaginary part in memory as he sees fit, > Although Fortran does restrict the implementation, so the language > implementer is not fully free. Is that through sequence association - i.e., I can assume (conceptually) an EQUIVALENCE of two reals, the real and imaginary part in that order, to be in place for a complex variable? Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 8:10 ` Jan Vorbrüggen @ 2006-05-24 15:19 ` Richard E Maine 2006-05-29 12:55 ` Jan Vorbrüggen 2006-05-24 15:24 ` Dick Hendrickson 1 sibling, 1 reply; 314+ messages in thread From: Richard E Maine @ 2006-05-24 15:19 UTC (permalink / raw) Jan Vorbrüggen <jvorbrueggen@not-mediasec.de> wrote: > >>If I have a native complex type, the language implementor is free to arrange > >>the real and imaginary part in memory as he sees fit, > > Although Fortran does restrict the implementation, so the language > > implementer is not fully free. > > Is that through sequence association - i.e., I can assume (conceptually) > an EQUIVALENCE of two reals, the real and imaginary part in that order, to > be in place for a complex variable? Yes. And it is a bit more than conceptually. The standard is pretty explicit about it.... well at least the f77 standard is. I'm failing to find quite the same words in the f2003 standard. I have no real doubt what the answer to an interp request on it would be (based on compatibility with f77), but I'm not finding the explicit words in f2003. Can anyone point out where I might have missed them? In f2003 we have, in the section on complex type (4.4.3) "The values of a complex type are ordered pairs of real values. The first real value is called the real part. The second real value is called the imaginary part." But these words talk about values - not about representations. This sounds to me like a description of a mathematical concept. I find words elsewhere saying that complex occupies two consecutive numeric storage units, which says something about representation, but I can't find the words to say what those two storage units have to look like. In contrast, f77 says, in its section on complex type (4.6) "The representation of a complex datum is in the form of an ordered pair of real data. 
The first of the pair represents the real part of the complex datum and the second represents the imaginary part. Each part has the same degree of approximation as for a real datum. A complex datum has two consecutive numeric storage units in a storage sequence: the first storage unit is the real part and the second storage unit is the imaginary part." While the words about an ordered pair look pretty similar, the f77 version has the word "representation", while the f2003 version doesn't. That makes them say different things according to me, although I'm sure the difference is unintentional. The f77 version seems to me to say the same thing twice, but the f2003 version dropped it down to zero times. Section 17.1 of f77 says the same thing a third time, but f2003 doesn't seem to have that copy either. This change in wording happened between f77 and f90. I see that even the f77 version doesn't quite seem to say what I'm sure was intended. Again, I might have missed it, but... F77 talks about the storage units being the real and imaginary parts, and it says that each part has "the same degree of approximation" as for a real datum. But I don't see that it says it has the same representation as a real datum. Now by the time you are tied down that much, I can't see why the representation wouldn't be the same in practice, but I don't see that this says it actually has to be. I'm sure that is the intent, particularly given the words in 17.2(11) and (12). Those words in section 17 don't say what values the partially associated entities become defined with, but there are no other plausible candidate values and it would be quite unlike the usual practice of the standard to say that they become defined, but to omit any mention of what value they become defined with. 
Sometimes things get processor-dependent values, but the standard at least tends to say that such values are processor dependent rather than just saying that there is a value while staying silent about what it might be. -- Richard Maine | Good judgment comes from experience; email: my first.last at org.domain| experience comes from bad judgment. org: nasa, domain: gov | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 15:19 ` Richard E Maine @ 2006-05-29 12:55 ` Jan Vorbrüggen 0 siblings, 0 replies; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-29 12:55 UTC (permalink / raw) > In contrast, f77 says, in its section on complex type (4.6) > > "The representation of a complex datum is in the form of an ordered > pair of real data. The first of the pair represents the real part > of the complex datum and the second represents the imaginary part. > Each part has the same degree of approximation as for a real datum. > A complex datum has two consecutive numeric storage units in a > storage sequence: the first storage unit is the real part and the > second storage unit is the imaginary part." > > While the words about an ordered pair look pretty similar, the f77 > version has the word "representation", while the f2003 version doesn't. > That makes them say different things according to me, although I'm sure > the difference is unintentional. The f77 version seems to me to say the > same thing twice, but the f2003 version dropped it down to zero times. It seems to me that F77 is saying two different things here: one is about representation, and the second is about storage sequence and association. They overlap in that the first statement includes the word "ordered", although one could conceivably debate (in the Talmudic sense) whether that should be understood with regard to storage sequence in the same way as the second phrase nails things down. Given the declaration COMPLEX X(10) REAL Y(20) EQUIVALENCE (X, Y) I understand that at least F77, and by intent (your interpretation) later Fortran standards force the compiler to have the piece of code X(3) = (4711., 0.815) PRINT *, Y(6) result in an approximation of 0.815 being printed. This would indeed make a different implementation more difficult. Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
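Jan's EQUIVALENCE example maps onto a C union overlaying COMPLEX X(10) with REAL Y(20): X(3) occupies storage units 5 and 6, real part first, so Y(6) aliases the imaginary part. A sketch for illustration only -- C's rules for reading a union through another member are looser than Fortran's storage association, and the no-padding assumption below holds on common ABIs but is not guaranteed by the C standard:

```c
/* Overlay mirroring: COMPLEX X(10); REAL Y(20); EQUIVALENCE (X, Y)
 * Assumes struct { float, float } has no internal padding. */
typedef struct { float re, im; } cplx;

union overlay {
    cplx  x[10];  /* X (Fortran indices 1..10) */
    float y[20];  /* Y (Fortran indices 1..20) */
};
```

After storing `u.x[2].re = 4711.0f; u.x[2].im = 0.815f;` (C's `x[2]` is Fortran's X(3)), reading `u.y[5]` -- Fortran's Y(6) -- yields 0.815, the behaviour Jan describes the standard as forcing.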
* Re: Ada vs Fortran for scientific applications 2006-05-24 8:10 ` Jan Vorbrüggen 2006-05-24 15:19 ` Richard E Maine @ 2006-05-24 15:24 ` Dick Hendrickson 2006-05-24 19:03 ` glen herrmannsfeldt 1 sibling, 1 reply; 314+ messages in thread From: Dick Hendrickson @ 2006-05-24 15:24 UTC (permalink / raw) Jan Vorbrüggen wrote: >>> If I have a native complex type, the language implementor is free to >>> arrange >>> the real and imaginary part in memory as he sees fit, >> >> Although Fortran does restrict the implementation, so the language >> implementer is not fully free. > > > Is that through sequence association - i.e., I can assume (conceptually) > an EQUIVALENCE of two reals, the real and imaginary part in that order, to > be in place for a complex variable? Yes. COMMON, EQUIVALENCE, and use as an argument usually require a fixed structure for complex variables. However, essentially all of Fortran's rules have an "as if" clause somewhere in them. If the compiler is sufficiently clever (and can see enough of the whole program) it can do anything with complex variables. Keeping temporaries in registers and discarding them when they become dead is common practice and is standard conforming. Dick Hendrickson > > Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 15:24 ` Dick Hendrickson @ 2006-05-24 19:03 ` glen herrmannsfeldt 2006-05-29 12:56 ` Jan Vorbrüggen 0 siblings, 1 reply; 314+ messages in thread From: glen herrmannsfeldt @ 2006-05-24 19:03 UTC (permalink / raw) Dick Hendrickson wrote: (snip, someone wrote) >> Is that through sequence association - i.e., I can assume (conceptually) >> an EQUIVALENCE of two reals, the real and imaginary part in that order, to >> be in place for a complex variable? (snip) > However, essentially all of Fortran's rules have an "as if" > clause somewhere in them. If the compiler is sufficiently > clever (and can see enough of the whole program) it can do > anything with complex variables. Keeping temporaries in > registers and discarding them when they become dead is > common practice and is standard conforming. OK, but say for a certain program it is more efficient to store a complex array with all the real parts followed by all the imaginary parts, and the compiler figured this out. The "as if" rule would work only if there was no other way for the program/programmer to discover the change. If the array was in COMMON, EQUIVALENCEd, or passed to an external routine, this optimization most likely couldn't be done. Without any legal way to view the storage association, it would seem possible to do it. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 19:03 ` glen herrmannsfeldt @ 2006-05-29 12:56 ` Jan Vorbrüggen 2006-05-29 20:07 ` glen herrmannsfeldt 2006-05-30 4:55 ` robert.corbett 0 siblings, 2 replies; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-29 12:56 UTC (permalink / raw) > OK, but say for a certain program it is more efficient to store a > complex array with all the real parts followed by all the imaginary > parts, and the compiler figured this out. The "as if" rule would work > only if there was no other way for the program/programmer to discover > the change. If the array was in COMMON, EQUIVALENCEd, or passed to an > external routine, this optimization most likely couldn't be done. > Without any legal way to view the storage association, it would seem > possible to do it. Precisely what I was thinking. So historic baggage is weighing Fortran down in this particular case. Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-29 12:56 ` Jan Vorbrüggen @ 2006-05-29 20:07 ` glen herrmannsfeldt 0 siblings, 0 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-05-29 20:07 UTC (permalink / raw) Jan Vorbrüggen wrote: >> OK, but say for a certain program it is more efficient to store a >> complex array with all the real parts followed by all the imaginary >> parts, and the compiler figured this out. The "as if" rule would work >> only if there was no other way for the program/programmer to discover >> the change. If the array was in COMMON, EQUIVALENCEd, or passed to an >> external routine, this optimization most likely couldn't be done. >> Without any legal way to view the storage association, it would seem >> possible to do it. > Precisely what I was thinking. So historic baggage is weighing Fortran down > in this particular case. It seems pretty convenient for FFT routines to treat a COMPLEX array as REAL. It would take some time to know if this speeds up the FFT or not. One alternative, as you say precluded by Fortran history, would be something like the descriptor of assumed shape arrays that could describe a COMPLEX type where the real and imaginary parts are not consecutive. It would seem that this could be done in the case of assumed shape arrays, as they are already incompatible with assumed size arrays, normally requiring a copy if passed to an assumed size array subprogram. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
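glen's descriptor idea could look like the following in C: instead of assuming re/im adjacency, the callee receives a base pointer and a stride for each part, so an interleaved array and a planar (split) array pass through the same routine unchanged. This is a hypothetical design sketch, not anything an actual Fortran standard specifies:

```c
/* A hypothetical descriptor for a complex array whose real and
 * imaginary parts need not be adjacent in memory. */
typedef struct {
    double *re;     /* base of real parts */
    double *im;     /* base of imaginary parts */
    int     stride; /* element-to-element distance, in doubles */
    int     n;      /* number of complex elements */
} cdesc;

/* Sum of |z|^2 over the array, written once against the descriptor;
 * works for both interleaved and planar storage. */
static double norm2(cdesc a) {
    double s = 0.0;
    for (int i = 0; i < a.n; i++) {
        double re = a.re[i * a.stride];
        double im = a.im[i * a.stride];
        s += re * re + im * im;
    }
    return s;
}
```

An interleaved buffer of n complex values is described by `{ buf, buf + 1, 2, n }`; a planar pair of arrays by `{ re, im, 1, n }`. Both give the same answer, which is the point: the storage decision moves into the descriptor instead of being fixed by the language's storage-association rules.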
* Re: Ada vs Fortran for scientific applications 2006-05-29 12:56 ` Jan Vorbrüggen 2006-05-29 20:07 ` glen herrmannsfeldt @ 2006-05-30 4:55 ` robert.corbett 2006-05-30 7:05 ` Jan Vorbrüggen 1 sibling, 1 reply; 314+ messages in thread From: robert.corbett @ 2006-05-30 4:55 UTC (permalink / raw) Jan Vorbrüggen wrote: > > OK, but say for a certain program it is more efficient to store a > > complex array with all the real parts followed by all the imaginary > > parts, and the compiler figured this out. The "as if" rule would work > > only if there was no other way for the program/programmer to discover > > the change. If the array was in COMMON, EQUIVALENCEd, or passed to an > > external routine, this optimization most likely couldn't be done. > > Without any legal way to view the storage association, it would seem > > possible to do it. > > Precisely what I was thinking. So historic baggage is weighing Fortran down > in this particular case. The cases of COMMON and EQUIVALENCE are not a serious problem, since the compiler can tell if an array is in a common block or an equivalence group. Passing the array to an external routine is a serious problem, but I do not classify passing arrays to external routines as historical baggage. Bob Corbett ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-30 4:55 ` robert.corbett @ 2006-05-30 7:05 ` Jan Vorbrüggen 0 siblings, 0 replies; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-30 7:05 UTC (permalink / raw) > The cases of COMMON and EQUIVALENCE are not a serious problem, > since the compiler can tell if an array is in a common block or an > equivalence group. Yes, that was clear - this is similar to what the Sun C compiler does to art. > Passing the array to an external routine is a serious problem, but I do > not classify passing arrays to external routines as historical baggage. I wasn't going to suggest that 8-). I would say the situation is similar to (non-)SEQUENCE types. Without the SEQUENCE attribute, the compiler is allowed to do any "local" optimization because the programmer promises that it has all references to that type "in view". With the SEQUENCE attribute, the compiler has to promise to handle all "compatible" definitions in exactly the same way, but otherwise has leeway to define that way as it wants. In the complex case, for separate compilation the compiler has no way to determine that a different compilation unit uses sequence association (be it by EQUIVALENCE or otherwise), so it has to assume the worst - i.e., typical C behaviour 8-). Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 6:45 ` Brooks Moses 2006-05-22 7:41 ` Jan Vorbrüggen @ 2006-05-22 11:48 ` Michael Metcalf 2006-05-22 12:01 ` Dr. Adrian Wrigley 2006-05-23 8:34 ` Jon Harrop 2 siblings, 1 reply; 314+ messages in thread From: Michael Metcalf @ 2006-05-22 11:48 UTC (permalink / raw) "Brooks Moses" <bmoses-nospam@cits1.stanford.edu> wrote in message news:44715DED.5050906@cits1.stanford.edu... > >> It is known that Ada strong domain is realtime and safety critical >> applications. I never understood why Ada never became popular in the >> scientific field in particular in areas such as computational physics or >> CFD or such similar fields. > > I have no idea; by the time I reached college, that popularity contest had > already been settled, and my intro-to-engineering class taught Fortran and > didn't teach Ada. > If I recall correctly, ADA has no complex arithmetic capability, a killer for those who need it. Regards, Mike Metcalf ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 11:48 ` Michael Metcalf @ 2006-05-22 12:01 ` Dr. Adrian Wrigley 0 siblings, 0 replies; 314+ messages in thread From: Dr. Adrian Wrigley @ 2006-05-22 12:01 UTC (permalink / raw) On Mon, 22 May 2006 11:48:02 +0000, Michael Metcalf wrote: > "Brooks Moses" <bmoses-nospam@cits1.stanford.edu> wrote in message > news:44715DED.5050906@cits1.stanford.edu... >> >>> It is known that Ada strong domain is realtime and safety critical >>> applications. I never understood why Ada never became popular in the >>> scientific field in particular in areas such as computational physics or >>> CFD or such similar fields. >> >> I have no idea; by the time I reached college, that popularity contest had >> already been settled, and my intro-to-engineering class taught Fortran and >> didn't teach Ada. > > If I recall correctly, ADA has no complex arithmetic capability, a killer > for those who need it. Annex G (Numerics) of Ada 95 specifies complex arithmetic and I/O. It's only been around for eleven years... -- Adrian ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 6:45 ` Brooks Moses 2006-05-22 7:41 ` Jan Vorbrüggen 2006-05-22 11:48 ` Michael Metcalf @ 2006-05-23 8:34 ` Jon Harrop 2 siblings, 0 replies; 314+ messages in thread From: Jon Harrop @ 2006-05-23 8:34 UTC (permalink / raw) Brooks Moses wrote: > Nasser Abbasi wrote: >> What are the technical language specific reasons why Fortran would be >> selected over Ada? > > In my case? I know Fortran. I don't know Ada. I would be foolish to > attempt to learn a new language while writing a large program that I > needed to rely on. That's a great way to learn a new language. -- Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/products/ocaml_for_scientists/chapter1.html ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 4:54 Ada vs Fortran for scientific applications Nasser Abbasi 2006-05-22 6:45 ` Brooks Moses @ 2006-05-22 7:34 ` Dmitry A. Kazakov 2006-05-23 8:32 ` Jon Harrop 2006-05-22 7:36 ` Greg Lindahl ` (8 subsequent siblings) 10 siblings, 1 reply; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-05-22 7:34 UTC (permalink / raw) On Mon, 22 May 2006 04:54:42 GMT, Nasser Abbasi wrote: > What are the technical language specific reasons why Fortran would be > selected over Ada? Languages are *never* selected for technical reasons. They are selected for political ones. [...] > Any thoughts? The main strength of Ada for scientific applications is, in my view, its safety. I write a lot for AI and image processing. In both cases debugging is practically impossible. When something does not work, or merely appears not to work, how do you know whether it is an algorithmic problem or a software bug? There are techniques which prevent many bugs through a disciplined software development process. But these are extremely expensive, requiring tools, man-power, requirements analysis, and much time; everything poor scientists usually do not have... -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 7:34 ` Dmitry A. Kazakov @ 2006-05-23 8:32 ` Jon Harrop 0 siblings, 0 replies; 314+ messages in thread From: Jon Harrop @ 2006-05-23 8:32 UTC (permalink / raw) Dmitry A. Kazakov wrote: > Languages are *never* selected by technical reasons. I select my languages for technical reasons. -- Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/products/ocaml_for_scientists/chapter1.html ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 4:54 Ada vs Fortran for scientific applications Nasser Abbasi 2006-05-22 6:45 ` Brooks Moses 2006-05-22 7:34 ` Dmitry A. Kazakov @ 2006-05-22 7:36 ` Greg Lindahl 2006-05-22 21:25 ` Ken Plotkin 2006-05-22 10:34 ` Tim Prince ` (7 subsequent siblings) 10 siblings, 1 reply; 314+ messages in thread From: Greg Lindahl @ 2006-05-22 7:36 UTC (permalink / raw) In article <mubcg.13522$fb2.853@newssvr27.news.prodigy.net>, Nasser Abbasi <nma@12000.org> wrote: >I like to discuss the technical reasons why Ada is not used as much as >Fortran for scientific and number crunching type applications? I'm not sure that technical reasons are involved in the actual decision making. BTW, cross-posting a question like this often results in poor discussion. -- greg ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 7:36 ` Greg Lindahl @ 2006-05-22 21:25 ` Ken Plotkin 2006-05-22 21:40 ` blmblm 2006-05-23 15:32 ` Pascal Obry 0 siblings, 2 replies; 314+ messages in thread From: Ken Plotkin @ 2006-05-22 21:25 UTC (permalink / raw) On 22 May 2006 00:36:11 -0700, lindahl@pbm.com (Greg Lindahl) wrote: >I'm not sure that technical reasons are involved in the actual >decision making. BTW, cross-posting a question like this often >results in poor discussion. I think you're correct about that. ADA was very political. As I recall, ADA was designed by the US DOD for use in embedded systems. It came at a time when Pascal-like languages were in vogue. Its structured nature and verbosity were intended to improve readability, maintenance and reliability. I once needed an algorithm for radar processing. I received it in the form of a snippet of ADA source code. If that snippet was representative, it certainly achieved the readability goal. The DOD imposed one major restriction that IMHO hurt ADA's acceptance: there were to be no subsets. To be called ADA it had to be a full implementation. I believe it also had to pass quality control. Compilers were expensive. In the 80s, when PCs were really catching on, an ADA compiler for a PC cost several thousand dollars, vs a few hundred for a Fortran compiler. I always figured the all-or-nothing and testing requirements caused that. Ken Plotkin ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 21:25 ` Ken Plotkin @ 2006-05-22 21:40 ` blmblm 2006-05-23 2:12 ` Ken Plotkin 2006-05-23 15:32 ` Pascal Obry 1 sibling, 1 reply; 314+ messages in thread From: blmblm @ 2006-05-22 21:40 UTC (permalink / raw) In article <lea472117j9i187qakh3msqo5dturujrkk@4ax.com>, Ken Plotkin <kplotkin@nospam-cox.net> wrote: >On 22 May 2006 00:36:11 -0700, lindahl@pbm.com (Greg Lindahl) wrote: > >>I'm not sure that technical reasons are involved in the actual >>decision making. BTW, cross-posting a question like this often >>results in poor discussion. > >I think you're correct about that. ADA was very political. "Ada", is it not? The language's name is not an acronym, but is a reference to Ada Lovelace, as I understand it. A nitpick, admittedly. [ snip ] -- | B. L. Massingill | ObDisclaimer: I don't speak for my employers; they return the favor. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 21:40 ` blmblm @ 2006-05-23 2:12 ` Ken Plotkin 0 siblings, 0 replies; 314+ messages in thread From: Ken Plotkin @ 2006-05-23 2:12 UTC (permalink / raw) On 22 May 2006 21:40:35 GMT, blmblm@myrealbox.com <blmblm@myrealbox.com> wrote: >"Ada", is it not? The language's name is not an acronym, but is >a reference to Ada Lovelace, as I understand it. You are correct. >A nitpick, admittedly. Nope - an important clarification. Thanks. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 21:25 ` Ken Plotkin 2006-05-22 21:40 ` blmblm @ 2006-05-23 15:32 ` Pascal Obry 2006-05-23 17:33 ` Marc A. Criley 1 sibling, 1 reply; 314+ messages in thread From: Pascal Obry @ 2006-05-23 15:32 UTC (permalink / raw) To: Ken Plotkin Ken Plotkin a écrit : > As I recall, ADA was designed by the US DOD for use in embedded > systems. It came at a time when Pascal-like languages were in vogue. Not exactly. The requirements came from the DOD, but the design was part of an international call. The "winner" was the green language (Ada); it was designed at CII Honeywell Bull by a team led by Jean Ichbiah. > Its structured nature and verbosity were intended to improve > readibility, maintenance and reliability. Yep, Ada's main goals. Pascal. -- --|------------------------------------------------------ --| Pascal Obry Team-Ada Member --| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE --|------------------------------------------------------ --| http://www.obry.net --| "The best way to travel is by means of imagination" --| --| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595 ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 15:32 ` Pascal Obry @ 2006-05-23 17:33 ` Marc A. Criley 0 siblings, 0 replies; 314+ messages in thread From: Marc A. Criley @ 2006-05-23 17:33 UTC (permalink / raw) Pascal Obry wrote: > Ken Plotkin a écrit : >> Its structured nature and verbosity were intended to improve >> readibility, maintenance and reliability. > > Yep, Ada mains goals. As an interesting aside, a software analysis tool that's been around for a few years, "Understand for Ada" (there are also versions for Fortran and other languages, see www.scitools.com) recently added the capability to generate "Control Flow Diagrams" (CFDs) from source code. Essentially these are flow charts, where the contents of the boxes are the source code statements lifted directly from the code. To an experienced programmer this isn't a big deal, but it's a big help for those who do little or no programming, enabling them to understand and follow what's going on in a given subprogram. The Ada programming penchant for more expressive (or verbose :-) naming is of real benefit here, because what happens in each statement or conditional is usually more descriptive than it tends to be in other languages. -- Marc A. Criley -- McKae Technologies -- www.mckae.com -- DTraq - XPath In Ada - XML EZ Out ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 4:54 Ada vs Fortran for scientific applications Nasser Abbasi ` (2 preceding siblings ...) 2006-05-22 7:36 ` Greg Lindahl @ 2006-05-22 10:34 ` Tim Prince 2006-05-22 12:52 ` George N. White III ` (6 subsequent siblings) 10 siblings, 0 replies; 314+ messages in thread From: Tim Prince @ 2006-05-22 10:34 UTC (permalink / raw) Nasser Abbasi wrote: > I like to discuss the technical reasons why Ada is not used as much as > Fortran for scientific and number crunching type applications? > > > > To make the discussion more focused, let's assume you want to start > developing a large scientific application in the domain where Fortran is > commonly used. Say you want to develop a new large Finite Elements Methods > program or large computational physics simulation system. Assume you can > choose either Ada or Fortran. > > > > What are the technical language specific reasons why Fortran would be > selected over Ada? > > > > I happened to know a little about Ada and Fortran, and from what I know, I > think Ada would be an excellent choice due to its strong typing, good > support for numerical types and good Math library. > > > > I know also that Fortran is supposed to be better/faster when it comes to > working with large Arrays (Matrices), but it is not clear to me why that is, > and if it is still true with Ada 05. Something about arrays aliasing, but > not sure how that is. > > > > I am also not sure on the support of sparse matrices in both languages' > libraries. > > > > It is known that Ada strong domain is realtime and safety critical > applications. I never understood why Ada never became popular in the > scientific field in particular in areas such as computational physics or CFD > or such similar fields. 
The following contains some personal opinions, formed by reading about it rather than by actual comparison of functionality, which has seldom been possible: If the design goals of Ada had been fully realized, the major advantage of Ada would have been built-in support for multi-processing. f2008 intends to make up for that, in the most relevant case, with co-arrays. Add-ons like OpenMP and MPI have kept Fortran ahead for practical purposes. The additional difficulty of learning sufficient Ada and finding an efficient compiler, and the increased difficulty of development, have served to restrict it to the application domains most directly targeted and financed. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 4:54 Ada vs Fortran for scientific applications Nasser Abbasi ` (3 preceding siblings ...) 2006-05-22 10:34 ` Tim Prince @ 2006-05-22 12:52 ` George N. White III 2006-05-22 13:02 ` Jean-Pierre Rosen ` (5 subsequent siblings) 10 siblings, 0 replies; 314+ messages in thread From: George N. White III @ 2006-05-22 12:52 UTC (permalink / raw) On Mon, 22 May 2006, Nasser Abbasi wrote: > I like to discuss the technical reasons why Ada is not used as much as > Fortran for scientific and number crunching type applications? > > To make the discussion more focused, let's assume you want to start > developing a large scientific application in the domain where Fortran is > commonly used. Say you want to develop a new large Finite Elements Methods > program or large computational physics simulation system. Assume you can > choose either Ada or Fortran. Such a project requires significant resources, so the decision isn't up to the individuals who will do the work. Managers will ask questions like: 1. if the lead programmer gets killed riding her motorcycle when the project is 95% complete, who can I find to finish the job? 2. the project needs to remain useful for 20-30 years. Which compiler has a 30-year track record? With questions like this, Ada would have to offer really significant benefits to be chosen over Fortran, but the answers also depend on the organization. Some organizations foster a structure where no one individual is irreplaceable, and would expect any of the programmers to be able to take the place of a fallen colleague. Some organizations don't simply consume tools, but treat them as a resource that must be maintained and nourished. An organization that embraces Ada to the point that they employ people who participate in standards activities and provide a resource to the internal developer community will not give as much weight to Fortran's historical record. 
If you take the long term view, you should also ask: will the organization still be around in 30 years? -- George N. White III <aa056@chebucto.ns.ca> ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 4:54 Ada vs Fortran for scientific applications Nasser Abbasi ` (4 preceding siblings ...) 2006-05-22 12:52 ` George N. White III @ 2006-05-22 13:02 ` Jean-Pierre Rosen 2006-05-22 15:23 ` Dan Nagle 2006-05-24 5:26 ` robin 1 sibling, 2 replies; 314+ messages in thread From: Jean-Pierre Rosen @ 2006-05-22 13:02 UTC (permalink / raw) Nasser Abbasi a écrit : > What are the technical language specific reasons why Fortran would be > selected over Ada? > Some immediate reasons: 1) Packaging. Packages allow better organization of software, which is good for any kind of application. 2) Strong typing. Scientific applications often deal with physical units, and Ada is great at supporting these. 3) User defined accuracy. Ada allows you to define the accuracy you need, and the compiler chooses the appropriate representation. Note that you are not limited to only two floating point types (many machines have more than that). 4) Fixed points. Not available in Fortran. 5) Guaranteed accuracy, not only for basic arithmetic, but for the whole mathematical library. 6) Standardization. All compilers process exactly the same language. 7) Interfacing. Easy to call libraries in foreign languages => all libraries available for Fortran are available for Ada. 8) Concurrency, built into the language. 9) Generics. Stop rewriting these damn sorting routines 1000 times. 10) Default parameters. Makes complex subprograms (simplex...) much easier to use. 11) Operators on any types, including arrays. Define a matrix product as "*"... 12) Bounds checking, with a very low penalty. Makes bounds checking really usable. -- --------------------------------------------------------- J-P. Rosen (rosen@adalog.fr) Visit Adalog's web site at http://www.adalog.fr ^ permalink raw reply [flat|nested] 314+ messages in thread
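[Point 2 above deserves a concrete sketch. The classes below are hypothetical, written in Python only to illustrate the idea of distinct numeric types for physical units; note that Ada rejects mixed-unit arithmetic at compile time, whereas this stand-in can only do so at run time.]

```python
class Metres:
    """Toy distinct numeric type for a physical unit (illustrative only)."""
    def __init__(self, value):
        self.value = float(value)

    def __add__(self, other):
        # Only like units may be added; mixing units is an error,
        # mimicking the check Ada's strong typing performs at compile time.
        if not isinstance(other, Metres):
            raise TypeError("cannot add Metres to a different unit")
        return Metres(self.value + other.value)

class Seconds:
    """A second, incompatible unit."""
    def __init__(self, value):
        self.value = float(value)

total = Metres(3.0) + Metres(4.0)   # fine: 7.0 m
try:
    Metres(3.0) + Seconds(4.0)      # rejected: units don't match
except TypeError as err:
    print(err)
```

[In Ada the same effect comes from declaring, e.g., `type Metres is new Float;` and `type Seconds is new Float;`, after which code mixing the two simply does not compile.]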
* Re: Ada vs Fortran for scientific applications 2006-05-22 13:02 ` Jean-Pierre Rosen @ 2006-05-22 15:23 ` Dan Nagle 2006-05-22 16:20 ` Nasser Abbasi ` (2 more replies) 2006-05-22 16:38 ` Richard E Maine 2006-05-23 8:25 ` Jean-Pierre Rosen 2 siblings, 3 replies; 314+ messages in thread From: Dan Nagle @ 2006-05-22 15:23 UTC (permalink / raw) Hello, Jean-Pierre Rosen wrote: > Nasser Abbasi a écrit : >> What are the technical language specific reasons why Fortran would be >> selected over Ada? >> > Some immediate reasons: > 1) Packaging. Packages allow better organization of software, which is > good for any kind of application. Can you compare and contrast Ada packages with Fortran modules and submodules? > 2) Strong typing. Scientific applications often deal with physical > units, and Ada is great at supporting these. What specific features of Ada provide better support than the comparable feature of Fortran? > 3) User defined accuracy. Ada allows you to define the accuracy you > need, the compiler chooses the appropriate representation. Note that you > are not limited to only two floating point types (many machines have > more than that). How is this better than Fortran's kind mechanism? > 4) Fixed points. Not available in Fortran Agreed. How important is this for floating point work? Fortran is rarely used for embedded software (at least, I wouldn't use it there). > 5) Guaranteed accuracy, not only for basic arithmetic, but for the whole > mathematical library Can you compare Ada's accuracy requirements with Fortran's support for IEEE 754? > 6) Standardization. All compilers process exactly the same language. Again, how is this different? Fortran compilers are required to be able to report use of extensions to the standard. > 7) Interfacing. Easy to call libraries in foreign languages => all > libraries available for Fortran are available for Ada. Can you compare Interfaces.C to ISO_C_BINDING? How is one better or worse than the other? 
> 8) Concurrency, built into the language Co-arrays and concurrent loops are coming in Fortran 2008. > 9) Generics. Stop rewriting these damn sorting routines 1000 times. Intelligent Macros are coming in Fortran 2008. > 10) Default parameters. Makes complex subprograms (simplex...) much > easier to use. Agreed. > 11) Operators on any types, including arrays. Define a matrix product as > "*"... How are Ada's operators on types better or worse than Fortran's? Is Ada's "*" operator better than Fortran's matmul()? > 12) Bounds checking, with a very low penalty. Makes bounds checking > really usable. How is Ada's bounds checking better or worse than Fortran's? "Fortran" /= "FORTRAN 77" ;-) -- Cheers! Dan Nagle Purple Sage Computing Solutions, Inc. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 15:23 ` Dan Nagle @ 2006-05-22 16:20 ` Nasser Abbasi 2006-05-22 16:38 ` Jan Vorbrüggen ` (4 more replies) 2006-05-22 16:38 ` Richard E Maine 2006-05-23 8:25 ` Jean-Pierre Rosen 2 siblings, 5 replies; 314+ messages in thread From: Nasser Abbasi @ 2006-05-22 16:20 UTC (permalink / raw) "Dan Nagle" <dannagle@verizon.net> wrote in message news:tHkcg.6937$kR6.484@trnddc05... >> 11) Operators on any types, including arrays. Define a matrix product as >> "*"... > > How is Ada's operators for types better or worse than Fortran's? > Is Ada's "*" operator better than Fortran's matmul()? > I'll answer the easy one for now since I have not had my coffee yet: It is clear that A*B is easier to read and understand than MATMUL(A,B) would you not agree? > > "Fortran" /= "FORTRAN 77" ;-) > Yes :), I was surprised to read that now FORTRAN is called Fortran, (only one letter is uppercase). This is progress (I think). Nasser ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 16:20 ` Nasser Abbasi @ 2006-05-22 16:38 ` Jan Vorbrüggen 2006-05-22 16:41 ` Gordon Sande ` (3 subsequent siblings) 4 siblings, 0 replies; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-22 16:38 UTC (permalink / raw) > It is clear that > > A*B > > is easier to read and understand than > > MATMUL(A,B) > > would you not agree? I definitely disagree. Is that operator meant as the matrix multiplication, the outer product, element-wise multiplication, or something else that has the mathematical properties of a group? Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 16:20 ` Nasser Abbasi 2006-05-22 16:38 ` Jan Vorbrüggen @ 2006-05-22 16:41 ` Gordon Sande 2006-05-22 16:48 ` Dan Nagle ` (2 subsequent siblings) 4 siblings, 0 replies; 314+ messages in thread From: Gordon Sande @ 2006-05-22 16:41 UTC (permalink / raw) On 2006-05-22 13:20:40 -0300, "Nasser Abbasi" <nma@12000.org> said: > > "Dan Nagle" <dannagle@verizon.net> wrote in message > news:tHkcg.6937$kR6.484@trnddc05... > >>> 11) Operators on any types, including arrays. Define a matrix product as "*"... >> >> How is Ada's operators for types better or worse than Fortran's? >> Is Ada's "*" operator better than Fortran's matmul()? >> > > I'll answer the easy one for now since I have not had my coffee yet: > > It is clear that > > A*B But the Hadamard product is not the same as a matrix product. Conventions and understandability do matter. > is easier to read and understand than > > MATMUL(A,B) > > would you not agree? > >> >> "Fortran" /= "FORTRAN 77" ;-) >> > > Yes :), I was surprised to read that now FORTRAN is called Fortran, > (only one letter is uppercase). This is progress (I think). > > Nasser > > ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 16:20 ` Nasser Abbasi 2006-05-22 16:38 ` Jan Vorbrüggen 2006-05-22 16:41 ` Gordon Sande @ 2006-05-22 16:48 ` Dan Nagle 2006-05-23 2:12 ` news.hinet.net 2006-05-22 17:22 ` Paul Van Delst 2006-05-24 5:26 ` robin 4 siblings, 1 reply; 314+ messages in thread From: Dan Nagle @ 2006-05-22 16:48 UTC (permalink / raw) Hello, Nasser Abbasi wrote: <snip> > I'll answer the easy one for now since I have not had my coffee yet: > > It is clear that > > A*B > > is easier to read and understand than > > MATMUL(A,B) It is? All the intrinsic operators in Fortran apply element-wise, and * is no exception. How does that make the intrinsic procedure harder to read? The applications programmer may always define a matrix type, and define the * operator to be the matmul intrinsic. Note the distinction between "rank-2 array" and "matrix". <snip> > I was surprised to read that now FORTRAN is called Fortran, (only > one letter is uppercase). This is progress (I think). I think Fortran and Ada have a lot in common, both are being actively developed (though not on the same schedule). And both are threatened by the "I only know C++, it's the best language" syndrome. :-( And, I think, both are superior technical solutions within their respective problem domains. Although Eiffel interests me also. -- Cheers! Dan Nagle Purple Sage Computing Solutions, Inc. ^ permalink raw reply [flat|nested] 314+ messages in thread
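[The disagreement in the posts above is between two different products. A minimal sketch in plain Python, using lists of lists as a stand-in for either language's arrays, makes the distinction Dan Nagle draws concrete:]

```python
def hadamard(a, b):
    # Element-wise (Hadamard) product: what Fortran's intrinsic "*"
    # computes on two conforming arrays.
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def matmul(a, b):
    # True matrix product: what Fortran spells MATMUL(A, B), and what a
    # user-defined "*" on a matrix type would compute.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(hadamard(a, b))  # [[5, 12], [21, 32]]
print(matmul(a, b))    # [[19, 22], [43, 50]]
```

[The two results differ even on the same operands, which is exactly why an unqualified A*B invites misreading.]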
* Re: Ada vs Fortran for scientific applications 2006-05-22 16:48 ` Dan Nagle @ 2006-05-23 2:12 ` news.hinet.net 0 siblings, 0 replies; 314+ messages in thread From: news.hinet.net @ 2006-05-23 2:12 UTC (permalink / raw) what does we discuss "Ada vs Fortran for scientific applications"? does it mean which syntax is suitable for scientific application? if you mean it, you didn't waiting for fortan "2008" . matlab may be another good choose. . It's good for scientific application. why? there are over 30+ math library . simple syntax is easy to learn and writing. high performance, it utilize fortran's math library and kernel is writing in C and fortran. a lot of language interfaces , such as java , c , fortran , and ada (ada is especial in realtime and simulink). you can call those language directly. can translate to C code. built-in complex type support. It has operator overloading. you can do mult(a,b,...) or a .* b .* ... or matix Multiplies mmult(a,b...) or a * b *.... using vectorization. you almost can reduce most loop syntax in your code more detail sse http://www.mathworks.com/ of course, there are some disadvantages. weak type checking often causes errors in runtime. It almost never happens in ada. hard to read but easy to wirte. (writing is happy and read is painful) In scientific application, It should be better than ada and fortan ^__^. "Dan Nagle" <dannagle@verizon.net> ???????:tXlcg.3832$p13.2487@trnddc07... > Hello, > > Nasser Abbasi wrote: > > <snip> > >> I'll answer the easy one for now since I have not had my coffee yet: >> >> It is clear that >> >> A*B >> >> is easier to read and understand than >> >> MATMUL(A,B) > > It is? All the intrinsic operators in Fortran apply > element-wise, and * is no exception. How does that make > the intrinsic procedure harder to read? > > The applications programmer may always define a matrix type, > and define the * operator to be the matmul intrinsic. > Note the distinction between "rank-2 array" and "matrix". 
> > <snip> > >> I was surprised to read that now FORTRAN is called Fortran, (only one >> letter is uppercase). This is progress (I think). > > I think Fortran and Ada have a lot in common, > both are being actively developed (though not on the same schedule). > And both are threatened by the "I only know C++, it's the best language" > syndrome. :-( And, I think, both are superior technical solutions > within their respective problem domains. Although Eiffel interests > me also. > > -- > Cheers! > > Dan Nagle > Purple Sage Computing Solutions, Inc. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 16:20 ` Nasser Abbasi ` (2 preceding siblings ...) 2006-05-22 16:48 ` Dan Nagle @ 2006-05-22 17:22 ` Paul Van Delst 2006-05-23 7:04 ` Gareth Owen 2006-05-24 5:26 ` robin 4 siblings, 1 reply; 314+ messages in thread From: Paul Van Delst @ 2006-05-22 17:22 UTC (permalink / raw) Nasser Abbasi wrote: > "Dan Nagle" <dannagle@verizon.net> wrote in message > news:tHkcg.6937$kR6.484@trnddc05... > > >>>11) Operators on any types, including arrays. Define a matrix product as >>>"*"... >> >>How is Ada's operators for types better or worse than Fortran's? >>Is Ada's "*" operator better than Fortran's matmul()? >> > > I'll answer the easy one for now since I have not had my coffee yet: > > It is clear that > A*B > is easier to read and understand than > MATMUL(A,B) > would you not agree? Hello, At the risk of igniting a language war (not my intent) and/or exposing my ignorance of Ada, I find the former misleading. To me, A*B suggests a regular old multiplication of two arrays, rather than a matrix multiplication. MATMUL(A,B) is definitely wordier but the intent is quite clear. Sufficient exposure is enough to adapt to the Ada meaning of A*B, but you specifically mentioned one as clearly easier to read and understand .... so I'm tossing in my 2cents worth. :o) If I had my druthers wrt succinctness, I'd use a different operator for matrix multiplication. Sort of like IDL where A#B or A##B signifies matrix multiplication (the two forms are, essentially, to avoid unnecessary transposing of the second matrix.) (I'm sure there's myriad reasons why a different operator is a bad idea.) cheers, paulv >>"Fortran" /= "FORTRAN 77" ;-) >> > Yes :), I was surprised to read that now FORTRAN is called Fortran, (only > one letter is uppercase). This is progress (I think). > > Nasser > > > > -- Paul van Delst Ride lots. 
CIMSS @ NOAA/NCEP/EMC Eddy Merckx Ph: (301)763-8000 x7748 Fax:(301)763-8545 ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 17:22 ` Paul Van Delst @ 2006-05-23 7:04 ` Gareth Owen 2006-05-23 7:02 ` Martin Krischik 2006-05-23 14:23 ` Rich Townsend 0 siblings, 2 replies; 314+ messages in thread From: Gareth Owen @ 2006-05-23 7:04 UTC (permalink / raw) Paul Van Delst <Paul.vanDelst@noaa.gov> writes: > At the risk of igniting a language war (not my intent) and/or exposing > my ignorance of Ada, I find the former misleading. To me, A*B suggests > a regular old multiplication of two arrays, rather than a matix > multiplication. I think assuming anything about A*B is extremely dangerous. Just off the top of my head, Ada and Matlab think it means matrix multiplication and Fortran and Mathematica think it's pointwise multiplication. It means basically nothing in C, and in C++ it means whatever the matrix class implementor wanted it to mean. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 7:04 ` Gareth Owen @ 2006-05-23 7:02 ` Martin Krischik 2006-05-23 14:23 ` Rich Townsend 1 sibling, 0 replies; 314+ messages in thread From: Martin Krischik @ 2006-05-23 7:02 UTC (permalink / raw) Gareth Owen wrote: > Paul Van Delst <Paul.vanDelst@noaa.gov> writes: > > > At the risk of igniting a language war (not my intent) and/or exposing > > my ignorance of Ada, I find the former misleading. To me, A*B suggests > > a regular old multiplication of two arrays, rather than a matix > > multiplication. > > I think assuming anything about A*B is extremely dangerous. Just of > the top of my head, Ada and Matlab think it means matrix > multiplication and Fortran and Mathematica think its pointwise > multiplication. > It means basically nothing in C, and in C++ it means whatever the > matrix class implementor wanted it to mean. Honest truth: For a user-defined type - in Ada - * means whatever the programmer wants it to mean. E.g., unary + is often (mis)used as a shorthand for type conversion. In that respect Ada is no better than any other language with user-defined operators. But then, you can always define a Matrix_Multiplication (...) function as well. Martin ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 7:04 ` Gareth Owen 2006-05-23 7:02 ` Martin Krischik @ 2006-05-23 14:23 ` Rich Townsend 2006-05-23 17:24 ` Brooks Moses 1 sibling, 1 reply; 314+ messages in thread From: Rich Townsend @ 2006-05-23 14:23 UTC (permalink / raw) Gareth Owen wrote: > Paul Van Delst <Paul.vanDelst@noaa.gov> writes: > > >>At the risk of igniting a language war (not my intent) and/or exposing >>my ignorance of Ada, I find the former misleading. To me, A*B suggests >>a regular old multiplication of two arrays, rather than a matix >>multiplication. > > > I think assuming anything about A*B is extremely dangerous. Just of > the top of my head, Ada and Matlab think it means matrix > multiplication and Fortran and Mathematica think its pointwise > multiplication. > > It means basically nothing in C, and in C++ it means whatever the > matrix class implementor wanted it to mean. My stance on A*B is this: if A*B denotes matrix multiplication, then A/B should denote matrix 'division': B^-1*A. Which means you need to standardize matrix inversion/linear-equations solution into the language. Which is batshit crazy. cheers, Rich ^ permalink raw reply [flat|nested] 314+ messages in thread
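[Rich's point is that a "/" for matrices drags a linear-equation solver into the language definition. What such a solver entails can be sketched briefly; the function below is a toy Gaussian elimination with partial pivoting, written in Python for illustration only and numerically naive compared to a LAPACK-style routine.]

```python
def solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    Toy code: real numeric libraries use far more careful routines."""
    n = len(a)
    # Work on an augmented copy [A | b] so the inputs are untouched.
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(n):
        # Partial pivoting: bring the largest remaining pivot up.
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    # Back substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # approximately [0.8, 1.4]
```

[Note that it solves the system directly rather than forming B^-1: explicitly inverting a matrix just to "divide" is both slower and less accurate, which is one more reason languages tend to leave this to libraries.]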
* Re: Ada vs Fortran for scientific applications 2006-05-23 14:23 ` Rich Townsend @ 2006-05-23 17:24 ` Brooks Moses 2006-05-23 18:40 ` Rich Townsend 0 siblings, 1 reply; 314+ messages in thread From: Brooks Moses @ 2006-05-23 17:24 UTC (permalink / raw) Rich Townsend wrote: > Gareth Owen wrote: >>I think assuming anything about A*B is extremely dangerous. Just of >>the top of my head, Ada and Matlab think it means matrix >>multiplication and Fortran and Mathematica think its pointwise >>multiplication. >> >>It means basically nothing in C, and in C++ it means whatever the >>matrix class implementor wanted it to mean. > > My stance on A*B is this: if A*B denotes matrix multiplication, then A/B should > denote matrix 'division': B^-1*A. Which means you need to standardize matrix > inversion/linear-equations solution into the language. Which is batshit crazy. I believe that's how Matlab does it. Then again, for Matlab's purposes, standardizing matrix inversion and linear-equation solution into the language is entirely reasonable. - Brooks -- The "bmoses-nospam" address is valid; no unmunging needed. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 17:24 ` Brooks Moses @ 2006-05-23 18:40 ` Rich Townsend 2006-05-23 21:17 ` Martin Dowie 2006-05-25 19:43 ` Janne Blomqvist 0 siblings, 2 replies; 314+ messages in thread From: Rich Townsend @ 2006-05-23 18:40 UTC (permalink / raw) Brooks Moses wrote: > Rich Townsend wrote: > >> Gareth Owen wrote: >> >>> I think assuming anything about A*B is extremely dangerous. Just of >>> the top of my head, Ada and Matlab think it means matrix >>> multiplication and Fortran and Mathematica think its pointwise >>> multiplication. >>> >>> It means basically nothing in C, and in C++ it means whatever the >>> matrix class implementor wanted it to mean. >> >> >> My stance on A*B is this: if A*B denotes matrix multiplication, then >> A/B should denote matrix 'division': B^-1*A. Which means you need to >> standardize matrix inversion/linear-equations solution into the >> language. Which is batshit crazy. > > > I believe that's how Matlab does it. > > Then again, for Matlab's purposes, standardizing matrix inversion and > linear-equation solution into the language is entirely reasonable. > Exactly. But I don't think you would find a single person in this newsgroup who would support inclusion of a solver in Fortran. And the reason would be netlib. In this respect, Matlab is a language *AND* a library; whereas Fortran is just a language, and netlib is the numerics metalibrary for it. cheers, Rich ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 18:40 ` Rich Townsend @ 2006-05-23 21:17 ` Martin Dowie 2006-05-24 2:21 ` Nasser Abbasi 2006-05-25 19:43 ` Janne Blomqvist 1 sibling, 1 reply; 314+ messages in thread From: Martin Dowie @ 2006-05-23 21:17 UTC (permalink / raw) Rich Townsend wrote: > Brooks Moses wrote: >> Rich Townsend wrote: >> >>> Gareth Owen wrote: >>> >>>> I think assuming anything about A*B is extremely dangerous. Just of >>>> the top of my head, Ada and Matlab think it means matrix >>>> multiplication and Fortran and Mathematica think its pointwise >>>> multiplication. >>>> >>>> It means basically nothing in C, and in C++ it means whatever the >>>> matrix class implementor wanted it to mean. >>> >>> >>> My stance on A*B is this: if A*B denotes matrix multiplication, then >>> A/B should denote matrix 'division': B^-1*A. Which means you need to >>> standardize matrix inversion/linear-equations solution into the >>> language. Which is batshit crazy. >> >> >> I believe that's how Matlab does it. >> >> Then again, for Matlab's purposes, standardizing matrix inversion and >> linear-equation solution into the language is entirely reasonable. >> > > Exactly. But I don't think you would find a single person in this > newsgroup who would support inclusion of a solver in Fortran. And the > reason would be netlib. > > In this respect, Matlab is a language *AND* a library; whereas Fortran > is just a language, and netlib is the numerics metalibrary for it. Hmmm, that sounds suspiciously like what has just been standardized within Ada (Ada2005 that is). Here is a link to the "Rationale" written by John Barnes: http://www.adaic.org/standards/05rat/html/Rat-7-6.html Cheers -- Martin ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 21:17 ` Martin Dowie @ 2006-05-24 2:21 ` Nasser Abbasi 2006-05-26 15:18 ` Martin Dowie 0 siblings, 1 reply; 314+ messages in thread From: Nasser Abbasi @ 2006-05-24 2:21 UTC (permalink / raw) "Martin Dowie" <martin.dowie@btopenworld.com> wrote in message news:Z5-dnZJnkvdj5u7ZRVnyrQ@bt.com... > Rich Townsend wrote: >> Brooks Moses wrote: >>> Rich Townsend wrote: >>> >>>> Gareth Owen wrote: >>>> >>>>> I think assuming anything about A*B is extremely dangerous. Just of >>>>> the top of my head, Ada and Matlab think it means matrix >>>>> multiplication and Fortran and Mathematica think its pointwise >>>>> multiplication. >>>>> >>>>> It means basically nothing in C, and in C++ it means whatever the >>>>> matrix class implementor wanted it to mean. >>>> >>>> >>>> My stance on A*B is this: if A*B denotes matrix multiplication, then >>>> A/B should denote matrix 'division': B^-1*A. Which means you need to >>>> standardize matrix inversion/linear-equations solution into the >>>> language. Which is batshit crazy. >>> >>> >>> I believe that's how Matlab does it. >>> >>> Then again, for Matlab's purposes, standardizing matrix inversion and >>> linear-equation solution into the language is entirely reasonable. >>> >> >> Exactly. But I don't think you would find a single person in this >> newsgroup who would support inclusion of a solver in Fortran. And the >> reason would be netlib. >> >> In this respect, Matlab is a language *AND* a library; whereas Fortran is >> just a language, and netlib is the numerics metalibrary for it. > > Hmmm, that sounds suspiciously like what has just been standardized within > Ada (Ada2005 that is). > > Here is a link to the "Rationale" written by John Barnes: > http://www.adaic.org/standards/05rat/html/Rat-7-6.html > > Cheers > -- Martin WOW! thanks for the link. I had no idea Ada now supports all these useful Matrix operations in its libraries. 
There should be more such functions, but this is a great start. I have gnat2005, so I'll see if I get a chance to try the Solve function and compare it to Matlab's (of course Matlab has many more matrix-related functions, but for Ada this is a great start in the right direction). Nasser ^ permalink raw reply [flat|nested] 314+ messages in thread
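The library behind the Rationale chapter Martin links to is Annex G.3 of Ada 2005, which provides Ada.Numerics.Real_Arrays (and a generic counterpart) with the Solve function Nasser wants to try. A minimal sketch of solving a 2x2 system A*X = B with it; the matrix values are chosen purely for illustration:

```ada
with Ada.Numerics.Real_Arrays; use Ada.Numerics.Real_Arrays;
with Ada.Text_IO;

procedure Solve_Demo is
   --  2x + y = 3
   --   x + 3y = 4
   A : constant Real_Matrix (1 .. 2, 1 .. 2) := ((2.0, 1.0),
                                                 (1.0, 3.0));
   B : constant Real_Vector (1 .. 2) := (3.0, 4.0);
   X : Real_Vector (1 .. 2);
begin
   X := Solve (A, B);  --  solves A * X = B
   for I in X'Range loop
      Ada.Text_IO.Put_Line (Float'Image (X (I)));
   end loop;
end Solve_Demo;
```

For this system the exact solution is (1.0, 1.0); the standard leaves the solution method implementation-defined, so expect results accurate only to roughly Float precision. The same package also provides Inverse, Determinant and Eigenvalues.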
* Re: Ada vs Fortran for scientific applications 2006-05-24 2:21 ` Nasser Abbasi @ 2006-05-26 15:18 ` Martin Dowie 0 siblings, 0 replies; 314+ messages in thread From: Martin Dowie @ 2006-05-26 15:18 UTC (permalink / raw) Nasser Abbasi wrote: > WOW! thanks for the link. I had no idea Ada now supports all these useful > Matrix operations in its libraries. There should be more such functions, but > this is a great start. I have gnat2005, so I'll see if I get a change to try > the solve function and compare it to Matlab's (ofcourse Matlab has much much > more Matrix related functions, but for Ada, this is a great start in the > right direction). You're welcome, and yes, a start is all any language's standard library can offer - more expansive libraries can always be found by ultra-specialists, but for 99.9% this should be fine. Cheers -- Martin ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 18:40 ` Rich Townsend 2006-05-23 21:17 ` Martin Dowie @ 2006-05-25 19:43 ` Janne Blomqvist 1 sibling, 0 replies; 314+ messages in thread From: Janne Blomqvist @ 2006-05-25 19:43 UTC (permalink / raw) In article <e4vksu$k82$1@scrotar.nss.udel.edu>, Rich Townsend wrote: > Brooks Moses wrote: >> Rich Townsend wrote: >>> My stance on A*B is this: if A*B denotes matrix multiplication, then >>> A/B should denote matrix 'division': B^-1*A. Which means you need to >>> standardize matrix inversion/linear-equations solution into the >>> language. Which is batshit crazy. >> >> >> I believe that's how Matlab does it. >> >> Then again, for Matlab's purposes, standardizing matrix inversion and >> linear-equation solution into the language is entirely reasonable. >> > > Exactly. But I don't think you would find a single person in this newsgroup who > would support inclusion of a solver in Fortran. And the reason would be netlib. Well, if a solver were included in the standard or more to the point in the One True Compiler (TM), then perhaps we wouldn't need these ridiculous "fastest matrix inversion" contests. -- Janne Blomqvist ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 16:20 ` Nasser Abbasi ` (3 preceding siblings ...) 2006-05-22 17:22 ` Paul Van Delst @ 2006-05-24 5:26 ` robin 2006-05-24 6:06 ` GF Thomas 4 siblings, 1 reply; 314+ messages in thread From: robin @ 2006-05-24 5:26 UTC (permalink / raw) "Nasser Abbasi" <nma@12000.org> wrote in message news:sxlcg.75860$F_3.64697@newssvr29.news.prodigy.net... > > "Dan Nagle" <dannagle@verizon.net> wrote in message > news:tHkcg.6937$kR6.484@trnddc05... > > >> 11) Operators on any types, including arrays. Define a matrix product as > >> "*"... > > > > How is Ada's operators for types better or worse than Fortran's? > > Is Ada's "*" operator better than Fortran's matmul()? > > I'll answer the easy one for now since I have not had my coffee yet: > > It is clear that > > A*B > > is easier to read and understand than > > MATMUL(A,B) > > would you not agree? But a casual user would ask, does A*B mean term by term or matrix product? If one, then how is the other distinguished? A casual user would probably recognize MATMUL as being matrix multiplication, not term by term. ^ permalink raw reply [flat|nested] 314+ messages in thread
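An editorial aside on robin's question: in Ada the meaning of A*B is fixed by the operand types, so the ambiguity is resolved at the declaration site rather than guessed at the call site. A minimal hand-rolled sketch (this Matrix type and its "*" are illustrative, not the Ada 2005 library ones):

```ada
procedure Matmul_Demo is
   type Matrix is array (Positive range <>, Positive range <>) of Float;

   --  For this type, "*" is declared to be the matrix product;
   --  the predefined "*" on Float is unaffected.
   function "*" (A, B : Matrix) return Matrix is
      R : Matrix (A'Range (1), B'Range (2)) :=
        (others => (others => 0.0));
   begin
      for I in A'Range (1) loop
         for J in B'Range (2) loop
            for K in A'Range (2) loop
               R (I, J) := R (I, J) + A (I, K) * B (K, J);
            end loop;
         end loop;
      end loop;
      return R;
   end "*";

   M : constant Matrix (1 .. 2, 1 .. 2) := ((1.0, 2.0), (3.0, 4.0));
   P : constant Matrix := M * M;  --  matrix product, not pointwise
begin
   null;
end Matmul_Demo;
```

A casual reader who wants to know what M * M means looks up the declaration of Matrix, exactly as one would look up what MATMUL does.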
* Re: Ada vs Fortran for scientific applications 2006-05-24 5:26 ` robin @ 2006-05-24 6:06 ` GF Thomas 0 siblings, 0 replies; 314+ messages in thread From: GF Thomas @ 2006-05-24 6:06 UTC (permalink / raw) "robin" <robin_v@bigpond.com> wrote in message news:h8Scg.9804$S7.6911@news-server.bigpond.net.au... [...] > > It is clear that > > > > A*B > > > > is easier to read and understand than > > > > MATMUL(A,B) > > > > would you not agree? > > But a casual user would ask, does A*B mean term by term or matrix product? > If one, then how is the other distinguished? > A casual user would probably recognize MATMUL as > being matrix multiplication, not term by term. > Quite. Recently on the comp-FORTRAN newslist we had such disinformation from a self-styled NASA bigbrain who claimed that exp(A) = exp(Int(0,t) A(s) ds). The same bargain bigbrain lately pooh-poohed C/C++'s inability to host a stiff ODE solver when such C/C++ codes have been available within .gov for some time now, without f2c, and gratis to .gov employees. Little wonder that Fortran is priced as a Challenger + crew/decade. What gives? -- Boom, Gerry T. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 15:23 ` Dan Nagle 2006-05-22 16:20 ` Nasser Abbasi @ 2006-05-22 16:38 ` Richard E Maine 2006-05-23 8:25 ` Jean-Pierre Rosen 2 siblings, 0 replies; 314+ messages in thread From: Richard E Maine @ 2006-05-22 16:38 UTC (permalink / raw) Dan Nagle <dannagle@verizon.net> wrote: > Again, how is this different? Fortran compilers are required > to be able to report use of extensions to the standard. Only syntactic extensions. See 1.4(3) of f2003, which I presume is the requirement you are talking about. Or perhaps you are also including 1.4(6). In any case, the requirements are very specific. Fortran compilers are not required to, and many don't, have the capability to report on *all* extensions, where I emphasize the "all". Let me re-emphasize it by explicitly stating that "all" includes extensions for things that come up only at run-time. I'm not sure that a single Fortran compiler exists that has the capability of reporting the use of *all* extensions. It is my possibly flawed understanding that this is fundamentally different from the situation for Ada. -- Richard Maine | Good judgment comes from experience; email: my first.last at org.domain| experience comes from bad judgment. org: nasa, domain: gov | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 15:23 ` Dan Nagle 2006-05-22 16:20 ` Nasser Abbasi 2006-05-22 16:38 ` Richard E Maine @ 2006-05-23 8:25 ` Jean-Pierre Rosen 2006-05-23 11:40 ` Dan Nagle ` (2 more replies) 2 siblings, 3 replies; 314+ messages in thread From: Jean-Pierre Rosen @ 2006-05-23 8:25 UTC (permalink / raw) Disclaimer: I am an Ada guy. FORTRAN was my first programming language, but it was long, long ago, and I don't know about the latest improvements to the language. I'll answer some questions here, but I'd like to be enlightened on the latest trade from the FORTRAN side. Dan Nagle a écrit : > Hello, > > Jean-Pierre Rosen wrote: >> Nasser Abbasi a écrit : >>> What are the technical language specific reasons why Fortran would be >>> selected over Ada? >>> >> Some immediate reasons: >> 1) Packaging. Packages allow better organization of software, which is >> good for any kind of application. > > Can you compare and contrast Ada packages > with Fortran modules and submodules? Honestly, I don't know about Fortran modules. >> 2) Strong typing. Scientific applications often deal with physical >> units, and Ada is great at supporting these. > > What specific features of Ada provide better support > than the comparable feature of Fortran? Is it possible in Fortran to define three *incompatible* types Length, Time, and Speed, and define a "/" operator between Length and Time that returns Speed? >> 3) User defined accuracy. Ada allows you to define the accuracy you >> need, the compiler chooses the appropriate representation. Note that >> you are not limited to only two floating point types (many machines >> have more than that). > > How is this better than Fortran's kind mechanism? I need to be educated about Fortran's kind, but can you use it to specify that you want a type with guaranteed 5 digits accuracy? >> 4) Fixed points. Not available in Fortran > > Agreed. How important is this for floating point work?
> Fortran is rarely used for imbedded software (at least, > I wouldn't). It's not important for floating point work, it's important for fixed point work :-) Because Fortran has no fixed points, the scientific community sees floating point as the only way to model real numbers. Actually, fixed points have nothing to do with embedded software, they are a different way of modelling real (in the mathematical sense) numbers, with different numerical properties. Depending on the problem, fixed point may (or not) be more appropriate. >> 5) Guaranteed accuracy, not only for basic arithmetic, but for the >> whole mathematical library > > Can you compare Ada's accuracy requirements with Fortran's > support for IEEE 754? Ada's accuracy requirements are independent of any hardware (or software) implementation of floating point, and are applicable even on non-IEEE machines. >> 6) Standardization. All compilers process exactly the same language. > > Again, how is this different? Fortran compilers are required > to be able to report use of extensions to the standard. AFAIK, there is a formal validation suite for Fortran, but 1) Compilers rarely report their validation result 2) A compiler is deemed "passed" if it passes 95% of the tests, while 100% is required for Ada >> 7) Interfacing. Easy to call libraries in foreing languages => all >> libraries available for Fortran are available for Ada. > > Can you compare Interfaces.C to ISO_C_BINDING? > How is one better or worse than the other? Sorry, I don't know what you are referring to (except for Interfaces.C :-) >> 8) Concurrency, built into the language > > Co-arrays and concurrent loops are coming in Fortran 2008. Concurrency has been in Ada since 1983! Moreover, it's a multi-tasking model, not a concurrent-statements model. Both models have benefits and drawbacks, it depends on the needs. > >> 9) Generics. Stop rewriting these damn sorting routines 1000 times. > > Intelligent Macros are coming in Fortran 2008.
I don't know what an "intelligent macro" is, but certainly generics (once again, available since 1983) are much more than macros, even intelligent ones. For one thing, the legality of generics is checked when the generic is compiled. This means that, provided actual parameters meet the requirements of the formals, there is no need to recheck at instantiation time, which ensures that any legal instantiation will work as expected. AFAIK, this cannot be achieved by macros. >> 10) Default parameters. Makes complex subprograms (simplex...) much >> easier to use. > > Agreed. > >> 11) Operators on any types, including arrays. Define a matrix product >> as "*"... > > How is Ada's operators for types better or worse than Fortran's? > Is Ada's "*" operator better than Fortran's matmul()? More convenient to write: Mat1 := Mat2 * Mat3; >> 12) Bounds checking, with a very low penalty. Makes bounds checking >> really usable. > > How is Ada's bounds checking better or worse than Fortran's? I may be missing something on the Fortran side, but Ada's very precise typing allows you to define variables whose ranges are constrained. If these variables are later used to index an array (and if the language features are properly used), the compiler statically knows that no out-of-bounds access can occur. In short, most of the time, an Ada compiler is able to prove that bounds checking is not necessary, and corresponding checks are not generated. In practice, compiling an Ada program with or without bounds checking shows very little difference in execution speed, because only the really useful checks are left, all the spurious ones have been eliminated. -- --------------------------------------------------------- J-P. Rosen (rosen@adalog.fr) Visit Adalog's web site at http://www.adalog.fr ^ permalink raw reply [flat|nested] 314+ messages in thread
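Rosen's points 2-4 are easy to show concretely. In the sketch below (all type names are illustrative), Length, Time and Speed are mutually incompatible even though all three are 5-digit floating-point types, and Volt is a fixed-point type with an absolute error bound:

```ada
procedure Typing_Demo is
   type Length is digits 5;  --  at least 5 decimal digits of accuracy
   type Time   is digits 5;
   type Speed  is digits 5;

   --  Without this declaration, no operation mixes Length and Time:
   function "/" (L : Length; T : Time) return Speed is
   begin
      return Speed (L) / Speed (T);  --  explicit conversions required
   end "/";

   --  A fixed-point type (no Fortran equivalent): the representation
   --  guarantees an absolute error no worse than 0.001.
   type Volt is delta 0.001 range 0.0 .. 5.0;

   D : constant Length := 100.0;
   T : constant Time   := 9.58;
   V : constant Speed  := D / T;     --  legal: the operator above
   U : constant Volt   := 3.141;
   --  X : constant Speed := D * T;  --  illegal: no "*" mixing the types
   --  Y : constant Speed := D / D;  --  illegal: Length/Length is Length
begin
   null;
end Typing_Demo;
```

The two commented-out lines are rejected at compile time, which is the whole point: dimensional mistakes never reach the test suite.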
* Re: Ada vs Fortran for scientific applications 2006-05-23 8:25 ` Jean-Pierre Rosen @ 2006-05-23 11:40 ` Dan Nagle 2006-05-23 13:14 ` Dr. Adrian Wrigley ` (2 more replies) 2006-05-23 17:09 ` Dick Hendrickson 2006-05-24 6:46 ` Jan Vorbrüggen 2 siblings, 3 replies; 314+ messages in thread From: Dan Nagle @ 2006-05-23 11:40 UTC (permalink / raw) Hello, Jean-Pierre Rosen wrote: > Disclaimer: > I am an Ada guy. FORTRAN was my first programming language, but it was > long, long ago, and I don't know about the latest improvements to the > language. I'll answer some questions here, but I'd like to be > enlightened on the latest trade from the FORTRAN side. Little things first: I write Ada, you write Fortran. > Dan Nagle a écrit : <snip> >> Can you compare and contrast Ada packages >> with Fortran modules and submodules? > Honestly, I don't know about Fortran modules. Metcalf, Reid & Cohen, _Fortran 95/2003 Explained_ is a great place to start. (I have Barnes for Ada 95, the Ada 2005 book is back-ordered until summer :-( ) <snip> > Is it possible in Fortran to define three *incompatible* types Length, > Time, and Speed, and define a "/" operator between Length and Time that > returns Speed? Yes. <snip> > I need to be educated about Fortran's kind, but can you use it to > specify that you want a type with guaranteed 5 digits accuracy? Yes. <snip> > Ada's accuracy requirement is independent from any hardware (or > software) implementation of floating points, and are applicable even for > non IEEE machines. These days, computers use 754 arithmetic (and if 754r finally completes, I expect 754r arithmetic will follow eventually). The Fortran committees are tracking 754r. Fortran isn't usually used for embedded systems with "interesting" floating point. <snip> >> Can you compare Interfaces.C to ISO_C_BINDING? >> How is one better or worse than the other?
> Sorry, I don't know what you are refering to (except for Interfaces.C :-) iso_c_binding is the standard-defined module providing Fortran definitions of C entities. It's about 95% of Fortran's "Interoperability with C" feature. <snip> > Concurrency has been in Ada since 1983! Moreover, it's a multi-tasking > model, not concurrent statements model. Both models have benefits and > drawbacks, it depends on the needs. Co-arrays are the main concurrency of f08. For a quick update, the paper is at Rutherford-Appleton Labs, see http://epubs.cclrc.ac.uk/bitstream/161/raltr-1998060.pdf <snip> > More convenient to write: > Mat1 := Mat2 * Mat3; A programmer may define a matrix type where the * may be the matrix multiplication operator. This has been discussed a bit in this thread. <snip> > In practice, compiling an Ada program with or without bounds checking > shows very little difference in execution speed, because only the really > useful checks are left, all the spurious ones have been eliminated. In practice, bounds checking is available with every Fortran compiler. It's usually off by default (for performance). Fortran has not been f77 for 15 years now, modern Fortran is so different some purists like to pretend it's a different language. :-) -- Cheers! Dan Nagle Purple Sage Computing Solutions, Inc. ^ permalink raw reply [flat|nested] 314+ messages in thread
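On the bounds-checking point in this exchange: the reason an Ada compiler can leave checking enabled cheaply is that index types carry their range, so most checks are provable at compile time. A minimal sketch (names illustrative):

```ada
with Ada.Text_IO;

procedure Bounds_Demo is
   type Index is range 1 .. 100;          --  the index type carries its range
   type Vec is array (Index) of Float;
   V   : Vec   := (others => 2.0);
   Sum : Float := 0.0;
begin
   for I in V'Range loop
      --  I has type Index and is statically within 1 .. 100, so the
      --  compiler can prove V (I) is in bounds and omit the check.
      Sum := Sum + V (I);
   end loop;
   Ada.Text_IO.Put_Line (Float'Image (Sum));
end Bounds_Demo;
```

Only indexes computed from unconstrained values (say, read from a file) need a run-time check, which is why the measured cost of leaving checks on is usually small.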
* Re: Ada vs Fortran for scientific applications 2006-05-23 11:40 ` Dan Nagle @ 2006-05-23 13:14 ` Dr. Adrian Wrigley 2006-05-23 17:07 ` Dan Nagle 2006-05-24 5:26 ` robin 2006-05-27 5:18 ` Aldebaran 2 siblings, 1 reply; 314+ messages in thread From: Dr. Adrian Wrigley @ 2006-05-23 13:14 UTC (permalink / raw) On Tue, 23 May 2006 11:40:27 +0000, Dan Nagle wrote: >> Concurrency has been in Ada since 1983! Moreover, it's a multi-tasking >> model, not concurrent statements model. Both models have benefits and >> drawbacks, it depends on the needs. > > Co-arrays are the main concurrency of f08. For a quick update, > the paper is at Rutherford-Appleton Labs, see > http://epubs.cclrc.ac.uk/bitstream/161/raltr-1998060.pdf I don't see that Ada concurrency and co-arrays have much relationship at all. In an Ada program, you can: Have several tasks performing computation Have other tasks controlling GUIs, monitoring progress etc. Have another task serving http requests etc. And tasks can be split between several (possibly heterogenous) machines in a network which may be added, fail or be removed. Can you implement this kind of concurrency just using normal Fortran language features? Isn't this a bit more like MPI? In Ada, these concurrency features are very robust and streamlined. This is because tasking was a primary goal of the language design. For a typical Fortran application like weather forecasting, this is exactly the architecture you might choose. It may be hardcore numerical code. But technically, it is still real-time. The Fortran mind-set still seems very "batch-oriented". -- Adrian ^ permalink raw reply [flat|nested] 314+ messages in thread
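The acquire/compute split Adrian describes maps directly onto Ada's built-in tasks and protected objects. A minimal single-process sketch (all task and entry names are invented for illustration; distribution across machines would use Annex E or middleware on top of this):

```ada
with Ada.Text_IO;

procedure Forecast_Sketch is

   --  A protected object hands the latest observation to the model;
   --  the Get entry blocks Compute until a value has arrived.
   protected Latest_Field is
      procedure Set (V : Float);
      entry Get (V : out Float);
   private
      Value : Float   := 0.0;
      Valid : Boolean := False;
   end Latest_Field;

   protected body Latest_Field is
      procedure Set (V : Float) is
      begin
         Value := V;
         Valid := True;
      end Set;
      entry Get (V : out Float) when Valid is
      begin
         V := Value;
      end Get;
   end Latest_Field;

   task Acquire;  --  would read instruments / comm links in a real system
   task Compute;  --  the number-crunching side

   task body Acquire is
   begin
      Latest_Field.Set (1013.25);  --  placeholder observation
   end Acquire;

   task body Compute is
      P : Float;
   begin
      Latest_Field.Get (P);        --  waits for Acquire
      Ada.Text_IO.Put_Line ("pressure =" & Float'Image (P));
   end Compute;

begin
   null;  --  the main procedure waits here for both tasks to complete
end Forecast_Sketch;
```

The synchronization, mutual exclusion, and shutdown ordering all come from the language; no threads library, MPI, or cron glue is involved.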
* Re: Ada vs Fortran for scientific applications 2006-05-23 13:14 ` Dr. Adrian Wrigley @ 2006-05-23 17:07 ` Dan Nagle 2006-05-23 22:20 ` Dr. Adrian Wrigley 0 siblings, 1 reply; 314+ messages in thread From: Dan Nagle @ 2006-05-23 17:07 UTC (permalink / raw) Hello, Dr. Adrian Wrigley wrote: <snip> > Isn't this a bit more like MPI? Well, it's like MPI minus the rubbish. > In Ada, these concurrency features are very robust and streamlined. > This is because tasking was a primary goal of the language design. > > For a typical Fortran application like weather forecasting, this ^^^^ > is exactly the architecture you might choose. It may be > hardcore numerical code. I'm not sure which antecedent "this" references, but the meteorological folks are one of the main forces behind co-arrays. -- Cheers! Dan Nagle Purple Sage Computing Solutions, Inc. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 17:07 ` Dan Nagle @ 2006-05-23 22:20 ` Dr. Adrian Wrigley 2006-05-23 22:49 ` Dan Nagle 2006-05-24 12:56 ` J.F. Cornwall 0 siblings, 2 replies; 314+ messages in thread From: Dr. Adrian Wrigley @ 2006-05-23 22:20 UTC (permalink / raw) On Tue, 23 May 2006 17:07:06 +0000, Dan Nagle wrote: > Hello, > > Dr. Adrian Wrigley wrote: > > <snip> > >> Isn't this a bit more like MPI? > > Well, it's like MPI minus the rubbish. > >> In Ada, these concurrency features are very robust and streamlined. >> This is because tasking was a primary goal of the language design. >> >> For a typical Fortran application like weather forecasting, this > ^^^^ >> is exactly the architecture you might choose. It may be >> hardcore numerical code. > > I'm not sure which antecedent "this" references, > but the meteorological folks are one of the main > forces behind co-arrays. In a weather forecasting program you want to have data acquisition (real-time), prediction (computation) and display (real-time GUIs) running on a continuous, high uptime basis across a network of machines. If Fortran had strong multitasking, real-time and distributed capabilities, these goals would be reasonable and achievable within the language. Absence of these features means such systems would often (I guess) be multi-language setups, with things like Java, C++, Tcl/Tk, shell scripts, cron jobs etc. playing a part. Has anyone here worked on a big meteorological system? Am I right? Co-arrays fill a longstanding, unmet need in programming languages. Fortran, in particular, should have had this feature long ago. But the design/semantics are challenging to get something which is truly general-purpose, useful and understandable. Co-arrays would be great if they were widely available and performant. But they're not :( Designing a co-array system involves bridging the semantic gap between the software author, the compiler and the hardware architecture. 
The Co-array Fortran proposals seem to fill an existing need within the community, but I don't think they make a good model to adopt in other languages. I did briefly work on more general ideas for language/hardware co-design using co-arrays and similar. Initial results were encouraging, but to make use of the technology, you really need a custom parallel processor built. And rewriting your software :( -- Adrian ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 22:20 ` Dr. Adrian Wrigley @ 2006-05-23 22:49 ` Dan Nagle 2006-05-23 23:21 ` Dr. Adrian Wrigley 2006-05-24 12:56 ` J.F. Cornwall 1 sibling, 1 reply; 314+ messages in thread From: Dan Nagle @ 2006-05-23 22:49 UTC (permalink / raw) Hello, Dr. Adrian Wrigley wrote: <snip> > If Fortran had strong multitasking, real-time and distributed > capabilities, these goals would be reasonable and achievable > within the language. Absence of these features means such > systems would often (I guess) be multi-language setups, with things > like Java, C++, Tcl/Tk, shell scripts, cron jobs etc. playing a part. > Has anyone here worked on a big meteorological system? Am I right? I don't know, perhaps someone at a meteorological center can answer. There are several Fortran interfaces to pthreads, and there's always OpenMP. > Co-arrays fill a longstanding, unmet need in programming languages. > Fortran, in particular, should have had this feature long ago. > But the design/semantics are challenging to get something which > is truly general-purpose, useful and understandable. > Co-arrays would be great if they were widely available and performant. > But they're not :( Actually, they're a proven design from Cray. At the last meeting of the Fortran committee (May 06), my impression from the vendors present was that customers are demanding co-arrays so strongly that they will be implemented rather quickly, now that the design has stabilized. > Designing a co-array system involves bridging the semantic gap > between the software author, the compiler and the hardware > architecture. The Co-array Fortran proposals seem to fill an > existing need within the community, but I don't think make a > good model to adopt in other languages. See UPC (a/k/a "Unified Parallel C" IIRC) for a similar C feature. > I did briefly work > on more general ideas for language/hardware co-design using > co-arrays and similar. 
Initial results were encouraging, but > to make use of the technology, you really need a custom parallel > processor built. And rewriting your software :( But today, interprocessor communications are approaching the performance of memory buses. I believe co-arrays will deliver performance at or better than MPI on many architectures, and in many clusters. Any application re-write is minimal compared to inserting MPI calls. -- Cheers! Dan Nagle Purple Sage Computing Solutions, Inc. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 22:49 ` Dan Nagle @ 2006-05-23 23:21 ` Dr. Adrian Wrigley 2006-05-24 0:49 ` Dan Nagle 0 siblings, 1 reply; 314+ messages in thread From: Dr. Adrian Wrigley @ 2006-05-23 23:21 UTC (permalink / raw) On Tue, 23 May 2006 22:49:50 +0000, Dan Nagle wrote: > > Dr. Adrian Wrigley wrote: ... >> I did briefly work >> on more general ideas for language/hardware co-design using >> co-arrays and similar. Initial results were encouraging, but >> to make use of the technology, you really need a custom parallel >> processor built. And rewriting your software :( > > But today, interprocessor communications are approaching > the performance of memory buses. I believe co-arrays > will deliver performance at or better than MPI on many architectures, > and in many clusters. > > Any application re-write is minimal compared to inserting MPI calls. Thank you for your input on this topic. I had looked at things like UPC when I was looking into the topic. There still seems to be quite a spread of different ideas, but only modest "progress". Something like the new 'Cell' processor would be a good target for a co-array language. But instead, the Cell programming tools have a disappointing, ad-hoc collection of programming (band) aids to help coders use the parallelism. It's expected to be "guru" level coding to get decent efficiency code at the metal. As regards interprocessor communications vs. memory buses, it depends at what levels you make the comparisons. On chip memory arrays may be 1000's of bits wide at GHz speeds. Communications between processor chips are a small fraction of that. The challenge for co-array and SIMD computing is to work out simple hardware that can utilize inter- and intra-chip communications and memories efficiently. This needs to have a sensible programming model (eg assembly language, instruction set architecture). 
Only then is it worth considering what language features (if any) are necessary to compile for it. Sadly, what passed for SIMD computing in the '70s and '80s was way too restrictive and often inefficient for its capabilities to be worth providing for in general-purpose languages. Progress is glacial :( I wish the Fortran committee the best of luck with their co-arrays! -- Adrian ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 23:21 ` Dr. Adrian Wrigley @ 2006-05-24 0:49 ` Dan Nagle 2006-05-24 5:07 ` GF Thomas 0 siblings, 1 reply; 314+ messages in thread From: Dan Nagle @ 2006-05-24 0:49 UTC (permalink / raw) Hello, Dr. Adrian Wrigley wrote: <snip> > As regards interprocessor communications vs. memory buses, it depends > at what levels you make the comparisons. On chip memory arrays > may be 1000's of bits wide at GHz speeds. Communications between > processor chips are a small fraction of that. Yes. The mix of what's to be on-chip is always changing. Some folks (with applications programmer perspectives) were a little unsure co-arrays are a long-term solution (remember what happened to HPF). I expect co-arrays to work well on any architecture from SMP to DMP, at least as well as the competing spellings of parallelism. > The challenge for co-array and SIMD computing is to work out > simple hardware that can utilize inter- and intra-chip > communications and memories efficiently. This needs to have > a sensible programming model (eg assembly language, instruction > set architecture). Only then is it worth considering what language > features (if any) are necessary to compile for it. > Sadly, what passed for SIMD computing in the '70s and '80s > was way too restrictive and often inefficient for its capabilities > to be worth providing for in general-purpose languages. We usually call co-arrays SPMD (single program multiple data) because it's not really "single instruction", but check the synchronization rules. > Progress is glacial :( Yes. :-( > I wish the Fortran committee the best of luck with their co-arrays! Thanks. Best of luck with Ada. -- Cheers! Dan Nagle Purple Sage Computing Solutions, Inc. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 0:49 ` Dan Nagle @ 2006-05-24 5:07 ` GF Thomas 0 siblings, 0 replies; 314+ messages in thread From: GF Thomas @ 2006-05-24 5:07 UTC (permalink / raw) "Dan Nagle" <dannagle@verizon.net> wrote in message news:U4Ocg.7543$ix2.6912@trnddc03... > Best of luck with Ada. > Ada doesn't depend on luck in the way that FORTRAN does. -- You're Welcome, Gerry T. ______ "There's man all over for you, blaming on his boots the fault of his feet." -- Samuel Beckett. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 22:20 ` Dr. Adrian Wrigley 2006-05-23 22:49 ` Dan Nagle @ 2006-05-24 12:56 ` J.F. Cornwall 2006-05-24 13:39 ` Dr. Adrian Wrigley 1 sibling, 1 reply; 314+ messages in thread From: J.F. Cornwall @ 2006-05-24 12:56 UTC (permalink / raw) Dr. Adrian Wrigley wrote: > On Tue, 23 May 2006 17:07:06 +0000, Dan Nagle wrote: > >> Hello, >> >> Dr. Adrian Wrigley wrote: >> >> <snip> >> >>> Isn't this a bit more like MPI? >> Well, it's like MPI minus the rubbish. >> >>> In Ada, these concurrency features are very robust and streamlined. >>> This is because tasking was a primary goal of the language design. >>> >>> For a typical Fortran application like weather forecasting, this >> ^^^^ >>> is exactly the architecture you might choose. It may be >>> hardcore numerical code. >> I'm not sure which antecedent "this" references, >> but the meteorological folks are one of the main >> forces behind co-arrays. > > In a weather forecasting program you want to have data > acquisition (real-time), prediction (computation) and display > (real-time GUIs) running on a continuous, high uptime basis > across a network of machines. > > If Fortran had strong multitasking, real-time and distributed > capabilities, these goals would be reasonable and achievable > within the language. Absence of these features means such > systems would often (I guess) be multi-language setups, with things > like Java, C++, Tcl/Tk, shell scripts, cron jobs etc. playing a part. > Has anyone here worked on a big meteorological system? Am I right? > In my US Air Force days, I worked at a large global weather-forecasting facility. We had multiple data input systems (a variety of comm links talking to several Univac mainframes), multiple number-crunching systems (a couple more Univacs and a Cray), and a cluster of 40 or so Vax 11/780s for interactive tweaking of the forecasts. 
The majority of the software for the comm was in assembler, just about all of the remainder was Fortran (IV and 77, this was back in the early 80's...). We also used Fortran mixed with assembly code for a new comm front-end machine that was implemented in '88. Fortran was used for comm, utility programs, forecasting models, database input/output/maintenance, and just about everything else in that system. Worked fine. Nowadays, I have no idea what they're running. Bet there's still a lot of Fortran though :-) Jim > Co-arrays fill a longstanding, unmet need in programming languages. > Fortran, in particular, should have had this feature long ago. > But the design/semantics are challenging to get something which > is truly general-purpose, useful and understandable. > Co-arrays would be great if they were widely available and performant. > But they're not :( > > Designing a co-array system involves bridging the semantic gap > between the software author, the compiler and the hardware > architecture. The Co-array Fortran proposals seem to fill an > existing need within the community, but I don't think make a > good model to adopt in other languages. I did briefly work > on more general ideas for language/hardware co-design using > co-arrays and similar. Initial results were encouraging, but > to make use of the technology, you really need a custom parallel > processor built. And rewriting your software :( > -- > Adrian > ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 12:56 ` J.F. Cornwall @ 2006-05-24 13:39 ` Dr. Adrian Wrigley 2006-05-24 16:49 ` J.F. Cornwall 0 siblings, 1 reply; 314+ messages in thread From: Dr. Adrian Wrigley @ 2006-05-24 13:39 UTC (permalink / raw) On Wed, 24 May 2006 12:56:29 +0000, J.F. Cornwall wrote: > Dr. Adrian Wrigley wrote: >> On Tue, 23 May 2006 17:07:06 +0000, Dan Nagle wrote: >> >>> Hello, >>> >>> Dr. Adrian Wrigley wrote: >>> >>> <snip> >>> >>>> Isn't this a bit more like MPI? >>> Well, it's like MPI minus the rubbish. >>> >>>> In Ada, these concurrency features are very robust and streamlined. >>>> This is because tasking was a primary goal of the language design. >>>> >>>> For a typical Fortran application like weather forecasting, this >>> ^^^^ >>>> is exactly the architecture you might choose. It may be >>>> hardcore numerical code. >>> I'm not sure which antecedent "this" references, >>> but the meteorological folks are one of the main >>> forces behind co-arrays. >> >> In a weather forecasting program you want to have data >> acquisition (real-time), prediction (computation) and display >> (real-time GUIs) running on a continuous, high uptime basis >> across a network of machines. >> >> If Fortran had strong multitasking, real-time and distributed >> capabilities, these goals would be reasonable and achievable >> within the language. Absence of these features means such >> systems would often (I guess) be multi-language setups, with things >> like Java, C++, Tcl/Tk, shell scripts, cron jobs etc. playing a part. >> Has anyone here worked on a big meteorological system? Am I right? >> > > In my US Air Force days, I worked at a large global weather-forecasting > facility. We had multiple data input systems (a variety of comm links > talking to several Univac mainframes), multiple number-crunching systems > (a couple more Univacs and a Cray), and an cluster of 40 or so Vax > 11/780s for interactive tweaking of the forecasts. 
The majority of the > software for the comm was in assembler, just about all of the remainder > was Fortran (IV and 77, this was back in the early 80's...). > > We also used Fortran mixed with assembly code for a new comm front-end > machine that was implemented in '88. Fortran was used for comm, utility > programs, forecasting models, database input/output/maintenance, and > just about everything else in that system. Worked fine. Interesting. I *think* you are supporting my view that in practice, Fortran requires additional support or coding outside of the language to tie together the different parts of a complex system. You speak of utility programs, forecasting programs, database I/O programs. Invoking these in the right order, at the right time, on the right files at the right terminals is always done outside of the pure Fortran application. At the very least it requires an OS command interpreter. It probably involves scripts to delete old files or do other housekeeping. In Ada, the separate program components can form a *single* running application program entity, with a single invocation - even if the program is running across several loosely connected machines and consists of many different executable files. The program execution is a network of cooperating processes and shared data stores. Parts of the program can even be recompiled as it runs - without affecting the shared data stores or other executing tasks. In fact Ada supports persistent variables which hold their values even if the program is stopped completely and restarted later. No mainstream language comes even close to this program execution model. -- Adrian ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 13:39 ` Dr. Adrian Wrigley @ 2006-05-24 16:49 ` J.F. Cornwall 2006-05-24 18:08 ` Dr. Adrian Wrigley 0 siblings, 1 reply; 314+ messages in thread From: J.F. Cornwall @ 2006-05-24 16:49 UTC (permalink / raw) Dr. Adrian Wrigley wrote: > On Wed, 24 May 2006 12:56:29 +0000, J.F. Cornwall wrote: > >> Dr. Adrian Wrigley wrote: >>> On Tue, 23 May 2006 17:07:06 +0000, Dan Nagle wrote: >>> >>>> Hello, >>>> >>>> Dr. Adrian Wrigley wrote: >>>> >>>> <snip> >>>> >>>>> Isn't this a bit more like MPI? >>>> Well, it's like MPI minus the rubbish. >>>> >>>>> In Ada, these concurrency features are very robust and streamlined. >>>>> This is because tasking was a primary goal of the language design. >>>>> >>>>> For a typical Fortran application like weather forecasting, this >>>> ^^^^ >>>>> is exactly the architecture you might choose. It may be >>>>> hardcore numerical code. >>>> I'm not sure which antecedent "this" references, >>>> but the meteorological folks are one of the main >>>> forces behind co-arrays. >>> In a weather forecasting program you want to have data >>> acquisition (real-time), prediction (computation) and display >>> (real-time GUIs) running on a continuous, high uptime basis >>> across a network of machines. >>> >>> If Fortran had strong multitasking, real-time and distributed >>> capabilities, these goals would be reasonable and achievable >>> within the language. Absence of these features means such >>> systems would often (I guess) be multi-language setups, with things >>> like Java, C++, Tcl/Tk, shell scripts, cron jobs etc. playing a part. >>> Has anyone here worked on a big meteorological system? Am I right? >>> >> In my US Air Force days, I worked at a large global weather-forecasting >> facility. 
We had multiple data input systems (a variety of comm links >> talking to several Univac mainframes), multiple number-crunching systems >> (a couple more Univacs and a Cray), and an cluster of 40 or so Vax >> 11/780s for interactive tweaking of the forecasts. The majority of the >> software for the comm was in assembler, just about all of the remainder >> was Fortran (IV and 77, this was back in the early 80's...). >> >> We also used Fortran mixed with assembly code for a new comm front-end >> machine that was implemented in '88. Fortran was used for comm, utility >> programs, forecasting models, database input/output/maintenance, and >> just about everything else in that system. Worked fine. > > Interesting. > > I *think* you are supporting my view that in practice, Fortran > requires additional support or coding outside of the language to tie > together the different parts of a complex system. > You speak of utility programs, forecasting programs, database I/O > programs. Invoking these in the right order, at the right time, > on the right files at the right terminals is always done > outside of the pure Fortran application. At the very least > it requires an OS command interpreter. It probably involves > scripts to delete old files or do other housekeeping. > > In Ada, the separate program components can form a *single* > running application program entity, with a single invocation - even > if the program is running across several loosely connected machines > and consists of many different executable files. The program > execution is a network of cooperating processes and shared > data stores. Parts of the program can even be recompiled as > it runs - without affecting the shared data stores or other > executing tasks. In fact Ada supports persistent variables > with hold their values even if the program is stopped completely > and restarted later. No mainstream language comes even close > to this program execution model. 
> -- > Adrian Actually, in that particular environment, everything was tied together in a complicated web of cross-ties. The Fortran code couldn't do everything, the assembly code couldn't do everything, the scripting and batch control languages couldn't do everything, etc. That would have been the case had we been using Ada, as well. And we did look at Ada when starting out on the comm front-end project. At that time it wouldn't do what we needed it to do, so we went with a continuing mixture of F77 and assembler. Sorry, I don't recall the details of what we needed that it couldn't do; recall that this was in the early 1980s. Jim ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 16:49 ` J.F. Cornwall @ 2006-05-24 18:08 ` Dr. Adrian Wrigley 0 siblings, 0 replies; 314+ messages in thread From: Dr. Adrian Wrigley @ 2006-05-24 18:08 UTC (permalink / raw) On Wed, 24 May 2006 16:49:10 +0000, J.F. Cornwall wrote: > Dr. Adrian Wrigley wrote: >> On Wed, 24 May 2006 12:56:29 +0000, J.F. Cornwall wrote: >> >>> Dr. Adrian Wrigley wrote: >>>> On Tue, 23 May 2006 17:07:06 +0000, Dan Nagle wrote: ... >>> In my US Air Force days, I worked at a large global >>> weather-forecasting facility. We had multiple data input systems (a >>> variety of comm links talking to several Univac mainframes), multiple >>> number-crunching systems (a couple more Univacs and a Cray), and an >>> cluster of 40 or so Vax 11/780s for interactive tweaking of the >>> forecasts. The majority of the software for the comm was in >>> assembler, just about all of the remainder was Fortran (IV and 77, >>> this was back in the early 80's...). >>> >>> We also used Fortran mixed with assembly code for a new comm front-end >>> machine that was implemented in '88. Fortran was used for comm, >>> utility programs, forecasting models, database >>> input/output/maintenance, and just about everything else in that >>> system. Worked fine. >> >> Interesting. >> >> I *think* you are supporting my view that in practice, Fortran requires >> additional support or coding outside of the language to tie together >> the different parts of a complex system. You speak of utility programs, >> forecasting programs, database I/O programs. Invoking these in the >> right order, at the right time, on the right files at the right >> terminals is always done outside of the pure Fortran application. At >> the very least it requires an OS command interpreter. It probably >> involves scripts to delete old files or do other housekeeping. 
>> >> In Ada, the separate program components can form a *single* running >> application program entity, with a single invocation - even if the >> program is running across several loosely connected machines and >> consists of many different executable files. The program execution is >> a network of cooperating processes and shared data stores. Parts of >> the program can even be recompiled as it runs - without affecting the >> shared data stores or other executing tasks. In fact Ada supports >> persistent variables with hold their values even if the program is >> stopped completely and restarted later. No mainstream language comes >> even close to this program execution model. >> -- >> Adrian > > Actually, in that particular environment, everything was tied together > in a complicated web of cross-ties. The Fortran code couldn't do > everything, the assembly code couldn't do everything, the scripting and > batch control languages couldn't do everything. etc... That would have > been the case had we been using Ada, as well. You're quite right. But one thing apparent in this discussion is that the Ada programmer's view of Fortran is the FORTRAN 77 many learned in college, but the Fortran programmer's view of Ada is of Ada 83, when that was hot technology. Neither view has much relevance in determining the technical suitability of the contemporary languages for new projects. I'd be overselling the features of modern Ada to say that the scripting and batch control 'glue' can *all* be done within the language - but a huge part of it can be. And this brings a major benefit to system portability, complexity and integrity. -- Adrian ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 11:40 ` Dan Nagle 2006-05-23 13:14 ` Dr. Adrian Wrigley @ 2006-05-24 5:26 ` robin 2006-05-27 5:18 ` Aldebaran 2 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-05-24 5:26 UTC (permalink / raw) "Dan Nagle" <dannagle@verizon.net> wrote in message news:LwCcg.5894$p13.115@trnddc07... > Jean-Pierre Rosen wrote: > > In practice, compiling an Ada program with or without bounds checking > > shows very little difference in execution speed, because only the really > > useful checks are left, all the spurious ones have been eliminated. > > In practice, bounds checking is available with every Fortran compiler. But it's not part of the language. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 11:40 ` Dan Nagle 2006-05-23 13:14 ` Dr. Adrian Wrigley 2006-05-24 5:26 ` robin @ 2006-05-27 5:18 ` Aldebaran 2 siblings, 0 replies; 314+ messages in thread From: Aldebaran @ 2006-05-27 5:18 UTC (permalink / raw) On Tue, 23 May 2006 11:40:27 +0000, Dan Nagle wrote: > Hello, > > Jean-Pierre Rosen wrote: >> Disclaimer: >> I am an Ada guy. FORTRAN was my first programming language, but it was >> long, long ago, and I don't know about the latest improvements to the >> language. I'll answer some questions here, but I'd like to be >> enlightened on the latest trade from the FORTRAN side. After reading all posts in this thread, I have decided to learn OCaml. Thanks a lot. Aldebaran ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 8:25 ` Jean-Pierre Rosen 2006-05-23 11:40 ` Dan Nagle @ 2006-05-23 17:09 ` Dick Hendrickson 2006-05-23 17:53 ` Georg Bauhaus ` (3 more replies) 2006-05-24 6:46 ` Jan Vorbrüggen 2 siblings, 4 replies; 314+ messages in thread From: Dick Hendrickson @ 2006-05-23 17:09 UTC (permalink / raw) Jean-Pierre Rosen wrote: > Disclaimer: > I am an Ada guy. FORTRAN was my first programming language, but it was > long, long ago, and I don't know about the latest improvements to the > language. I'll answer some questions here, but I'd like to be > enlightened on the latest trade from the FORTRAN side. > > Dan Nagle a écrit : > >> Hello, >> >> Jean-Pierre Rosen wrote: >> >>> Nasser Abbasi a écrit : >>> >>>> What are the technical language specific reasons why Fortran would >>>> be selected over Ada? >>>> >>> Some immediate reasons: >>> 1) Packaging. Packages allow better organization of software, which >>> is good for any kind of application. >> >> >> Can you compare and contrast Ada packages >> with Fortran modules and submodules? > > Honestly, I don't know about Fortran modules. I think they're a lot like packages. They provide a scope where you can define variables, data types, operators, functions, etc. They can be either PUBLIC or PRIVATE and the USEr of the module can control which ones he imports. They allow for a lot of name and implementation hiding. Data types can have PRIVATE or PUBLIC internal parts, independent of whether or not the data type itself is PUBLIC or PRIVATE. > >>> 2) Strong typing. Scientific applications often deal with physical >>> units, and Ada is great at supporting these. >> >> >> What specific features of Ada provide better support >> than the comparable feature of Fortran? > > Is it possible in Fortran to define three *incompatible* types Length, > Time, and Speed, and define a "/" operator between Length and Time that > returns Speed? Yes, it's possible. 
There was a long thread in c.l.f a year or two ago about this. It's a bit of a pain in the butt to cover all the cases. Is E = m*c**2 the same as E = m*c*c? Fortran 2003 has polymorphic variables which might make it easier to write a complete set of units and operators. That probably would lose some compile time checking. > >>> 3) User defined accuracy. Ada allows you to define the accuracy you >>> need, the compiler chooses the appropriate representation. Note that >>> you are not limited to only two floating point types (many machines >>> have more than that). >> >> >> How is this better than Fortran's kind mechanism? > > I need to be educated about Fortran's kind, but can you use it to > specify that you want a type with guaranteed 5 digits accuracy? Yes, Fortran allows you to specify both decimal precision and exponent range for variables. The compiler must pick the best representation it supports (where "best" is defined in the standard). If it doesn't support the precision or range, it must give an error. > >>> 4) Fixed points. Not available in Fortran >> >> >> Agreed. How important is this for floating point work? >> Fortran is rarely used for imbedded software (at least, >> I wouldn't). > > It's not important for floating point work, it's important for fixed > point work :-) It's not important for Fortran work ;). Technically, it probably could be done under the KIND mechanism, but I don't think any compiler supports it directly. There is enabling stuff coming in F2008. > > Because Fortran has no fixed points, the scientific community sees > floating point as the only way to model real numbers. Actually, fixed > points have nothing to do with embedded software, they are a different > way of modelling real (in the mathematical sense) numbers, with > different numerical properties. Depending on the problem, fixed point > may (or not) be more appropriate. 
> >>> 5) Guaranteed accuracy, not only for basic arithmetic, but for the >>> whole mathematical library >> >> >> Can you compare Ada's accuracy requirements with Fortran's >> support for IEEE 754? > > Ada's accuracy requirement is independent from any hardware (or > software) implementation of floating points, and are applicable even for > non IEEE machines. Nope. The IEEE features in Fortran 2003 encourage processors to do the right thing, but don't mandate it. What does Ada say about things like COS(1.1E300)? It's unclear to me what that could or should mean on a machine with finite (or at least less than 300 ;) ) digits of precision. > >>> 6) Standardization. All compilers process exactly the same language. >> >> >> Again, how is this different? Fortran compilers are required >> to be able to report use of extensions to the standard. > > AFAIK, there is a formal validation suite for Fortran, but Nope, there isn't a formal validation suite for "modern" Fortran. There was one for FORTRAN 77, but, realistically, it wasn't very good. If you need one, I know where you can get a real good one ;). > 1) Compilers rarely report their validation result > 2) A compiler is deemed "passed" if it succeeds 95% of the tests, while > 100% is required for Ada > >>> 7) Interfacing. Easy to call libraries in foreing languages => all >>> libraries available for Fortran are available for Ada. >> >> >> Can you compare Interfaces.C to ISO_C_BINDING? >> How is one better or worse than the other? > > Sorry, I don't know what you are refering to (except for Interfaces.C :-) The ISO_C_BINDING module specifies a raft of C-like things, named constants, some procedure interfaces, etc. There is also a BIND(C) attribute for externals that forces the compiler to use a C compatible calling sequence. Between that and the named constants, etc., you can define interfaces to just about any C function and vice-versa. 
There are some purely Fortran things, like multi-dimensional array sections, that have no defined passing mechanism. > >>> 8) Concurrency, built into the language >> >> >> Co-arrays and concurrent loops are coming in Fortran 2008. > > Concurrency has been in Ada since 1983! Moreover, it's a multi-tasking > model, not concurrent statements model. Both models have benefits and > drawbacks, it depends on the needs. > >> >>> 9) Generics. Stop rewriting these damn sorting routines 1000 times. >> >> >> Intelligent Macros are coming in Fortran 2008. > > I don't know what an "intelligent macro" is, but certainly generics > (once again available since 1983"), are much more than macros, even > intelligent ones. > > For one thing, the legality of generics is checked when the generic is > compiled. This means that, provided actual parameters meet the > requirements of the formals, there is no neeed to recheck at > instantiation time, and ensures that any legal instantiation will work > as expected. AFAIK, this cannot be achieved by macros. This is a flaw (or feature?) of the macro approach. The macro system is Fortran aware, so really stupid things can be caught early on, but most bugs would be caught when a version is instantiated. Basically you define a template-like thing, which can have embedded IF-THEN logic, and instantiate it yourself for whatever set of conditions you need. There's an old joke that says "all compilers are multi-pass optimizers, some of them require the programmer to make the first few passes." In that sense, Fortran's intelligent macros are just like Ada's generics. > >>> 10) Default parameters. Makes complex subprograms (simplex...) much >>> easier to use. >> >> >> Agreed. Agreed also. Fortran subroutines can have optional arguments. However, there's no magic default. The subroutine programmer must check for the absence of an optional argument and do the right thing manually. >> >>> 11) Operators on any types, including arrays. 
Define a matrix product >>> as "*"... >> >> >> How is Ada's operators for types better or worse than Fortran's? >> Is Ada's "*" operator better than Fortran's matmul()? > > More convenient to write: > Mat1 := Mat2 * Mat3; There's been several comments on this one. Basically, Fortran went the route that all intrinsic operators operate on an element-by-element basis. User defined operators can do whatever is desired. Personally, I think code is hard to read when * sometimes operates one way and sometimes another. But, that's just my opinion. > >>> 12) Bounds checking, with a very low penalty. Makes bounds checking >>> really usable. >> >> >> How is Ada's bounds checking better or worse than Fortran's? > > I may miss something on the Fortran side, but Ada's very precise typing > allows to define variables whose bounds are delimited. If these > variables are later used to index an array (and if the language features > are properly used), the compiler statically knows that no out-of-bound > can occur. In short, most of the time, an Ada compiler is able to prove > that bounds checking is not necessary, and corresponding checks are not > generated. > > In practice, compiling an Ada program with or without bounds checking > shows very little difference in execution speed, because only the really > useful checks are left, all the spurious ones have been eliminated. > Ada's is surely better. Knowing that a subscript has to be in range, because it's checked when a value is assigned to the subscript variable, has to be more efficient than what Fortran can do. In general, Fortran has to check the value of the subscripts on every array reference. In practice, most array references take place in DO loops and compilers can usually hoist the checks outside the loop, so they have minimal cost at run time. Dick Hendrickson ^ permalink raw reply [flat|nested] 314+ messages in thread
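The trade-off Dick and Jean-Pierre describe can be sketched in miniature. Below is a toy Python illustration (neither an Ada nor a Fortran compiler actually works this way) of the Ada idea: validate a subscript once, when it is assigned to a range-constrained variable, so that later array references through it need no further check, while an out-of-range value is rejected before any array is touched.

```python
# Toy contrast between "check on assignment" (Ada-style constrained
# subtype) and "check on every reference" (naive bounds checking).
# Purely a conceptual sketch; names are invented for illustration.
class ConstrainedIndex:
    """An integer that may only hold values in [low, high]."""
    def __init__(self, low, high, value):
        if not (low <= value <= high):
            raise ValueError(
                "Constraint_Error: %d not in %d..%d" % (value, low, high))
        self.low, self.high, self.value = low, high, value

a = [10, 20, 30, 40, 50]

# Ada-style: the check happened once, at assignment; the reference
# below needs no further check because 0..4 covers a's index range.
i = ConstrainedIndex(0, 4, 3)
print(a[i.value])  # 40

# An out-of-range assignment is caught immediately, before any array use.
try:
    j = ConstrainedIndex(0, 4, 7)
except ValueError as e:
    print(e)
```

The hoisting Dick mentions for Fortran DO loops achieves a similar effect by other means: the compiler proves once, before the loop, that every subscript the loop will generate is in range.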
* Re: Ada vs Fortran for scientific applications 2006-05-23 17:09 ` Dick Hendrickson @ 2006-05-23 17:53 ` Georg Bauhaus 2006-05-23 18:21 ` Dmitry A. Kazakov ` (2 subsequent siblings) 3 siblings, 0 replies; 314+ messages in thread From: Georg Bauhaus @ 2006-05-23 17:53 UTC (permalink / raw) Dick Hendrickson wrote: > What does Ada say about things like COS(1.1E300)? > It's unclear to me what that could or should mean on a > machine with finite (or at least less than 300 ;) ) digits > of precision. It means the compiler will tell you ;-) Messages will indeed depend on the number of digits possible, but compilers cannot give up just because of a slightly bigger number. 4. if 1.1E300 < 1.2E300 then | >>> warning: condition is always True This can be compiled, and can be used in conditionals etc. without trouble. The generic Ada.Numerics.Generic_Elementary_Functions can be instantiated with any floating point type, so if you get a type definition of at least 300 digits past the compiler, then COS(1.1E300) might well have its standard meaning ;-) -- Georg ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 17:09 ` Dick Hendrickson 2006-05-23 17:53 ` Georg Bauhaus @ 2006-05-23 18:21 ` Dmitry A. Kazakov 2006-05-23 18:34 ` Brooks Moses 2006-05-23 20:33 ` Dick Hendrickson 2006-05-24 7:52 ` Jean-Pierre Rosen 2006-05-24 14:50 ` robin 3 siblings, 2 replies; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-05-23 18:21 UTC (permalink / raw) On Tue, 23 May 2006 17:09:03 GMT, Dick Hendrickson wrote: > Jean-Pierre Rosen wrote: >> Is it possible in Fortran to define three *incompatible* types Length, >> Time, and Speed, and define a "/" operator between Length and Time that >> returns Speed? > Yes, it's possible. There was a long thread inb c.l.f a > year or two ago abou this. It's a bit of a pain in the butt > to cover all the cases. Is E = m*c**2 the same as E=m*c*c? Hmm, where is the problem? As I remember, 2 is an integer in Fortran. I hope it can distinguish signatures ** : R x I -> R and ** : R x R -> R. Or do you mean a geometrical explosion of variants? > Fortran 2003 has polymorphic variables which might make it > easier to write a complete set of units and operators. That > probably would lose some compile time checking. That depends on which kind of polymorphism it is. Ada provides three forms of: 1. generics (like C++ templates) and overloading 2. tagged types (like C++ classes) 3. discriminated types. All three can be used for dimensioned values. The last one is IMO the most promising, because it supports constrained subtypes. It is similar to Positive being a constrained Integer. So, for example, Energy can be statically constrained Dimensioned. Now, when all constraints are statically known, the compiler can potentially remove all run-time checks, as well as any memory overhead required to keep the dimension at run-time. Then there is a problem of constraint propagation. 
Developing a matrix library you surely would like to have dimensioned vectors and matrices all constrained in a "coherent" way rather than on per-element basis. I doubt that either 1. or 2. would be able to do it in an easy way. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 18:21 ` Dmitry A. Kazakov @ 2006-05-23 18:34 ` Brooks Moses 2006-05-24 7:15 ` Dmitry A. Kazakov 2006-05-23 20:33 ` Dick Hendrickson 1 sibling, 1 reply; 314+ messages in thread From: Brooks Moses @ 2006-05-23 18:34 UTC (permalink / raw) Dmitry A. Kazakov wrote: > On Tue, 23 May 2006 17:09:03 GMT, Dick Hendrickson wrote: >>Fortran 2003 has polymorphic variables which might make it >>easier to write a complete set of units and operators. That >>probably would lose some compile time checking. > > That depends on which kind of polymorphism it is. Ada provides three forms > of: > > 1. generics (like C++ templates) and overloading > 2. tagged types (like C++ classes) > 3. discriminated types. [...] > Then there is a problem of constraint propagation. Developing a matrix > library you surely would like to have dimensioned vectors and matrices all > constrained in a "coherent" way rather than on per-element basis. I doubt > that either 1. or 2. would be able to do it in an easy way. For what it's worth, the OpenFOAM CFD program that I've been working with (written in C++) has dimensioned variables, with dimension checking. It's all run-time, though; they've only got one type of "dimensioned variable" (a class with the variable value and a length-seven array containing the exponents of the dimensions) and then assignments and arithmetic operators check to make sure the dimensions agree. What they've done for their array library is to define a dimensioned array by adding dimension information to an array of undimensioned scalars, rather than constructing it directly as an array of dimensioned individual numbers. In practice, it seems to work quite well, and I suspect the run-time checking is generally of negligible cost since it's happening only once per array operation. (Of course, in code where it does happen once per scalar operation, it's rather more expensive.) 
- Brooks -- The "bmoses-nospam" address is valid; no unmunging needed. ^ permalink raw reply [flat|nested] 314+ messages in thread
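Brooks's description of OpenFOAM's scheme translates readily into a few lines of code: one "dimensioned value" type carrying a vector of exponents of the base units, with the checks done at run time by the arithmetic operators. Here is a minimal Python sketch of the same idea (the class and names are invented for illustration; this is not OpenFOAM's actual API):

```python
# Run-time dimension checking in the style Brooks describes: a single
# dimensioned-value type carrying a tuple of exponents for the seven
# SI base dimensions (kg, m, s, K, mol, A, cd).
class Dimensioned:
    def __init__(self, value, dims):
        self.value = value
        self.dims = tuple(dims)  # exponents of the 7 base dimensions

    def __add__(self, other):
        # Addition is only meaningful for identical dimensions.
        if self.dims != other.dims:
            raise TypeError("dimension mismatch: %r vs %r"
                            % (self.dims, other.dims))
        return Dimensioned(self.value + other.value, self.dims)

    def __mul__(self, other):
        # Multiplication adds exponents element-wise.
        dims = tuple(a + b for a, b in zip(self.dims, other.dims))
        return Dimensioned(self.value * other.value, dims)

    def __truediv__(self, other):
        # Division subtracts exponents element-wise.
        dims = tuple(a - b for a, b in zip(self.dims, other.dims))
        return Dimensioned(self.value / other.value, dims)

# dims order: (kg, m, s, K, mol, A, cd)
length = Dimensioned(100.0, (0, 1, 0, 0, 0, 0, 0))   # metres
time   = Dimensioned(9.58, (0, 0, 1, 0, 0, 0, 0))    # seconds

speed = length / time           # exponents combine to m^1 s^-1
print(speed.dims)               # (0, 1, -1, 0, 0, 0, 0)

try:
    _ = length + time           # caught at run time, not compile time
except TypeError as e:
    print("rejected:", e)
```

This also makes Brooks's cost observation concrete: attaching one `dims` tuple (and one check) to a whole array amortizes to almost nothing per element, while wrapping each scalar this way pays the overhead on every operation.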
* Re: Ada vs Fortran for scientific applications 2006-05-23 18:34 ` Brooks Moses @ 2006-05-24 7:15 ` Dmitry A. Kazakov 0 siblings, 0 replies; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-05-24 7:15 UTC (permalink / raw) On Tue, 23 May 2006 11:34:44 -0700, Brooks Moses wrote: > Dmitry A. Kazakov wrote: >> On Tue, 23 May 2006 17:09:03 GMT, Dick Hendrickson wrote: >>>Fortran 2003 has polymorphic variables which might make it >>>easier to write a complete set of units and operators. That >>>probably would lose some compile time checking. >> >> That depends on which kind of polymorphism it is. Ada provides three forms >> of: >> >> 1. generics (like C++ templates) and overloading >> 2. tagged types (like C++ classes) >> 3. discriminated types. > [...] >> Then there is a problem of constraint propagation. Developing a matrix >> library you surely would like to have dimensioned vectors and matrices all >> constrained in a "coherent" way rather than on per-element basis. I doubt >> that either 1. or 2. would be able to do it in an easy way. > > For what it's worth, the OpenFOAM CFD program that I've been working > with (written in C++) has dimensioned variables, with dimension > checking. It's all run-time, though; they've only got one type of > "dimensioned variable" (a class with the variable value and a > length-seven array containing the exponents of the dimensions) and then > assignments and arithmetic operators check to make sure the dimensions > agree. Yes, this is also the variant 3, but without an ability to have subtypes, because the discriminant is not exposed. I implemented a commercial C++ library (used for data acquisition and control) in a similar way. The problem is that one can change both value and dimension of a variable, which is a safety breach. The GPL Ada version I wrote allows things like: subtype Speed is Measure (Velocity); Car_Speed : Speed; -- Only velocities are allowed . . . 
Car_Speed := 10.0 * km / h; -- OK Car_Speed := A; -- Ampere is illegal, Constraint_Error > What they've done for their array library is to define a dimensioned > array by adding dimension information to an array of undimensioned > scalars, rather than constructing it directly as an array of dimensioned > individual numbers. Yes, so did I for similar things (dimensioned fuzzy numbers, linguistic variables etc.) > In practice, it seems to work quite well, and I suspect the run-time > checking is generally of negligible cost since it's happening only once > per array operation. (Of course, in code where it does happen once per > scalar operation, it's rather more expensive.) Overhead exists, and it is relatively high when you do something like: forall Matrix do some per-element operation involving dimensions. To get rid of this overhead one should manually fold dimension checks in all cross-typed operations: like [Dimensioned] Matrix x [Dimensioned] Scalar, [Dimensioned] Matrix x [Dimensioned] Vector. If there are many types, many operations and many arguments, this quickly brings the problem of geometrical explosion back. Basically, this all is a consequence of lacking a constraint propagation mechanism capable of moving dimension checks out of bodies. Theoretically the compiler could do it when all operations were inlined, but practically, I saw no compiler that does it. Then one cannot inline everything. I think the language should provide something for this, in particular, separation of a subroutine body into inlined (dimension checks) and not inlined (numeric semantics) parts. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 18:21 ` Dmitry A. Kazakov 2006-05-23 18:34 ` Brooks Moses @ 2006-05-23 20:33 ` Dick Hendrickson 1 sibling, 0 replies; 314+ messages in thread From: Dick Hendrickson @ 2006-05-23 20:33 UTC (permalink / raw) Dmitry A. Kazakov wrote: > On Tue, 23 May 2006 17:09:03 GMT, Dick Hendrickson wrote: > > >>Jean-Pierre Rosen wrote: > > >>>Is it possible in Fortran to define three *incompatible* types Length, >>>Time, and Speed, and define a "/" operator between Length and Time that >>>returns Speed? >> >>Yes, it's possible. There was a long thread in c.l.f a >>year or two ago about this. It's a bit of a pain in the butt >>to cover all the cases. Is E = m*c**2 the same as E=m*c*c? > > > Hmm, where is any problem? As I remember 2 is integer in Fortran. I hope it > can distinguish signatures ** : R x I -> R and ** : R x R -> R. Or do you > mean a geometrical explosion of variants? > > I was thinking of the geometric explosion. You'd need routines to do (mass) times (velocity squared), (velocity) times (velocity), (mass) times (velocity), and (mass*velocity) * velocity. And intermediate types to hold the partial answers. You probably don't want to say E = momentum*velocity. But that's what you'd get if you write it as m*c*c. Straightforward to cover all of this, but just a ton of cases. Dick Hendrickson ^ permalink raw reply [flat|nested] 314+ messages in thread
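[Editor's note] The explosion Dick describes is what a parametric encoding (Dmitry's option 1, generics/templates) is meant to collapse: if the exponents live in the type, one generic multiply covers every combination, and the intermediate momentum-like type in m*c*c simply falls out of arithmetic on the exponents. A rough C++ sketch with invented names — real libraries such as Boost.Units are far more elaborate:

```cpp
#include <type_traits>

// Exponents of {mass, length, time} carried in the type: the ONE generic
// operator* below replaces the whole family of (mass x velocity),
// (velocity x velocity), (momentum x velocity), ... routines, and all
// dimension checking happens at compile time, at zero run-time cost.
template<int M, int L, int T>
struct Qty { double v; };

template<int M1, int L1, int T1, int M2, int L2, int T2>
Qty<M1 + M2, L1 + L2, T1 + T2> operator*(Qty<M1, L1, T1> a, Qty<M2, L2, T2> b) {
    return {a.v * b.v};
}

using Mass     = Qty<1, 0, 0>;
using Velocity = Qty<0, 1, -1>;
using Momentum = Qty<1, 1, -1>;
using Energy   = Qty<1, 2, -2>;
```

Here `Energy e = m * c * c;` compiles — the intermediate m*c is a Momentum without anyone writing a routine for it — while assigning m*c*c to a Momentum is rejected at compile time.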
* Re: Ada vs Fortran for scientific applications 2006-05-23 17:09 ` Dick Hendrickson 2006-05-23 17:53 ` Georg Bauhaus 2006-05-23 18:21 ` Dmitry A. Kazakov @ 2006-05-24 7:52 ` Jean-Pierre Rosen 2006-05-24 14:50 ` robin 3 siblings, 0 replies; 314+ messages in thread From: Jean-Pierre Rosen @ 2006-05-24 7:52 UTC (permalink / raw) Dick Hendrickson a écrit : > What does Ada say about things like COS(1.1E300)? > It's unclear to me what that could or should mean on a > machine with finite (or at least less than 300 ;) ) digits > of precision. If you are interested in issues with accuracy, I suggest you read annex G of the Ada reference manual, and for this precise issue, G.2.4(10). The idea is that there is a maximum angle threshold, beyond which accuracy cannot be guaranteed. But this is defined and documented. Ada is not in the business of making requirements that would be impossible to meet. Ada is about having actual implementations that work, in the most portable way *possible*. -- --------------------------------------------------------- J-P. Rosen (rosen@adalog.fr) Visit Adalog's web site at http://www.adalog.fr ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 17:09 ` Dick Hendrickson ` (2 preceding siblings ...) 2006-05-24 7:52 ` Jean-Pierre Rosen @ 2006-05-24 14:50 ` robin 2006-05-24 15:19 ` Dick Hendrickson 3 siblings, 1 reply; 314+ messages in thread From: robin @ 2006-05-24 14:50 UTC (permalink / raw) "Dick Hendrickson" <dick.hendrickson@att.net> wrote in message news:PkHcg.90575$Fs1.7198@bgtnsc05-news.ops.worldnet.att.net... > > Ada's is surely better. Knowing that a subscript has to be > in range, because it's checked when a value is assigned to > the subscript variable, has to be more efficient than what > Fortran can do. In general, Fortran has to check the value > of the subscripts on every array reference. It can do this only if it is a compiler option. It is not a feature of the language. > In practice, > most array references take place in DO loops and compilers > can usually hoist the checks outside the loop, so they have > minimal cost at run time. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 14:50 ` robin @ 2006-05-24 15:19 ` Dick Hendrickson 2006-05-24 15:43 ` Dr. Adrian Wrigley ` (3 more replies) 0 siblings, 4 replies; 314+ messages in thread From: Dick Hendrickson @ 2006-05-24 15:19 UTC (permalink / raw) robin wrote: > "Dick Hendrickson" <dick.hendrickson@att.net> wrote in message > news:PkHcg.90575$Fs1.7198@bgtnsc05-news.ops.worldnet.att.net... > >>Ada's is surely better. Knowing that a subscript has to be >>in range, because it's checked when a value is assigned to >>the subscript variable, has to be more efficient than what >>Fortran can do. In general, Fortran has to check the value >>of the subscripts on every array reference. > > > It can do this only if it is a compiler option. > It is not a feature the language. There's an ambiguous "it" in those sentences. ;) But, if "it" refers to Fortran, subscript bounds rules ARE a feature of the language. You are NEVER allowed to execute an out-of-bounds array reference in a Fortran program. In practice, the historical run-time cost of checking bounds was [thought to be] too high, so compilers either didn't do it, or did it under some sort of command line option control. Dick Hendrickson > > >> In practice, >>most array references take place in DO loops and compilers >>can usually hoist the checks outside the loop, so they have >>minimal cost at run time. > > ^ permalink raw reply [flat|nested] 314+ messages in thread
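[Editor's note] The hoisting mentioned in the quoted text can be made concrete. Written out by hand in C++ (a compiler performs the equivalent transformation automatically when it can bound the subscripts up front), the per-reference check turns into a single test before the loop; the function names are invented for the example:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Per-reference checking: at() validates the subscript on every iteration.
double sum_checked(const std::vector<double>& a, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        s += a.at(i);  // one bounds check per array reference
    return s;
}

// Hoisted checking: one test before the loop proves every subscript in
// 0..n-1 is in bounds, so the body can use unchecked access.
double sum_hoisted(const std::vector<double>& a, std::size_t n) {
    if (n > a.size())
        throw std::out_of_range("loop range exceeds array bounds");
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];  // provably in bounds; no per-iteration check
    return s;
}
```

Both versions diagnose the same errors; the second pays for the diagnosis once per loop rather than once per reference, which is why hoisted checks can have near-zero cost at run time.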
* Re: Ada vs Fortran for scientific applications 2006-05-24 15:19 ` Dick Hendrickson @ 2006-05-24 15:43 ` Dr. Adrian Wrigley 2006-05-24 17:12 ` Dick Hendrickson 2006-05-24 16:03 ` Richard E Maine ` (2 subsequent siblings) 3 siblings, 1 reply; 314+ messages in thread From: Dr. Adrian Wrigley @ 2006-05-24 15:43 UTC (permalink / raw) On Wed, 24 May 2006 15:19:23 +0000, Dick Hendrickson wrote: > > > robin wrote: >> "Dick Hendrickson" <dick.hendrickson@att.net> wrote in message >> news:PkHcg.90575$Fs1.7198@bgtnsc05-news.ops.worldnet.att.net... >> >>>Ada's is surely better. Knowing that a subscript has to be >>>in range, because it's checked when a value is assigned to >>>the subscript variable, has to be more efficient than what >>>Fortran can do. In general, Fortran has to check the value >>>of the subscripts on every array reference. >> >> >> It can do this only if it is a compiler option. >> It is not a feature the language. > > There's a ambiguous "it" in those sentences. ;) > > But, if "it" refers to Fortran, subscript bounds rules > ARE a feature of the language. You are NEVER allowed to > execute an out-of-bounds array reference in a Fortran > program. ... So what does the standard say must happen if you attempt such an access? Can a program fail unpredictably under such (rather common!) circumstances - as routinely happens in C and C++, sometimes at great cost? -- Adrian ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 15:43 ` Dr. Adrian Wrigley @ 2006-05-24 17:12 ` Dick Hendrickson 2006-05-24 17:32 ` Richard E Maine 2006-05-24 17:54 ` Dr. Adrian Wrigley 0 siblings, 2 replies; 314+ messages in thread From: Dick Hendrickson @ 2006-05-24 17:12 UTC (permalink / raw) Dr. Adrian Wrigley wrote: > On Wed, 24 May 2006 15:19:23 +0000, Dick Hendrickson wrote: > > >> >>robin wrote: >> >>>"Dick Hendrickson" <dick.hendrickson@att.net> wrote in message >>>news:PkHcg.90575$Fs1.7198@bgtnsc05-news.ops.worldnet.att.net... >>> >>> >>>>Ada's is surely better. Knowing that a subscript has to be >>>>in range, because it's checked when a value is assigned to >>>>the subscript variable, has to be more efficient than what >>>>Fortran can do. In general, Fortran has to check the value >>>>of the subscripts on every array reference. >>> >>> >>>It can do this only if it is a compiler option. >>>It is not a feature the language. >> >>There's a ambiguous "it" in those sentences. ;) >> >>But, if "it" refers to Fortran, subscript bounds rules >>ARE a feature of the language. You are NEVER allowed to >>execute an out-of-bounds array reference in a Fortran >>program. > > ... > > So what does the standard say must happen if you attempt > such an access? Can a program fail unpredictably under > such (rather common!) circumstances - as routinely happens > in C and C++, sometimes at great cost? The Fortran standard says nothing at all about what must happen for most run-time errors. There is a requirement that a compiler be able to diagnose syntax-like errors at compile time. There is also a requirement that some (unspecified) I/O errors and some memory management errors be checked for at run time. The job will abort unless the program uses one of the error detection methods. But for things like subscript bounds errors, or subroutine argument mismatches, the standard doesn't impose anything on the compiler. 
In general, the standard imposes restrictions on standard conforming programs, not on the compiler. This allows compilers to extend the standard in "useful" ways. Technically, a standard conforming program is not allowed to use these extensions, but many do ;). Most compilers implement a command line option to do enhanced syntax checking and report use of extensions. Subscript bounds errors usually go unchecked and do whatever they do. They're really fun to debug because adding a PRINT statement usually moves the effect to some other part of the program. This isn't Fortran's greatest strength ;) . It was a compromise between safety and speed. The other big problem with [old] Fortran programs was messing up the argument list in a procedure call. Separate compilation made this a lot easier to do. The Fortran 90 addition of MODULES essentially closes this hole. Most procedure interfaces now can be explicit and the compiler must check for calling consistency. It's harder to shoot yourself in the foot now, but people can still lie to the compiler. Dick Hendrickson > -- > Adrian > ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 17:12 ` Dick Hendrickson @ 2006-05-24 17:32 ` Richard E Maine 2006-05-24 17:54 ` Dr. Adrian Wrigley 1 sibling, 0 replies; 314+ messages in thread From: Richard E Maine @ 2006-05-24 17:32 UTC (permalink / raw) Dick Hendrickson <dick.hendrickson@att.net> wrote: > The Fortran 90 addition of MODULES essenially closes > this hole. Most procedure interfaces now can be explicit > and the compiler must check for calling consistency. Unless I missed it, I don't think that the standard actually requires compilers to check for calling consistency. I don't think it is a constraint or any of those other things that the standard requires diagnosis of. The standard is "clearly" designed to encourage such diagnosis and to make it "natural" for compilers to do so. To my knowledge, every compiler in existence does so. So perhaps the point is a bit academic, I'll admit. But I'll make it anyway. Of course, if we are being as nitpicky as I am above, you can probably rescue your statement by noting that you said "must" instead of "shall". :-) -- Richard Maine | Good judgment comes from experience; email: my first.last at org.domain| experience comes from bad judgment. org: nasa, domain: gov | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 17:12 ` Dick Hendrickson 2006-05-24 17:32 ` Richard E Maine @ 2006-05-24 17:54 ` Dr. Adrian Wrigley 2006-05-24 18:10 ` Richard E Maine ` (3 more replies) 1 sibling, 4 replies; 314+ messages in thread From: Dr. Adrian Wrigley @ 2006-05-24 17:54 UTC (permalink / raw) On Wed, 24 May 2006 17:12:55 +0000, Dick Hendrickson wrote: > > > Dr. Adrian Wrigley wrote: >> On Wed, 24 May 2006 15:19:23 +0000, Dick Hendrickson wrote: >> >> >>> >>>robin wrote: >>> >>>>"Dick Hendrickson" <dick.hendrickson@att.net> wrote in message >>>>news:PkHcg.90575$Fs1.7198@bgtnsc05-news.ops.worldnet.att.net... >>>> >>>> >>>>>Ada's is surely better. Knowing that a subscript has to be >>>>>in range, because it's checked when a value is assigned to >>>>>the subscript variable, has to be more efficient than what >>>>>Fortran can do. In general, Fortran has to check the value >>>>>of the subscripts on every array reference. >>>> >>>> >>>>It can do this only if it is a compiler option. >>>>It is not a feature the language. >>> >>>There's a ambiguous "it" in those sentences. ;) >>> >>>But, if "it" refers to Fortran, subscript bounds rules >>>ARE a feature of the language. You are NEVER allowed to >>>execute an out-of-bounds array reference in a Fortran >>>program. >> >> ... >> >> So what does the standard say must happen if you attempt >> such an access? Can a program fail unpredictably under >> such (rather common!) circumstances - as routinely happens >> in C and C++, sometimes at great cost? > > The Fortran standard says nothing at all about what must > happen for most run-time errors. There is a requirement > that a compiler be able to diagnose syntax-like errors at > compile time. There is also a requirement that some > (unspecified) I/O errors and some memory management errors > be checked for at run time. The job will abort unless the > program uses one of the error detection methods. 
But for > things like subscript bounds errors, or subroutine argument > mismatches, the standard doesn't impose anything on the > compiler. ... > The other big problem with [old] Fortran programs was > messing up the argument list in a procedure call. > Separate compilation made this a lot easier to do. > The Fortran 90 addition of MODULES essenially closes > this hole. Most procedure interfaces now can be explicit > and the compiler must check for calling consistency. > It's harder to shoot yourself in the foot now, but > people can still lie to the compiler. I think this is an area where Ada really shines. The standard requires numerous checks for consistency at both compile time and runtime. Versions of code that don't match properly can't be linked together or can't be run together (as appropriate). Using the language gives a feeling of integrity of coding, with mistakes often being caught very early on. Unfortunately, the language features for integrity cannot be added to an existing language without breaking old code. This is because the integrity features are often a result of prohibiting "dodgy" code, flawed syntax or misfeatures. The history of the C family of languages illustrates this. I'm not sure where modern Fortran sits in relation to its forbears in terms of safety and security though. It's noteworthy that Ada and Fortran are on convergent paths (modules, user defined types, templates etc). With array subscripts, an exception must be raised if the bounds are exceeded. The same with arithmetic operations. (curiously, compiling Ada under gcc (GNAT), a compilation switch is needed to be standards compliant - a mistake:(). The checks can be switched on and off in the source code as desired. One of the benefits of the compile- and run-time checking is that refactoring code becomes much easier because the compiler will usually tell you which parts haven't been fixed up yet. Languages like C or Perl are at the opposite end of the spectrum, I find. 
From what I read here, Fortran is somewhere in between. -- Adrian ^ permalink raw reply [flat|nested] 314+ messages in thread
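[Editor's note] Ada's ability to switch checks on and off in the source (pragma Suppress and, since Ada 2005, pragma Unsuppress) has only a rough C++ analogue: an assertion-based check that a release build compiles out. A hypothetical wrapper as a sketch — note that Ada's defaults are the reverse (checks on, and a failure raises a catchable Constraint_Error rather than aborting):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical bounds-checked array: the assertion fires in a checked
// build and disappears entirely when compiled with -DNDEBUG, loosely
// mirroring source-level control over suppressible checks.
template <typename T, std::size_t N>
struct CheckedArray {
    T elems[N];

    T& operator[](std::size_t i) {
        assert(i < N && "subscript out of bounds");  // removed under NDEBUG
        return elems[i];
    }
    const T& operator[](std::size_t i) const {
        assert(i < N && "subscript out of bounds");
        return elems[i];
    }
};
```

The design point matches the thread's speed-versus-safety tradeoff: the check costs nothing in the unchecked build, but the unchecked build also silently does "whatever it does" on an out-of-bounds reference.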
* Re: Ada vs Fortran for scientific applications 2006-05-24 17:54 ` Dr. Adrian Wrigley @ 2006-05-24 18:10 ` Richard E Maine 2006-05-24 18:39 ` Nasser Abbasi 2006-05-24 18:34 ` Gordon Sande ` (2 subsequent siblings) 3 siblings, 1 reply; 314+ messages in thread From: Richard E Maine @ 2006-05-24 18:10 UTC (permalink / raw) Dr. Adrian Wrigley <amtw@linuxchip.demon.co.uk.uk.uk> wrote: > It's noteworthy that Ada and Fortran are on convergent > paths (modules, user defined types, templates etc). Yes. On multiple occasions, I have heard Fortran 77 users misleadingly describe Fortran 90 and subsequent versions as like a cross between Fortran 77 and C. I attribute this description mostly to low breadth of language exposure from people who have not seen much other than those two languages and thus narrowly associate features like structures with C. When I hear this misleading description, the one-sentence version of my reply tends to be "No, Fortran 90 is more like a cross between Fortran 77 and Ada." Like most analogies, that one is far from perfect, but it is about as well as I can do with that few words. -- Richard Maine | Good judgment comes from experience; email: my first.last at org.domain| experience comes from bad judgment. org: nasa, domain: gov | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 18:10 ` Richard E Maine @ 2006-05-24 18:39 ` Nasser Abbasi 2006-05-24 19:36 ` Gautier 2006-05-24 19:37 ` Gautier 0 siblings, 2 replies; 314+ messages in thread From: Nasser Abbasi @ 2006-05-24 18:39 UTC (permalink / raw) "Richard E Maine" <nospam@see.signature> wrote in message news:1hfu968.1i5puf26uqbldN%nospam@see.signature... ..... >"No, Fortran 90 is more like a cross between Fortran > 77 and Ada." Like most analogies, that one is far from perfect, but it > is about as well as I can do with that few words. > Well, at least one thing is common between Ada and Fortran: Both are case INSENSITIVE. I think this is a good thing, because I like to see the KEYWORDS in uppercase. This seems to make the code more readable to me. Nasser ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 18:39 ` Nasser Abbasi @ 2006-05-24 19:36 ` Gautier 2006-05-24 19:37 ` Gautier 1 sibling, 0 replies; 314+ messages in thread From: Gautier @ 2006-05-24 19:36 UTC (permalink / raw) Nasser Abbasi: > Well, at least one thing is common between Ada and Fortran: Both are case > INSENSITIVE. Two other points in common are readability (or, non-cryptic syntax) and (Fortran: 77+ ?) full-bracketing (conditional or loop statements terminated by END). Both things are extremely helpful for revising code, which is crucial for scientific programming, and separate the pre- (Pascal, C) and post-1977 compiled languages. G. NB: For a direct answer, e-mail address on the Web site! ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 18:39 ` Nasser Abbasi 2006-05-24 19:36 ` Gautier @ 2006-05-24 19:37 ` Gautier 2006-05-24 19:56 ` Richard E Maine 2006-05-26 2:58 ` robin 1 sibling, 2 replies; 314+ messages in thread From: Gautier @ 2006-05-24 19:37 UTC (permalink / raw) Nasser Abbasi: > Well, at least one thing is common between Ada and Fortran: Both are case INSENSITIVE. Two other points in common are readability (or, non-cryptic syntax) and (Fortran: 77+ ?) full-bracketing (conditional or loop statements terminated by END). Both things are extremely helpful for revising code, which is crucial for scientific programming, and separate the pre- (Pascal, C) and post-1977 compiled languages. G. _______________________________________________________________ Ada programming -- http://www.mysunrise.ch/users/gdm/gsoft.htm NB: For a direct answer, e-mail address on the Web site! ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 19:37 ` Gautier @ 2006-05-24 19:56 ` Richard E Maine 2006-05-30 19:39 ` Craig Powers 2006-05-26 2:58 ` robin 1 sibling, 1 reply; 314+ messages in thread From: Richard E Maine @ 2006-05-24 19:56 UTC (permalink / raw) Gautier <gautier@fakeaddress.nil> wrote: > Nasser Abbasi: > > > Well, at least one thing is common between Ada and Fortran: Both are > > case INSENSITIVE. > > Two other points in common are readability (or, non-cryptic syntax) and > (Fortran: 77+ ?) full-bracketing (conditional or loop statements > terminated by END). Both things are extremely helpful for revising code, > which is crucial for scientific programming, and separate the pre- > (Pascal, C) and post-1977 compiled languages. I had also noticed the similarity between Fortran 90 modules and Ada packages. Not identical by any means, but there are sure some similarities. And the possibility of specifying procedure arguments by keyword instead of just positionally. You find that in some scripting languages. And you find things like that in lots of other contexts, including the syntax typically used to invoke compilers. But in compiled languages, it seems like the feature is rare; it is shared by Fortran 90 and Ada, and then I start slowing down a lot in naming compiled languages in widespread use that have it. -- Richard Maine | Good judgment comes from experience; email: my first.last at org.domain| experience comes from bad judgment. org: nasa, domain: gov | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
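[Editor's note] Compiled languages without keyword arguments tend to fall back on idioms. In C++ the usual workaround is an options struct, either with C++20 designated initializers or, portably, with chained setters; a sketch with invented names:

```cpp
// Options struct with defaults plus chained setters: call sites name only
// the parameters they override, in any order, approximating the Fortran 90
// / Ada keyword-argument style. All names here are invented for the example.
struct PlotOptions {
    int width  = 80;
    int height = 24;

    PlotOptions& with_width(int w)  { width = w;  return *this; }
    PlotOptions& with_height(int h) { height = h; return *this; }
};

int plot_area(const PlotOptions& opt) {
    return opt.width * opt.height;
}

// e.g. plot_area(PlotOptions{}.with_height(30).with_width(100));
```

Unlike real keyword arguments, this needs boilerplate per parameter set and gives no help for existing positional APIs, which supports the point that the built-in feature is rare among compiled languages.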
* Re: Ada vs Fortran for scientific applications 2006-05-24 19:56 ` Richard E Maine @ 2006-05-30 19:39 ` Craig Powers 0 siblings, 0 replies; 314+ messages in thread From: Craig Powers @ 2006-05-30 19:39 UTC (permalink / raw) Richard E Maine wrote: > > And the possibility of specifying procedure arguments by keyword instead > of just positionally. You find that in some scripting languages. And you > find things like that in lots of other contexts, including the syntax > typically used to invoke compilers. But in compiled languages, it seems > like the feature is rare; it is shared by Fortran 90 and Ada, and then I > start slowing down a lot in naming compiled languaages in widespread use > that have it. Visual Basic, which turned into a compiled language around version 4 or version 5. Dunno how widespread the use is as a compiled language, vs. as a scripting language for Excel and Access, though. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 19:37 ` Gautier 2006-05-24 19:56 ` Richard E Maine @ 2006-05-26 2:58 ` robin 1 sibling, 0 replies; 314+ messages in thread From: robin @ 2006-05-26 2:58 UTC (permalink / raw) "Gautier" <gautier@fakeaddress.nil> wrote in message news:4474b632_1@news.bluewin.ch... > Nasser Abbasi: > > > Well, at least one thing is common between Ada and Fortran: Both are case INSENSITIVE. > > Two other points in common are readability (or, non-cryptic syntax) and (Fortran: 77+ ?) > full-bracketing (conditional or loop statements terminated by END). Both things are > extremely helpful for revising code, which is crucial for scientific programming, and > separate the pre- (Pascal, C) and post-1977 compiled languages. Except that Algol had those by 1960 and PL/I in 1966. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 17:54 ` Dr. Adrian Wrigley 2006-05-24 18:10 ` Richard E Maine @ 2006-05-24 18:34 ` Gordon Sande 2006-05-24 18:40 ` Ed Falis 2006-05-24 18:43 ` Ed Falis 2006-05-24 21:04 ` Dick Hendrickson 2006-05-25 3:40 ` robin 3 siblings, 2 replies; 314+ messages in thread From: Gordon Sande @ 2006-05-24 18:34 UTC (permalink / raw) On 2006-05-24 14:54:12 -0300, "Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> said: > On Wed, 24 May 2006 17:12:55 +0000, Dick Hendrickson wrote: > >> >> >> Dr. Adrian Wrigley wrote: >>> On Wed, 24 May 2006 15:19:23 +0000, Dick Hendrickson wrote: >>> >>> >>>> >>>> robin wrote: >>>> >>>>> "Dick Hendrickson" <dick.hendrickson@att.net> wrote in message >>>>> news:PkHcg.90575$Fs1.7198@bgtnsc05-news.ops.worldnet.att.net... >>>>> >>>>> >>>>>> Ada's is surely better. Knowing that a subscript has to be >>>>>> in range, because it's checked when a value is assigned to >>>>>> the subscript variable, has to be more efficient than what >>>>>> Fortran can do. In general, Fortran has to check the value >>>>>> of the subscripts on every array reference. >>>>> >>>>> >>>>> It can do this only if it is a compiler option. >>>>> It is not a feature the language. >>>> >>>> There's a ambiguous "it" in those sentences. ;) >>>> >>>> But, if "it" refers to Fortran, subscript bounds rules >>>> ARE a feature of the language. You are NEVER allowed to >>>> execute an out-of-bounds array reference in a Fortran >>>> program. >>> >>> ... >>> >>> So what does the standard say must happen if you attempt >>> such an access? Can a program fail unpredictably under >>> such (rather common!) circumstances - as routinely happens >>> in C and C++, sometimes at great cost? >> >> The Fortran standard says nothing at all about what must >> happen for most run-time errors. There is a requirement >> that a compiler be able to diagnose syntax-like errors at >> compile time. 
There is also a requirement that some >> (unspecified) I/O errors and some memory management errors >> be checked for at run time. The job will abort unless the >> program uses one of the error detection methods. But for >> things like subscript bounds errors, or subroutine argument >> mismatches, the standard doesn't impose anything on the >> compiler. > > ... > >> The other big problem with [old] Fortran programs was >> messing up the argument list in a procedure call. >> Separate compilation made this a lot easier to do. >> The Fortran 90 addition of MODULES essenially closes >> this hole. Most procedure interfaces now can be explicit >> and the compiler must check for calling consistency. >> It's harder to shoot yourself in the foot now, but >> people can still lie to the compiler. > > I think this is an area that Ada really shines. The standard > requires numerous checks for consistency at both compile > time and runtime. Versions of code that don't match properly > can't be linked together or can't be run together (as appropriate). > Using the language gives a feeling of integrity of coding, > with mistakes often being caught very early on. > > Unfortunately, the language features for integrity cannot > be added to an existing language without breaking old > code. This is because the integrity features are often a result > of prohibiting "dodgy" code, flawed syntax or misfeatures. > The history of the C family of languages illustrates this. > I'm not sure where modern Fortran sits in relation to > its forbears in terms of safety and security though. > It's noteworthy that Ada and Fortran are on convergent > paths (modules, user defined types, templates etc). > > With array subscripts, an exception must be raised if the > bounds are exceeded. The same with arithmetic operations. > (curiously, compiling Ada under gcc (GNAT), a compilation > switch is needed to be standards compliant - a mistake:(). 
> The checks can be switched on and off in the source code > as desired. > > One of the benefits of the compile- and run-time checking > is that refactoring code becomes much easier because the > compiler will usually tell you about what parts haven't > been fixed up yet. Languages like C or Perl are at the > opposite end of the spectrum, I find. From what I read here, > Fortran is somewhere in between. There is a distinction to be made between what the standard requires and what the various compilers offer. Some systems are oriented to the ultimate SpecMark(??) benchmark figures while others offer tightly monitored executions. Subscript checking can be turned on for those systems. Some even go the extra mile of offering checking for usage of undefined (uninitialized) variables. Some undefineds can be caught as a byproduct of flow checking at compile time but others, like array elements, are only possible at run time. Some "real" programmers disdain the use of such tools but others are glad for all the aids that are available. As with most groups there are subgroups. Some Fortran programmers dismiss any notions of less than full exploitation of every last quirk of the hardware and software of the day. Their equivalents in other programming groups are probably the folks who ignore all interrupts. The urban legends have the Fortran error of a DO loop that changed into an assignment because of a typo changing a comma into a period and a satellite was lost. For Ada it is a tossed interrupt that caused a launch failure. Bad practice of one will always be inferior to good practice of the other. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 18:34 ` Gordon Sande @ 2006-05-24 18:40 ` Ed Falis 2006-05-25 22:31 ` Brooks Moses 2006-05-24 18:43 ` Ed Falis 1 sibling, 1 reply; 314+ messages in thread From: Ed Falis @ 2006-05-24 18:40 UTC (permalink / raw) I have to say as an Ada guy, that I'm finding this thread more interesting than most language comparison fests. You Fortran guys are presenting mature, intelligent and interesting perspectives. Kudos to you. - Ed ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 18:40 ` Ed Falis @ 2006-05-25 22:31 ` Brooks Moses 0 siblings, 0 replies; 314+ messages in thread From: Brooks Moses @ 2006-05-25 22:31 UTC (permalink / raw) Ed Falis wrote: > I have to say as an Ada guy, that I'm finding this thread more interesting > than most language comparison fests. You Fortran guys are presenting > mature, intelligent and interesting perspectives. Kudos to you. And kudos to you as well -- I had just been thinking much the same thing about the Ada crossover. I've found it a very thought-provoking thread! - Brooks, posting from comp.lang.fortran -- The "bmoses-nospam" address is valid; no unmunging needed. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 18:34 ` Gordon Sande 2006-05-24 18:40 ` Ed Falis @ 2006-05-24 18:43 ` Ed Falis 2006-05-24 18:59 ` J.F. Cornwall ` (2 more replies) 1 sibling, 3 replies; 314+ messages in thread From: Ed Falis @ 2006-05-24 18:43 UTC (permalink / raw) On Wed, 24 May 2006 14:34:58 -0400, Gordon Sande <g.sande@worldnet.att.net> wrote: > The urban legends have the Fortran error of a DO loop that changed into > an assignment because of a typo changing a comma into a period and > a satellite was lost. For Ada it is a tossed interrupt that caused a > launch failure. Bad practice of one will always be inferior to good > practice of the other. In the Ariane 5 case, it wasn't the language, but mismanagement in applying software appropriate to a launcher with different flight parameters to a new one without review. - Ed ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 18:43 ` Ed Falis @ 2006-05-24 18:59 ` J.F. Cornwall 2006-05-24 19:10 ` Gordon Sande 2006-05-25 3:40 ` robin 2 siblings, 0 replies; 314+ messages in thread From: J.F. Cornwall @ 2006-05-24 18:59 UTC (permalink / raw) Ed Falis wrote: > On Wed, 24 May 2006 14:34:58 -0400, Gordon Sande > <g.sande@worldnet.att.net> wrote: > >> The urban legends have the Fortran error of a DO loop that changed into >> an assignment because of a typo changing a comma into a period and >> a satellite was lost. For Ada it is a tossed interrupt that caused a >> launch failure. Bad practice of one will always be inferior to good >> practice of the other. > > In the Ariane 5 case, it wasn't the language, but mismanagement in > applying software appropriate to a launcher with different flight > parameters to a new one without review. > > - Ed Hence the urban legend part of the reference... Jim ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 18:43 ` Ed Falis 2006-05-24 18:59 ` J.F. Cornwall @ 2006-05-24 19:10 ` Gordon Sande 2006-05-25 3:40 ` robin 2 siblings, 0 replies; 314+ messages in thread From: Gordon Sande @ 2006-05-24 19:10 UTC (permalink / raw) On 2006-05-24 15:43:13 -0300, "Ed Falis" <falis@verizon.net> said: > On Wed, 24 May 2006 14:34:58 -0400, Gordon Sande > <g.sande@worldnet.att.net> wrote: > >> The urban legends have the Fortran error of a DO loop that changed into >> an assignment because of a typo changing a comma into a period and >> a satellite was lost. For Ada it is a tossed interrupt that caused a >> launch failure. Bad practice of one will always be inferior to good >> practice of the other. > > In the Ariane 5 case, it wasn't the language, but mismanagement in > applying software appropriate to a launcher with different flight > parameters to a new one without review. > > - Ed Is that English vs French or Ada vs Fortran? Of course it does not matter as mismanagement transcends both natural and formal languages and is possible for both. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 18:43 ` Ed Falis 2006-05-24 18:59 ` J.F. Cornwall 2006-05-24 19:10 ` Gordon Sande @ 2006-05-25 3:40 ` robin 2006-05-25 15:19 ` Martin Krischik 2 siblings, 1 reply; 314+ messages in thread From: robin @ 2006-05-25 3:40 UTC (permalink / raw) "Ed Falis" <falis@verizon.net> wrote in message news:op.s92jl8t85afhvo@dogen... > On Wed, 24 May 2006 14:34:58 -0400, Gordon Sande > <g.sande@worldnet.att.net> wrote: > > > The urban legends have the Fortran error of a DO loop that changed into > > an assignment because of a typo changing a comma into a period and > > a satellite was lost. For Ada it is a tossed interrupt that caused a > > launch failure. Bad practice of one will always be inferior to good > > practice of the other. > > In the Ariane 5 case, it wasn't the language, but mismanagement in > applying software appropriate to a launcher with different flight > parameters to a new one without review. That wasn't the case. The code was reviewed, and it was decided that the particular conversion didn't require a check for overflow (even though similar conversions in the vicinity had protection). (This was a conversion from float to 16-bit integer). Anyone experienced with real time programming would have pointed out that it was a stupid thing to do. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 3:40 ` robin @ 2006-05-25 15:19 ` Martin Krischik 2006-05-27 14:29 ` robin 0 siblings, 1 reply; 314+ messages in thread From: Martin Krischik @ 2006-05-25 15:19 UTC (permalink / raw) robin wrote: > "Ed Falis" <falis@verizon.net> wrote in message > news:op.s92jl8t85afhvo@dogen... >> On Wed, 24 May 2006 14:34:58 -0400, Gordon Sande >> <g.sande@worldnet.att.net> wrote: >> >> > The urban legends have the Fortran error of a DO loop that changed into >> > an assignment because of a typo changing a comma into a period and >> > a satellite was lost. For Ada it is a tossed interrupt that caused a >> > launch failure. Bad practice of one will always be inferior to good >> > practice of the other. >> >> In the Ariane 5 case, it wasn't the language, but mismanagement in >> applying software appropriate to a launcher with different flight >> parameters to a new one without review. > > That wasn't the case. The code was reviewed, > and it was decided that the particular conversion > didn't require a check for overflow (even though > similar conversions in the vicinity had protection). The review was only done for the Ariane 4 rocket. The real mistake was to reuse the software on the Ariane 5 rocket without rerunning the test suite or redoing the reviews. Martin -- mailto://krischik@users.sourceforge.net Ada programming at: http://ada.krischik.com ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 15:19 ` Martin Krischik @ 2006-05-27 14:29 ` robin 2006-05-27 13:22 ` Georg Bauhaus ` (2 more replies) 0 siblings, 3 replies; 314+ messages in thread From: robin @ 2006-05-27 14:29 UTC (permalink / raw) "Martin Krischik" <krischik@users.sourceforge.net> wrote in message news:1319222.cHklMpk1fa@linux1.krischik.com... > robin wrote: > > > "Ed Falis" <falis@verizon.net> wrote in message > > news:op.s92jl8t85afhvo@dogen... > >> On Wed, 24 May 2006 14:34:58 -0400, Gordon Sande > >> <g.sande@worldnet.att.net> wrote: > >> > >> > The urban legends have the Fortran error of a DO loop that changed into > >> > an assignment because of a typo changing a comma into a period and > >> > a satellite was lost. For Ada it is a tossed interrupt that caused a > >> > launch failure. Bad practice of one will always be inferior to good > >> > practice of the other. > >> > >> In the Ariane 5 case, it wasn't the language, but mismanagement in > >> applying software appropriate to a launcher with different flight > >> parameters to a new one without review. > > > > That wasn't the case. The code was reviewed, > > and it was decided that the particular conversion > > didn't require a check for overflow (even though > > similar conversions in the vicinity had protection). > > The review was only done for the Ariane 4 rocket. The report specifically states that the code was reviewed for Ariane 5, as I just described. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-27 14:29 ` robin @ 2006-05-27 13:22 ` Georg Bauhaus 2006-05-29 11:46 ` Jan Vorbrüggen 2006-05-29 17:37 ` Martin Krischik 2 siblings, 0 replies; 314+ messages in thread From: Georg Bauhaus @ 2006-05-27 13:22 UTC (permalink / raw) On Sat, 2006-05-27 at 14:29 +0000, robin wrote: > "Martin Krischik" <krischik@users.sourceforge.net> wrote in message > news:1319222.cHklMpk1fa@linux1.krischik.com... > > robin wrote: > > > > > "Ed Falis" <falis@verizon.net> wrote in message > > > news:op.s92jl8t85afhvo@dogen... > > >> On Wed, 24 May 2006 14:34:58 -0400, Gordon Sande > > >> <g.sande@worldnet.att.net> wrote: > > >> > > >> > The urban legends ... > > >> > > >> In the Ariane 5 case, it wasn't ... > > > > > > That wasn't the case. ... > > > > The review was only done for the Ariane 4 rocket. > > The report specifically states ... Please, could the non-insiders stop any further short cut exegesis of rocket history in favor of discussing readily accessible language issues? Georg ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-27 14:29 ` robin 2006-05-27 13:22 ` Georg Bauhaus @ 2006-05-29 11:46 ` Jan Vorbrüggen 2006-05-29 17:37 ` Martin Krischik 2 siblings, 0 replies; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-29 11:46 UTC (permalink / raw) >>The review was only done for the Ariane 4 rocket. > The report specifically states that the code was reviewed > for Ariane 5, as I just described. Quote, please. You've claimed this before, and it's not true. I _have_ read the report - have you? Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-27 14:29 ` robin 2006-05-27 13:22 ` Georg Bauhaus 2006-05-29 11:46 ` Jan Vorbrüggen @ 2006-05-29 17:37 ` Martin Krischik 2 siblings, 0 replies; 314+ messages in thread From: Martin Krischik @ 2006-05-29 17:37 UTC (permalink / raw) robin wrote: >> The review was only done for the Ariane 4 rocket. > The report specifically states that the code was reviewed > for Ariane 5, as I just described. Not in the two reports I read: 1) http://ravel.esrin.esa.it/docs/esa-x-1819eng.pdf Pg.11: "However, no test was performed to verify that the SRI would behave correctly when being subjected to the count-down and flight time sequence and the trajectory of Ariane 5" cont. Pg. 11: "The main explanation for the absence of this test has already been mentioned above, i.e. the SRI specification (...) *does not* contain the Ariane 5 trajectory data as a functional requirement" 2) http://www-aix.gsi.de/~giese/swr/ariane5.html "Ein intensiver Test des Navigations- und Hauptrechners wurde nicht unternommen, da die Software bei Ariane 4 erprobt war." [English: "An intensive test of the navigation and main computer was not undertaken, since the software had been proven on Ariane 4."] "Trotz des ganz anderen Verhaltens der Ariane 5 wurde dieser Wert nicht neu überlegt." [English: "Despite the completely different behaviour of Ariane 5, this value was not reconsidered."] "Diese Beweise galten jedoch nicht für die Ariane 5 und wurden dafür auch gar nicht nachvollzogen." [English: "These proofs, however, did not hold for Ariane 5 and were not redone for it."] (Babelfish [1] translations work well on the above German quotes) But thinking about it: "review the code" is not the same as "test the code" or "review the requirement". Martin [1] http://babelfish.altavista.com -- mailto://krischik@users.sourceforge.net Ada programming at: http://ada.krischik.com ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 17:54 ` Dr. Adrian Wrigley 2006-05-24 18:10 ` Richard E Maine 2006-05-24 18:34 ` Gordon Sande @ 2006-05-24 21:04 ` Dick Hendrickson 2006-05-31 4:26 ` robert.corbett 2006-05-25 3:40 ` robin 3 siblings, 1 reply; 314+ messages in thread From: Dick Hendrickson @ 2006-05-24 21:04 UTC (permalink / raw) Dr. Adrian Wrigley wrote: > On Wed, 24 May 2006 17:12:55 +0000, Dick Hendrickson wrote: [snip, discussion mostly of subscript bounds checking and some discussion of subroutine calling consistency] > I think this is an area that Ada really shines. The standard > requires numerous checks for consistency at both compile > time and runtime. Versions of code that don't match properly > can't be linked together or can't be run together (as appropriate). > Using the language gives a feeling of integrity of coding, > with mistakes often being caught very early on. > > Unfortunately, the language features for integrity cannot > be added to an existing language without breaking old > code. This is because the integrity features are often a result > of prohibiting "dodgy" code, flawed syntax or misfeatures. Yes, in Fortran 90 the decision was made to allow complete compatibility with existing standard-conforming code. For good or bad, old code written in the 60s dealt with 32,000 word memories and, as a result, the standard adopted features that let memory be reused and viewed in different ways. COMMON, EQUIVALENCE, the ability to magically reshape arrays across the CALL boundary, alternate ENTRY points in subroutines and functions all, in my opinion, trace their ancestry to dealing with small memories and slow (or nonexistent) rotating storage. Keeping those codes alive required some compromises. The need for separate compilation of 1,000,000 line programs and the use of well tested existing libraries (often in C or assembly) basically prevented Fortran from adopting strict CALL interface rules. 
However, I think more and more libraries are being retro-fitted with a clean module interface where possible. So things are getting better. There's no real excuse for writing a new program without using modules to specify the interfaces, and this essentially guarantees that the compiler will do the right thing. One serious practical problem with large codes has been compilation cascades. Any change to a low-level module almost always forced a recompilation of everything that used the module, even if the change had no effect on the interfaces. This is mostly due to the way make interacts with the commonest implementation of modules, I think. This will be fixed (or at least changed ;) ) in F2008. Dick Hendrickson ^ permalink raw reply [flat|nested] 314+ messages in thread
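[Editorial sketch: the module-based interface checking Dick describes looks roughly like the following Fortran 90; the module, procedure, and variable names here are invented for illustration.]

```fortran
! A module makes the interface of SOLVE explicit, so the compiler can
! check every call against it instead of trusting the programmer.
module solvers
contains
  subroutine solve(a, n)
    integer, intent(in)    :: n
    real,    intent(inout) :: a(n)
    a = 0.0                ! placeholder body
  end subroutine solve
end module solvers

program demo
  use solvers
  real :: x(5)
  call solve(x, 5)         ! checked against the explicit interface
! call solve(5, x)         ! would be a compile-time error, not a silent crash
end program demo
```

With the interface explicit via the module, a mismatched argument list is rejected at compile time rather than corrupting memory at run time, which is the hole the old separate-compilation model left open.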
* Re: Ada vs Fortran for scientific applications 2006-05-24 21:04 ` Dick Hendrickson @ 2006-05-31 4:26 ` robert.corbett 0 siblings, 0 replies; 314+ messages in thread From: robert.corbett @ 2006-05-31 4:26 UTC (permalink / raw) Dick Hendrickson wrote: > There's no real excuse for writting a new program > without using modules to specify the interfaces and > this essentially guarantees that the compiler will > do the right thing. One serious practical problem > with large codes has been compilation cascades. Any > change to a low-level module almost always forced a > recompilation of everything that used the module, even > if the change had no effect on the interfaces. This > is mostly due to the way make interacts with the > commonest implementation of modules, I think. This > will be fixed (or at least changed ;) ) in F2008. There is a way to write makefiles that avoids recompilation cascades without omitting essential dependencies. The problem is it takes a make expert to figure out how to do it. Sun's make expert showed me how to do it. A greater problem is that different implementations of modules require different makefiles. A UNIX/Linux standard for implementing modules would be a boon to users. Bob Corbett ^ permalink raw reply [flat|nested] 314+ messages in thread
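[Editorial sketch: one common shape of the cascade-avoiding makefile technique Bob alludes to — not necessarily Sun's version; file names and the compiler variable are invented, and as he notes the details depend on how the vendor implements .mod files.]

```make
FC = gfortran   # assumed compiler

# Recompiling the module always rewrites solvers.mod, but the saved
# copy is replaced only when the .mod contents actually changed.
solvers.mod.saved: solvers.f90
	$(FC) -c solvers.f90
	@cmp -s solvers.mod $@ || cp solvers.mod $@

# Users depend on the saved copy, whose timestamp moves only on a real
# interface change, so an implementation-only edit does not cascade.
main.o: main.f90 solvers.mod.saved
	$(FC) -c main.f90
```

The essential dependency (main.o on the module's interface) is kept, but the timestamp that make compares is decoupled from the recompilation of the module body.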
* Re: Ada vs Fortran for scientific applications 2006-05-24 17:54 ` Dr. Adrian Wrigley ` (2 preceding siblings ...) 2006-05-24 21:04 ` Dick Hendrickson @ 2006-05-25 3:40 ` robin 3 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-05-25 3:40 UTC (permalink / raw) "Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> wrote in message news:pan.2006.05.24.17.53.03.81353@linuxchip.demon.co.uk.uk.uk... > On Wed, 24 May 2006 17:12:55 +0000, Dick Hendrickson wrote: > > Dr. Adrian Wrigley wrote: > >> On Wed, 24 May 2006 15:19:23 +0000, Dick Hendrickson wrote: > >> > >> > >>> > >>>robin wrote: > >>> > >>>>"Dick Hendrickson" <dick.hendrickson@att.net> wrote in message > >>>>news:PkHcg.90575$Fs1.7198@bgtnsc05-news.ops.worldnet.att.net... > >>>> > >>>>>Ada's is surely better. Knowing that a subscript has to be > >>>>>in range, because it's checked when a value is assigned to > >>>>>the subscript variable, has to be more efficient than what > >>>>>Fortran can do. In general, Fortran has to check the value > >>>>>of the subscripts on every array reference. > >>>> > >>>> > >>>>It can do this only if it is a compiler option. > >>>>It is not a feature the language. > >>> > >>>There's a ambiguous "it" in those sentences. ;) > >>> > >>>But, if "it" refers to Fortran, subscript bounds rules > >>>ARE a feature of the language. You are NEVER allowed to > >>>execute an out-of-bounds array reference in a Fortran > >>>program. > >> > >> ... > >> > >> So what does the standard say must happen if you attempt > >> such an access? Can a program fail unpredictably under > >> such (rather common!) circumstances - as routinely happens > >> in C and C++, sometimes at great cost? > > > > The Fortran standard says nothing at all about what must > > happen for most run-time errors. There is a requirement > > that a compiler be able to diagnose syntax-like errors at > > compile time. 
There is also a requirement that some > > (unspecified) I/O errors and some memory management errors > > be checked for at run time. The job will abort unless the > > program uses one of the error detection methods. But for > > things like subscript bounds errors, or subroutine argument > > mismatches, the standard doesn't impose anything on the > > compiler. > > ... > > > The other big problem with [old] Fortran programs was > > messing up the argument list in a procedure call. > > Separate compilation made this a lot easier to do. > > The Fortran 90 addition of MODULES essenially closes > > this hole. Most procedure interfaces now can be explicit > > and the compiler must check for calling consistency. > > It's harder to shoot yourself in the foot now, but > > people can still lie to the compiler. > > I think this is an area that Ada really shines. The standard > requires numerous checks for consistency at both compile > time and runtime. Versions of code that don't match properly > can't be linked together or can't be run together (as appropriate). > Using the language gives a feeling of integrity of coding, > with mistakes often being caught very early on. > > Unfortunately, the language features for integrity cannot > be added to an existing language without breaking old > code. That was not the case for Fortran. Old Fortran codes can still run, even though the language now provides the means for consistency checks. > This is because the integrity features are often a result > of prohibiting "dodgy" code, flawed syntax or misfeatures. > The history of the C family of languages illustrates this. > I'm not sure where modern Fortran sits in relation to > its forbears in terms of safety and security though. > It's noteworthy that Ada and Fortran are on convergent > paths (modules, user defined types, templates etc). > > With array subscripts, an exception must be raised if the > bounds are exceeded. 
As is the case with PL/I (given that the programmer has enabled that check). > The same with arithmetic operations. > (curiously, compiling Ada under gcc (GNAT), a compilation > switch is needed to be standards compliant - a mistake:(). > The checks can be switched on and off in the source code > as desired. As is the case with PL/I. > One of the benefits of the compile- and run-time checking > is that refactoring code becomes much easier because the > compiler will usually tell you about what parts haven't > been fixed up yet. Languages like C or Perl are at the > opposite end of the spectrum, I find. From what I read here, > Fortran is somewhere in between. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 15:19 ` Dick Hendrickson 2006-05-24 15:43 ` Dr. Adrian Wrigley @ 2006-05-24 16:03 ` Richard E Maine 2006-05-24 19:08 ` glen herrmannsfeldt 2006-05-25 3:40 ` robin 2006-05-25 11:32 ` Martin Krischik 3 siblings, 1 reply; 314+ messages in thread From: Richard E Maine @ 2006-05-24 16:03 UTC (permalink / raw) Dick Hendrickson <dick.hendrickson@att.net> wrote: > There's a ambiguous "it" in those sentences. ;) Ah, Dick. You missed such a great opportunity to phrase that self-referentially as just "It is ambiguous." :-) -- Richard Maine | Good judgment comes from experience; email: my first.last at org.domain| experience comes from bad judgment. org: nasa, domain: gov | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 16:03 ` Richard E Maine @ 2006-05-24 19:08 ` glen herrmannsfeldt 0 siblings, 0 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-05-24 19:08 UTC (permalink / raw) Richard E Maine wrote: > Dick Hendrickson <dick.hendrickson@att.net> wrote: >>There's a ambiguous "it" in those sentences. ;) > Ah, Dick. You missed such a great opportunity to phrase that > self-referentially as just "It is ambiguous." :-) I was going to say: "It depends on what the meaning of the word 'it' is." -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 15:19 ` Dick Hendrickson 2006-05-24 15:43 ` Dr. Adrian Wrigley 2006-05-24 16:03 ` Richard E Maine @ 2006-05-25 3:40 ` robin 2006-05-25 5:04 ` Nasser Abbasi 2006-05-25 11:02 ` Ada vs Fortran for scientific applications Dan Nagle 2006-05-25 11:32 ` Martin Krischik 3 siblings, 2 replies; 314+ messages in thread From: robin @ 2006-05-25 3:40 UTC (permalink / raw) "Dick Hendrickson" <dick.hendrickson@att.net> wrote in message news:%P_cg.155733$eR6.26337@bgtnsc04-news.ops.worldnet.att.net... > robin wrote: > > "Dick Hendrickson" <dick.hendrickson@att.net> wrote in message > > news:PkHcg.90575$Fs1.7198@bgtnsc05-news.ops.worldnet.att.net... > > > >>Ada's is surely better. Knowing that a subscript has to be > >>in range, because it's checked when a value is assigned to > >>the subscript variable, has to be more efficient than what > >>Fortran can do. In general, Fortran has to check the value > >>of the subscripts on every array reference. > > > It can do this only if it is a compiler option. > > It is not a feature the language. > > There's a ambiguous "it" in those sentences. ;) > > But, if "it" refers to Fortran, subscript bounds rules > ARE a feature of the language. Subscript bounds checking is not part of the Fortran language. > You are NEVER allowed to > execute an out-of-bounds array reference in a Fortran > program. In practice, the historical run-time cost of > checking bounds was [thought to be] too high, so compilers > either didn't do it, or did it under some sort of command > line option control. But in some languages [PL/I included] bounds checking is part of the language, and can be controlled by the programmer. Subscript checking is an important part of any program. > Dick Hendrickson ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 3:40 ` robin @ 2006-05-25 5:04 ` Nasser Abbasi 2006-05-25 6:04 ` Richard Maine 1 sibling, 1 reply; 314+ messages in thread From: Nasser Abbasi @ 2006-05-25 5:04 UTC (permalink / raw) "robin" <robin_v@bigpond.com> wrote in message news:6H9dg.10258$S7.9150@news-server.bigpond.net.au... > "Dick Hendrickson" <dick.hendrickson@att.net> wrote in message > news:%P_cg.155733$eR6.26337@bgtnsc04-news.ops.worldnet.att.net... > >> robin wrote: >> > "Dick Hendrickson" <dick.hendrickson@att.net> wrote in message >> > news:PkHcg.90575$Fs1.7198@bgtnsc05-news.ops.worldnet.att.net... >> > >> >>Ada's is surely better. Knowing that a subscript has to be >> >>in range, because it's checked when a value is assigned to >> >>the subscript variable, has to be more efficient than what >> >>Fortran can do. In general, Fortran has to check the value >> >>of the subscripts on every array reference. >> >> > It can do this only if it is a compiler option. >> > It is not a feature the language. >> >> There's a ambiguous "it" in those sentences. ;) >> >> But, if "it" refers to Fortran, subscript bounds rules >> ARE a feature of the language. > > Subscript bounds checking is not part of the Fortran language. > I just did this simple test, declare an array and go overbound and see if we get a run-time error:

----------------- FORTRAN 95 ------
$ g95 --version
G95 (GCC 4.0.2 (g95!) Mar 3 2006)
Copyright (C) 2002-2005 Free Software Foundation, Inc.

$ cat f.f90
PROGRAM MAIN
INTEGER A(10)
DO I=1,11
A(I)=0
END DO
END PROGRAM

$ g95 f.f90
$ ./a.exe
$                <------------------- NO runtime ERROR

---------------- ADA gnat2005 ----------
$ cat main.adb
procedure Main is

   A : array( INTEGER RANGE 1..10) OF INTEGER;
BEGIN
   FOR I IN 1..11 LOOP
      A(I):=0;
   END LOOP;
END Main;

gnatmake etc.....
successful compilation/build

$ ./main.exe
raised CONSTRAINT_ERROR : main.adb:6 index check failed   <---- ERROR

Nasser ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 5:04 ` Nasser Abbasi @ 2006-05-25 6:04 ` Richard Maine 2006-05-25 10:42 ` Shmuel (Seymour J.) Metz ` (8 more replies) 0 siblings, 9 replies; 314+ messages in thread From: Richard Maine @ 2006-05-25 6:04 UTC (permalink / raw) Nasser Abbasi <nma@12000.org> wrote: > I just did this simple test, declare an array and go overbound and see if we > get a run-time error: ... > $ g95 f.f90 ... > $ <------------------- NO runtime ERROR This part of the thread has started drifting away from relevance to much of anything, but that particular sample is just drifting yet further. It illustrates neither much about subscript bounds rules being part of the language nor about bounds checking being part of the language, which were the two topics mentioned earlier in the subthread. Instead, the example illustrates only that g95 does not have the bounds check option enabled by default, which is yet a third question (and one mentioned in more generality earlier). As with most compilers, g95 does have a bounds check option; it just isn't enabled by default. Compiling your same example, but asking for bounds checking, gets it. In particular, compiling and running your example code on my Mac here with g95 -fbounds-check clf.f90 ./a.out Gives me: At line 4 of file clf.f90 Traceback: not available, compile with -ftrace=frame or -ftrace=full Fortran runtime error: Array element out of bounds: 11 in (1:10), dim=1 which is, in fact, more detailed than the message you showed from gnat. (Turning on the trace options gets rid of the message about not having one, but it is trivial and adds nothing else useful for this example.) -- Richard Maine | Good judgement comes from experience; email: last name at domain . net | experience comes from bad judgement. domain: summertriangle | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 6:04 ` Richard Maine @ 2006-05-25 10:42 ` Shmuel (Seymour J.) Metz 2006-05-25 15:09 ` Richard E Maine 2006-05-25 12:09 ` Dr. Adrian Wrigley ` (7 subsequent siblings) 8 siblings, 1 reply; 314+ messages in thread From: Shmuel (Seymour J.) Metz @ 2006-05-25 10:42 UTC (permalink / raw) In <1hfv5wb.1x4ab1tbdzk7eN%nospam@see.signature>, on 05/24/2006 at 11:04 PM, nospam@see.signature (Richard Maine) said: >As with most compilers, g95 does have a bounds check option; it just >isn't enabled by default. If the language semantics require checking then that is a compiler bug. In the case of PL/I, the programmer can use (SUBSCRIPTRANGE) and (NOSUBSCRIPTRANGE) to control array bounds checking, and would be justifiably upset if the compiler ignored the requests. The same applies to string bounds checking, with different condition prefixes. -- Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel> Unsolicited bulk E-mail subject to legal action. I reserve the right to publicly post or ridicule any abusive E-mail. Reply to domain Patriot dot net user shmuel+news to contact me. Do not reply to spamtrap@library.lspace.org ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 10:42 ` Shmuel (Seymour J.) Metz @ 2006-05-25 15:09 ` Richard E Maine 2006-05-25 19:39 ` Shmuel (Seymour J.) Metz 0 siblings, 1 reply; 314+ messages in thread From: Richard E Maine @ 2006-05-25 15:09 UTC (permalink / raw) Shmuel (Seymour J.) Metz <spamtrap@library.lspace.org.invalid> wrote: > In <1hfv5wb.1x4ab1tbdzk7eN%nospam@see.signature>, on 05/24/2006 > at 11:04 PM, nospam@see.signature (Richard Maine) said: > > >As with most compilers, g95 does have a bounds check option; it just > >isn't enabled by default. > > If the language semantics require checking.... It was mentioned multiple times previously in the thread... oh, but you are probably posting from the pl1 group, which wasn't in that part of the thread. Robin apparently added the pl1 group to the list, which I hadn't noticed until just now. I have no idea whether he also added any pl1-relevant content or not, as I have him kill-filed. In any case, no, the Fortran language does not require such checking, making the clause that I elided from the above sentence moot. The language rules on bounds are requirements on the programmer - not the compiler. Pretty much all compilers have a bounds checking option; most of them have it off by default. I will refrain from debating the wisdom of this, a comparison of it with PL1, or indeed, anything in a Fortran vs PL1 subthread that has Robin as a participant. I won't read his postings, and it would be too hard to participate usefully in a discussion where I didn't see half the postings. -- Richard Maine | Good judgment comes from experience; email: my first.last at org.domain| experience comes from bad judgment. org: nasa, domain: gov | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 15:09 ` Richard E Maine @ 2006-05-25 19:39 ` Shmuel (Seymour J.) Metz 0 siblings, 0 replies; 314+ messages in thread From: Shmuel (Seymour J.) Metz @ 2006-05-25 19:39 UTC (permalink / raw) In <1hfvver.ahdy9h1018b36N%nospam@see.signature>, on 05/25/2006 at 08:09 AM, nospam@see.signature (Richard E Maine) said: >It was mentioned multiple times previously in the thread... oh, but >you are probably posting from the pl1 group, which wasn't in that >part of the thread. True, but I believe that the comment applies just as much to Ada as it does to FORTRAN. >as I have him kill-filed. Your filters, your rules. I have one of the FORTRAN trolls filtered[1]. >Pretty much all compilers have a bounds checking option; most of >them have it of by default. Again, that is true for FORTRAN but is not true for compilers of languages for which bounds checking is part of the semantics. [1] I won't identify him, but I wouldn't be surprised if a lot of the FORTRAN regulars have him filtered as well. -- Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel> Unsolicited bulk E-mail subject to legal action. I reserve the right to publicly post or ridicule any abusive E-mail. Reply to domain Patriot dot net user shmuel+news to contact me. Do not reply to spamtrap@library.lspace.org ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 6:04 ` Richard Maine 2006-05-25 10:42 ` Shmuel (Seymour J.) Metz @ 2006-05-25 12:09 ` Dr. Adrian Wrigley 2006-05-25 12:42 ` Dan Nagle ` (4 more replies) [not found] ` <pan.2006.05.25.12.11.52.919554@linuxchip.demon.co.uk.u <2006052514574816807-gsande@worldnetattnet> ` (6 subsequent siblings) 8 siblings, 5 replies; 314+ messages in thread From: Dr. Adrian Wrigley @ 2006-05-25 12:09 UTC (permalink / raw) On Wed, 24 May 2006 23:04:03 -0700, Richard Maine wrote: > Nasser Abbasi <nma@12000.org> wrote: > >> I just did this simple test, declare an array and go overbound and see if we >> get a run-time error: > ... >> $ g95 f.f90 > ... >> $ <------------------- NO runtime ERROR > > This part of the thread has started drifting away from relevance to much > of anything, but that particular sample is just drifting yet further. It > illustrates neither much about subscript bounds rules being part of the > language nor about bounds checking being part of the language, which > were the two topics mentioned earlier in the subthread. > > Instead, the example illustrates only that g95 does not have the bounds > check option enabled by default, which is yet a third question (and one > mentioned in more generality earlier). > > As with most compilers, g95 does have a bounds check option; it just > isn't enabled by default. Compiling your same example, but asking for > bounds checking, gets it. In particular, compiling and running your > example code on my Mac here with > > g95 -fbounds-check clf.f90 > ./a.out > > Gives me: > > At line 4 of file clf.f90 > Traceback: not available, compile with -ftrace=frame or -ftrace=full > Fortran runtime error: Array element out of bounds: 11 in (1:10), dim=1 > > which is, in fact, more detailed than the message you showed from gnat. > (Turning on the trace options gets rid of the message about not having > one, but it is trivial and adds nothing else useful for this example.) 
Bounds checking code is not needed if it can be proved never to fail. Sometimes the compiler can do that. Sometimes the programmer can do that, even if the compiler can't. This is why the *source code* should be able to disable and re-enable checks with fine granularity. Programmers can comment the code as to why checks are unnecessary and disable them. This fine-grain control over checking needs to be standardized across compilers, otherwise source files become non-portable. (note: IIRC, the Ariane 5 launch failure was linked to disabling a range check after careful analysis... of Ariane 4 trajectory) A couple of questions about Fortran: Are bounds check failures trappable in a standard way so the program can continue? Can it be controlled on a finer grain than per compilation? Do user defined numerical types have restricted bounds too? The adverse consequences of exceeding bounds can be seen to outweigh the (usually) modest costs in code size and performance that even mature code should ship with checks enabled, IMO. Compilers generally should be shipped with the 'failsafe' options on by default. -- Adrian ^ permalink raw reply [flat|nested] 314+ messages in thread
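[Editorial sketch: for comparison, the fine-grained, in-source control of checks that Adrian asks for is standard in Ada via pragma Suppress; the procedure and array here are invented for illustration.]

```ada
procedure Demo is
   A : array (1 .. 10) of Integer := (others => 0);
begin
   --  Checks are on by default.  Where analysis shows they cannot
   --  fail, they can be suppressed for a local scope, with the
   --  justification recorded right next to the pragma.
   declare
      pragma Suppress (Index_Check);  --  safe: loop runs over A'Range
   begin
      for I in A'Range loop
         A (I) := I;
      end loop;
   end;
end Demo;
```

Because the pragma is scoped to the inner block, checking resumes automatically afterwards, which is exactly the source-level granularity (with an attached comment explaining why) argued for above.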
* Re: Ada vs Fortran for scientific applications 2006-05-25 12:09 ` Dr. Adrian Wrigley @ 2006-05-25 12:42 ` Dan Nagle 2006-05-25 12:45 ` Gordon Sande ` (3 subsequent siblings) 4 siblings, 0 replies; 314+ messages in thread From: Dan Nagle @ 2006-05-25 12:42 UTC (permalink / raw) Hello, I've trimmed c.l.pl1 to stay on-topic. Dr. Adrian Wrigley wrote: <snip> > A couple of questions about Fortran: > Are bounds check failures trappable in a standard way so the > program can continue? Generally, no. The Fortran standard has never defined compiler directives, or what Ada calls pragmas. These are always defined by the compiler vendor. Whether the vendor chooses to report-and-continue or report-and-stop might be controllable or might not, depending on other compiler options. There is also an interaction with the Fortran 66 practice of declaring assumed size arrays to have an extent of 1, again, practice varies by vendor. > Can it be controlled on a finer grain than per compilation? Well, per compilation unit (roughly, procedure). > Do user defined numerical types have restricted bounds too? This can be done via a user defined assignment, but it must be written by the application programmer. > The adverse consequences of exceeding bounds can be seen to > outweigh the (usually) modest costs in code size and performance that > even mature code should ship with checks enabled, IMO. > Compilers generally should be shipped with the 'failsafe' > options on by default. Points well taken. -- Cheers! Dan Nagle Purple Sage Computing Solutions, Inc. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 12:09 ` Dr. Adrian Wrigley 2006-05-25 12:42 ` Dan Nagle @ 2006-05-25 12:45 ` Gordon Sande 2006-05-25 16:23 ` Checking for Undefined [was Re: Ada vs Fortran for scientific applications] Bob Lidral ` (2 more replies) 2006-05-25 16:25 ` Bounds Check Overhead [was: Re: Ada vs Fortran for scientific applications] Bob Lidral ` (2 subsequent siblings) 4 siblings, 3 replies; 314+ messages in thread From: Gordon Sande @ 2006-05-25 12:45 UTC (permalink / raw) On 2006-05-25 09:09:59 -0300, "Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> said: > On Wed, 24 May 2006 23:04:03 -0700, Richard Maine wrote: > >> Nasser Abbasi <nma@12000.org> wrote: >> >>> I just did this simple test, declare an array and go overbound and see if we >>> get a run-time error: >> ... >>> $ g95 f.f90 >> ... >>> $ <------------------- NO runtime ERROR >> >> This part of the thread has started drifting away from relevance to much >> of anything, but that particular sample is just drifting yet further. It >> illustrates neither much about subscript bounds rules being part of the >> language nor about bounds checking being part of the language, which >> were the two topics mentioned earlier in the subthread. >> >> Instead, the example illustrates only that g95 does not have the bounds >> check option enabled by default, which is yet a third question (and one >> mentioned in more generality earlier). >> >> As with most compilers, g95 does have a bounds check option; it just >> isn't enabled by default. Compiling your same example, but asking for >> bounds checking, gets it. 
In particular, compiling and running your >> example code on my Mac here with >> >> g95 -fbounds-check clf.f90 >> ./a.out >> >> Gives me: >> >> At line 4 of file clf.f90 >> Traceback: not available, compile with -ftrace=frame or -ftrace=full >> Fortran runtime error: Array element out of bounds: 11 in (1:10), dim=1 >> >> which is, in fact, more detailed than the message you showed from gnat. >> (Turning on the trace options gets rid of the message about not having >> one, but it is trivial and adds nothing else useful for this example.) > > Bounds checking code is not needed if it can be proved never > to fail. Sometimes the compiler can do that. Sometimes the > programmer can do that, even if the compiler can't. > This is why the *source code* should be able to disable and re-enable > checks with fine granularity. Programmers can comment the code > as to why checks are unnecessary and disable them. This fine-grain > control over checking needs to be standardized across compilers, > otherwise source files become non-portable. > (note: IIRC, the Ariane 5 launch failure was linked to disabling a > range check after careful analysis... of Ariane 4 trajectory) > > A couple of questions about Fortran: > Are bounds check failures trappable in a standard way so the > program can continue? > Can it be controlled on a finer grain than per compilation? I am a big fan of subscript checking and undefined variable checking. I have found that all of the errors that these aids have found have been in parts of my programs that I believed to be free of such errors because I had looked hard at them some time before. Either I had not done a good job of looking or the assumptions underlying the look had changed. That seems to be the nature of the beast. I can avoid (usually!) the trivial bugs. I like getting help once the bug is not trivial. (Maybe that is a definition of trivial.) Fine grained control would have been of no use, and in fact harmful, if I had tried to use it. 
My take is it makes a great checklist feature that may help get past "desired feature checklists" but is otherwise not of any real benefit. The check is great but fine grained control is not. > Do user defined numerical types have restricted bounds too? That is a Pascal-like feature that I miss in Fortran. So I do my own checking. Because of my problem domain I have to be proactive in checking parameter values on entry to new procedures even if the invocation is from one that has already checked. The buzzword is programming by contract, if I have read the programming-fashion-of-the-day rags correctly. How many Ada systems can match the undefined variable checking of the old WatFor or the current Salford CheckMate or the Lahey/Fujitsu global checking? It seems to be a thing associated with places that run student cafeteria computing on mainframes. Not much used anymore. There was a similar student checkout PL/I from Cornell if I recall correctly. > The adverse consequences of exceeding bounds can be seen to > outweigh the (usually) modest costs in code size and performance that > even mature code should ship with checks enabled, IMO. > Compilers generally should be shipped with the 'failsafe' > options on by default. ^ permalink raw reply [flat|nested] 314+ messages in thread
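Gordon's discipline of proactively checking parameter values on entry is what Ada subtypes give essentially for free: the compiler inserts the check at the point of call. A hedged sketch (procedure and subtype names are invented for illustration):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Contract_Demo is
   subtype Percent is Integer range 0 .. 100;

   procedure Apply (P : Percent) is
   begin
      --  The body can rely on P being in 0 .. 100; no manual check.
      Put_Line ("got" & Integer'Image (P));
   end Apply;

   X : Integer := 101;
begin
   Apply (42);   --  in range: fine
   Apply (X);    --  raises Constraint_Error on entry to Apply
exception
   when Constraint_Error =>
      Put_Line ("out-of-range argument rejected");
end Contract_Demo;
```

The check is declared once, at the subtype, rather than repeated defensively inside every procedure body.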
* Checking for Undefined [was Re: Ada vs Fortran for scientific applications] 2006-05-25 12:45 ` Gordon Sande @ 2006-05-25 16:23 ` Bob Lidral 2006-05-25 17:48 ` Nasser Abbasi 2006-05-25 17:57 ` Gordon Sande 2006-05-26 2:58 ` Ada vs Fortran for scientific applications robin 2006-07-09 20:52 ` adaworks 2 siblings, 2 replies; 314+ messages in thread From: Bob Lidral @ 2006-05-25 16:23 UTC (permalink / raw) Gordon Sande wrote: > [...] > How many Ada systems can match the undefined variable checking of the > old WatFor or the current Salford CheckMate or the Lahey/Fujitsu > global checking? It seems to be a thing associated with places that > run student cafteria computing on mainframes. Not much used anymore. > There was a similar student checkout PL/I from Cornell if I recall > correctly. > That's one of the features I miss about the old CDC CYBER architectures. Their (one's complement, sigh) numeric format supported plus or minus infinity (e.g., divide a non-zero number by zero) and plus or minus indefinite (e.g., divide zero by zero, infinity by infinity, or use an "indefinite" value anywhere in a divide operation). IIRC (it has been a long time) these values were represented by specific bit patterns in just the sign and exponent fields. (It also helped that the CYBER floating point exponent bias was such that the exponent field was zero for integer values so that integer values representable by 48 or fewer bits were represented by the same bit patterns in either floating point or fixed point.) Originally, CYBERs only had an 18-bit address space which was extended in later models to 21 bits and maybe even to 24 bits in still later models. The operating system required the high-order bit of any of those address fields to be zero for user programs. There was an option for the loader that allowed the exploitation of these hardware design features to provide automatic checking for uninitialized values at run time with no run time overhead. 
The loader provided a way to specify values to be placed into memory for any uninitialized program data (I don't remember whether the default was no value -- e.g., use previous process's leftovers -- or zero). The best patterns to choose for such initial values were {plus | minus} {infinity | indefinite} ORed with a word with bits 17, 20, and 23 set ORed with the load address of the word Unless one enabled the use of infinity and indefinite values for arithmetic processing (I never saw that done, but I'm sure some customer somewhere used it), any use of such values in an arithmetic operation would be detected by the hardware and stop the program with an error message containing the value(s) that caused the problem. Further, any attempt to use such a value as an address would cause the hardware to stop the program with an error because the high-order bit of the address field would be set. Although it didn't provide direct trace back to the source code (because it was strictly a hardware feature), it was possible to do such trace backs using load maps because each word contained the address at which it was originally loaded. Useful, because no such checking was done for assignments. So, if A were uninitialized in the source code, a sequence such as: B = A C = B D = C / 4 would halt the program at the divide operation and print out the values, one of which would contain the address at which A was loaded into memory. All overhead occurred at initial program load time. One obvious limitation was that it only worked well for static data and wasn't all that much help for values allocated on the stack or for values in FORTRAN blank COMMON or the equivalents in other languages. I cringe every time I hear some recent grad new hire insist earnestly that there's no need to initialize anything to zero because that's always done automatically. And they believe that's true for all languages -- even for variables allocated on the stack or heap. 
Worse, that is in the definitions of a few languages, thus reinforcing their belief that it's true for all languages. Bob Lidral lidral at alum dot mit dot edu ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined [was Re: Ada vs Fortran for scientific applications] 2006-05-25 16:23 ` Checking for Undefined [was Re: Ada vs Fortran for scientific applications] Bob Lidral @ 2006-05-25 17:48 ` Nasser Abbasi 2006-05-25 17:57 ` Gordon Sande 1 sibling, 0 replies; 314+ messages in thread From: Nasser Abbasi @ 2006-05-25 17:48 UTC (permalink / raw) "Bob Lidral" <l1dralspamba1t@comcast.net> wrote in message news:4475DA0F.5030603@comcast.net... > Gordon Sande wrote: > > That's one of the features I miss about the old CDC CYBER architectures. > Their (one's complement, sigh) numeric format supported plus or minus > infinity (e.g., divide a non-zero number by zero) and plus or minus > indefinite (e.g., divide zero by zero, infinity by infinity, or use an > "indefinite" value anywhere in a divide operation). hi, fyi, Matlab supports NaNs, and it has Inf and -Inf (infinities) >> a=1/0; Warning: Divide by zero. >> a a = Inf >> a-Inf ans = NaN >> 1-Inf ans = -Inf Mathematica also: In[12]:= 1./0; (warning displayed like in Matlab) Out[13]= ComplexInfinity Nasser ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined [was Re: Ada vs Fortran for scientific applications] 2006-05-25 16:23 ` Checking for Undefined [was Re: Ada vs Fortran for scientific applications] Bob Lidral 2006-05-25 17:48 ` Nasser Abbasi @ 2006-05-25 17:57 ` Gordon Sande 2006-05-25 20:20 ` Bob Lidral ` (3 more replies) 1 sibling, 4 replies; 314+ messages in thread From: Gordon Sande @ 2006-05-25 17:57 UTC (permalink / raw) On 2006-05-25 13:23:43 -0300, Bob Lidral <l1dralspamba1t@comcast.net> said: > Gordon Sande wrote: > >> [...] >> How many Ada systems can match the undefined variable checking of the >> old WatFor or the current Salford CheckMate or the Lahey/Fujitsu >> global checking? It seems to be a thing associated with places that >> run student cafteria computing on mainframes. Not much used anymore. >> There was a similar student checkout PL/I from Cornell if I recall >> correctly. >> > That's one of the features I miss about the old CDC CYBER > architectures. Their (one's complement, sigh) numeric format > supported plus or minus infinity (e.g., divide a non-zero number by > zero) and plus or minus indefinite (e.g., divide zero by zero, infinity > by infinity, or use an "indefinite" value anywhere in a divide > operation). IIRC (it has been a long time) these values were > represented by specific bit patterns in just the sign and exponent > fields. (It also helped that the CYBER floating point exponent bias > was such that the exponent field was zero for integer values so that > integer values representable by 48 or fewer bits were represented by > the same bit patterns in either floating point or fixed point. > > Originally, CYBERs only had an 18-bit address space which was extended > in later models to 21 bits and maybe even to 24 bits in still later > models. The operating system required the high-order bit of any of > those address fields to be zero for user programs. 
> > There was an option for the loader that allowed the exploitation of > these hardware design features to provide automatic checking for > uninitialized values at run time with no run time overhead. The loader > provided a way to specify values to be placed into memory for any > uninitialized program data (I don't remember whether the default was no > value -- e.g., use previous process's leftovers -- or zero). The best > patterns to choose for such initial values were > > {plus | minus} {infinity | indefinite} > > ORed with > > a word with bits 17, 20, and 23 set > > ORed with > > the load address of the word > > Unless one enabled the use of infinity and indefinite values for > arithmetic processing (I never saw that done, but I'm sure some > customer somewhere used it), any use of such values in an arithmetic > operation would be detected by the hardware and stop the program with > an error message containing the value(s) that caused the problem. > Further, any attempt to use such a value as an address would cause the > hardware to stop the program with an error because the high-order bit > of the address field would be set. > > Although it didn't provide direct trace back to the source code > (because it was strictly a hardware feature), it was possible to do > such trace backs using load maps because each word contained the > address at which it was originally loaded. Useful, because no such > checking was done for assignments. So, if A were uninitialized in the > source code, a sequence such as: > > B = A > C = B > D = C / 4 > > would halt the program at the divide operation and print out the > values, one of which would contain the address at which A was loaded > into memory. All overhead occurred at initial program load time. > > One obvious limitation was that it only worked well for static data and > wasn't all that much help for values allocated on the stack or for > values in FORTRAN blank COMMON or the equivalents in other languages. 
> > > I cringe every time I hear some recent grad new hire insist earnestly > that there's no need to initialize anything to zero because that's > always done automatically. And they believe that's true for all > languages -- even for variables allocated on the stack or heap. Worse, > that is in the definitions of a few languages, thus re-inforcing their > belief that it's true for all languages. > > > Bob Lidral > lidral at alum dot mit dot edu The original WatFor for IBM/7040 used a hardware hack. They set bad parity but since they were a load and go system they had a symbol table around and did the lookup for you. When WatFor moved to IBM/360 the feature was so popular that they had to work hard to reproduce the feature. The Salford folks come from the same cafeteria-for-student-debugging environment and did the same feature. They even do it for automatic storage. They even set INTENT ( IN ) variables to undefined so a user can set the undefined attribute/(bit configuration) themselves if they are doing an internal storage allocation themselves. Neat! NAG and Lahey/Fujitsu also have the capability but are missing the setting of automatics if I have the features scoped out correctly. Initialization to signaling NaNs is a quick and effective approximation. Needs help from first the loader and then the storage allocator. Until you have used it and it has saved a goodly block of time it is a feature that many just shrug and say "That's nice". They do not realize what they are missing. I am getting the impression from the silence of the cross postings that undefined checking has only shown up in Fortran systems. The exception is Salford who also have it for their C but one also seems to notice that their C and Fortran seem to share a lot of features. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined [was Re: Ada vs Fortran for scientific applications] 2006-05-25 17:57 ` Gordon Sande @ 2006-05-25 20:20 ` Bob Lidral 2006-05-27 14:29 ` Checking for Undefined robin 2006-05-25 20:35 ` Checking for Undefined [was Re: Ada vs Fortran for scientific applications] glen herrmannsfeldt ` (2 subsequent siblings) 3 siblings, 1 reply; 314+ messages in thread From: Bob Lidral @ 2006-05-25 20:20 UTC (permalink / raw) Gordon Sande wrote: > [...] > > The original WatFor for IBM/7040 used a hardware hack. They set bad parity > but since they were a load and go system they had a symbol table around > and did the lookup for you. When WatFor moved to IBM/360 the feature was so > popular that they had to work hard to reproduce the feature. > > The Salford folks come form the same cafeteria for student debugging > environment and did the same feature. They even do it for automatic > storeage. They even set INTENT ( IN ) variables to undefined so a > user can set the undefined attribute/(bit configuration) themselves > if they are doing an internal storeage allocation themselves. Neat! > > NAG and Lahey/Fujitsu also have the capability but are missing the > setting of automatics if I have the features scoped out correctly. > > Initialization to signaling NANs if a quick and effective approximation. > Needs help from first the loader and then the storeage allocator. > > Until you have used it and it has saved a goodly block of time it > is a feature that many just shrug and say "That's nice". They do not > realize what they are missing. > > I am getting the impression from the silence of the cross postings > that undefined checking has only shown up in Fortran systems. > The exception is Salford who also have it for their C but one also > seems to notice that their C and Fortran seem to share a lot of features. > Well, the cross-posting silence could be from fear of getting involved in a language comparison religious war. 
I believe PL/C also had undefined checking -- but that was another load-and-go implementation. Checking for undefined by having the loader use signaling NaN as the default value for data doesn't cost much -- and then only at load time and not during run time -- for static storage. But it does cost at run time for automatic and heap storage. This may be OK for load-and-go student programs and debugging but may be unacceptable in production code. This might also explain why it's more popular in Fortran compilers. IIRC, Fortran didn't have recursion or a need for automatic variables until the Fortran 90 standard. Also note that signaling NaNs will catch uninitialized floating point arithmetic data but it's not clear there's any guarantee they will catch uninitialized integer or pointer data. I remember when CDC added that feature to the loader. When one of our customers upgraded to that version of the operating system, I suggested turning it on for their build procedures and explained, I thought, the advantages. Thereupon followed much wailing, wringing of hands, gnashing of teeth, and rending of garments because of the horrible number of error messages resulting. I was roundly chastised and ordered to turn off that feature forthwith. Clearly there couldn't be that many errors in the code because it was already working at another site and, besides, the language (JOVIAL) automatically initializes all storage to zero (it doesn't). While it is true they wouldn't encounter anywhere near that many errors during actual production runs, that was only by accident because large portions of memory were pre-set to zero by other programs and utilities -- as expected and required by their programming conventions. But equally clear to me, if the language really did pre-set that storage to zero, it wouldn't have mattered which value the loader used for initialization before the program executed. 
But I could never explain that to the customer so I am certain there were undetected and detected but undiagnosed errors that will remain until they migrate the project to another language. As Gordon wrote, it's an extremely handy tool. Bob Lidral lidral at alum dot mit dot edu ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined 2006-05-25 20:20 ` Bob Lidral @ 2006-05-27 14:29 ` robin 0 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-05-27 14:29 UTC (permalink / raw) "Bob Lidral" <l1dralspamba1t@comcast.net> wrote in message news:447611A3.8080601@comcast.net... > I believe PL/C also had > undefined checking -- but that was another load-and-go implementation. Message EX-5D for those. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined [was Re: Ada vs Fortran for scientific applications] 2006-05-25 17:57 ` Gordon Sande 2006-05-25 20:20 ` Bob Lidral @ 2006-05-25 20:35 ` glen herrmannsfeldt 2006-05-25 22:02 ` Checking for Undefined Simon Wright 2006-05-27 14:29 ` robin 3 siblings, 0 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-05-25 20:35 UTC (permalink / raw) Gordon Sande wrote: (snip) > I am getting the impression from the silence of the cross postings > that undefined checking has only shown up in Fortran systems. > The exception is Salford who also have it for their C but one also > seems to notice that their C and Fortran seem to share a lot of features. Java does compile-time undefined value checking for scalar variables. If the compiler isn't convinced that a variable is defined before it is used, it is a fatal compilation error. It might be that compilers are getting better, but there are cases where I know a variable is always defined, but the compiler doesn't believe it. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined 2006-05-25 17:57 ` Gordon Sande 2006-05-25 20:20 ` Bob Lidral 2006-05-25 20:35 ` Checking for Undefined [was Re: Ada vs Fortran for scientific applications] glen herrmannsfeldt @ 2006-05-25 22:02 ` Simon Wright 2006-05-27 14:29 ` robin 3 siblings, 0 replies; 314+ messages in thread From: Simon Wright @ 2006-05-25 22:02 UTC (permalink / raw) Gordon Sande <g.sande@worldnet.att.net> writes: > I am getting the impression from the silence of the cross postings > that undefined checking has only shown up in Fortran systems. The > exception is Salford who also have it for their C but one also seems > to notice that their C and Fortran seem to share a lot of features. The pro version of GNAT (I don't know about the FSF version) has optional initialization with out-of-range values and checking even in places where it normally would be omitted because the compiler would assume it had already done the checks. This only works if there _are_ out-of-range values, so Integer can't be checked. Normally the recommendation is to define types appropriate to the application, so checks are possible. My current project is using an older compiler which is imperfect in this area, so we don't use this feature, but compiler warnings such as 'X may be used before initialization' are very valuable. ^ permalink raw reply [flat|nested] 314+ messages in thread
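The GNAT facility Simon describes is the implementation-defined configuration pragma Initialize_Scalars, which fills otherwise-uninitialized scalars with invalid bit patterns where the type has spare ones. A hedged, GNAT-specific sketch (as Simon notes, a full-range type like Integer has no spare patterns and cannot be caught this way):

```ada
pragma Initialize_Scalars;  --  GNAT-defined: invalid bits where possible

with Ada.Text_IO; use Ada.Text_IO;

procedure Init_Demo is
   type Level is range 1 .. 10;  --  spare bit patterns exist, so checkable
   L : Level;                    --  deliberately never assigned
begin
   --  'Valid inspects the bits without evaluating the value,
   --  so it cannot itself raise Constraint_Error.
   if not L'Valid then
      Put_Line ("L is uninitialized");
   end if;
end Init_Demo;
```

Without the pragma, whatever garbage happens to be on the stack might accidentally be in range; the pragma makes the bad initial value deterministic.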
* Re: Checking for Undefined 2006-05-25 17:57 ` Gordon Sande ` (2 preceding siblings ...) 2006-05-25 22:02 ` Checking for Undefined Simon Wright @ 2006-05-27 14:29 ` robin 2006-05-27 15:10 ` Gordon Sande 3 siblings, 1 reply; 314+ messages in thread From: robin @ 2006-05-27 14:29 UTC (permalink / raw) "Gordon Sande" <g.sande@worldnet.att.net> wrote in message news:2006052514574816807-gsande@worldnetattnet... > The original WatFor for IBM/7040 used a hardware hack. They set bad parity > but since they were a load and go system they had a symbol table around > and did the lookup for you. When WatFor moved to IBM/360 the feature was so > popular that they had to work hard to reproduce the feature. It wouldn't have been "hard". It only required the initialization of variables with a certain value, e.g., for integers, -2**31 and for float, -HUGE(1.0) etc. (An unlikely but feasible alternative : a bit map.) ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined 2006-05-27 14:29 ` robin @ 2006-05-27 15:10 ` Gordon Sande 0 siblings, 0 replies; 314+ messages in thread From: Gordon Sande @ 2006-05-27 15:10 UTC (permalink / raw) On 2006-05-27 11:29:31 -0300, "robin" <robin_v@bigpond.com> said: > "Gordon Sande" <g.sande@worldnet.att.net> wrote in message > news:2006052514574816807-gsande@worldnetattnet... > >> The original WatFor for IBM/7040 used a hardware hack. They set bad parity >> but since they were a load and go system they had a symbol table around >> and did the lookup for you. When WatFor moved to IBM/360 the feature was so >> popular that they had to work hard to reproduce the feature. > > It wouldn't have been "hard". > It only required the initialization of variables > with a certain value, e.g., for integers, -2**31 > and for float, -HUGE(1.0) etc. > (An unlikely but feasible alternative : a bit map.) The hardware no longer did it for free! ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 12:45 ` Gordon Sande 2006-05-25 16:23 ` Checking for Undefined [was Re: Ada vs Fortran for scientific applications] Bob Lidral @ 2006-05-26 2:58 ` robin 2006-07-09 20:52 ` adaworks 2 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-05-26 2:58 UTC (permalink / raw) "Gordon Sande" <g.sande@worldnet.att.net> wrote in message news:2006052509454116807-gsande@worldnetattnet... > I am a big fan of subscript checking and undefined variable checking. > I have found that all of the errors that these aids have found have been > in parts of my programs that I believed to be free of such errors > because I had looked hard at them some time before. Either I had > not done a good job of looking or the assumptions underlying the look > had changed. > > That seems to be the nature of the beast. > > I can avoid (usually!) the trival bugs. I like getting help once the > bug is not trivial. (Maybe that is a definition of trivial.) > > Fine grained control would have been of no use, and in fact harmful, > if I had tried to use it. My take is it makes a great checklist feature > that may help get past "desired feature checklists" but is otherwise > not of any real benefit. The check is great but fine grained control > is not. > > > Do user defined numerical types have restricted bounds too? > > That is a Pascal-like feature that I miss in Fortran. So I do my own > checking. Because of my problem domain I have to be proactive in checking > parameter values on entry to new procedures even if the invocation is > from one that has already checked. The buzz word is programming by contract > if I have read the programming fashion of the day rags. > > How many Ada systems can match the undefined variable checking of the > old WatFor or the current Salford CheckMate or the Lahey/Fujitsu > global checking? It seems to be a thing associated with places that > run student cafteria computing on mainframes. 
Not much used anymore. > There was a similar student checkout PL/I from Cornell if I recall > correctly. That's right. It was called "PL/C". > > The adverse consequences of exceeding bounds can be seen to > > outweigh the (usually) modest costs in code size and performance that > > even mature code should ship with checks enabled, IMO. > > Compilers generally should be shipped with the 'failsafe' > > options on by default. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 12:45 ` Gordon Sande 2006-05-25 16:23 ` Checking for Undefined [was Re: Ada vs Fortran for scientific applications] Bob Lidral 2006-05-26 2:58 ` Ada vs Fortran for scientific applications robin @ 2006-07-09 20:52 ` adaworks 2006-07-09 21:33 ` Brooks Moses ` (3 more replies) 2 siblings, 4 replies; 314+ messages in thread From: adaworks @ 2006-07-09 20:52 UTC (permalink / raw) "Gordon Sande" <g.sande@worldnet.att.net> wrote in message news:2006052509454116807-gsande@worldnetattnet... > > How many Ada systems can match the undefined variable checking of the > old WatFor or the current Salford CheckMate or the Lahey/Fujitsu > global checking? It seems to be a thing associated with places that > run student cafteria computing on mainframes. Not much used anymore. > There was a similar student checkout PL/I from Cornell if I recall > correctly. > The default for Ada is to do thorough range checking on all numeric types. A designer may suppress that default, selectively, if it is deemed unnecessary. Ada allows the explicit declaration of range constraints, bit mapping, and other low-level features. Examples: type My_Number is range 3..56; -- sets the upper and lower bounds -- on an integer type type My_Float is digits 7 range -200.0 .. 500.0; -- sets the number of digits -- and the range for this type These are just two examples. The same thing can be done for decimal types, modular types, fixed-point types, etc. Ada also uses the name-equivalence, rather than the structural-equivalence, approach to checking (at compile time). Therefore, type N1 is range 0..100; type N2 is range 0..100; with X : N1; -- X is of type N1 Y : N2; -- Y is of type N2; will reject X := Y + 1; Y is incompatible, by name, with X. Some languages will accept the above statement because the two values have types that are structurally equivalent. 
One of the principal features of Ada is the rigor of its checking, both at compile time and run time. This includes indices, ordinary numeric types, subscripts, etc. Richard Riehle ^ permalink raw reply [flat|nested] 314+ messages in thread
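A compilable version of the fragments above, showing both the rejection and the standard remedy, an explicit conversion (the procedure name is invented):

```ada
procedure Name_Equiv is
   type N1 is range 0 .. 100;
   type N2 is range 0 .. 100;
   X : N1 := 0;
   Y : N2 := 0;
begin
   --  X := Y + 1;   --  rejected at compile time: N2 is not N1,
   --                --  even though the two ranges are identical
   X := N1 (Y) + 1;  --  explicit conversion compiles, and the
                     --  result is range-checked at run time
end Name_Equiv;
```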
* Re: Ada vs Fortran for scientific applications 2006-07-09 20:52 ` adaworks @ 2006-07-09 21:33 ` Brooks Moses 2006-07-10 3:08 ` jimmaureenrogers 2006-07-10 11:23 ` Björn Persson 2006-07-09 21:36 ` James Giles ` (2 subsequent siblings) 3 siblings, 2 replies; 314+ messages in thread From: Brooks Moses @ 2006-07-09 21:33 UTC (permalink / raw) adaworks@sbcglobal.net wrote: > Ada also uses the name-equivalence, rather than the structural equivalence > approach to checking (at compile time). Therefore, > > type N1 is range 0..100; > type N2 is range 0..100; > > with > > X : N1; -- X is of type N1 > Y : N2; -- Y is of type N2; > > will reject, > > X := Y +1; Y is incompatible, by name, with X; As a curiosity question, how would this work for cases such as, say, a finite-volume grid where I have one range for the cells, and another range for the faces between the cells, and want to do something like this: type ncells is range 1..100; type nfaces is range 0..100; cell : ncells; leftface : nfaces; rightface : nfaces; leftface = cell - 1; rightface = cell; According to your explanation those last two commands, while making logical sense, would throw a type-mismatch error. How easy is that to "fix"? - Brooks -- The "bmoses-nospam" address is valid; no unmunging needed. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-09 21:33 ` Brooks Moses @ 2006-07-10 3:08 ` jimmaureenrogers 2006-07-10 11:23 ` Björn Persson 1 sibling, 0 replies; 314+ messages in thread From: jimmaureenrogers @ 2006-07-10 3:08 UTC (permalink / raw) Brooks Moses wrote: > As a curiousity question, how would this work for cases such as, say, a > finite-volume grid where I have one range for the cells, and another > range for the faces between the cells, and want to do soemthing like this: > > type ncells is range 1..100; > type nfaces is range 0..100; > > cell : ncells; > leftface : nfaces; > rightface : nfaces; > > leftface = cell - 1; > rightface = cell; > > According your explanation those last two commands, while making logical > sense, would throw a type-mismatch error. How easy is that to "fix"? Ada makes a distinction between types and subtypes. The Ada definition of a subtype is a new alias for a type, with a possible subset of the range of valid values. Your example could be performed with a little subtype manipulation: type nfaces is range 0..100; subtype ncells is nfaces range 1..100; cell : ncells; leftface : nfaces; rightface : nfaces; leftface := cell - 1; rightface := cell; This works in Ada because all instances of a subtype are also instances of their base type. Nonetheless, any instance of ncells is limited to the range of values specified for the subtype. Assignment of 0 to an instance of ncells results in an invalid object. The invalidity of the object becomes an error when the invalid object value is evaluated. Ada provides the 'Valid attribute, which returns a boolean value, to indicate whether or not an object is valid. Evaluating the 'Valid attribute does not constitute an evaluation of the object, and therefore does not raise the exception Constraint_Error. Jim Rogers ^ permalink raw reply [flat|nested] 314+ messages in thread
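A sketch of the 'Valid behaviour Jim describes. Unchecked_Conversion is used here only to forge an out-of-range bit pattern, which (with GNAT's default options, at least) is stored without a validity check; the names follow Brooks's example:

```ada
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Unchecked_Conversion;

procedure Valid_Demo is
   type nfaces is range 0 .. 100;
   for nfaces'Size use 32;  --  match Integer so the conversion is clean

   function Forge is new Ada.Unchecked_Conversion (Integer, nfaces);

   F : nfaces := Forge (-1);  --  bits outside 0 .. 100, no check made
begin
   --  'Valid tests the representation without evaluating the value,
   --  so this line cannot raise Constraint_Error:
   Put_Line (Boolean'Image (F'Valid));  --  FALSE under default checks
end Valid_Demo;
```

A compiler asked to insert validity checks everywhere (e.g. GNAT's -gnatVa) would instead raise Constraint_Error at the point F is assigned or read.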
* Re: Ada vs Fortran for scientific applications 2006-07-09 21:33 ` Brooks Moses 2006-07-10 3:08 ` jimmaureenrogers @ 2006-07-10 11:23 ` Björn Persson 2006-07-10 17:08 ` Brooks Moses 1 sibling, 1 reply; 314+ messages in thread From: Björn Persson @ 2006-07-10 11:23 UTC (permalink / raw) Brooks Moses wrote: > type ncells is range 1..100; > type nfaces is range 0..100; > > cell : ncells; > leftface : nfaces; > rightface : nfaces; > > leftface = cell - 1; > rightface = cell; > > According to your explanation those last two commands, while making logical > sense, would throw a type-mismatch error. How easy is that to "fix"? One solution is subtypes, as Jim showed. For cases where you don't want subtypes, there is explicit type conversion: leftface := nfaces(cell - 1); rightface := nfaces(cell); -- Björn Persson PGP key A88682FD omb jor ers @sv ge. r o.b n.p son eri nu ^ permalink raw reply [flat|nested] 314+ messages in thread
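[Editor's note: for completeness, a minimal compilable sketch of the explicit-conversion route, keeping the two types distinct as in Brooks's original declarations. Names follow the thread; the initial value 42 is an arbitrary illustration.]

```ada
procedure Convert_Demo is
   type Ncells is range 1 .. 100;
   type Nfaces is range 0 .. 100;

   Cell       : Ncells := 42;
   Left_Face  : Nfaces;
   Right_Face : Nfaces;
begin
   --  The subtraction happens in Ncells' base type, so Cell - 1 is
   --  fine even when Cell = 1; the conversion then changes the type.
   Left_Face  := Nfaces (Cell - 1);
   Right_Face := Nfaces (Cell);
end Convert_Demo;
```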
* Re: Ada vs Fortran for scientific applications 2006-07-10 11:23 ` Björn Persson @ 2006-07-10 17:08 ` Brooks Moses 0 siblings, 0 replies; 314+ messages in thread From: Brooks Moses @ 2006-07-10 17:08 UTC (permalink / raw) Bj�rn Persson wrote: > Brooks Moses wrote: >> leftface = cell - 1; >> rightface = cell; >> >>According your explanation those last two commands, while making logical >>sense, would throw a type-mismatch error. How easy is that to "fix"? > > One solution is subtypes, as Jim showed. For cases where you don't want > subtypes, there is explicit type conversion: > > leftface := nfaces(cell - 1); > rightface := nfaces(cell); Thanks to you both. This does look like a useful language feature, indeed, particularly with the subtypes. - Brooks -- The "bmoses-nospam" address is valid; no unmunging needed. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-09 20:52 ` adaworks 2006-07-09 21:33 ` Brooks Moses @ 2006-07-09 21:36 ` James Giles 2006-07-09 22:29 ` Martin Dowie ` (3 more replies) 2006-07-11 15:14 ` robin 2006-11-20 9:39 ` robin 3 siblings, 4 replies; 314+ messages in thread From: James Giles @ 2006-07-09 21:36 UTC (permalink / raw) adaworks@sbcglobal.net wrote: > "Gordon Sande" <g.sande@worldnet.att.net> wrote in message > news:2006052509454116807-gsande@worldnetattnet... >> >> How many Ada systems can match the undefined variable checking of the >> old WatFor or the current Salford CheckMate or the Lahey/Fujitsu >> global checking? It seems to be a thing associated with places that >> run student cafeteria computing on mainframes. Not much used anymore. >> There was a similar student checkout PL/I from Cornell if I recall >> correctly. >> > The default for Ada is to do thorough range checking on all numeric > types. A designer may suppress that default, selectively, if it is > deemed unnecessary. > [... lots of Ada features ...] All the stuff I elided is interesting. Many of the features are even good things for languages to have. None of them were checks for undefined variables. Given the Ada program fragment: COUNT, SUM : INTEGER; [... lots of code ...] [... some paths through which assign to SUM ...] [... and some don't ...] COUNT := SUM+1; -- is SUM defined here or not? In most Ada implementations, as for most other languages, all the bit patterns in the representation of an INTEGER data type are valid integer values. There is no bit pattern representing NOI (Not An Integer) corresponding to the IEEE float idea of a NAN. Determining whether a variable is defined or not is a complex problem. It's made worse by the fact that the user can make the error message go away (though not usually the problem) by initializing the variable in the declaration. -- J. 
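[Editor's note: James's fragment, made compilable for experimentation. Reading the possibly-undefined SUM is a bounded error in Ada, not a guaranteed compile-time rejection; whether it is diagnosed is implementation-dependent. The Condition flag is an invented stand-in for the "some paths" he describes, and the output, if any, is unspecified.]

```ada
with Ada.Text_IO;

procedure Undef_Demo is
   Count, Sum : Integer;
   Condition  : constant Boolean := False;  --  hypothetical branch guard
begin
   if Condition then
      Sum := 10;       --  some paths through which assign to SUM ...
   end if;             --  ... and some don't
   Count := Sum + 1;   --  is SUM defined here or not?
   Ada.Text_IO.Put_Line (Integer'Image (Count));
end Undef_Demo;
```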
Giles "I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies." -- C. A. R. Hoare ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-09 21:36 ` James Giles @ 2006-07-09 22:29 ` Martin Dowie 2006-07-09 23:07 ` James Giles 2006-07-09 23:19 ` glen herrmannsfeldt ` (2 subsequent siblings) 3 siblings, 1 reply; 314+ messages in thread From: Martin Dowie @ 2006-07-09 22:29 UTC (permalink / raw) James Giles wrote: > All the stuff I elided is interesting. Many of the features are even > good things for languages to have. None of them were checks for > undefined variables. > > Given the Ada program fragment: > > COUNT, SUM : INTEGER; > > [... lots of code ...] > [... some paths through which assign to SUM ...] > [... and some don't ...] > > COUNT := SUM+1; -- is SUM defined here or not? > > In most Ada implementations, as for most other languages, all > the bit patterns in the representation of an INTEGER data type > are valid integer values. That's not entirely true, the standard states Integer must include the range -2**15+1 .. +2**15-1 - thus (usually) leaving -2**15 as a possible 'default uninitialized' value. Implementations /may/ do something else (e.g. on 32-bit architectures providing 'Integer' with the range -2**31+1 .. 2**31-1) and this should be stated in their own documentation (as per Annex M of the Ada language standard). Cheers -- Martin ^ permalink raw reply [flat|nested] 314+ messages in thread
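[Editor's note: a quick way to see what range a given implementation actually provides for Integer is to print the 'First and 'Last attributes; a sketch, with platform-dependent output.]

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Int_Range is
begin
   --  The language only guarantees coverage of -2**15+1 .. +2**15-1;
   --  the actual bounds are implementation-defined.
   Put_Line ("Integer'First =" & Integer'Image (Integer'First));
   Put_Line ("Integer'Last  =" & Integer'Image (Integer'Last));
end Int_Range;
```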
* Re: Ada vs Fortran for scientific applications 2006-07-09 22:29 ` Martin Dowie @ 2006-07-09 23:07 ` James Giles 2006-07-09 23:44 ` glen herrmannsfeldt 2006-07-11 1:29 ` robin 0 siblings, 2 replies; 314+ messages in thread From: James Giles @ 2006-07-09 23:07 UTC (permalink / raw) Martin Dowie wrote: > James Giles wrote: ... >> In most Ada implementations, as for most other languages, all >> the bit patterns in the representation of an INTEGER data type >> are valid integer values. > > That's not entirely true, the standard states Integer must include the > range -2**15+1 .. +2**15-1 - thus (usually) leaving -2**15 as a > possible 'default uninitialized' value. Well I allowed for that possibility. I made no absolute assertion. I said "most Ada implementations". In fact, most language definitions *allow* implementations to implement integers with a lot of flexibility. Most actual implementations use all the bit patterns their underlying hardware allows and *don't* use any of them for NOI (Not An Integer). And, let's not forget character data types (does your actual implementation have a value that means NOC - Not A Character?). What about booleans? Ada also has fixed point types, enumerations, records, etc. Does your actual implementation catch uses of undefined variables for all those? Indeed, does it even use NANs from the floating point hardware to detect uninitialized REALs? It's true: nearly all languages permit implementations to internally represent data with additional memory for the purpose of detecting and reporting uses of undefined variables (and for other reasons - like the value of a variable that was defined, but the expression involved an unhandled exception). In formal semantics, the idea is called a lifted domain. Applying it in software is expensive at run-time. 
Now, I've always liked the idea of using the anomalous value in two's complement integers (sign bit set, all others clear) as NOI. I don't think it's a likely implementation strategy unless it was treated that way in hardware. -- J. Giles "I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies." -- C. A. R. Hoare ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-09 23:07 ` James Giles @ 2006-07-09 23:44 ` glen herrmannsfeldt 2006-07-11 1:29 ` robin 1 sibling, 0 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-09 23:44 UTC (permalink / raw) (snip, someone wrote) >>That's not entirely true, the standard states Integer must include the >>range -2**15+1 .. +2**15-1 - thus (usually) leaving -2**15 as a >>possible 'default uninitialized' value. James Giles wrote: > Well I allowed for that possibility. I made no absolute > assertion. I said "most Ada implementations". In fact, > most language definitions *allow* implementations > to implement integers with a lot of flexibility. C pretty much requires binary for unsigned integers, but is more general for signed integers. C does allow "padding bits", that is, bits not used as value bits. Java requires twos complement binary with specific bit widths. Otherwise, most allow for other bases and representations. > Most > actual implementations use all the bit patterns their > underlying hardware allows and *don't* use any of > them for NOI (Not An Integer). There might be some hardware that can detect negative zero, or other unused representations. That is rare. > And, let's not forget > character data types (does your actual implementation > have a value that means NOC - Not A Character?). The only one I used with specific detection for undefined values is WATFIV which uses all bytes equal to X'81', even for CHARACTER*1 where it is 'a'. > What about booleans? Ada also has fixed point types, > enumerations, records, etc. Does your actual implementation > catch uses of undefined variables for all those? Indeed, > does it even use NANs from the floating point hardware to > detect uninitialized REALs? 
> It's true: nearly all languages permit implementations > to internally represent data with additional memory > for the purpose of detecting and reporting uses of > undefined variables (and for other reasons - like the > value of a variable that was defined, but the expression > involved an unhandled exception). Well, C allows it for any purpose or even no purpose at all. > In formal semantics, > the idea is called a lifted domain. Applying it in software > is expensive at run-time. > Now, I've always liked the idea of using the anomalous > value in two's complement integers (sign bit set, all others > clear) as NOI. I don't think it's a likely implementation > strategy unless it was treated that way in hardware. There might be some that will detect it for other than twos complement, but I don't know any hardware that does for twos complement. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-09 23:07 ` James Giles 2006-07-09 23:44 ` glen herrmannsfeldt @ 2006-07-11 1:29 ` robin 1 sibling, 0 replies; 314+ messages in thread From: robin @ 2006-07-11 1:29 UTC (permalink / raw) James Giles wrote in message ... >Martin Dowie wrote: >> James Giles wrote: >... >>> In most Ada implementations, as for most other languages, all >>> the bit patterns in the representation of an INTEGER data type >>> are valid integer values. >> >> That's not entirely true, the standard states Integer must include the >> range -2**15+1 .. +2**15-1 - thus (usually) leaving -2**15 as a >> possible 'default uninitialized' value. > >Well I allowed for that possibility. I made no absolute >assertion. I said "most Ada implementations". In fact, >most language definitions *allow* implementations >to implement integers with a lot of flexibility. Most >actual implementations use all the bit patterns their >underlying hardware allows and *don't* use any of >them for NOI (Not An Integer). Exceptions include WATFOR, WATFIV, and PL/C. In BASIC, all variables have an initial value of zero. In IBM's PL/I [PC and mainframe], provision is made to detect uninitialized fixed-point decimal variables via hardware, and this has been the case for their mainframe PL/Is since 1965. The predecessor of WATFOR (IBM 704?) used hardware (parity bit set wrong) for uninitialized variables. > And, let's not forget >character data types (does your actual implementation >have a value that means NOC - Not A Character?). >What about booleans? Ada also has fixed point types, >enumerations, records, etc. Does your actual implementation >catch uses of undefined variables for all those? Indeed, >does it even use NANs from the floating point hardware to >detect uninitialized REALs? 
> >It's true: nearly all languages permit implementations >to internally represent data with additional memory >for the purpose of detecting and reporting uses of >undefined variables (and for other reasons - like the >value of a variable that was defined, but the expression >involved an unhandled exception). In formal semantics, >the idea is called a lifted domain. Applying it in software >is expensive at run-time. Not necessarily. It isn't in the case of Salford Fortran (now Silverfrost), nor in WATFOR, WATFIV, PL/C, or fixed-point decimal in IBM's PL/I compilers. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-09 21:36 ` James Giles 2006-07-09 22:29 ` Martin Dowie @ 2006-07-09 23:19 ` glen herrmannsfeldt 2006-07-11 1:29 ` robin 2006-07-10 7:38 ` Ada vs Fortran for scientific applications Dmitry A. Kazakov 2006-07-10 9:57 ` Georg Bauhaus 3 siblings, 1 reply; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-09 23:19 UTC (permalink / raw) James Giles wrote: (snip) > In most Ada implementations, as for most other languages, all > the bit patterns in the representation of an INTEGER data type > are valid integer values. For signed integer types, most, if not all, allow twos complement, ones complement, or sign magnitude representation. For representations with two zeros, the use of a zero representation not generated by the hardware is likely system-dependent. > There is no bit pattern representing > NOI (Not An Integer) corresponding to the IEEE float idea of > a NAN. It might be that some hardware supports such a bit pattern. It is also legal in most, if not all, languages for additional bits to be in the representation that are not used for valid values. Even C allows that. Fortran specifically allows for any base greater than one. C pretty much only allows binary. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-09 23:19 ` glen herrmannsfeldt @ 2006-07-11 1:29 ` robin 2006-07-11 5:56 ` glen herrmannsfeldt 0 siblings, 1 reply; 314+ messages in thread From: robin @ 2006-07-11 1:29 UTC (permalink / raw) glen herrmannsfeldt wrote in message <5rSdnTfTP7NHEyzZnZ2dnUVZ_tidnZ2d@comcast.com>... >James Giles wrote: > >For signed integer types, most, if not all, allow twos complement, >ones complement, or sign magnitude representation. Virtually all use twos complement for negative values. Few ever used ones complement anyway. They were a PITA. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 1:29 ` robin @ 2006-07-11 5:56 ` glen herrmannsfeldt 2006-07-12 0:37 ` robin 0 siblings, 1 reply; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-11 5:56 UTC (permalink / raw) robin wrote: > glen herrmannsfeldt wrote in message <5rSdnTfTP7NHEyzZnZ2dnUVZ_tidnZ2d@comcast.com>... >>For signed integer types, most, if not all, allow twos complement, >>ones complement, or sign magnitude representation. > Virtually all use twos complement for negative values. > Few ever used ones complement anyway. They were a PITA. CDC used ones complement, and as I understand it Univac still does. The last sign magnitude binary machine I know of is the 7090. S/360 and successors use sign magnitude for fixed point decimal arithmetic. I don't know of any nines complement machines, but there probably were some along the way. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 5:56 ` glen herrmannsfeldt @ 2006-07-12 0:37 ` robin 2006-07-12 1:15 ` glen herrmannsfeldt 0 siblings, 1 reply; 314+ messages in thread From: robin @ 2006-07-12 0:37 UTC (permalink / raw) glen herrmannsfeldt wrote in message ... >robin wrote: > >> glen herrmannsfeldt wrote in message <5rSdnTfTP7NHEyzZnZ2dnUVZ_tidnZ2d@comcast.com>... > >>>For signed integer types, most, if not all, allow twos complement, >>>ones complement, or sign magnitude representation. > >> Virtually all use twos complement for negative values. >> Few ever used ones complement anyway. They were a PITA. > >CDC used ones complement, Yes, and they were a PITA. But they are long-gone machines, as I said; some 20+ years. > and as I understand it Univac still does. > >The last sign magnitude binary machine I know of is the 7090. 40+ years ago. >S/360 and successors use sign magnitude for fixed point decimal >arithmetic. Off topic. We're referring to binary integers, not decimal. > I don't know of any nines complement machines, Off topic. They probably never existed. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-12 0:37 ` robin @ 2006-07-12 1:15 ` glen herrmannsfeldt 2006-07-14 2:47 ` Randy Brukardt 2006-07-18 1:48 ` robin 0 siblings, 2 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-12 1:15 UTC (permalink / raw) robin wrote: > glen herrmannsfeldt wrote in message ... (snip) >>CDC used ones complement, > Yes, and they were a PITA. > But they are long-gone machines, as I said; some 20 + years. >>and as I understand it Univac still does. No comment on this one? I don't know which compilers are available for it, but it does seem to still be in production. (snip) >>S/360 and successors use sign magnitude for fixed point decimal >>arithmetic. > Off topic. We're referring to binary integers, not decimal Why is it off topic? Fortran allows any base greater than one. PL/I definitely allows decimal, most likely even for FIXED BINARY variables. Someone else will have to say what Ada allows, but those three groups are in the discussion, so those should determine what is on topic. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-12 1:15 ` glen herrmannsfeldt @ 2006-07-14 2:47 ` Randy Brukardt 2006-07-14 2:56 ` glen herrmannsfeldt 2006-07-18 1:48 ` robin 1 sibling, 1 reply; 314+ messages in thread From: Randy Brukardt @ 2006-07-14 2:47 UTC (permalink / raw) "glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message news:bK-dnWNbV9-w0CnZnZ2dnUVZ_qednZ2d@comcast.com... > >>and as I understand it Univac still does. > > No comment on this one? I don't know which compilers are > available for it, but it does seem to still be in production. We did an Ada 95 compiler for the U2200 series in the late 1990's and it surely was still available then, and it certainly was one's complement. That caused a lot of problems with the compiler front-end (it assumed two's complement in a number of places), and we uncovered a number of issues with the Ada Standard as well. Randy. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-14 2:47 ` Randy Brukardt @ 2006-07-14 2:56 ` glen herrmannsfeldt 0 siblings, 0 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-14 2:56 UTC (permalink / raw) Randy Brukardt wrote: (snip on ones complement machines) > We did an Ada 95 compiler for the U2200 series in the late 1990's and it > surely was still available then, and it certainly was one's complement. That > caused a lot of problems with the compiler front-end (it assumed two's > complement in a number of places), and we uncovered a number of issues with > the Ada Standard as well. Thanks. I tried to find it on the univac web site once, but they don't give details for most of the machines. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-12 1:15 ` glen herrmannsfeldt 2006-07-14 2:47 ` Randy Brukardt @ 2006-07-18 1:48 ` robin 2006-07-18 18:35 ` glen herrmannsfeldt 1 sibling, 1 reply; 314+ messages in thread From: robin @ 2006-07-18 1:48 UTC (permalink / raw) glen herrmannsfeldt wrote in message ... >robin wrote: > >> glen herrmannsfeldt wrote in message ... > >(snip) >>>CDC used ones complement, > >> Yes, and they were a PITA. >> But they are long-gone machines, as I said; some 20 + years. > >>>and as I understand it Univac still does. > >No comment on this one? I don't know which compilers are >available for it, but it does seem to still be in production. > >(snip) > >>>S/360 and successors use sign magnitude for fixed point decimal >>>arithmetic. > >> Off topic. We're referring to binary integers, not decimal > >Why is it off topic? For the reason that I said, viz., we're referring to binary integers. >Fortran allows any base greater than one. Let me know if you find a compiler that uses a base other than 2 for binary integers. >PL/I definitely allows decimal, Not for binary integers. > most likely even for FIXED BINARY variables. Might have some trouble doing indexing and logical operations. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-18 1:48 ` robin @ 2006-07-18 18:35 ` glen herrmannsfeldt 2006-07-19 14:35 ` BINARY INTEGER robin 0 siblings, 1 reply; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-18 18:35 UTC (permalink / raw) robin wrote: > glen herrmannsfeldt wrote in message ... (snip) >>PL/I definitely allows decimal, > Not for binary integers. I know of an IBM PL/I implementation that did FIXED DECIMAL using binary arithmetic. I don't know why doing FIXED BINARY using decimal arithmetic would be any less legal. Note that most implementations do FLOAT DECIMAL in binary. The scaling operations required for fixed point operations with the radix point not immediately to the right of the least significant digit are a little easier in the appropriate base, but that isn't required. >>most likely even for FIXED BINARY variables. > Might have some trouble doing indexing and logical operations. There have been machines that did indexing in decimal. Logical operations are defined for BIT strings. The conversion between FIXED BINARY implemented in decimal and BIT strings would have to satisfy the language requirements, but otherwise should be legal. (As is the conversion of FIXED DECIMAL to BIT strings.) -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: BINARY INTEGER 2006-07-18 18:35 ` glen herrmannsfeldt @ 2006-07-19 14:35 ` robin 0 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-07-19 14:35 UTC (permalink / raw) glen herrmannsfeldt wrote in message <49qdnUCa5515tCDZnZ2dnUVZ_uqdnZ2d@comcast.com>... >robin wrote: >> glen herrmannsfeldt wrote in message ... > >(snip) > >>>PL/I definitely allows decimal, > >> Not for binary integers. > >I know of an IBM PL/I implementation that did FIXED DECIMAL using >binary arithmetic. So do I, but that's irrelevant. It's off-topic. > I don't know why doing FIXED BINARY using >decimal arithmetic would be any less legal. General purpose computers need to have fixed-point binary as one of the basic operations, along with the logical operations. It follows that FIXED BINARY will be implemented courtesy of fixed-point binary hardware. > Note that most >implementations do FLOAT DECIMAL in binary. That's because most hardware is float binary, not float decimal. But again, that's irrelevant. > The scaling operations >required for fixed point operations with the radix point not immediately >to the right of the least significant digit are a little easier in the >appropriate base, but that isn't required. > >>>most likely even for FIXED BINARY variables. > >> Might have some trouble doing indexing and logical operations. > >There have been machines that did indexing in decimal. Which? >Logical operations are defined for BIT strings. In a general-purpose computer, logical operations are defined in terms of operations on binary integers. > The conversion between >FIXED BINARY implemented in decimal Who is going to implement binary operations in decimal? > and BIT strings would have to >satisfy the language requirements, but otherwise should be legal. >(As is the conversion of FIXED DECIMAL to BIT strings.) Way off topic. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-09 21:36 ` James Giles 2006-07-09 22:29 ` Martin Dowie 2006-07-09 23:19 ` glen herrmannsfeldt @ 2006-07-10 7:38 ` Dmitry A. Kazakov 2006-07-10 16:41 ` adaworks 2006-07-11 1:29 ` robin 2006-07-10 9:57 ` Georg Bauhaus 3 siblings, 2 replies; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-07-10 7:38 UTC (permalink / raw) On Sun, 09 Jul 2006 21:36:07 GMT, James Giles wrote: > All the stuff I elided is interesting. Many of the features are even > good things for languages to have. None of them were checks for > undefined variables. > > Given the Ada program fragment: > > COUNT, SUM : INTEGER; > > [... lots of code ...] > [... some paths through which assign to SUM ...] > [... and some don't ...] > > COUNT := SUM+1; -- is SUM defined here or not? > > In most Ada implementations, as for most other languages, all > the bit patterns in the representation of an INTEGER data type > are valid integer values. There is no bit pattern representing > NOI (Not An Integer) corresponding to the IEEE float idea of > a NAN. Determining whether a variable is defined or not is a > complex problem. It's made worse by the fact that the user can > make the error message go away (though not usually the problem) > by initializing the variable in the declaration. Yes, unfortunately this is one case where Ada's default is not safe. A better design would be to require explicit initialization for all variables of types with assignment. If the programmer wanted to leave something uninitialized, he should do it explicitly: Sum : Integer := 0; Count : Integer := <>; Bar : Integer; -- Error: no public default constructor visible But not-a-value is IMO not a good idea. Firstly, it is run-time, i.e. too late. Secondly, it does not work for all types. What is not-a-bit, not-an-array, not-a-user-defined-type? If types like Integer had user-definable constructors, one could easily achieve not-a-value functionality using subtypes. 
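[Editor's note: Dmitry's `:= <>` notation is proposed syntax, not existing Ada. The closest existing mechanism for forcing initialization at the point of declaration is a private type with unknown discriminants; a rough sketch, with all names invented.]

```ada
package Safe_Int is
   --  The "(<>)" unknown discriminants mean clients cannot declare
   --  an object of type Checked without initializing it.
   type Checked (<>) is private;
   function Make (Value : Integer) return Checked;
   function Value_Of (X : Checked) return Integer;
private
   type Checked is record
      V : Integer;
   end record;
end Safe_Int;

package body Safe_Int is
   function Make (Value : Integer) return Checked is
   begin
      return (V => Value);
   end Make;

   function Value_Of (X : Checked) return Integer is
   begin
      return X.V;
   end Value_Of;
end Safe_Int;
```

A client declaration `Sum : Safe_Int.Checked;` is then rejected at compile time, while `Sum : Safe_Int.Checked := Safe_Int.Make (0);` is accepted.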
-- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-10 7:38 ` Ada vs Fortran for scientific applications Dmitry A. Kazakov @ 2006-07-10 16:41 ` adaworks 2006-07-10 18:12 ` John W. Kennedy 2006-07-11 1:29 ` robin 1 sibling, 1 reply; 314+ messages in thread From: adaworks @ 2006-07-10 16:41 UTC (permalink / raw) "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:1kzktalo9krea$.z8n9wev45xct$.dlg@40tude.net... > On Sun, 09 Jul 2006 21:36:07 GMT, James Giles wrote: > > Yes, unfortunately this is one case where Ada's default is not safe. A > better design would be to require explicit initialization for all variables > of types with assignment. If the programmer wanted to leave something > uninitialized, he should do it explicitly: > > Sum : Integer := 0; > Count : Integer := <>; > Bar : Integer; -- Error: no public default constructor visible > Actually, explicit initialization with valid values, as shown above, can increase the likelihood of errors. Ada will detect, at run-time, a value that does not conform to the given type definition. A better solution is to use Ada's pragma Normalize_Scalars as defined in the Safety and Security Annex of the Ada Language Reference Manual. Further, it is not wise to use the predefined type, Integer, in most cases. Rather, one should define the type with ranges and constraints that conform exactly to the problem being solved. This latter approach is one of Ada's strengths. For example, type My_Number_1 is range -2**15 .. 2**15 -1; for My_Number_1'Size use 16; -- more representation clauses, where appropriate or even type My_Number_2 is range -473..473; for My_Number_2'Size use 12; for My_Number_2'Alignment use some-boundary-number -- more representation clauses, where appropriate Since this is a multi-language discussion, I am interested in the method some of the other languages might use to define My_Number_1 and My_Number_2. 
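[Editor's note: Richard's declarations, assembled into a compilable sketch. pragma Normalize_Scalars is a configuration pragma from the Safety and Security Annex; it is normally placed in a configuration file (e.g. GNAT's gnat.adc) and is shown inline here only for compactness. With it, an uninitialized My_Number_2 will, where possible, hold an out-of-range bit pattern, so evaluating it raises Constraint_Error rather than silently using garbage. The exception handler and output are illustrative additions.]

```ada
pragma Normalize_Scalars;  --  configuration pragma; see note above

with Ada.Text_IO; use Ada.Text_IO;

procedure Ranged_Demo is
   type My_Number_1 is range -2**15 .. 2**15 - 1;
   for My_Number_1'Size use 16;

   type My_Number_2 is range -473 .. 473;
   for My_Number_2'Size use 12;

   N : My_Number_2;   --  deliberately left uninitialized
begin
   --  Evaluating N is likely to find an out-of-range representation.
   Put_Line (My_Number_2'Image (N));
exception
   when Constraint_Error =>
      Put_Line ("caught use of an uninitialized My_Number_2");
end Ranged_Demo;
```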
In particular, since Fortran and PL/I advocates are contributing to the discussion, how do these languages approach this issue? I'm sure PL/I has a good way to do this. Richard Riehle ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-10 16:41 ` adaworks @ 2006-07-10 18:12 ` John W. Kennedy 2006-07-11 1:29 ` robin 0 siblings, 1 reply; 314+ messages in thread From: John W. Kennedy @ 2006-07-10 18:12 UTC (permalink / raw) adaworks@sbcglobal.net wrote: > "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message > news:1kzktalo9krea$.z8n9wev45xct$.dlg@40tude.net... >> On Sun, 09 Jul 2006 21:36:07 GMT, James Giles wrote: >> >> Yes, unfortunately this is one case where Ada's default is not safe. A >> better design would be to require explicit initialization for all variables >> of types with assignment. If the programmer wanted to leave something >> uninitialized, he should do it explicitly: >> >> Sum : Integer := 0; >> Count : Integer := <>; >> Bar : Integer; -- Error: no public default constructor visible >> > Actually, explicit initialization with valid values, as shown above, can > increase the likelihood of errors. Ada will detect, at run-time, a value > that does not conform to the given type definition. > > A better solution is to use Ada's > > pragma Normalize_Scalars > > as defined in the Safety and Security Annex of the Ada Language > Reference Manual. > > Further, it is not wise to use the predefined type, Integer, in most > cases. Rather, one should define the type with ranges and > constraints that conform exactly to the problem being solved. > This latter approach is one of Ada's strengths. For example, > > type My_Number_1 is range -2**15 .. 2**15 -1; > for My_Number_1'Size use 16; > -- more representation clauses, where appropriate > > or even > > type My_Number_2 is range -473..473; > for My_Number_2'Size use 12; > for My_Number_2'Alignment use some-boundary-number > -- more representation clauses, where appropriate > > Since this is a multi-language discussion, I am interested in the method > some of the other languages might use to define My_Number_1 and > My_Number_2. 
In particular, since Fortran and PL/I advocates are > contributing to the discussion, how do these languages approach this > issue? I'm sure PL/I has a good way to do this. PL/I is too old (1964) to have the range trick, but it does at least strive for portability by specifications such as: Declare My_number_2 fixed binary(8,0); If the (SIZE) option is turned on, it will at least check that the range is between -511 and +511. For alignment, it only has the options ALIGNED and UNALIGNED, both of which are implementation-defined (though on typical present-day hardware ALIGNED means aligned to the double-word/word/half-word/byte boundary appropriate to the data and UNALIGNED means aligned to the nearest byte, except for BIT, where ALIGNED means aligned to the nearest byte and UNALIGNED means aligned to the nearest bit). -- John W. Kennedy "The blind rulers of Logres Nourished the land on a fallacy of rational virtue." -- Charles Williams. "Taliessin through Logres: Prelude" ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-10 18:12 ` John W. Kennedy @ 2006-07-11 1:29 ` robin 2006-07-11 2:49 ` John W. Kennedy 0 siblings, 1 reply; 314+ messages in thread From: robin @ 2006-07-11 1:29 UTC (permalink / raw) John W. Kennedy wrote in message <2Nwsg.210$Dp4.31@fe09.lga>... >adaworks@sbcglobal.net wrote: >> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message >> news:1kzktalo9krea$.z8n9wev45xct$.dlg@40tude.net... >>> On Sun, 09 Jul 2006 21:36:07 GMT, James Giles wrote: >>> >PL/I is too old (1964) to have the range trick, Can be handled by ASSERT. > but it does at least >strive for portability by specifications such as: > > Declare My_number_2 fixed binary(8,0); > >If the (SIZE) option is turned on, it will at least check that the range >is between -511 and +511. Actually, -512 to +511. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 1:29 ` robin @ 2006-07-11 2:49 ` John W. Kennedy 2006-07-12 0:37 ` robin 0 siblings, 1 reply; 314+ messages in thread From: John W. Kennedy @ 2006-07-11 2:49 UTC (permalink / raw) robin wrote: > John W. Kennedy wrote in message <2Nwsg.210$Dp4.31@fe09.lga>... >> adaworks@sbcglobal.net wrote: >>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message >>> news:1kzktalo9krea$.z8n9wev45xct$.dlg@40tude.net... >>>> On Sun, 09 Jul 2006 21:36:07 GMT, James Giles wrote: >>>> >> PL/I is too old (1964) to have the range trick, > > Can be handled by ASSERT. > >> but it does at least >> strive for portability by specifications such as: >> >> Declare My_number_2 fixed binary(8,0); >> >> If the (SIZE) option is turned on, it will at least check that the range >> is between -511 and +511. > > Actually, -512 to +511. No, PL/I does not demand 2's-complement representation. -- John W. Kennedy "The blind rulers of Logres Nourished the land on a fallacy of rational virtue." -- Charles Williams. "Taliessin through Logres: Prelude" ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 2:49 ` John W. Kennedy @ 2006-07-12 0:37 ` robin 0 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-07-12 0:37 UTC (permalink / raw) John W. Kennedy wrote in message ... >robin wrote: >> John W. Kennedy wrote in message <2Nwsg.210$Dp4.31@fe09.lga>... >>> adaworks@sbcglobal.net wrote: >>>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message >>>> news:1kzktalo9krea$.z8n9wev45xct$.dlg@40tude.net... >>>>> On Sun, 09 Jul 2006 21:36:07 GMT, James Giles wrote: >>>>> >>> PL/I is too old (1964) to have the range trick, >> >> Can be handled by ASSERT. >> >>> but it does at least >>> strive for portability by specifications such as: >>> >>> Declare My_number_2 fixed binary(8,0); >>> >>> If the (SIZE) option is turned on, it will at least check that the range >>> is between -511 and +511. >> >> Actually, -512 to +511. > >No, PL/I does not demand 2's-complement representation. On the contrary, your statement is wrong. PL/I at least checks -512 to +511. It may, on a ones-complement or sign-magnitude machine, check a smaller range of -511 to +511. The S/360 and successors checked -512 to +511, not the smaller range that you mentioned. On the PC, the range checked is -512 to +511. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-10 7:38 ` Ada vs Fortran for scientific applications Dmitry A. Kazakov 2006-07-10 16:41 ` adaworks @ 2006-07-11 1:29 ` robin 2006-07-11 6:46 ` adaworks 1 sibling, 1 reply; 314+ messages in thread From: robin @ 2006-07-11 1:29 UTC (permalink / raw) Dmitry A. Kazakov wrote in message <1kzktalo9krea$.z8n9wev45xct$.dlg@40tude.net>... >On Sun, 09 Jul 2006 21:36:07 GMT, James Giles wrote: > >> In most Ada implementations, as for most other languages, all >> the bit patterns in the representation of an INTEGER data type >> are valid integer values. There is no bit pattern representing >> NOI (Not An Integer) corresponding to the IEEE float idea of >> a NAN. Determining whether a variable is defined or not is a >> complex problem. It's made worse by the fact that the user can >> make the error message go away (though not usually the problem) >> by initializing the variable in the declaration. > >Yes, unfortunately this is one case, where Ada's default is not safe. A >better design would be to require explicit initialization for all variables >of types with assignment. If the programmer wanted to leave something >uninitialized, he should do it explicitly: Compilers can check for uninitialized variables during compilation. Some Fortran compilers do, including Salford (from FORTRAN 77 days to the present). ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 1:29 ` robin @ 2006-07-11 6:46 ` adaworks 2006-07-11 7:30 ` James Giles ` (3 more replies) 0 siblings, 4 replies; 314+ messages in thread From: adaworks @ 2006-07-11 6:46 UTC (permalink / raw) "robin" <robin_v@bigpond.com> wrote in message news:z9Dsg.2740$tE5.2374@news-server.bigpond.net.au... > > Compilers can check for uninitialized variables during compilation. > True. In fact, Ada compilers issue a warning for any variable that is used before a value is assigned to it. If a parameter is included in a method (function/procedure/subroutine) and never referenced, a warning is issued. Sometimes the pragma Normalize_Scalars is useful. Often, the correct design is to leave variables uninitialized until they are used so an exception can be raised. However, since the compiler will emit a warning about variables that have never been assigned a value in an algorithm that tries to use it, no harm is really done since the careful programmer will not release a program with warnings in it. So, I am assuming, Robin, that PL/I does something similar: a warning for any variable that is used in a program before a value is assigned to it. Richard Riehle ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 6:46 ` adaworks @ 2006-07-11 7:30 ` James Giles 2006-07-11 20:46 ` Simon Wright 2006-07-11 11:44 ` Jeffrey Creem ` (2 subsequent siblings) 3 siblings, 1 reply; 314+ messages in thread From: James Giles @ 2006-07-11 7:30 UTC (permalink / raw) adaworks@sbcglobal.net wrote: ... > [...] However, since the compiler will emit a warning > about variables that have never been assigned a value in an algorithm > that tries to use it, no harm is really done since the careful > programmer will not release a program with warnings in it. It's not always possible to detect that at compile time. The execution of the code may depend on input data. I'm guessing that your implementation emits a warning if it detects that a *possible* control path includes a use without a previous definition (that's presumably why it's a warning and not fatal). Nor is initialization a very good solution. Well, unless you have a value you can initialize with that's obviously wrong (that's why IEEE has NANs). Initializing with an arbitrary plausible value often conceals errors and leads to plausible wrong answers. I still remember many cases where people developed a code in an implementation that always set all of memory to zero, but subsequently moved to an implementation that did not. When their code crashed they wanted to know how to get their new implementation to clear all of memory so they could get their old "correct" answers. I've seldom seen cases of this where the old answers were really correct. Runtime testing can be expensive. But there are cases where it's the only reliable way. -- J. Giles "I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies." -- C. A. R. Hoare ^ permalink raw reply [flat|nested] 314+ messages in thread
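Giles's distinction between plausible and "obviously wrong" initial values is easy to demonstrate with IEEE NaN. A small sketch (Java used for illustration; the thread's Ada and Fortran behave the same way with IEEE floats): a forgotten assignment papered over with 0.0 produces a plausible wrong answer, while one papered over with NaN poisons everything computed from it, so the mistake surfaces.

```java
// Illustration of Giles's point: initialize a "forgotten" variable
// with a plausible value and the error hides; initialize it with NaN
// and the error propagates into every dependent result.
public class NanDemo {
    public static void main(String[] args) {
        double plausible = 0.0;          // forgotten assignment, concealed
        double poisoned  = Double.NaN;   // forgotten assignment, visible

        double a = plausible * 3.0 + 1.0;  // looks fine, is silently wrong
        double b = poisoned  * 3.0 + 1.0;  // NaN: the mistake is detectable

        System.out.println(a);               // prints 1.0
        System.out.println(Double.isNaN(b)); // prints true
    }
}
```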
* Re: Ada vs Fortran for scientific applications 2006-07-11 7:30 ` James Giles @ 2006-07-11 20:46 ` Simon Wright 0 siblings, 0 replies; 314+ messages in thread From: Simon Wright @ 2006-07-11 20:46 UTC (permalink / raw) "James Giles" <jamesgiles@worldnet.att.net> writes: > Nor is initialization a very good solution. Well, unless > you have a value you can initialize with that's obviously > wrong (that's why IEEE has NANs). Initializing with > an arbitrary plausible value often conceals errors and > leads to plausible wrong answers. The Ada compiler we probably have in mind (GNAT) can: * set otherwise-uninitialised variables to an out-of-range value, where possible (eg, for a Boolean, set it to 255, since only 0 and 1 are legal values) * check validity even where the rules of the language would normally mean you don't need to, eg if a procedure takes a Boolean parameter, no need to check, because it has to be correct, right?! ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 6:46 ` adaworks 2006-07-11 7:30 ` James Giles @ 2006-07-11 11:44 ` Jeffrey Creem 2006-07-11 14:51 ` adaworks 2006-07-11 13:53 ` Tom Linden 2006-07-11 14:46 ` John W. Kennedy 3 siblings, 1 reply; 314+ messages in thread From: Jeffrey Creem @ 2006-07-11 11:44 UTC (permalink / raw) adaworks@sbcglobal.net wrote: > "robin" <robin_v@bigpond.com> wrote in message > news:z9Dsg.2740$tE5.2374@news-server.bigpond.net.au... > >>Compilers can check for uninitialized variables during compilation. >> > > True. In fact, Ada compilers issue a warning for any variable > that is used before a value is assigned to it. If a parameter is > included in a method (function/procedure/subroutine) and never > referenced, a warning is issued. Which LRM section requires that? (Since "Ada compilers" do it, I assume it is a requirement of the LRM?) Since I'd like to have it removed or amended to require meaningful operation of pragma suppress warnings. I'd guess that at least as many bugs are created by people adding := 0 to suppress the spurious "**might be used before assigned**" warnings that some of the compilers I use spit out as come up from the real uninitialized case. I suppose that at least the behaviour when the code hits the case where it is not assigned and it should have been is the same every time (generally) vs. the real uninitialized case, but if the path that really triggers the "not assigned meaningfully" case is rare it probably does not make the bug any more difficult to find. Truth be told, it takes an awful lot of work on the part of a compiler to really tell if something *will* be used before it is assigned. I strongly suspect of course that each person that has asserted that language X does something really means that "compiler X, sometimes Y and often Z" does it. And further, that those compilers don't "do it" but that they do something close to it. 
Compilers that I have used that have the "IS used before it is initialized" warning generally provide meaningful information, but just as often they spit out "might be used before initialized" and in the vast majority of cases, the "might" is wrong. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 11:44 ` Jeffrey Creem @ 2006-07-11 14:51 ` adaworks 2006-07-12 1:29 ` Jeffrey Creem 0 siblings, 1 reply; 314+ messages in thread From: adaworks @ 2006-07-11 14:51 UTC (permalink / raw) "Jeffrey Creem" <jeff@thecreems.com> wrote in message news:ghfco3-4am.ln1@newserver.thecreems.com... > adaworks@sbcglobal.net wrote: >> "robin" <robin_v@bigpond.com> wrote in message >> news:z9Dsg.2740$tE5.2374@news-server.bigpond.net.au... >> >>>Compilers can check for uninitialized variables during compilation. >>> >> >> True. In fact, Ada compilers issue a warning for any variable >> that is used before a value is assigned to it. If a parameter is >> included in a method (function/procedure/subroutine) and never >> referenced, a warning is issued. > > Which LRM section requires that? (Since "Ada compilers" do it, I assume it is > a requirement of the LRM?) Since I'd like to have it removed or amended to > require meaningful operation of pragma suppress warnings. > Notice that I used the word "warning." An Ada compiler can detect a lot of information due to the very nature of the language. In fact, one of the design goals of Ada is to detect the maximum number of errors as early in the development process as possible. Consider the following code: with Ada.Integer_Text_IO; procedure Uninitialized_Variable is function F1 (A : Integer; B : Float) return Float is Temp : Float; begin return Temp; end F1; X, Y, Z : Integer; -- uninitialized variables begin Y := X; Ada.Integer_Text_IO.Get(X); Z := X; end Uninitialized_Variable; The compiler issues a warning, "...X may be used before it has a value." For the function, it issues a warning that we have never referenced the parameters A and B. It also warns me that Temp is never assigned a value. These are not fatal errors. The Ada Language Reference Manual does not require this warning. However, as mentioned above, the design of the language makes it easy to issue such warnings. 
Those of us who use Ada quite a bit are happy to have a language which provides the maximum of information we need to avoid making foolish mistakes. This may not be the case with other languages, but it is the case with Ada. Richard Riehle ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 14:51 ` adaworks @ 2006-07-12 1:29 ` Jeffrey Creem 0 siblings, 0 replies; 314+ messages in thread From: Jeffrey Creem @ 2006-07-12 1:29 UTC (permalink / raw) adaworks@sbcglobal.net wrote: > "Jeffrey Creem" <jeff@thecreems.com> wrote in message > news:ghfco3-4am.ln1@newserver.thecreems.com... > >>adaworks@sbcglobal.net wrote: >> >>>"robin" <robin_v@bigpond.com> wrote in message >>>news:z9Dsg.2740$tE5.2374@news-server.bigpond.net.au... >>> >>> >>>>Compilers can check for uninitialized variables during compilation. >>>> >>> >>>True. In fact, Ada compilers issue a warning for any variable >>>that is used before a value is assigned to it. If a parameter is >>>included in a method (function/procedure/subroutine) and never >>>referenced, a warning is issued. >> >>Which LRM section requires that? (Since "Ada compilers" do it, I assume it is >>a requirement of the LRM?) Since I'd like to have it removed or ammended to >>require meaningful operation of pragma suppress warnings. >> > > Notice that I used the word "warning." An Ada compiler can detect a lot > of information due to very nature of the language. In fact, one of the design > goals of Ada is to detect the maximum number of errors as early in the > development process as possible. Consider the following code: > I noticed that you said "Warning". I also noticed that you said "Ada compilers". Not some Ada compilers, or compiler X or GNAT. One can only say "Ada compilers" do something in comparison to other compilers if it is a requirement of the language. Now, to be fair, half of the discussions in usenet say things like "Ada doesn't" or "C has a function" when what they really mean is that some compiler that they used did something. stuff deleted...and then > Ada quite a bit are happy to have a language which provides the maximum > of information we need to avoid making foolish mistakes. 
This may not be the > case with other languages, but it is the case with Ada. > > Richard Riehle Actually, I use it all the time. In fact, it is the only thing I program in. While I agree that I usually like extra information, I can certainly say that the quality, frequency and quantity of false positives for these warnings varies quite a bit from vendor to vendor, with the rate of false positives being so high on some compilers as to render the warning useless. Again, probably not different than any other language, but the lack of a standardized way of suppressing specific warnings is certainly a pain when one is trying to have a single common code base that compiles without warning across 4 or 5 architectures and vendors. Going in and adding a few hundred := 0 everywhere to make one or two of the chatty ones quiet is not something that can always be done when a project is far along in its lifecycle. The key point here is that this discussion, like most other language comparison discussions, has degraded into a comparison of implementations while asserting that it is a comparison of languages. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 6:46 ` adaworks 2006-07-11 7:30 ` James Giles 2006-07-11 11:44 ` Jeffrey Creem @ 2006-07-11 13:53 ` Tom Linden 2006-07-11 15:02 ` adaworks 2006-07-11 14:46 ` John W. Kennedy 3 siblings, 1 reply; 314+ messages in thread From: Tom Linden @ 2006-07-11 13:53 UTC (permalink / raw) On Mon, 10 Jul 2006 23:46:45 -0700, <adaworks@sbcglobal.net> wrote: > > "robin" <robin_v@bigpond.com> wrote in message > news:z9Dsg.2740$tE5.2374@news-server.bigpond.net.au... >> >> Compilers can check for uninitialized variables during compilation. >> > True. In fact, Ada compilers issue a warning for any variable > that is used before a value is assigned to it. If a parameter is > included in a method (function/procedure/subroutine) and never > referenced, a warning is issued. Sometimes the pragma > Normalize_Scalars is useful. Often, the correct design is to > leave variables uninitialized until they are used so an exception > can be raised. However, since the compiler will emit a warning > about variables that have never been assigned a value in an algorithm > that tries to use it, no harm is really done since the careful programmer > will not release a program with warnings in it. In general, this is not possible, and it is somewhat silly to have the compiler issue such messages, because on the average it will be wrong as often as it is right. This cannot be done at compile-time but must be done at run-time, and it requires the compiler to generate a lot of machinery to produce such mediocre messages. What we did in PL/I was to produce in the cross-reference listing information on where a variable was referenced or assigned, but this was also somewhat incomplete because it requires a further analysis of aliasing. My view is that it is of dubious value. > > So, I am assuming, Robin, that PL/I does something similar: a warning > for any variable that is used in a program before a value is assigned to > it. 
> > Richard Riehle > > ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 13:53 ` Tom Linden @ 2006-07-11 15:02 ` adaworks 2006-07-11 15:35 ` Tom Linden 0 siblings, 1 reply; 314+ messages in thread From: adaworks @ 2006-07-11 15:02 UTC (permalink / raw) "Tom Linden" <tom@kednos.com> wrote in message news:op.tci16qgszgicya@hyrrokkin... On Mon, 10 Jul 2006 23:46:45 -0700, <adaworks@sbcglobal.net> wrote: > > "robin" <robin_v@bigpond.com> wrote in message > news:z9Dsg.2740$tE5.2374@news-server.bigpond.net.au... >> >> Compilers can check for uninitialized variables during compilation. >> > True. In fact, Ada compilers issue a warning for any variable > that is used before a value is assigned to it. If a parameter is > included in a method (function/procedure/subroutine) and never > referenced, a warning is issued. Sometimes the pragma > Normalize_Scalars is useful. Often, the correct design is to > leave variables uninitialized until they are used so an exception > can be raised. However, since the compiler will emit a warning > about variables that have never been assigned a value in an algorithm > that tries to use it, no harm is really done since the careful programmer > will not release a program with warnings in it. TL>In general, this is not possible, and it is somewhat silly to have the TL>compiler issue such messages, because on the average it will be TL>wrong as often as it is right. This can not be done at compile-time TL>but must be done at run-time and it requires the compiler to TL>generate a lot of machinery to produce such mediocre messages. TL>Wht we did in PL/I was to produce in the cross-reference TL>listing information on where a variable was referenced or TL>assigned, but this was also somewhat incomplete because TL>it requires a further analysis of aliasing. My view is that it is TL>of dubious value. TL> It might be of dubious value in PL/I, but it is quite helpful in Ada. These warnings often help with better structuring of a program. 
In a large, complex program, they prevent errors that result from simple little oversights we all make in the normal course of programming. These kinds of warnings extend to methods declared, but never invoked. For example, in the sample code in the earlier post, if I had never used Ada.Integer_Text_IO in my program, the compiler would have informed me that I had a package in scope that I never used. I have never known the compiler to issue a message that was wrong. On the other hand, it is a warning, not a fatal error because I really might want to use that artifact in a future iteration of my development. When I do use it, the warning goes away. I personally find this very helpful. Others may find it annoying. In the long run, it is a good feature when developing safety-critical software where extraneous code is not a good thing. Richard Riehle ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 15:02 ` adaworks @ 2006-07-11 15:35 ` Tom Linden 2006-07-11 16:54 ` Alex R. Mosteo 2006-07-11 17:43 ` adaworks 0 siblings, 2 replies; 314+ messages in thread From: Tom Linden @ 2006-07-11 15:35 UTC (permalink / raw) On Tue, 11 Jul 2006 08:02:47 -0700, <adaworks@sbcglobal.net> wrote: > > "Tom Linden" <tom@kednos.com> wrote in message > news:op.tci16qgszgicya@hyrrokkin... > On Mon, 10 Jul 2006 23:46:45 -0700, <adaworks@sbcglobal.net> wrote: > >> >> "robin" <robin_v@bigpond.com> wrote in message >> news:z9Dsg.2740$tE5.2374@news-server.bigpond.net.au... >>> >>> Compilers can check for uninitialized variables during compilation. >>> >> True. In fact, Ada compilers issue a warning for any variable >> that is used before a value is assigned to it. If a parameter is >> included in a method (function/procedure/subroutine) and never >> referenced, a warning is issued. Sometimes the pragma >> Normalize_Scalars is useful. Often, the correct design is to >> leave variables uninitialized until they are used so an exception >> can be raised. However, since the compiler will emit a warning >> about variables that have never been assigned a value in an algorithm >> that tries to use it, no harm is really done since the careful >> programmer >> will not release a program with warnings in it. > > TL>In general, this is not possible, and it is somewhat silly to have the > TL>compiler issue such messages, because on the average it will be > TL>wrong as often as it is right. This can not be done at compile-time > TL>but must be done at run-time and it requires the compiler to > TL>generate a lot of machinery to produce such mediocre messages. > TL>Wht we did in PL/I was to produce in the cross-reference > TL>listing information on where a variable was referenced or > TL>assigned, but this was also somewhat incomplete because > TL>it requires a further analysis of aliasing. My view is that it is > TL>of dubious value. 
> TL> > > It might be of dubious value in PL/I, but it is quite helpful > in Ada. These warnings often help with better structuring of a > program. In a large, complex program, they prevent errors that > result from simple little oversights we all make in the normal > course of programming. > > These kinds of warnings extend to methods declared, but never > invoked. For example, in the sample code in the earlier post, > if I had never used Ada.Integer_Text_IO in my program, the > compiler would have informed me that I had a package in > scope that I never used. I have never known the compiler > to issue a message that was wrong. On the other hand, > it is a warning, not a fatal error because I really might want > to use that artifact in a future iteration of my development. > When I do use it, the warning goes away. I personally > find this very helpful. Others may find it annoying. In > the long run, it is a good feature when developing > safety-critical software where extraneous code is not a > good thing. My comments were not restricted to PL/I, but apply generally to any compiler, and specifically Ada was intended. Since I had put in the scaffolding for producing the info in cross reference listings I also played with generating precisely the sort of warnings that you refer to, but found it generated too much "clutter" of questionable value. That was my view, FWIW > > Richard Riehle > > > ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 15:35 ` Tom Linden @ 2006-07-11 16:54 ` Alex R. Mosteo 2006-07-11 17:43 ` adaworks 1 sibling, 0 replies; 314+ messages in thread From: Alex R. Mosteo @ 2006-07-11 16:54 UTC (permalink / raw) Tom Linden wrote: > On Tue, 11 Jul 2006 08:02:47 -0700, <adaworks@sbcglobal.net> wrote: > >> >> "Tom Linden" <tom@kednos.com> wrote in message >> news:op.tci16qgszgicya@hyrrokkin... >> On Mon, 10 Jul 2006 23:46:45 -0700, <adaworks@sbcglobal.net> wrote: >> >>> >>> "robin" <robin_v@bigpond.com> wrote in message >>> news:z9Dsg.2740$tE5.2374@news-server.bigpond.net.au... >>>> >>>> Compilers can check for uninitialized variables during compilation. >>>> >>> True. In fact, Ada compilers issue a warning for any variable >>> that is used before a value is assigned to it. If a parameter is >>> included in a method (function/procedure/subroutine) and never >>> referenced, a warning is issued. Sometimes the pragma >>> Normalize_Scalars is useful. Often, the correct design is to >>> leave variables uninitialized until they are used so an exception >>> can be raised. However, since the compiler will emit a warning >>> about variables that have never been assigned a value in an algorithm >>> that tries to use it, no harm is really done since the careful >>> programmer >>> will not release a program with warnings in it. >> >> TL>In general, this is not possible, and it is somewhat silly to have the >> TL>compiler issue such messages, because on the average it will be >> TL>wrong as often as it is right. This can not be done at compile-time >> TL>but must be done at run-time and it requires the compiler to >> TL>generate a lot of machinery to produce such mediocre messages. >> TL>Wht we did in PL/I was to produce in the cross-reference >> TL>listing information on where a variable was referenced or >> TL>assigned, but this was also somewhat incomplete because >> TL>it requires a further analysis of aliasing. 
My view is that it is >> TL>of dubious value. >> TL> >> >> It might be of dubious value in PL/I, but it is quite helpful >> in Ada. These warnings often help with better structuring of a >> program. In a large, complex program, they prevent errors that >> result from simple little oversights we all make in the normal >> course of programming. >> >> These kinds of warnings extend to methods declared, but never >> invoked. For example, in the sample code in the earlier post, >> if I had never used Ada.Integer_Text_IO in my program, the >> compiler would have informed me that I had a package in >> scope that I never used. I have never known the compiler >> to issue a message that was wrong. On the other hand, >> it is a warning, not a fatal error because I really might want >> to use that artifact in a future iteration of my development. >> When I do use it, the warning goes away. I personally >> find this very helpful. Others may find it annoying. In >> the long run, it is a good feature when developing >> safety-critical software where extraneous code is not a >> good thing. > > My comments were not restricted to PL/I, but apply generally to > any compiler, and specifically Ada was intended. Since I had put > in the scaffolding for producing the info in cross reference listings > I also played with generating precisely the sort of warnings that you > refer to, but found it generated too much "clutter" of questionable > value. That was my view, FWIW I have to disagree. Warnings for unused entities have helped me in many instances. Forgotten calls, forgotten updates to loop control variables are the obvious examples. These warnings don't usually reveal subtle bugs that would go unnoticed, but evident errors that would be caught in testing, but this undoubtedly reduces wasted time. In my experience, 90% of these warnings did correspond to a real bug and the rest were due to removed code, so the unused entities could be removed/commented as well, which I prefer. 
In particular, I know the precise warnings Richard is referring to (I guess we use the same compiler), and they have never been clutter for me. I can't imagine code generating lots of these warnings not being a maintenance nightmare. In general, compiler warnings that are well done, which basically means no false positives that lead to ignoring or disabling them, are another excellent tool for better development. My Ada compiler has never disappointed me in this regard. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 15:35 ` Tom Linden 2006-07-11 16:54 ` Alex R. Mosteo @ 2006-07-11 17:43 ` adaworks 2006-07-11 18:15 ` Ed Falis 1 sibling, 1 reply; 314+ messages in thread From: adaworks @ 2006-07-11 17:43 UTC (permalink / raw) "Tom Linden" <tom@kednos.com> wrote in message news:op.tci6xqiizgicya@hyrrokkin... On Tue, 11 Jul 2006 08:02:47 -0700, <adaworks@sbcglobal.net> wrote: My comments were not restricted to PL/I, but apply generally to any compiler, and specifically Ada was intended. Since I had put in the scaffolding for producing the info in cross reference listings I also played with generating precisely the sort of warnings that you refer to, but found it generated too much "clutter" of questionable value. That was my view, FWIW > I think you are right about clutter in programs that contain a lot of global variables. For example, in a COBOL program, where the data is organized (pre-OOCOBOL) in a DATA DIVISION, these kinds of warnings would be clutter. In other languages with global variables the same would be true. I recall early Ada programs where the designers would create package COMMON is -- Fortran programmers package DATA_DIVISION is -- COBOL Programmers package COMPOOL is -- CMS-2 and Jovial programmers thereby thwarting the good programming practices available to them. However, when these kinds of messages are localized to the specific module to which they apply, they are quite helpful. This is especially true when the language separates the concerns of scope and visibility, as Ada does. We can know that an artifact in scope, is never referenced through a warning. From that warning we might ask whether we have that artifact at the right place in a program design. The above example of scope is not simply academic. I recall a very large Ada project, a weapon system of several million lines of code, where the compiler did not issue such warnings (in the mid-1980's) even though it could have. 
As a consequence, many packages were "with'ed" too early in the design. The programs worked fine, but those artifacts could have been brought into scope much later in the design (usually at the package body level instead of at the specification level) to improve compilation time and later debugging. When the group doing development finally realized this, they were able to move those artifacts to the correct level of scope. However, that early compiler failed to notify them, even though it could have been designed to do so. No modern Ada compiler would fail to take advantage of this capability. In part, it is Ada's package module and powerful separate compilation capability that helps make this a worthwhile feature of a compiler. Most modern languages, including Java and C++, fall short in this regard. Even Eiffel, a language I like a lot, does not support this kind of thing as well as I would prefer. If the warnings are "clutter," so be it. I don't like to release a program for production until the clutter is attended to. I would rather have the clutter than the uninitialized variable, the inability to produce a program where every artifact was exactly where it needed to be, and the knowledge that I did not have formal parameters in a method that I forgot to use in the computation in that method. FWIW. Richard Riehle ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 17:43 ` adaworks @ 2006-07-11 18:15 ` Ed Falis 0 siblings, 0 replies; 314+ messages in thread From: Ed Falis @ 2006-07-11 18:15 UTC (permalink / raw) Although the technique described in this paper is not static/compile time, it may be of interest to this thread: http://www.adacore.com/2006/06/02/exposing-uninitialized-variables-strengthening-and-extending-run-time-checks-in-ada/ - Ed ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 6:46 ` adaworks ` (2 preceding siblings ...) 2006-07-11 13:53 ` Tom Linden @ 2006-07-11 14:46 ` John W. Kennedy 2006-07-11 14:45 ` Tom Linden ` (3 more replies) 3 siblings, 4 replies; 314+ messages in thread From: John W. Kennedy @ 2006-07-11 14:46 UTC (permalink / raw) adaworks@sbcglobal.net wrote: > "robin" <robin_v@bigpond.com> wrote in message > news:z9Dsg.2740$tE5.2374@news-server.bigpond.net.au... >> Compilers can check for uninitialized variables during compilation. >> > True. In fact, Ada compilers issue a warning for any variable > that is used before a value is assigned to it. If a parameter is > included in a method (function/procedure/subroutine) and never > referenced, a warning is issued. Sometimes the pragma > Normalize_Scalars is useful. Often, the correct design is to > leave variables uninitialized until they are used so an exception > can be raised. However, since the compiler will emit a warning > about variables that have never been assigned a value in an algorithm > that tries to use it, no harm is really done since the careful programmer > will not release a program with warnings in it. > > So, I am assuming, Robin, that PL/I does something similar: a warning > for any variable that is used in a program before a value is assigned to it. Smart compilers may try to do so, as a by-product of optimization, but PL/I is too old a design for this sort of thing to be taken for granted; there are unavoidable holes, because PL/I A) passes by reference, B) does not have in-out parameter declaration, and C) uses separate compilation and a dumb linker, making global flow analysis impossible. (Java is the only language I know of that will actually fail the compile unless it /knows/ that every variable is initialized before use in every possible path.) -- John W. Kennedy "The blind rulers of Logres Nourished the land on a fallacy of rational virtue." -- Charles Williams. 
"Taliessin through Logres: Prelude" ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 14:46 ` John W. Kennedy @ 2006-07-11 14:45 ` Tom Linden 2006-07-11 17:06 ` John W. Kennedy 2006-07-11 15:16 ` glen herrmannsfeldt ` (2 subsequent siblings) 3 siblings, 1 reply; 314+ messages in thread From: Tom Linden @ 2006-07-11 14:45 UTC (permalink / raw) On Tue, 11 Jul 2006 07:46:04 -0700, John W. Kennedy <jwkenne@attglobal.net> wrote: > adaworks@sbcglobal.net wrote: >> "robin" <robin_v@bigpond.com> wrote in message >> news:z9Dsg.2740$tE5.2374@news-server.bigpond.net.au... >>> Compilers can check for uninitialized variables during compilation. >>> >> True. In fact, Ada compilers issue a warning for any variable >> that is used before a value is assigned to it. If a parameter is >> included in a method (function/procedure/subroutine) and never >> referenced, a warning is issued. Sometimes the pragma >> Normalize_Scalars is useful. Often, the correct design is to >> leave variables uninitialized until they are used so an exception >> can be raised. However, since the compiler will emit a warning >> about variables that have never been assigned a value in an algorithm >> that tries to use it, no harm is really done since the careful >> programmer >> will not release a program with warnings in it. >> So, I am assuming, Robin, that PL/I does something similar: a warning >> for any variable that is used in a program before a value is assigned >> to it. > > Smart compilers may try to do so, as a by-product of optimization, but > PL/I is too old a design for this sort of thing to be taken for granted; > there are unavoidable holes, because PL/I A) passes by reference, B) > does not have in-out parameter declaration, and C) uses separate > compilation and a dumb linker, making global flow analysis impossible. PL/I on VAX has done global flow analysis since 1980. 
> > (Java is the only language I know of that will actually fail the compile > unless it /knows/ that every variable is initialized before use in every > possible path.) > ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 14:45 ` Tom Linden @ 2006-07-11 17:06 ` John W. Kennedy 0 siblings, 0 replies; 314+ messages in thread From: John W. Kennedy @ 2006-07-11 17:06 UTC (permalink / raw) Tom Linden wrote: > PL/I on VAX has done global flow analysis since 1980. Any language can have /a/ compiler that does global flow analysis, even Fortran. But as long as the PL/I language has DECLARE entry ENTRY (attributes, attributes....), the language cannot be regarded as actually supporting such a thing, even if, per accidens, an individual compiler does. To turn it the other way, the most important Ada compiler (GNAT) does not have anything resembling the "repository" envisioned in the Ada LRM. But it achieves the appearance of one, as does (in its way) Java, because the Ada and Java language definitions wouldn't work otherwise. -- John W. Kennedy "The blind rulers of Logres Nourished the land on a fallacy of rational virtue." -- Charles Williams. "Taliessin through Logres: Prelude" ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 14:46 ` John W. Kennedy 2006-07-11 14:45 ` Tom Linden @ 2006-07-11 15:16 ` glen herrmannsfeldt 2006-07-11 15:55 ` Richard E Maine 2006-07-11 17:08 ` Jean-Pierre Rosen 2006-07-12 0:37 ` robin 3 siblings, 1 reply; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-11 15:16 UTC (permalink / raw) John W. Kennedy wrote: (snip regarding compile time detection of references to uninitialized variables.) > (Java is the only language I know of that will actually fail the compile > unless it /knows/ that every variable is initialized before use in every > possible path.) I am not sure what the language specification says, but Sun compilers do that. (Note: for scalars only, not arrays.) I believe the compilers are getting better. I used to have problems with things like: if(i==3) x=1; else x=2; not being recognized as initializing x, but as I understand it compilers now recognize that case. I believe there are still cases where a variable is sure to be initialized but the compiler doesn't notice it. Also, tests are done on the class (compiled) file at load time to protect against many errors. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 15:16 ` glen herrmannsfeldt @ 2006-07-11 15:55 ` Richard E Maine 2006-07-11 16:21 ` Ed Falis 2006-07-12 3:33 ` James Dennett 0 siblings, 2 replies; 314+ messages in thread From: Richard E Maine @ 2006-07-11 15:55 UTC (permalink / raw) glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote: > John W. Kennedy wrote: > > > (Java is the only language I know of that will actually fail the compile > > unless it /knows/ that every variable is initialized before use in every > > possible path.) > > I am not sure what the language specification says, but Sun compilers > do that. (Note: for scalars only, not arrays.) I don't believe you. Reread what John actually wrote instead of what you were probably thinking about. Do you really claim that the Sun compiler will "fail compilation" (which is not the same thing as give a warning) if it does not "know" that every scalar variable is set before use in every possible path? An awful lot of Fortran codes would fail compilation in that case - enough of them that Sun would feel the backlash rather strongly. It would also violate the Fortran standard, as those codes are perfectly conforming to the Fortran standard, which only requires that the variables be defined when they are actually referenced (i.e. in the path that is taken - not in some other path that is statically possible, but is not taken). That Sun might have a warning, I could believe (though I expect such a warning to be verbose enough that few people would use it). I could also even more believe that Sun might fail compilation if it could prove at compile time that a variable would be used without being defined. But neither of those are equivalent to what John said. -- Richard Maine | Good judgment comes from experience; email: my first.last at org.domain| experience comes from bad judgment. org: nasa, domain: gov | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 15:55 ` Richard E Maine @ 2006-07-11 16:21 ` Ed Falis 2006-07-11 16:28 ` Richard E Maine 2006-07-12 3:33 ` James Dennett 1 sibling, 1 reply; 314+ messages in thread From: Ed Falis @ 2006-07-11 16:21 UTC (permalink / raw) Richard E Maine wrote: > glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote: > >> John W. Kennedy wrote: >> >> > (Java is the only language I know of that will actually fail the > compile >> > unless it /knows/ that every variable is initialized before use in > every >> > possible path.) >> >> I am not sure what the language specification says, but Sun > compilers >> do that. (Note: for scalars only, not arrays.) > > I don't believe you. Reread what John actually wrote instead of what > you > were probably thinking about. Do you really claim that the Sun > compiler > will "fail compilation" (which is not the same thing as give a > warning) > if it does not "know" that every scalar variable is set before use in > every possible path. > > An awful lot of Fortran codes would fail compilation in that case - > enough of them that Sun would feel the backlash rather strongly. I believe he meant the Sun Java compiler - not a Sun Fortran compiler. - Ed ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 16:21 ` Ed Falis @ 2006-07-11 16:28 ` Richard E Maine 0 siblings, 0 replies; 314+ messages in thread From: Richard E Maine @ 2006-07-11 16:28 UTC (permalink / raw) Ed Falis <falis@verizon.net> wrote: > Richard E Maine wrote: > > I don't believe you... > > An awful lot of Fortran codes would fail compilation in that case - > > I believe he meant the Sun Java compiler - not a Sun Fortran compiler. Ah. I think you are right. I should read more carefully myself. :-( -- Richard Maine | Good judgment comes from experience; email: my first.last at org.domain| experience comes from bad judgment. org: nasa, domain: gov | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 15:55 ` Richard E Maine 2006-07-11 16:21 ` Ed Falis @ 2006-07-12 3:33 ` James Dennett 1 sibling, 0 replies; 314+ messages in thread From: James Dennett @ 2006-07-12 3:33 UTC (permalink / raw) Richard E Maine wrote: > glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote: > >> John W. Kennedy wrote: >> >>> (Java is the only language I know of that will actually fail the compile >>> unless it /knows/ that every variable is initialized before use in every >>> possible path.) >> I am not sure what the language specification says, but Sun compilers >> do that. (Note: for scalars only, not arrays.) > > I don't believe you. Reread what John actually wrote instead of what you > were probably thinking about. Do you really claim that the Sun compiler > will "fail compilation" (which is not the same thing as give a warning) > if it does not "know" that every scalar variable is set before use in > every possible path. The Java language specification defines exact rules for when a variable is considered "definitely" initialized, and requires a compilation error if a local variable is used before being definitely assigned a value according to these rules (even if the compiler is so smart that it can recognize that a value is actually assigned in such a way that the specified rules don't detect it). (But maybe you didn't notice that this was a point about Java -- specifically that it is unusual, if not unique, in this respect.) -- James ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 14:46 ` John W. Kennedy 2006-07-11 14:45 ` Tom Linden 2006-07-11 15:16 ` glen herrmannsfeldt @ 2006-07-11 17:08 ` Jean-Pierre Rosen 2006-07-11 22:44 ` glen herrmannsfeldt 2006-07-12 3:35 ` James Dennett 2006-07-12 0:37 ` robin 3 siblings, 2 replies; 314+ messages in thread From: Jean-Pierre Rosen @ 2006-07-11 17:08 UTC (permalink / raw) John W. Kennedy wrote: > (Java is the only language I know of that will actually fail the compile > unless it /knows/ that every variable is initialized before use in every > possible path.) > But it doesn't work! In Java, you have the rule that every variable must be initialized before being used, PLUS the rule that every variable is initialized automatically to zero! Why that? Because with a clever use of initializers, you can still access variables before they are initialized... Which shows the limitations of static checking for uninitialized variables. -- --------------------------------------------------------- J-P. Rosen (rosen@adalog.fr) Visit Adalog's web site at http://www.adalog.fr ^ permalink raw reply [flat|nested] 314+ messages in thread
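[Rosen's "clever use of initializers" can be made concrete with a small Java class, added here for illustration (the names are invented, not from the thread). Definite-assignment checking does not apply to fields, so a field read through a method call before its initializer has run silently yields the default value instead of producing an error.]

```java
public class InitOrder {
    // Instance-field initializers run top to bottom.  getB() is invoked
    // before b's initializer has executed, so it observes b's default
    // value 0, and a ends up as 1 rather than 6.  A direct reference to
    // b here would be rejected as an illegal forward reference, but the
    // method call slips past the check -- the loophole Rosen describes.
    int a = getB() + 1;
    int b = 5;

    int getB() {
        return b;
    }

    public static void main(String[] args) {
        InitOrder o = new InitOrder();
        System.out.println(o.a);  // 1
        System.out.println(o.b);  // 5
    }
}
```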
* Re: Ada vs Fortran for scientific applications 2006-07-11 17:08 ` Jean-Pierre Rosen @ 2006-07-11 22:44 ` glen herrmannsfeldt 2006-07-12 9:50 ` Jean-Pierre Rosen 2006-07-12 3:35 ` James Dennett 1 sibling, 1 reply; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-11 22:44 UTC (permalink / raw) Jean-Pierre Rosen wrote: (snip) > But it doesn't work! In Java, you have the rule that every variable must > be initialized before being used, PLUS the rule that every variable is > initialized automatically to zero! Why that? Because with a clever use > of initializers, you can still access variables before they are > initialized... It might be that arrays are initialized to zero when created, but that is more like C's malloc(). (Well, calloc() I suppose.) Automatic variables, which are scalars, are not initialized to zero automatically. Only scalar variables are checked for initialization, but then an array is really an initialized object reference variable. If that variable isn't properly initialized, compilation will fail. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 22:44 ` glen herrmannsfeldt @ 2006-07-12 9:50 ` Jean-Pierre Rosen 2006-07-14 7:00 ` glen herrmannsfeldt 0 siblings, 1 reply; 314+ messages in thread From: Jean-Pierre Rosen @ 2006-07-12 9:50 UTC (permalink / raw) glen herrmannsfeldt wrote: > It might be that arrays are initialized to zero when created, > but that is more like C's malloc(). (Well, calloc() I suppose.) > Automatic variables, which are scalars, are not initialized > to zero automatically. > My "Java Language Specification" says in 4.5.5: "Each class variable, instance variable, or array component is initialized with a /default value/ when it is created:" and then goes on explaining that numeric types are initialized to 0, boolean to false, and reference types to null. -- --------------------------------------------------------- J-P. Rosen (rosen@adalog.fr) Visit Adalog's web site at http://www.adalog.fr ^ permalink raw reply [flat|nested] 314+ messages in thread
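[The JLS rule Rosen quotes, and glen's distinction between fields and locals, can be seen side by side in a short illustrative Java program (names are invented): class variables and array components receive default values, while a local variable must be definitely assigned before it may be read.]

```java
public class Defaults {
    static int classVar;     // class variable: default-initialized to 0
    static boolean flag;     // class variable: default-initialized to false

    public static void main(String[] args) {
        int[] a = new int[3];          // array components default to 0
        System.out.println(classVar);  // 0
        System.out.println(flag);      // false
        System.out.println(a[0]);      // 0

        int local;
        // System.out.println(local);  // would not compile:
        //   "variable local might not have been initialized"
        local = 7;
        System.out.println(local);     // 7
    }
}
```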
* Re: Ada vs Fortran for scientific applications 2006-07-12 9:50 ` Jean-Pierre Rosen @ 2006-07-14 7:00 ` glen herrmannsfeldt 0 siblings, 0 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-14 7:00 UTC (permalink / raw) Jean-Pierre Rosen wrote: > glen herrmannsfeldt wrote: > >> It might be that arrays are initialized to zero when created, >> but that is more like C's malloc(). (Well, calloc() I suppose.) >> Automatic variables, which are scalars, are not initialized >> to zero automatically. > My "Java Language Specification" says in 4.5.5: > "Each class variable, instance variable, or array component is > initialized with a /default value/ when it is created:" > and then goes on explaining that numeric types are initialized to 0, > boolean to false, and reference types to null. That leaves ordinary local variables that must be initialized. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 17:08 ` Jean-Pierre Rosen 2006-07-11 22:44 ` glen herrmannsfeldt @ 2006-07-12 3:35 ` James Dennett 1 sibling, 0 replies; 314+ messages in thread From: James Dennett @ 2006-07-12 3:35 UTC (permalink / raw) Jean-Pierre Rosen wrote: > John W. Kennedy wrote: >> (Java is the only language I know of that will actually fail the >> compile unless it /knows/ that every variable is initialized before >> use in every possible path.) >> > But it doesn't work! In Java, you have the rule that every variable must > be initialized before being used, PLUS the rule that every variable is > initialized automatically to zero! Why that? Because with a clever use > of initializers, you can still access variables before they are > initialized... The checking is done for local variables only, and there is no way to access those before they are initialized. Variables that cannot be checked in this manner are created with default values, prior to receiving a value that may be defined by the code. -- James ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 14:46 ` John W. Kennedy ` (2 preceding siblings ...) 2006-07-11 17:08 ` Jean-Pierre Rosen @ 2006-07-12 0:37 ` robin 3 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-07-12 0:37 UTC (permalink / raw) John W. Kennedy wrote in message ... >adaworks@sbcglobal.net wrote: >> "robin" <robin_v@bigpond.com> wrote in message >> news:z9Dsg.2740$tE5.2374@news-server.bigpond.net.au... >>> Compilers can check for uninitialized variables during compilation. >>> >> True. In fact, Ada compilers issue a warning for any variable >> that is used before a value is assigned to it. If a parameter is >> included in a method (function/procedure/subroutine) and never >> referenced, a warning is issued. Sometimes the pragma >> Normalize_Scalars is useful. Often, the correct design is to >> leave variables uninitialized until they are used so an exception >> can be raised. However, since the compiler will emit a warning >> about variables that have never been assigned a value in an algorithm >> that tries to use it, no harm is really done since the careful programmer >> will not release a program with warnings in it. >> >> So, I am assuming, Robin, that PL/I does something similar: a warning >> for any variable that is used in a program before a value is assigned to it. > >Smart compilers may try to do so, as a by-product of optimization, but >PL/I is too old a design for this sort of thing to be taken for granted; >there are unavoidable holes, because PL/I A) passes by reference, Pass by reference doesn't prevent checking for uninitialized variables. > B) does not have in-out parameter declaration, Well, it does. It's called ASSIGNABLE and NONASSIGNABLE. > and C) uses separate compilation and a dumb linker, It's perhaps 30+ years since I used separate compilation of procedures to be linked subsequently. Memories have become so much larger. 
All the info is there for the compiler to do a good analysis in a single compilation. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-09 21:36 ` James Giles ` (2 preceding siblings ...) 2006-07-10 7:38 ` Ada vs Fortran for scientific applications Dmitry A. Kazakov @ 2006-07-10 9:57 ` Georg Bauhaus 3 siblings, 0 replies; 314+ messages in thread From: Georg Bauhaus @ 2006-07-10 9:57 UTC (permalink / raw) On Sun, 2006-07-09 at 21:36 +0000, James Giles wrote: > Determining whether a variable is defined or not is a > complex problem. It's made worse by the fact that the user can > make the error message go away (though not usually the problem) > by initializing the variable in the declaration. SPARK is a subset of Ada that addresses these and other issues. The SPARK tools provide path analysis, information flow analysis, etc., leading to a proof of properties of the analysed program. This comes at a price, though, as SPARK cannot handle full Ada (being a subset language). -- Georg ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-09 20:52 ` adaworks 2006-07-09 21:33 ` Brooks Moses 2006-07-09 21:36 ` James Giles @ 2006-07-11 15:14 ` robin 2006-07-11 17:21 ` adaworks 2006-11-20 9:39 ` robin 3 siblings, 1 reply; 314+ messages in thread From: robin @ 2006-07-11 15:14 UTC (permalink / raw) adaworks@sbcglobal.net wrote in message ... > >"Gordon Sande" <g.sande@worldnet.att.net> wrote in message >news:2006052509454116807-gsande@worldnetattnet... >> >> How many Ada systems can match the undefined variable checking of the >> old WatFor or the current Salford CheckMate or the Lahey/Fujitsu >> global checking? It seems to be a thing associated with places that >> run student cafeteria computing on mainframes. Not much used anymore. >> There was a similar student checkout PL/I from Cornell if I recall >> correctly. >> >The default for Ada is to do thorough range checking on all numeric >types. Range checking is not a substitute for detection of uninitialized variables. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 15:14 ` robin @ 2006-07-11 17:21 ` adaworks 2006-07-11 19:50 ` John W. Kennedy 0 siblings, 1 reply; 314+ messages in thread From: adaworks @ 2006-07-11 17:21 UTC (permalink / raw) "robin" <robin_v@bigpond.com> wrote in message news:TfPsg.3094$tE5.2436@news-server.bigpond.net.au... > adaworks@sbcglobal.net wrote in message ... >> >>"Gordon Sande" <g.sande@worldnet.att.net> wrote in message >>news:2006052509454116807-gsande@worldnetattnet... >>> >>> How many Ada systems can match the undefined variable checking of the >>> old WatFor or the current Salford CheckMate or the Lahey/Fujitsu >>> global checking? It seems to be a thing associated with places that >>> run student cafeteria computing on mainframes. Not much used anymore. >>> There was a similar student checkout PL/I from Cornell if I recall >>> correctly. >>> >>The default for Ada is to do thorough range checking on all numeric >>types. > > Range checking is not a substitute for detection > of uninitialized variables. > See my example on how Ada supports this in a different post under this thread topic. An Ada compiler can certainly check for uninitialized variables. It can also check for misplaced artifacts of all kinds, thereby assisting the developer with program organization improvement. As stated earlier, the fundamental design goal of Ada is to provide the maximum amount of error detection as early in the development process as possible. Errors, of course, are at different levels. Sometimes the compiler provides advisory error messages, and other times the error prevents compilation. I have used a lot of programming languages during my 40+ years in software and I have not found a language that is as dependable in this respect as Ada. That being said, I sometimes prefer to use other languages when Ada is too strict. Currently, I enjoy Python. In the past I have liked Smalltalk. 
Long, long ago, when PL/I was new (during the 1960's and one project during the 1970's), I did some coding in it, but I am certainly not current with modern PL/I. It did seem to be an improvement over Fortran and COBOL at that time. Every language has its good and bad points, its weak points and its strong points. Ultimately, it is about choosing the right tool for the right job. I have tried to find information on PL/I that would encourage me to recommend it. I queried this forum for that kind of information and received much verbal abuse (except from Tom, who was helpful) for it. I would like to see PL/I continue to evolve and receive good support. I would like to see it have a good model for object-oriented programming so it would be more attractive to a larger audience. Fortran has continued to evolve nicely. The current standard has much that is commendable. Even COBOL has evolved with an OOP capability. In fact, COBOL, for all its faults, continues to evolve rather well. A language that does not stay current with emerging concepts in program design is not going to retain a following. PL/I has a good fundamental model. There is no reason why it should lose its place as a popular programming language. So instead of being defensive about PL/I, or haranguing against those who see potential for improvements, it is probably worthwhile to work toward making those improvements. Richard Riehle ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-11 17:21 ` adaworks @ 2006-07-11 19:50 ` John W. Kennedy 0 siblings, 0 replies; 314+ messages in thread From: John W. Kennedy @ 2006-07-11 19:50 UTC (permalink / raw) adaworks@sbcglobal.net wrote: > "robin" <robin_v@bigpond.com> wrote in message > news:TfPsg.3094$tE5.2436@news-server.bigpond.net.au... >> adaworks@sbcglobal.net wrote in message ... >>> "Gordon Sande" <g.sande@worldnet.att.net> wrote in message >>> news:2006052509454116807-gsande@worldnetattnet... >>>> How many Ada systems can match the undefined variable checking of the >>>> old WatFor or the current Salford CheckMate or the Lahey/Fujitsu >>>> global checking? It seems to be a thing associated with places that >>>> run student cafteria computing on mainframes. Not much used anymore. >>>> There was a similar student checkout PL/I from Cornell if I recall >>>> correctly. >>>> >>> The default for Ada is to do thorough range checking on all numeric >>> types. >> Range checking is not a substitute for detection >> of uninitialized variables. >> > See my example on how Ada supports this in a different post > under this thread topic. An Ada compiler can certainly check > for unitialized variables. It can also check for misplaced artifacts > of all kinds, thereby assisting the developer with program > organization improvement. > > As stated earlier, the fundamental design goal of Ada is provide the > maximum amount of error detection as early in the development > process as possible. Errors, of course, are at different levels. > Sometimes the compiler provides advisory error messages, and > other times the error prevents compilation. I have used a lot > of programming languages during my 40+ years in software and > I have not found a language that is as dependable in this respect > as Ada. > > That being said, I sometimes prefer to use other languages when > Ada is too strict. Currently, I enjoy Python. In the past I have > liked Smalltalk. 
Long, long ago, when PL/I was new (during > the 1960's and one project during the 1970's), I did some coding > in it, but I am certainly not current with modern PL/I. It did > seem to be an improvement over Fortran and COBOL at that time. > > Every language has its good and bad points, its weak points and > its strong points. Ultimately, it is about choosing the right tool > for the right job. > > I have tried to find information on PL/I that would encourage me to > recommend it. I queried this forum for that kind of information and > received much verbal abuse (except from Tom, who was helpful) > for it. I would like to see PL/I continue to evolve and receive > good support. I would like to see it have a good model for > objec-oriented programming so it would be more attractive to > a larger audience. > > Fortran has continued to evolve nicely. The current standard has > much that is commendable. Even COBOL has evolved with an > OOP capability. In fact, COBOL, for all its faults, continues to > evolve rather well. A language does not stay current with emerging > concepts in program design is not going to retain a following. PL/I > has a good fundamental model. There is no reason why it should > lose its place as a popular programming language. > > So instead of being defensive about PL/I, or haranguing against those > who see potential for improvements, it is probably worthwhile to > work toward making those improvements. I'm afraid that Robin believes that PL/I is perfect as-is. The language needs a great deal of updating; to begin with: Short-circuits (language definitions have never been clear on the issue; many compilers treat & and | as short-circuit under some circumstances, but the issue has never been defined in writing). OO. INTEGER as a type distinct from FIXED and FLOAT. Ranges, subtypes, and subtype-oriented loops. Tagged variants. Decent inter-thread communications. Namespaces. And those are only those that would be compatible. 
Some existing language "features" should be abolished as unsafe. -- John W. Kennedy "The blind rulers of Logres Nourished the land on a fallacy of rational virtue." -- Charles Williams. "Taliessin through Logres: Prelude" ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-09 20:52 ` adaworks ` (2 preceding siblings ...) 2006-07-11 15:14 ` robin @ 2006-11-20 9:39 ` robin 2006-11-21 9:02 ` Finalization Philippe Tarroux 3 siblings, 1 reply; 314+ messages in thread From: robin @ 2006-11-20 9:39 UTC (permalink / raw) "Gordon Sande" <g.sande@worldnet.att.net> wrote in message news:2006052509454116807-gsande@worldnetattnet... > > How many Ada systems can match the undefined variable checking of the > old WatFor or the current Salford CheckMate or the Lahey/Fujitsu > global checking? It seems to be a thing associated with places that > run student cafeteria computing on mainframes. Not much used anymore. > There was a similar student checkout PL/I from Cornell if I recall > correctly. That was PL/C. But it could be used by anyone; it wasn't restricted to students, and it implemented most of PL/I. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Finalization 2006-11-20 9:39 ` robin @ 2006-11-21 9:02 ` Philippe Tarroux 2006-11-21 9:22 ` Finalization Dmitry A. Kazakov 2006-11-21 11:26 ` Finalization Georg Bauhaus 0 siblings, 2 replies; 314+ messages in thread From: Philippe Tarroux @ 2006-11-21 9:02 UTC (permalink / raw) I have a problem trying to use controlled types. My purpose was to use Finalize to deallocate a big data structure each time it is reused. I wrote a simpler program that exhibits the problem too. Here is the code, followed by a comment on what I observed:

with Ada.Unchecked_Deallocation,
     Ada.Finalization;

package Final is
   type Vector is array (Positive range <>) of Float;
   type Vector_Ptr is access Vector;

   procedure Free is new Ada.Unchecked_Deallocation (Vector, Vector_Ptr);

   type Obj is new Ada.Finalization.Controlled with record
      X : Vector_Ptr := null;
   end record;

   overriding
   procedure Finalize (O : in out Obj);
   procedure Process (O : in out Obj);
   function Process return Obj;
end Final;

with Ada.Text_Io;

package body Final is
   package Text_Io renames Ada.Text_Io;

   procedure Finalize (O : in out Obj) is
   begin
      Text_Io.Put ("Finalize: ");
      Text_Io.New_Line;
      Free (O.X);
   end Finalize;

   procedure Process (O : in out Obj) is
   begin
      Text_Io.Put ("In process procedure");
      Text_Io.New_Line;
      Finalize (O);
      O.X := new Vector (1 .. 100);
   end Process;

   function Process return Obj is
      O : Obj;
   begin
      Text_Io.Put ("In process function");
      Text_Io.New_Line;
      O.Process;
      return O;
   end Process;
end Final;

with Ada.Text_Io, Final;

procedure Main is
   O : Final.Obj;
begin
   for I in 1 .. 100 loop
      Ada.Text_Io.Put (Integer'Image (I));
      Ada.Text_Io.New_Line;
      O := Final.Process;
      -- O.Process;
   end loop;
   Ada.Text_Io.Put ("Fin");
   Ada.Text_Io.New_Line;
end Main;

and the resulting execution trace:

 1
In process function
In process procedure
Finalize:
Finalize:
Finalize:
Finalize:
 2

and so on... until:

 7
In process function
In process procedure
Finalize:
Finalize:
Finalize:
Finalize:
 8
In process function
In process procedure
Finalize:
process exited with status 128

When I use the function call, the program stops with an unexpected error that seems to be different from one compiler to another (I tried gnat gcc 3.4.6 on Windows and the Debian gnat version on Linux). The message also depends on the type of structure to be freed (Vector or Vector'Class). Even when the program raises PROGRAM_ERROR I am unable to trap the exception. I tried to follow what happens under gdb and observed that an unexpected signal concerning the heap is received by the program. When I use the procedure call (commented out in the main program), all is correct, but I suspect that the memory is not deallocated at each call in the main loop since there is only one call to Finalize at the end of the program. Does somebody have any idea of what happens? Do you think there is a faulty construct in my code? Thanks for your help Philippe Tarroux ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Finalization 2006-11-21 9:02 ` Finalization Philippe Tarroux @ 2006-11-21 9:22 ` Dmitry A. Kazakov 2006-11-21 10:32 ` Finalization Philippe Tarroux 2006-11-21 11:26 ` Finalization Georg Bauhaus 1 sibling, 1 reply; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-11-21 9:22 UTC (permalink / raw) On Tue, 21 Nov 2006 10:02:35 +0100, Philippe Tarroux wrote: > I have a problem trying to use controlled types. My purpose was to use > finalize to deallocate a big data structure each time a reuse. > > I wrote a simpler program that exhibits the problem too. here is the > code followed by a comment on what I observed: > > with Ada.Unchecked_Deallocation, > Ada.Finalization; > > package Final is > type Vector is array (Positive range <>) of Float; > type Vector_Ptr is access Vector; > > procedure Free is new Ada.Unchecked_Deallocation (Vector, Vector_Ptr); > > type Obj is new Ada.Finalization.Controlled with record > X : Vector_Ptr := null; > end record; Note that Obj is declared Controlled, not Limited_Controlled, therefore the function Process would make a copy of it. When you are dealing with copies of pointers (the field X), you should decide what you would do with multiple pointers to the same object, especially upon finalization. In any case you have to override Adjust (and probably use reference counting). But I suppose it should better be Limited_Controlled. > overriding > procedure Finalize (O : in out Obj); > procedure Process (O : in out Obj); > function Process return Obj; > end Final; > > > with Ada.Text_Io; > > package body Final is > package Text_Io renames Ada.Text_Io; > > procedure Finalize (O : in out Obj) is > begin > Text_Io.Put ("Finalize: "); Text_Io.New_Line; (Use Put_Line for that) > Free (O.X); > end Finalize; > > procedure Process (O : in out Obj) is > begin > Text_Io.Put ("In process procedure"); Text_Io.New_Line; > Finalize(O); You don't need that. It breaks your design. 
As a rule, Finalize should be called once, and almost never explicitly.

> O.X := new Vector (1 .. 100);
> end Process;
>
> function Process return Obj is
>    O : Obj;
> begin
>    Text_Io.Put ("In process function"); Text_Io.New_Line;
>    O.Process;
>    return O;
> end Process;

This is broken. You allocate a new vector in Process, then a shallow copy of it is made in "return O;". Then O is destroyed, and as a result Finalize kills that vector. The caller receives the shallow-copy object with a dangling pointer in the field X.

> end Final;
>
> with Ada.Text_Io,
>      Final;
>
> procedure Main is
>    O : Final.Obj;
> begin
>    for I in 1 .. 100 loop
>       Ada.Text_Io.Put (Integer'Image (I)); Ada.Text_Io.New_Line;
>       O := Final.Process;
>       -- O.Process;
>    end loop;

Do it this way:

   for I in 1 .. 100 loop
      declare
         O : Obj;  -- Ideally Initialize should allocate the vector
      begin
         Process (O);  -- Don't allocate anything here
      end;  -- Finalize takes care of the vector
   end loop;

>    Ada.Text_Io.Put ("Fin"); Ada.Text_Io.New_Line;
> end Main;
>
> and the resulting execution trace:
>
> 1
> In process function
> In process procedure
> Finalize:
> Finalize:
> Finalize:
> Finalize:
> 2
> and so on... until:
> 7
> In process function
> In process procedure
> Finalize:
> Finalize:
> Finalize:
> Finalize:
> 8
> In process function
> In process procedure
> Finalize:
> process exited with status 128
>
> When I use the function call the program stops with an unexpected error
> that seems to be different from one compiler to another (I tried GNAT
> GCC 3.4.6 on Windows and the Debian GNAT version on Linux). The message
> depends also on the type of structure to be freed (Vector or
> Vector'Class). Even when the program raises PROGRAM_ERROR I am unable
> to trap the exception. I tried to follow what happens under gdb and
> observed that an unexpected signal concerning the heap is received by
> the program.
>
> When I use the procedure call (commented out in the main program), all
> is correct but I suspect that the memory is not deallocated at each call
> in the main loop since there is only one call to Finalize at the end of
> the program.
>
> Does somebody have any idea of what happens? Do you think there is a
> faulty construct in my code?

Yes, see above.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Finalization 2006-11-21 9:22 ` Finalization Dmitry A. Kazakov @ 2006-11-21 10:32 ` Philippe Tarroux 2006-11-21 11:09 ` Finalization Dmitry A. Kazakov 2006-11-21 17:22 ` Finalization Adam Beneschan 0 siblings, 2 replies; 314+ messages in thread From: Philippe Tarroux @ 2006-11-21 10:32 UTC (permalink / raw)

> Note that Obj is declared Controlled, not Limited_Controlled, therefore the
> function Process would make a copy of it.

If I declare it Limited_Controlled I can't use the function version anymore. Right?

> When you are dealing with copies of pointers (the field X), you should
> decide what you would do with multiple pointers to the same object,
> especially upon finalization. In any case you have to override Adjust
> (and probably use reference counting). But I suppose it should better
> be Limited_Controlled.

I suppose so too. It is what I observed: it works with the procedure version.

>> Free (O.X);
>> end Finalize;
>>
>> procedure Process (O : in out Obj) is
>> begin
>>    Text_Io.Put ("In process procedure"); Text_Io.New_Line;
>>    Finalize (O);

> You don't need that. It breaks your design. As a rule, Finalize should be
> called once, and almost never explicitly.

Yes. I added it only to see what happened here. Of course it is useless.

>> O.X := new Vector (1 .. 100);
>> end Process;
>>
>> function Process return Obj is
>>    O : Obj;
>> begin
>>    Text_Io.Put ("In process function"); Text_Io.New_Line;
>>    O.Process;
>>    return O;
>> end Process;

> This is broken. You allocate a new vector in Process, then a shallow copy
> of it is made in "return O;". Then O is destroyed and as a result Finalize
> kills that vector. The caller receives the shallow-copy object with a
> dangling pointer in the field X.

But I don't understand why it works for the first 7 iterations of the loop and crashes only at the 8th.

> Do it this way:
>
>    for I in 1 .. 100 loop
>       declare
>          O : Obj;  -- Ideally Initialize should allocate the vector
>       begin
>          Process (O);  -- Don't allocate anything here

You mean that there should be no allocation here? The vector is allocated inside Process.

>       end;  -- Finalize takes care of the vector
>    end loop;

But anyway, thank you for these explanations that clarify the point.

Philippe Tarroux

^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Finalization 2006-11-21 10:32 ` Finalization Philippe Tarroux @ 2006-11-21 11:09 ` Dmitry A. Kazakov 2006-11-21 17:29 ` Finalization Adam Beneschan 0 siblings, 1 reply; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-11-21 11:09 UTC (permalink / raw)

On Tue, 21 Nov 2006 11:32:31 +0100, Philippe Tarroux wrote:

>> Note that Obj is declared Controlled, not Limited_Controlled, therefore the
>> function Process would make a copy of it.
>
> If I declare it Limited_Controlled I can't use the function version
> anymore. Right?

Sort of; actually, in Ada 2005 there is a way to return limited objects.

> But I don't understand why it works for the first 7 iterations of the loop
> and crashes only at the 8th.

That depends on how fast the memory vital for further program execution gets corrupted... (:-))

>> Do it this way:
>>
>>    for I in 1 .. 100 loop
>>       declare
>>          O : Obj;  -- Ideally Initialize should allocate the vector
>>       begin
>>          Process (O);  -- Don't allocate anything here
>
> You mean that there should be no allocation here? The vector is allocated
> inside Process.

I would allocate the vector in Initialize, at least to separate object construction from the object's use. Of course the design depends on what you are going to do, especially on whether the object size is dynamic and changes during the object's life span or not. If the object's size is invariant, then it makes no sense to delay the allocation of its parts. Also, independently, it is a good design principle to ensure an object's usability during its whole lifespan.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Finalization 2006-11-21 11:09 ` Finalization Dmitry A. Kazakov @ 2006-11-21 17:29 ` Adam Beneschan 2006-11-21 18:39 ` Finalization Dmitry A. Kazakov 0 siblings, 1 reply; 314+ messages in thread From: Adam Beneschan @ 2006-11-21 17:29 UTC (permalink / raw) Dmitry A. Kazakov wrote: > >>Do it this way: > >> > >> for I in 1 .. 100 loop > >> declare > >> O : Obj; -- Ideally Initialize should allocate the vector > >> begin > >> Process (O); -- Don't allocate anything here > >> > > You mean that there should be no allocation here? vector is allocated > > inside process. > > I would allocate vector in Initialize, at least to separate object > construction from object's use. Of course the design depends on what you > are going to do, especially, on whether the object size is dynamic and > changes during the object's life span or not. If the object's size is > invariant, then it makes no sense to delay any allocation of its parts. > Also, independently, it is a good design principle to ensure object's > usability during all its lifespan. To me, if there's a chance that Obj.X won't ever be used for some particular objects, I don't see any reason to allocate the vector right away---the allocation can be delayed until it's needed. This may avoid wasted time and space. I also don't see why failing to allocate right away would make an object of type Obj "unusable". A program can still use it if it accounts for the possibility that X will be null. If Obj is a private type in a package, and all the operations in that package make sure to do something appropriate if X is null, then Obj is quite usable. I very often write code like this, that doesn't bother to allocate until the package determines that the allocation is necessary. 
Anyway, I don't think it makes sense to say that the vector should be allocated in Initialize, since this is not a real-world example but just a small reduced case, so we don't have enough information about the actual application to say where it's appropriate to do the allocation. -- Adam ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Finalization 2006-11-21 17:29 ` Finalization Adam Beneschan @ 2006-11-21 18:39 ` Dmitry A. Kazakov 0 siblings, 0 replies; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-11-21 18:39 UTC (permalink / raw)

On 21 Nov 2006 09:29:27 -0800, Adam Beneschan wrote:

> Dmitry A. Kazakov wrote:
>
>>>> Do it this way:
>>>>
>>>>    for I in 1 .. 100 loop
>>>>       declare
>>>>          O : Obj;  -- Ideally Initialize should allocate the vector
>>>>       begin
>>>>          Process (O);  -- Don't allocate anything here
>>>>
>>> You mean that there should be no allocation here? The vector is allocated
>>> inside Process.
>>
>> I would allocate the vector in Initialize, at least to separate object
>> construction from the object's use. Of course the design depends on what you
>> are going to do, especially on whether the object size is dynamic and
>> changes during the object's life span or not. If the object's size is
>> invariant, then it makes no sense to delay the allocation of its parts.
>> Also, independently, it is a good design principle to ensure an object's
>> usability during its whole lifespan.
>
> To me, if there's a chance that Obj.X won't ever be used for some
> particular objects, I don't see any reason to allocate the vector right
> away---the allocation can be delayed until it's needed. This may avoid
> wasted time and space.

Why then is X declared in a scope where it is not used? This violates another good principle: declare objects in the most deeply nested scope possible.

> I also don't see why failing to allocate right away would make an
> object of type Obj "unusable". A program can still use it if it
> accounts for the possibility that X will be null. If Obj is a private
> type in a package, and all the operations in that package make sure to
> do something appropriate if X is null, then Obj is quite usable. I
> very often write code like this, that doesn't bother to allocate until
> the package determines that the allocation is necessary.

That is OK.
It just means that the object's size isn't invariant and that the object is usable even if not fully allocated. However, it is still a bit suspicious, because it makes the behavior less predictable. Compare it with the famous (flawed) OS design that gives you uncommitted memory pages, so that in sum they can exceed the whole virtual space.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Finalization 2006-11-21 10:32 ` Finalization Philippe Tarroux 2006-11-21 11:09 ` Finalization Dmitry A. Kazakov @ 2006-11-21 17:22 ` Adam Beneschan 1 sibling, 0 replies; 314+ messages in thread From: Adam Beneschan @ 2006-11-21 17:22 UTC (permalink / raw)

Philippe Tarroux wrote:

>> This is broken. You allocate a new vector in Process, then a shallow copy
>> of it is made in "return O;". Then O is destroyed and as a result Finalize
>> kills that vector. The caller receives the shallow-copy object with a
>> dangling pointer in the field X.
>
> But I don't understand why it works for the first 7 iterations of the loop
> and crashes only at the 8th.

Because that's what dangling pointer bugs do. You've deallocated some data, but somewhere else in your program you still have a pointer to the data. This pointer is now invalid. However, from a machine-instruction standpoint, most of the data still exists at that address, so if you try to access the data there, the access will still succeed until something else happens to cause the program to allocate new data at the same address. Now, if you use your invalid pointer, you'll get the wrong data, which could cause your program to crash, or could just cause incorrect data to be read that produces erroneous results you can't figure out. Or worse, you could use your invalid pointer to *write* data into memory... I once had a problem (when using C, but Ada isn't too much better at preventing this) where I used a dangling pointer to write data, and it ended up writing over data that belonged to the memory allocator, and---much worse---it overwrote an address with another legitimate address, which meant that the memory allocator didn't crash right away but instead eventually caused things to go haywire several dozen allocations later. Bugs like this can take days to track down. So the bottom line is, dangling pointer bugs are VERY NASTY, and if there's one in your code (as there is in your example), fix it.
Don't worry about why it isn't causing a crash or why it's taking a while to cause one. Just do it. Even if your program seems to be working.

Anyway, the "classic" simple way to deal with this sort of problem is to override the Adjust routine:

   procedure Adjust (O : in out Obj) is
   begin
      if O.X /= null then
         O.X := new Vector'(O.X.all);
      end if;
   end Adjust;

This is the simplest approach, but it may or may not be wasteful in the context of your application; reference counters are another way, and there are doubtless other ways to skin the cat. But you definitely need to do something to eliminate the dangling pointer problem.

-- Adam

^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Finalization 2006-11-21 9:02 ` Finalization Philippe Tarroux 2006-11-21 9:22 ` Finalization Dmitry A. Kazakov @ 2006-11-21 11:26 ` Georg Bauhaus 1 sibling, 0 replies; 314+ messages in thread From: Georg Bauhaus @ 2006-11-21 11:26 UTC (permalink / raw)

On Tue, 2006-11-21 at 10:02 +0100, Philippe Tarroux wrote:
> I have a problem trying to use controlled types. My purpose was to use
> Finalize to deallocate a big data structure each time I reuse it.
>
> I wrote a simpler program that exhibits the problem too. Here is the
> code, followed by a comment on what I observed:

Here is what I get:

$ ./main
1
In process function
In process procedure
Finalize:
Finalize:
Finalize:
Finalize:
*** glibc detected *** double free or corruption (top): 0x0806f820 ***
raised PROGRAM_ERROR : unhandled signal

As a quick guess, maybe it would be best not to call Finalize yourself? (Perhaps an activation record is the scope to think of in the case of GNAT, but that's only another guess.)

-- Georg

^ permalink raw reply [flat|nested] 314+ messages in thread
* Bounds Check Overhead [was: Re: Ada vs Fortran for scientific applications] 2006-05-25 12:09 ` Dr. Adrian Wrigley 2006-05-25 12:42 ` Dan Nagle 2006-05-25 12:45 ` Gordon Sande @ 2006-05-25 16:25 ` Bob Lidral 2006-05-25 22:08 ` Bounds Check Overhead Simon Wright 2006-05-26 2:58 ` Ada vs Fortran for scientific applications robin 2006-05-29 12:21 ` Jan Vorbrüggen 4 siblings, 1 reply; 314+ messages in thread From: Bob Lidral @ 2006-05-25 16:25 UTC (permalink / raw) Dr. Adrian Wrigley wrote: > [...] > The adverse consequences of exceeding bounds can be seen to > outweigh the (usually) modest costs in code size and performance that > even mature code should ship with checks enabled, IMO. > Compilers generally should be shipped with the 'failsafe' > options on by default. > -- > Adrian > Certainly the adverse consequences of exceeding bounds can be high -- as can the adverse consequences of using invalid pointer values. And with 64-bit (or even 32-bit) architectures and paging, code size is not as much of an issue as it has been for earlier architectures and memory management technologies. However, the performance hit of including explicit bounds checking can be significant -- especially for code with extremely short loops that are executed a lot of times. It's not just the size of the bounds check code compared with the size of the rest of the loop. The size of the bounds check code can increase the size of the loop enough to alter page usage and bounds checking can also mess up hardware branch prediction optimization. Granted, there are ways to optimize bounds checking for loops, including moving the bounds check out of the loop when possible. Before enabling universal array bounds checking for production code I'd recommend benchmarking the performance with and without the checks enabled to determine the real performance cost. Bob Lidral lidral at alum dot mit dot edu ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-25 16:25 ` Bounds Check Overhead [was: Re: Ada vs Fortran for scientific applications] Bob Lidral @ 2006-05-25 22:08 ` Simon Wright 2006-05-25 22:27 ` Brooks Moses 2006-05-27 14:29 ` robin 0 siblings, 2 replies; 314+ messages in thread From: Simon Wright @ 2006-05-25 22:08 UTC (permalink / raw)

Bob Lidral <l1dralspamba1t@comcast.net> writes:

> However, the performance hit of including explicit bounds checking
> can be significant -- especially for code with extremely short loops
> that are executed a lot of times.

In Ada one should where possible use the 'Range attribute:

   for I in Some_Array'Range loop
      Process (Some_Array (I));
   end loop;

where I _can't_ exceed the bounds, so it would be surprising if a compiler inserted bounds checks. Is there a Fortran equivalent? I seem to remember something like that in VAX F77, but it's been a while...

^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-25 22:08 ` Bounds Check Overhead Simon Wright @ 2006-05-25 22:27 ` Brooks Moses 2006-05-25 22:43 ` Richard E Maine 2006-05-26 10:16 ` Ludovic Brenta 2006-05-27 14:29 ` robin 1 sibling, 2 replies; 314+ messages in thread From: Brooks Moses @ 2006-05-25 22:27 UTC (permalink / raw) Simon Wright wrote: > In Ada one should where possible use the 'Range attribute: > > for I in Some_Array'Range loop > Process (Some_Array (I)); > end loop; > > where I _can't_ exceeed the bounds, so it would be surprising if a > compiler inserted bounds checks. Is there a Fortran equivalent? I seem > to remember something like that in VAX F77, but it's been a while... The Fortran equivalent is: do i = lbound(Some_Array), ubound(Some_Array) call Process(Some_Array(i)) end do I don't know that compilers are smart enough to omit the range check, though -- because, in the general case, it shouldn't be omitted. There could be something in the body of the loop that modified the loop variable -- which would be illegal code, but the whole point of range-checking is that it works even if the code is illegal, so unless the compiler is otherwise able to guarantee that the loop variable is not modified by a statement in the loop, it still needs to do the check. - Brooks -- The "bmoses-nospam" address is valid; no unmunging needed. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-25 22:27 ` Brooks Moses @ 2006-05-25 22:43 ` Richard E Maine 2006-05-26 10:16 ` Ludovic Brenta 1 sibling, 0 replies; 314+ messages in thread From: Richard E Maine @ 2006-05-25 22:43 UTC (permalink / raw) Brooks Moses <bmoses-nospam@cits1.stanford.edu> wrote: > The Fortran equivalent is: > > do i = lbound(Some_Array), ubound(Some_Array) > call Process(Some_Array(i)) > end do or, if that is really all that is in the loop, and if Process is elemental, just call Process(Some_Array) will do the trick quite nicely. The compiler will build the loop for you (or anyway, I think that's how most compilers implement elemental). But the restrictions on elemental procedures are quite stringent (so much so that I find it hard to do much useful with them except for the most fundamental of operations), so this isn't really a "fair" comparison. The version Brooks showed is more realistic in most cases. I just thought I'd mention the elemental possibility. -- Richard Maine | Good judgment comes from experience; email: my first.last at org.domain| experience comes from bad judgment. org: nasa, domain: gov | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-25 22:27 ` Brooks Moses 2006-05-25 22:43 ` Richard E Maine @ 2006-05-26 10:16 ` Ludovic Brenta 2006-05-26 10:59 ` Dan Nagle 2006-05-26 16:18 ` Richard Maine 1 sibling, 2 replies; 314+ messages in thread From: Ludovic Brenta @ 2006-05-26 10:16 UTC (permalink / raw) Brooks Moses <bmoses-nospam@cits1.stanford.edu> writes: > Simon Wright wrote: >> In Ada one should where possible use the 'Range attribute: >> for I in Some_Array'Range loop >> Process (Some_Array (I)); >> end loop; >> where I _can't_ exceeed the bounds, so it would be surprising if a >> compiler inserted bounds checks. Is there a Fortran equivalent? I seem >> to remember something like that in VAX F77, but it's been a while... > > The Fortran equivalent is: > > do i = lbound(Some_Array), ubound(Some_Array) > call Process(Some_Array(i)) > end do > > I don't know that compilers are smart enough to omit the range check, > though -- because, in the general case, it shouldn't be omitted. > There could be something in the body of the loop that modified the > loop variable -- which would be illegal code, but the whole point of > range-checking is that it works even if the code is illegal, so unless > the compiler is otherwise able to guarantee that the loop variable is > not modified by a statement in the loop, it still needs to do the > check. And that's why Ada specifies that I cannot change inside the loop, and is undefined outside the loop; see ARM 5.5 (9, 10). You seem to imply that Fortran has a similar rule, but that compilers do not enforce that rule, and therefore have to perform range checking to enforce a non-existent language rule about array access. I am confused. Could you clarify? -- Ludovic Brenta. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 10:16 ` Ludovic Brenta @ 2006-05-26 10:59 ` Dan Nagle 2006-05-26 14:44 ` Dick Hendrickson 2006-05-27 14:29 ` robin 2006-05-26 16:18 ` Richard Maine 1 sibling, 2 replies; 314+ messages in thread From: Dan Nagle @ 2006-05-26 10:59 UTC (permalink / raw) Hello, Ludovic Brenta wrote: <snip> > And that's why Ada specifies that I cannot change inside the loop, and > is undefined outside the loop; see ARM 5.5 (9, 10). > > You seem to imply that Fortran has a similar rule, but that compilers > do not enforce that rule, and therefore have to perform range checking > to enforce a non-existent language rule about array access. I am > confused. Could you clarify? A loop index cannot be changed within the loop. Some compilers may have an option to allow older code to work without modification by allowing modification within the loop. (Basically, this is to avoid re-certification costs incurred when the code is modified in any way.) The value at loop termination is defined to be the last value within the loop, plus the increment. For a concurrent loop or for a forall, the index values are undefined outside the loop. -- Cheers! Dan Nagle Purple Sage Computing Solutions, Inc. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 10:59 ` Dan Nagle @ 2006-05-26 14:44 ` Dick Hendrickson 2006-05-26 14:52 ` Rich Townsend 2006-05-26 14:59 ` gary.l.scott 1 sibling, 2 replies; 314+ messages in thread From: Dick Hendrickson @ 2006-05-26 14:44 UTC (permalink / raw)

Dan Nagle wrote:
> Hello,
>
> Ludovic Brenta wrote:
>
> <snip>
>
>> And that's why Ada specifies that I cannot change inside the loop, and
>> is undefined outside the loop; see ARM 5.5 (9, 10).
>>
>> You seem to imply that Fortran has a similar rule, but that compilers
>> do not enforce that rule, and therefore have to perform range checking
>> to enforce a non-existent language rule about array access. I am
>> confused. Could you clarify?
>
> A loop index cannot be changed within the loop.

It's actually a stronger rule, or maybe Dan's statement could have been worded more strongly. The loop index can't be changed while the loop is executing. This covers direct assignment in the loop and also in procedures called from within the loop. Given something like

   COMMON I
   DO I = 1,10
      call something (I)
      call something_else()
   enddo

"something" is not allowed to change its argument, and neither "something" nor "something_else" is allowed to change the variable in common. This prohibition flows down the entire call tree from these routines. And that's why it's hard to check.

Dick Hendrickson

> Some compilers may have an option to allow older code
> to work without modification by allowing modification
> within the loop. (Basically, this is to avoid re-certification
> costs incurred when the code is modified in any way.)
>
> The value at loop termination is defined to be the last value
> within the loop, plus the increment. For a concurrent loop
> or for a forall, the index values are undefined outside
> the loop.

^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 14:44 ` Dick Hendrickson @ 2006-05-26 14:52 ` Rich Townsend 2006-05-26 16:44 ` Ludovic Brenta 0 siblings, 1 reply; 314+ messages in thread From: Rich Townsend @ 2006-05-26 14:52 UTC (permalink / raw)

Dick Hendrickson wrote:
>
> It's actually a stronger rule, or maybe Dan's statement
> could have been worded more strongly. The loop index can't
> be changed while the loop is executing. This covers
> direct assignment in the loop and also in procedures
> called from within the loop.
> Given something like
>    COMMON I
>    DO I = 1,10
>       call something (I)
>       call something_else()
>    enddo
> "something" is not allowed to change its argument and
> neither "something" nor "something_else" is allowed to
> change the variable in common. This prohibition flows
> down the entire call tree from these routines. And that's
> why it's hard to check.

...although a lot of the constructs introduced in Fortran 90 help in checking -- in particular, if the called subroutines have INTENT() on their arguments, then it's pretty cut-and-dried whether I will be modified or not. Having said that, things like host association can muddy the waters somewhat -- but at least you only have to go down one level to check if I is being modified.

cheers,

Rich

^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 14:52 ` Rich Townsend @ 2006-05-26 16:44 ` Ludovic Brenta 2006-05-26 17:49 ` Gordon Sande ` (2 more replies) 0 siblings, 3 replies; 314+ messages in thread From: Ludovic Brenta @ 2006-05-26 16:44 UTC (permalink / raw)

Rich Townsend <rhdt@barVOIDtol.udel.edu> writes:
> Dick Hendrickson wrote:
>> Given something like
>>    COMMON I
>>    DO I = 1,10
>>       call something (I)
>>       call something_else()
>>    enddo
>> "something" is not allowed to change its argument and neither
>> "something" nor "something_else" is allowed to change the variable
>> in common. This prohibition flows down the entire call tree from
>> these routines. And that's why it's hard to check.
>
> ...although a lot of the constructs introduced in Fortran 90 help in
> checking -- in particular, if the called subroutines have INTENT()
> on their arguments, then it's pretty cut-and-dried whether I will be
> modified or not. Having said that, things like host association can
> muddy the waters somewhat -- but at least you only have to go down
> one level to check if I is being modified.

Then Ada is better in that respect than Fortran. In Ada, the compiler has enough knowledge about what subprograms do to their arguments that it can easily check that loop indexes never change (i.e. you can pass the loop index as an "in" parameter but not as an "out" or "in out" parameter).

Also, in Ada, the loop index does not exist outside of the loop. From what you said, it seems to me that in Fortran, the loop index still exists after the loop, and has a well-defined value, but the programmer can then change it. Correct?

-- Ludovic Brenta.

^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 16:44 ` Ludovic Brenta @ 2006-05-26 17:49 ` Gordon Sande 2006-05-27 9:06 ` Dmitry A. Kazakov 2006-05-26 20:31 ` glen herrmannsfeldt 2006-05-27 14:29 ` robin 2 siblings, 1 reply; 314+ messages in thread From: Gordon Sande @ 2006-05-26 17:49 UTC (permalink / raw) On 2006-05-26 13:44:58 -0300, Ludovic Brenta <ludovic@ludovic-brenta.org> said: > Rich Townsend <rhdt@barVOIDtol.udel.edu> writes: >> Dick Hendrickson wrote: >>> Given something like >>> COMMON I >>> DO I = 1,10 >>> call something (I) >>> call something_else() >>> enddo >>> "something" is not allowed to change it's argument and neither >>> "something" nor "something_else" is allowed to change the variable >>> in common. This prohibition flows down the entire call tree from >>> these routines. And that's why it's hard to check. >> >> ...although a lot of the constructs introduced in Fortran 90 help in >> checking -- in particular, if the called subroutines have INTENT() >> on their arguments, then its pretty cut-and-dry whether I will be >> modified or not. Having said that, things like host association can >> muddy the waters somewhat -- but at least you only have to go down >> one level to check if I is being modified. > > Then Ada is better in that respect than Fortran. In Ada, the compiler > has enough knowledge about what subprograms do to their arguments that > it can easily check that loop indexes never change (i.e. you can pass > the loop index as an "in" parameter but not as an "out" or "in out" > parameter). > Also, in Ada, the loop index does not exist outside of the loop. From > what you said, it seems to me that in Fortran, the loop index still > exists after the loop, and has a well-defined value, but the > programmer can then change it. Correct? Many search idioms use the fact that a DO index is just another integer variable which will have a known value on a forced exit from the loop. It also has a known value when the loop terminates naturally. 
The index is the exclusive property of the DO inside the loop but it is visible to the rest of the program. Being an integer is a new restriction. The contract that the programmer has with the compiler is that the index will not be tampered with. The checking that many systems do is not exhaustive and provably correct. It just grumbles about the obvious cases. If one uses F90 well then the compiler will catch many more cases. Too many folks in a hurry (and who know they never make mistakes) do not bother (so they spend more time debugging and blaming the hardware or compiler). In olden times of F66 there was a long contorted notion of first and second level definition to account for the fact that a DO index was likely to be in a hardware register and compiler technology was not developed enough to do much checking. It was more documentation of the quirks that had been discovered in the early compilers. That is long since past but some echoes survive. The Ada constructs would require that one explicitly capture the loop index on forced exit and do whatever else is required to notice a natural termination. If the index were to be used in any way other than as an argument then one would have to explicitly copy it. One gets used to whatever idioms are required. After a while they look natural. There have been more than a few suggested syntaxes for loops and searches. This is not a topic devoid of extensive discussion. ^ permalink raw reply [flat|nested] 314+ messages in thread
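[Editorial note: the idiom Gordon mentions -- explicitly capturing the index on a forced exit, since it vanishes when the loop ends -- looks like this in Ada (a sketch; the array type and "0 means not found" convention are illustrative):]

```ada
type Int_Array is array (Positive range <>) of Integer;

function Find (A : Int_Array; Key : Integer) return Natural is
   Found_At : Natural := 0;  --  0 plays the "not found" role here
begin
   for I in A'Range loop
      if A (I) = Key then
         Found_At := I;      --  capture explicitly; I is gone after the loop
         exit;               --  forced exit
      end if;
   end loop;
   return Found_At;          --  still 0 on natural termination
end Find;
```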
* Re: Bounds Check Overhead 2006-05-26 17:49 ` Gordon Sande @ 2006-05-27 9:06 ` Dmitry A. Kazakov 2006-05-27 14:44 ` Richard Maine 2006-05-29 12:02 ` Jan Vorbrüggen 0 siblings, 2 replies; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-05-27 9:06 UTC (permalink / raw) On Fri, 26 May 2006 17:49:09 GMT, Gordon Sande wrote: > Many search idioms use the fact that a DO index is just another integer > variable which will have a known value on a forced exit from the loop. > It also has a known value when the loop terminates naturally. The index > is the exclusive property of the DO inside the loop but it is visible > to the rest of the program. Being an integer is a new restriction. > > The contract that the programmer has with the compiler is that the > index will not be tampered with. The checking that many systems do is not > exhaustive and proveably correct. It just grumbles about the obvious cases. > If one use F90 well then the compiler will catch many more cases. Too > many folks in a hurry (and who know they never make mistakes) do not > bother (so they spend more time debugging and blaming the hardware or > compiler). Well, maybe Ada and Fortran share some things, but not the design philosophy. The Ada standard actually starts with a classification of errors: 1. Errors that are required to be detected no later than at compile time 2. Errors that are required to be detected at run-time 3. Bounded errors, whose detection isn't required, but whose effect is bounded 4. Erroneous execution, with unbounded effect The design decisions made in Ada are always in favor of 1, maybe at the cost of some idioms. So in Ada modifying the index of a loop isn't a "bounded error" as it seems to be in Fortran; it is a compile-time error. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-27 9:06 ` Dmitry A. Kazakov @ 2006-05-27 14:44 ` Richard Maine 2006-05-29 12:02 ` Jan Vorbrüggen 1 sibling, 0 replies; 314+ messages in thread From: Richard Maine @ 2006-05-27 14:44 UTC (permalink / raw) Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > Well, maybe Ada and Fortran share some things, but not the design > philosophy. Ada standard actually starts with a classification of error: > > 1. Errors that are required to be detected no later than at compile time > 2. Errors that are required to be detected at run-time > 3. Bounded errors which detection isn't required, but the effect is bounded > 4. Erroneous execution, unbounded effect > > The design decisions made in Ada are always in favor of 1, maybe at the > cost of some idioms. So in Ada modifying the index of a loop isn't a > "bounded error" as it seems to be in Fortran, it is a compile-time error. In that classification scheme, I'd put modifying the index of a loop as a 4 in Fortran, if I understand the categories correctly. Almost anything can happen in the Fortran case in theory. -- Richard Maine | Good judgement comes from experience; email: last name at domain . net | experience comes from bad judgement. domain: summertriangle | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-27 9:06 ` Dmitry A. Kazakov 2006-05-27 14:44 ` Richard Maine @ 2006-05-29 12:02 ` Jan Vorbrüggen 1 sibling, 0 replies; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-29 12:02 UTC (permalink / raw) > The design decisions made in Ada are always in favor of 1, maybe at the > cost of some idioms. So in Ada modifying the index of a loop isn't a > "bounded error" as it seems to be in Fortran, it is a compile-time error. My take is that current Fortran enables a compiler to behave in a similar way _if_ your program obeys certain rules - I suspect the restrictions implemented by F (a "modern" subset supported, at least, by a mode in g95) fulfil these rules. However, the desire to support older source code makes it impossible to remove the "escape routes" from the standard and to mandate the necessary restrictions for the language. Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 16:44 ` Ludovic Brenta 2006-05-26 17:49 ` Gordon Sande @ 2006-05-26 20:31 ` glen herrmannsfeldt 2006-05-27 14:29 ` robin 2 siblings, 0 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-05-26 20:31 UTC (permalink / raw) Ludovic Brenta wrote: (snip) > Also, in Ada, the loop index does not exist outside of the loop. From > what you said, it seems to me that in Fortran, the loop index still > exists after the loop, and has a well-defined value, but the > programmer can then change it. Correct? In Fortran 66 the loop index value was not defined for a normally terminated DO loop. If you GOTO out of the loop, it was defined. This was changed in Fortran 77. In all cases, the program can change it outside the loop. (Well, I would rather not mention "extended range of DO".) -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
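[Editorial note: the Fortran 77-and-later behaviour glen describes can be demonstrated directly -- after both a natural termination and a forced exit the index holds a well-defined value, and it remains an ordinary variable afterwards:]

```fortran
program index_after_loop
   implicit none
   integer :: i
   do i = 1, 10
   end do
   print *, i        ! prints 11: one increment past the final trip
   do i = 1, 10
      if (i == 4) exit
   end do
   print *, i        ! prints 4: the value at the forced exit
   i = 99            ! ...and the program may change it outside the loop
end program index_after_loop
```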
* Re: Bounds Check Overhead 2006-05-26 16:44 ` Ludovic Brenta 2006-05-26 17:49 ` Gordon Sande 2006-05-26 20:31 ` glen herrmannsfeldt @ 2006-05-27 14:29 ` robin 2 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-05-27 14:29 UTC (permalink / raw) "Ludovic Brenta" <ludovic@ludovic-brenta.org> wrote in message news:87ac9420s5.fsf@ludovic-brenta.org... > Also, in Ada, the loop index does not exist outside of the loop. From > what you said, it seems to me that in Fortran, the loop index still > exists after the loop, and has a well-defined value, but the > programmer can then change it. Correct? Naturally the value of the loop variable can be changed once the loop has terminated. The loop variable is just an ordinary variable. The loop variable can be used in any way subsequently (including as the control variable for another loop). ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 14:44 ` Dick Hendrickson 2006-05-26 14:52 ` Rich Townsend @ 2006-05-26 14:59 ` gary.l.scott 2006-05-26 15:10 ` Dick Hendrickson 1 sibling, 1 reply; 314+ messages in thread From: gary.l.scott @ 2006-05-26 14:59 UTC (permalink / raw) So should I be prevented from having the volatile attribute in common (a common place for it to be in old code)? ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 14:59 ` gary.l.scott @ 2006-05-26 15:10 ` Dick Hendrickson 2006-05-26 18:41 ` gary.l.scott 0 siblings, 1 reply; 314+ messages in thread From: Dick Hendrickson @ 2006-05-26 15:10 UTC (permalink / raw) gary.l.scott@lmco.com wrote: > So should I be prevented from having the volatile attribute in common > (a common place for it to be in old code)? > Well, personally, I think you should be prevented from having either common or volatile in a program. But, that's probably not what your question is about ;) . The true answer is no, mixing volatile and common is a fine way to program. You merely need to make sure that your other processes don't modify a volatile variable while it is being used as a DO index. Dick Hendrickson ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 15:10 ` Dick Hendrickson @ 2006-05-26 18:41 ` gary.l.scott 2006-05-26 18:56 ` Dan Nagle 0 siblings, 1 reply; 314+ messages in thread From: gary.l.scott @ 2006-05-26 18:41 UTC (permalink / raw) I was just thinking that if the loop counter was in fact declared to be volatile, that the declaration of such should cause the compiler to diagnose it as an incompatible loop index variable (i.e. F2k8+). ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 18:41 ` gary.l.scott @ 2006-05-26 18:56 ` Dan Nagle 2006-05-26 19:16 ` Richard Maine 0 siblings, 1 reply; 314+ messages in thread From: Dan Nagle @ 2006-05-26 18:56 UTC (permalink / raw) Hello, gary.l.scott@lmco.com wrote: > I was just thinking that if the loop counter was in fact declared to be > volatile, that the declaration of such should cause the compiler to > diagnose it as an incompatible loop index variable (i.e. F2k8+). In 04-007, at 167[22:23], the <do-index> may not be redefined. The volatile attribute, at 85[5:6], specifies that the variable may become undefined or redefined by a means outside the program. I suppose it's a theorem left for the reader to prove that a do-index should not be volatile, or at worst, if it is, the volatile part is ineffective. I'll ask on the J3 list to see whether there's sentiment for addressing this directly, or if I missed something in my quick perusal of the standard. -- Cheers! Dan Nagle Purple Sage Computing Solutions, Inc. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 18:56 ` Dan Nagle @ 2006-05-26 19:16 ` Richard Maine 0 siblings, 0 replies; 314+ messages in thread From: Richard Maine @ 2006-05-26 19:16 UTC (permalink / raw) Dan Nagle <dannagle@verizon.net> wrote: > Hello, > > gary.l.scott@lmco.com wrote: > > I was just thinking that if the loop counter was in fact declared to be > > volatile, that the declaration of such should cause the compiler to > > diagnose it as an incompatible loop index variable (i.e. F2k8+). > > In 04-007, at 167[22:23], the <do-index> may not be redefined. > > The volatile attribute, at 85[5:6], specifies that the variable may > become undefined or redefined by a means outside the program. > > I suppose it's a theorem left for the reader to prove that > a do-index should not be volatile, or at worst, if it is, > the volatile part is ineffective. > > I'll ask on the J3 list to see whether there's sentiment > for addressing this directly, or if I missed something > in my quick perusal of the standard. I think you are missing half of the purpose of volatile. You've got the half about a volatile variable possibly getting modified elsewhere, but you forgot the part about the volatile variable possibly being referenced elsewhere. They are the opposite sides of the same coin, depending on whether you are sending data to or getting data from the external source. The volatile attribute doesn't distinguish. I see nothing inherently wrong in making a loop variable volatile so that some external source can reference it. It is likely to kill the performance of the loop, but presumably the user would realize that. Volatile is a performance killer in general, but this is "known". An alternate, verbose spelling of volatile might be "do everything exactly as I said, one step at a time, no matter how stupid it might look; trust me." -- Richard Maine | Good judgement comes from experience; email: last name at domain . net | experience comes from bad judgement. 
domain: summertriangle | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
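[Editorial note: for concreteness, the disputed construct looks like this (a sketch; VOLATILE is Fortran 2003, and whether a processor ought to accept a volatile do-index is exactly the question Dan is taking to J3). The use Richard defends is the reference-only direction: an external agent, say a debugger or monitoring thread, watching the loop's progress:]

```fortran
program watched_loop
   implicit none
   integer, volatile :: i   ! an external agent may read (or clobber) i
   integer :: total
   total = 0
   do i = 1, 1000000        ! the disputed case: a volatile do-index
      total = total + 1     ! each trip must store/reload i from memory,
   end do                   ! which is why performance suffers
   print *, total
end program watched_loop
```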
* Re: Bounds Check Overhead 2006-05-26 10:59 ` Dan Nagle 2006-05-26 14:44 ` Dick Hendrickson @ 2006-05-27 14:29 ` robin 1 sibling, 0 replies; 314+ messages in thread From: robin @ 2006-05-27 14:29 UTC (permalink / raw) "Dan Nagle" <dannagle@verizon.net> wrote in message news:3cBdg.6255$oa3.2407@trnddc08... > Ludovic Brenta wrote: > > And that's why Ada specifies that I cannot change inside the loop, and > > is undefined outside the loop; see ARM 5.5 (9, 10). > > > > You seem to imply that Fortran has a similar rule, but that compilers > > do not enforce that rule, and therefore have to perform range checking > > to enforce a non-existent language rule about array access. I am > > confused. Could you clarify? > > A loop index cannot be changed within the loop. It's easy enough to do. Put the index in COMMON or EQUIVALENCE it. Or call a procedure and pass the loop variable. What you mean is that it's illegal to change the loop variable. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 10:16 ` Ludovic Brenta 2006-05-26 10:59 ` Dan Nagle @ 2006-05-26 16:18 ` Richard Maine 2006-05-26 17:30 ` Nasser Abbasi 2006-05-26 20:27 ` glen herrmannsfeldt 1 sibling, 2 replies; 314+ messages in thread From: Richard Maine @ 2006-05-26 16:18 UTC (permalink / raw) Ludovic Brenta <ludovic@ludovic-brenta.org> wrote: > You seem to imply that Fortran has a similar rule, [that a loop index shall not change within a loop] > but that compilers > do not enforce that rule, and therefore have to perform range checking > to enforce a non-existent language rule about array access. I am > confused. Could you clarify? Others have replied some, but let me make my attempt at clarification because yes, you seem to be confused about multiple related things here. Yes, there is a Fortran language rule that a loop index shall not be changed while a loop is executing. Compilers are not required to enforce that rule, but essentially all compilers make at least some attempt to enforce it and catch the simple cases, which is most cases, but not all. There are cases where the violation of the rule is hard to detect, because the change occurs in some other procedure. But you talk about a "non-existent language rule about array access". It is not non-existent. There is a language rule prohibiting array access out of bounds. Brooks' point (and a good point it was, I thought) was that the rules on array bounds and loop index changes were similar in that both are rules that the compiler is not required to enforce. Fortran has many, many such rules. In fact, that is how most of the Fortran language rules are. There is even a general statement to that effect near the front of the Fortran standard - that requirements and prohibitions generally apply to the program rather than to the compiler. It is the program that is prohibited from exceeding array bounds or changing loop indices, or doing all kinds of other things. 
The compiler is not responsible for enforcing it. There are some things that a compiler is required to enforce, but it is a small subset of all the requirements of the language. Although not stated in these terms, the subset is basically those things that can reasonably be expected to be detected at compile time with typical compiler implementations (which includes separate compilation of external procedures). The standard mostly doesn't require run-time checks, and it doesn't require compile-time ones that are particularly "difficult" (i.e. that would require flow analysis, interprocedural analysis, or other things like that). Things like syntax errors, on the other hand, are usually required to be diagnosable. Compilers almost always go farther than the strict requirement of the standard. Any compiler that had no diagnostic capability other than that required by the standard would be commercially unviable. You probably could not give it away (literally). In the current case, almost all compilers will detect the simple and common cases of changing a DO loop index inside of the loop. That one tends to be a compile-time test looking for the syntactically obvious case of the change occurring in the source code physically in range of the loop. Likewise almost all compilers have the capability of detecting most array bounds violations. That one tends to be mostly done at run-time (some simple cases are caught at compile-time, but that's not the usual case), and it tends to be optional to turn on the test. Some compilers also miss the harder cases, while other compilers can catch absolutely all cases. There is feature differentiation between compilers in that area. -- Richard Maine | Good judgement comes from experience; email: last name at domain . net | experience comes from bad judgement. domain: summertriangle | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
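[Editorial note: an illustration of the array-bounds case Richard describes (a sketch; option spellings vary by compiler -- g95 and gfortran of this era spell it -fbounds-check):]

```fortran
program out_of_bounds
   implicit none
   integer :: a(10), i
   i = 11
   a(i) = 0        ! violates a language rule the compiler need not enforce
   print *, a(1)
end program out_of_bounds
```

[Compiled plainly this may run silently (or corrupt memory); with the checking option enabled the same program stops with a run-time bounds error -- the optional, feature-differentiated behaviour described above.]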
* Re: Bounds Check Overhead 2006-05-26 16:18 ` Richard Maine @ 2006-05-26 17:30 ` Nasser Abbasi 2006-05-26 18:24 ` Richard Maine 2006-05-26 18:26 ` Rich Townsend 2006-05-26 20:27 ` glen herrmannsfeldt 1 sibling, 2 replies; 314+ messages in thread From: Nasser Abbasi @ 2006-05-26 17:30 UTC (permalink / raw) "Richard Maine" <nospam@see.signature> wrote in message news:1hfxsjh.t88mchrssv9cN%nospam@see.signature... > Ludovic Brenta <ludovic@ludovic-brenta.org> wrote: > >> You seem to imply that Fortran has a similar rule, > [that an loop index shall not change within a loop] >> but that compilers >> do not enforce that rule, and therefore have to perform range checking >> to enforce a non-existent language rule about array access. I am >> confused. Could you clarify? > > Others have replied some, but let me make my attempt at clarification > because yes, you seem to be confused about multiple related things here. > > Yes, there is a Fortran language rule that a loop index shall not be > changed while a loop is executing. Compilers are not required to enforce > that rule, but essentially all compilers make at least some attempt to > enforce it and catch the simple cases, which is most cases, but not all. > There are cases where the violation of the rule is hard to detect, > because the change occurs in some other procedure. 
> Hello; <about loop counters being changed.> I am not sure if there is supposed to be a compiler flag to enforce this or not; you do not seem to imply that, so I did this very simple test, please see: ------------ test for checking on changing loop index---- $ cat a.f90 PROGRAM MAIN DO I=1,10 CALL foo(I) PRINT *,I END DO END PROGRAM SUBROUTINE foo(I) I=I+1 END SUBROUTINE ------------- end program ------ $ g95 a.f90 $ ./a.exe 2 4 6 8 10 12 14 16 18 20 --------- end run --------- I did the same in Ada: --------- Ada ------- procedure Main is PROCEDURE foo(I: in out integer) IS begin I:=I+1; end foo; BEGIN FOR I IN 1..10 LOOP foo(I); END LOOP; END Main; ------- end ada ---- The above will not even be allowed to compile since Ada wants 'I' to be an actual variable. The compile error I get is "actual for I must be a variable". This means I am not even allowed to use "I" in such a call. This eliminates the problem of accidentally changing the loop counter. Nasser ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 17:30 ` Nasser Abbasi @ 2006-05-26 18:24 ` Richard Maine 2006-05-26 19:23 ` Paul Van Delst 2006-05-26 19:25 ` Nasser Abbasi 1 sibling, 2 replies; 314+ messages in thread From: Richard Maine @ 2006-05-26 18:24 UTC (permalink / raw) Nasser Abbasi <nma@12000.org> wrote: > <about loop counters being changed.> > > I am not sure if there is supposed to compiler flag to enforce this or not, I said > Compilers are not required to enforce that rule, (In context, I meant Fortran compilers, as I do in all cases below.) In terms of the standard, that is basically all there is to say. As I tried to elaborate, almost all compilers do enforce simple cases. I don't know of a compiler that doesn't. However, that has nothing to do with any requirement of the standard. The standard has nothing at all to say about compiler switches in any context. That is not a concept defined by the standard. > so I did this very simple test, No, this is not a simple test. This is one of the complicated cases. The code is short, but that doesn't necessarily mean simple to diagnose. This one is hard to test. See my previous comments about separate compilation. They are extremely relevant - indeed central here. > PROGRAM MAIN > > DO I=1,10 > CALL foo(I) > PRINT *,I > END DO > > END PROGRAM This is a perfectly valid main program, by the definition of the standard. In particular, it could be part of a valid entire program, when combined with an appropriate subroutine foo. > SUBROUTINE foo(I) > I=I+1 > END SUBROUTINE And this is a perfectly valid subroutine, when combined with an appropriate main program calling it. Although the two parts are separately valid, they don't "fit" together. It is only when you look at them together that they become invalid. 
This is a "traditional" coding style with an external subprogram, which might be compiled completely separately, the compiler having no knowledge of the subroutine when it is compiling the main program, and vice versa. *SOME* (definitely not all) compilers might catch this in some cases (such as when it is all in one source file, but "source file" is another concept not even defined in the Fortran standard). You could make this even harder to test by having the subroutine read some input and use that input to decide whether or not to increment the variable. In that case, the legality would depend on the input data. > I did the same in Ada: No you didn't do the same. I don't know whether g95 would happen to catch it if you did make the Fortran code comparable. I don't particularly care either. I know what the requirements of the Fortran standard are here - the compiler is not required to catch it, but some compilers might. I'm also not particularly interested in pursuing this line at all; I am aware that the Ada standard has stronger requirements in this area. I don't really need test cases to illustrate that. The only reason I'm posting this part is to point out that your test programs are not, in fact, very close to "the same". Your Ada procedure foo is defined inside the main program, not outside of it. The comparable thing in Fortran would be an internal procedure, not an external procedure. Internal procedures are not compiled separately. Therefore, the odds of catching things like this in an internal procedure are far better. It still is not guaranteed. I might have seen some bugs resulting from uncaught errors in that situation. But the odds are at least better. > The above will not even be allowed to compile since Ada wants 'I' to be an > actual variable. I don't know Ada well enough to interpret this precisely. It has been far too long since I did anything in Ada (about 15 years). 
But if I am interpreting your statement there correctly, I'd say that a fundamental difference here is that in Fortran, a loop index *IS* an actual variable, just like any other. There are restrictions on how you may use it (some of them hard for the compiler to test, as mentioned above), but it fundamentally is a variable. This has good points and bad points. I'm not feeling like arguing such things. I'll stick to describing the factual matters. -- Richard Maine | Good judgement comes from experience; email: last name at domain . net | experience comes from bad judgement. domain: summertriangle | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 18:24 ` Richard Maine @ 2006-05-26 19:23 ` Paul Van Delst 2006-05-26 19:25 ` Nasser Abbasi 1 sibling, 0 replies; 314+ messages in thread From: Paul Van Delst @ 2006-05-26 19:23 UTC (permalink / raw) Richard Maine wrote: > Nasser Abbasi <nma@12000.org> wrote: > > >><about loop counters being changed.> >> >>I am not sure if there is supposed to compiler flag to enforce this or not, > > > I said > > >>Compilers are not required to enforce that rule, > > > (In context, I meant Fortran compilers, as I do in all cases below.) In > terms of the standard, that is basically all there is to say. As I tried > to elaborate, almost all compilers do enforce simpe cases. I don't know > of a compiler that doesn't. However, that has nothing to do with any > requirement of the standard. > > The standard has nothing at all to say about compiler switches in any > context. That is not a concept defined by the standard. > > >>so I did this very simple test, > > > > No, this is not a simple test. This is one of the complicated cases. The > code is short, but that doesn't necessarily mean simple to diagnose. For the OP, here's both cases, and the simple one is caught: PROGRAM testit IMPLICIT NONE INTEGER :: i DO i=1,10 CALL foo(i) ! The complicated case PRINT *,i i = i+2 ! The simple case PRINT *,i END DO CONTAINS SUBROUTINE foo(i) INTEGER,INTENT(IN OUT) :: i i=i+1 END SUBROUTINE foo END PROGRAM testit lnx:scratch : gfortran testit.f90 In file testit.f90:7 i = i+2 1 In file testit.f90:4 DO i=1,10 2 Error: Variable 'i' at (1) cannot be redefined inside loop beginning at (2) (g95, lf95, and pgf95 give essentially the same error message.) cheers, paulv -- Paul van Delst Ride lots. CIMSS @ NOAA/NCEP/EMC Eddy Merckx Ph: (301)763-8000 x7748 Fax:(301)763-8545 ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 18:24 ` Richard Maine 2006-05-26 19:23 ` Paul Van Delst @ 2006-05-26 19:25 ` Nasser Abbasi 2006-05-26 19:46 ` Richard Maine 1 sibling, 1 reply; 314+ messages in thread From: Nasser Abbasi @ 2006-05-26 19:25 UTC (permalink / raw) Hello "Richard Maine" <nospam@see.signature> wrote in message news:1hfxy4r.1sv2j76l6cgg1N%nospam@see.signature... > Nasser Abbasi <nma@12000.org> wrote: > > Internal procedures are not compiled separately. Therefore, the odds of > catching things like this in an internal procedure are far better. It > still is not guaranteed. I might have seen some bugs resulting form > uncaught errors in that situation. But the odds are at least better. > Ok, you are correct, the Fortran example was using an external proc (even though it was in the same source file) while the Ada one used an internal proc. Here is a Fortran one that uses an internal procedure to attempt to change the loop counter, so that now we have the same type of code. In Ada we call an internal proc to try to update a loop counter, and in Fortran we call an internal proc to try to update a loop counter. I hope you'll find it to be a better test. $ cat a.f90 ---------------------- PROGRAM MAIN DO I=1,10 CALL foo(I) PRINT *,I END DO CONTAINS SUBROUTINE foo(I) I=I+1 END SUBROUTINE END PROGRAM MAIN ------------------ $ g95 a.f90 $ ./a.exe 2 4 6 8 10 12 14 16 18 20 Please note that I did not use intent(in) in the Fortran internal proc; this is on purpose. I wanted to see if the compiler would detect that a call is being made to a proc which takes a loop counter as an argument without intent(in) declared on that argument. I do not think this is too hard for the compiler to check, but I can be wrong. I would have to assume then that this is a g95 issue where it just does not do this extra check if it is supposed to be part of the Fortran standard to try to check against loop counters updates. 
thanks, Nasser ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 19:25 ` Nasser Abbasi @ 2006-05-26 19:46 ` Richard Maine 2006-05-30 19:13 ` Robert A Duff 0 siblings, 1 reply; 314+ messages in thread From: Richard Maine @ 2006-05-26 19:46 UTC (permalink / raw) Nasser Abbasi <nma@12000.org> wrote: > I do not think this is too hard for the compiler to check, but I can be > wrong. I would have to assume then that this is a g95 issue where it just > does not do this extra check if it is supposed to be part of the Fortran > standard to try to check against loop counters updates. I seem to be having some trouble communicating here. No it is not "supposed to be part of the Fortran standard to try to check against loop counters updates". I will repeat for the 3rd time. >> Compilers are not required to enforce that rule, Am I not making this clear? I cannot come up with a simpler way of stating it. As I also said >> In terms of the standard, that is basically all there is to say. Anything else on the subject is outside of the standard. Really. There are other things to say on the subject, but not in the standard. Also, there is almost nothing that the standard says that compilers are supposed to "try" to do. To quote Yoda "Do or do not... there is no try." There are plenty of things that the market (i.e. users) say that compilers should try to do, but it is quite rare for the standard to say things like that. There are a few cases where the standard has a "recommendation", which could reasonably be interpreted as a suggestion to try to do something that is not strictly required. But those are rare and none of them are in this area. -- Richard Maine | Good judgement comes from experience; email: last name at domain . net | experience comes from bad judgement. domain: summertriangle | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 19:46 ` Richard Maine @ 2006-05-30 19:13 ` Robert A Duff 2006-06-04 23:48 ` Richard Maine 0 siblings, 1 reply; 314+ messages in thread From: Robert A Duff @ 2006-05-30 19:13 UTC (permalink / raw) nospam@see.signature (Richard Maine) writes: > Nasser Abbasi <nma@12000.org> wrote: > > > I do not think this is too hard for the compiler to check, but I can be > > wrong. I would have to assume then that this is a g95 issue where it just > > does not do this extra check if it is supposed to be part of the Fortran > > standard to try to check against loop counters updates. > > I seem to be having some trouble communicating here. No it is not > "supposed to be part of the Fortran standard to try to check against > loop counters updates". > > I will repeat for the 3rd time. > > >> Compilers are not required to enforce that rule, > > Am I not making this clear?... Well, it seems clear enough to me. ;-) I'm not a Fortran expert, but I think you're saying that (according to the Fortran standard) it is an error to modify a DO-loop variable, but Fortran compilers are not required to detect this error. They are allowed to, and in fact do so in some cases. But a separate issue is whether it is possible or feasible for compilers to detect this error in all cases. Apparently (please correct me if I'm wrong), it is not feasible for Fortran compilers to detect this error in all cases. In Ada, the same error can be detected in all cases (and in fact the Ada standard requires it) -- even with separate compilation. My point is that it's not just an issue of whether certain errors are required to be detected. The language design has some influence over whether they _can_ be detected (given separate compilation). Apparently, the INTENT(in) thing is a more modern addition to Fortran, which could solve the problem if compilers choose to have strict rules about it. (Sorry, I have not programmed in Fortran for years, and was never an expert.) >... 
I cannot come up with a simpler way of > stating it. As I also said > > >> In terms of the standard, that is basically all there is to say. > > Anything else on the subject is outside of the standard. Really. There > are other things to say on the subject, but not in the standard. > > Also, there is almost nothing that the standard says that compilers are > supposed to "try" to do. To quote Yoda > > "Do or do not... there is no try." Right. The Ada standard is the same way -- it rarely says "try". It requires certain errors to be detected (either at compile time, or at link time, or at run time), and allows other errors to go undetected. > There are plenty of things that the market (i.e. users) say that > compilers should try to do, but it is quite rare for the standard to say > things like that. ... Quite true, but the Standard (for whatever language) determines to some extent whether such error-detection market-demands can be met. >... There are a few cases where the standard has a > "recommendation", which could reasonably be interpreted as a suggestion > to try to do something that is not strictly required. But those are rare > and none of them are in this area. - Bob ^ permalink raw reply [flat|nested] 314+ messages in thread
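[Editorial note: the INTENT(IN) mechanism mentioned above can be sketched as follows. This is a minimal illustration, not code from the thread, and the subroutine name is made up. With the intent declared, a conforming Fortran 90 compiler must reject the assignment to the dummy argument at compile time, which parallels the Ada behaviour described in the message above.]

```fortran
! Sketch only: with INTENT(IN) declared, modifying the dummy
! argument is a compile-time error, so a loop index passed to
! this procedure cannot be changed through the call.
SUBROUTINE foo(I)
  INTEGER, INTENT(IN) :: I
  I = I + 1   ! rejected at compile time: I has INTENT(IN)
END SUBROUTINE foo
```

Without the INTENT(IN) line, the assignment compiles silently, and modifying a DO variable through such a call violates the standard's rule only at run time, if it is detected at all.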
* Re: Bounds Check Overhead 2006-05-30 19:13 ` Robert A Duff @ 2006-06-04 23:48 ` Richard Maine 0 siblings, 0 replies; 314+ messages in thread From: Richard Maine @ 2006-06-04 23:48 UTC (permalink / raw) Robert A Duff <bobduff@shell01.TheWorld.com> wrote: > I'm not a Fortran expert, but I think you're saying that (according to > the Fortran standard) it is an error to modify a DO-loop variable, but > Fortran compilers are not required to detect this error. They are > allowed to, and in fact do so in some cases. > > But a separate issue is whether it is possible or feasible for compilers > to detect this error in all cases. Apparently (please correct me if I'm > wrong), it is not feasible for Fortran compilers to detect this error in > all cases. That's correct. Of course, it is always possible for the error to be detected at run-time, but that's not the same thing (and is not required by the standard either). I agree with your other comments. (Well, I agree with the above one also, but I thought I'd confirm your supposition). -- Richard Maine | Good judgement comes from experience; email: last name at domain . net | experience comes from bad judgement. domain: summertriangle | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 17:30 ` Nasser Abbasi 2006-05-26 18:24 ` Richard Maine @ 2006-05-26 18:26 ` Rich Townsend 2006-05-26 19:58 ` Simon Wright 1 sibling, 1 reply; 314+ messages in thread From: Rich Townsend @ 2006-05-26 18:26 UTC (permalink / raw) Nasser Abbasi wrote: > "Richard Maine" <nospam@see.signature> wrote in message > news:1hfxsjh.t88mchrssv9cN%nospam@see.signature... > >>Ludovic Brenta <ludovic@ludovic-brenta.org> wrote: >> >> >>>You seem to imply that Fortran has a similar rule, >> >> [that a loop index shall not change within a loop] >> >>>but that compilers >>>do not enforce that rule, and therefore have to perform range checking >>>to enforce a non-existent language rule about array access. I am >>>confused. Could you clarify? >> > >>Others have replied some, but let me make my attempt at clarification >>because yes, you seem to be confused about multiple related things here. >> >>Yes, there is a Fortran language rule that a loop index shall not be >>changed while a loop is executing. Compilers are not required to enforce >>that rule, but essentially all compilers make at least some attempt to >>enforce it and catch the simple cases, which is most cases, but not all. >>There are cases where the violation of the rule is hard to detect, >>because the change occurs in some other procedure. 
>> > > > Hello; > > <about loop counters being changed.> > > I am not sure if there is supposed to be a compiler flag to enforce this or not, > you do not seem to imply that, so I did this very simple test, please see: > > ------------ test for checking on changing loop index---- > $ cat a.f90 > > PROGRAM MAIN > > DO I=1,10 > CALL foo(I) > PRINT *,I > END DO > > END PROGRAM > > SUBROUTINE foo(I) > I=I+1 > END SUBROUTINE > ------------- end program ------ > $ g95 a.f90 > $ ./a.exe > 2 > 4 > 6 > 8 > 10 > 12 > 14 > 16 > 18 > 20 > --------- end run --------- > > I did the same in Ada: > > --------- Ada ------- > procedure Main is > > PROCEDURE foo(I: in out integer) IS > begin > I:=I+1; > end foo; > > BEGIN > > FOR I IN 1..10 LOOP > foo(I); > END LOOP; > > END Main; > ------- end ada ---- > > The above will not even be allowed to compile since Ada wants 'I' to be an > actual variable. > > The compile error I get is "actual for I must be a variable" > > This means I am not even allowed to use "I" in a call. > This eliminates the problem of accidentally changing the loop counter. > > Nasser > > I don't think you're comparing like-for-like. A better comparison: PROGRAM MAIN DO I=1,10 CALL foo(I) PRINT *,I END DO CONTAINS SUBROUTINE foo(I) INTENT(in) :: i I=I+1 END SUBROUTINE END PROGRAM MAIN Also, from looking at your Ada program, it seems you can't pass loop counters to procedures. Is this the case? cheers, Rich ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 18:26 ` Rich Townsend @ 2006-05-26 19:58 ` Simon Wright 2006-05-26 20:06 ` Rich Townsend 0 siblings, 1 reply; 314+ messages in thread From: Simon Wright @ 2006-05-26 19:58 UTC (permalink / raw) Rich Townsend <rhdt@barVOIDtol.udel.edu> writes: > Also, from looking at your Ada program, it seems you can't pass loop > counters to procedures. Is this the case? No, it was the 'in out' on the parameter that was forbidden. We can say procedure Foo (X : Integer); procedure Foo (X : in Integer); with the same meaning, I suspect the same as your intent(in), and within Foo X is treated as constant; procedure Foo (X : out Integer); where the initial value of X is not accessible within Foo (though its constraints, eg array bounds, are); and procedure Foo (X : in out Integer); where the initial value is accessible. I think this is the default Fortran mode? Only in the first case can you pass a loop index. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 19:58 ` Simon Wright @ 2006-05-26 20:06 ` Rich Townsend 2006-05-26 20:16 ` Richard Edgar 2006-05-26 20:28 ` Richard Maine 0 siblings, 2 replies; 314+ messages in thread From: Rich Townsend @ 2006-05-26 20:06 UTC (permalink / raw) Simon Wright wrote: > Rich Townsend <rhdt@barVOIDtol.udel.edu> writes: > > >>Also, from looking at your Ada program, it seems you can't pass loop >>counters to procedures. Is this the case? > > > No, it was the 'in out' on the parameter that was forbidden. > > We can say > > procedure Foo (X : Integer); > procedure Foo (X : in Integer); > > with the same meaning, I suspect the same as your intent(in), and > within Foo X is treated as constant; > > procedure Foo (X : out Integer); > > where the initial value of X is not accessible within Foo (though its > constraints, eg array bounds, are); and > > procedure Foo (X : in out Integer); > > where the initial value is accessible. I think this is the default > Fortran mode? Yes. Without an explicit INTENT(), Fortran defaults to INTENT(inout) -- and I guess Ada defaults to INTENT(in), which is 'safer', but could not be done in Fortran due to the requirement of backward compatibility. cheers, Rich ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 20:06 ` Rich Townsend @ 2006-05-26 20:16 ` Richard Edgar 2006-05-26 20:28 ` Richard Maine 1 sibling, 0 replies; 314+ messages in thread From: Richard Edgar @ 2006-05-26 20:16 UTC (permalink / raw) Rich Townsend wrote: > Yes. Without an explicit INTENT(), Fortran defaults to INTENT(inout) -- > and I guess Ada defaults to INTENT(in), which is 'safer', but could not > be done in Fortran due to the requirement of backward compatibility. Minor nitpick, but I thought that not declaring INTENT was _not_ the same as INTENT(INOUT). Specifically, anything declared INOUT had to be adjustable, even if it wasn't adjusted. Thus, it would not be legal to pass a PARAMETER to an INOUT argument, but it would be legal to pass it to an argument with no INTENT declared (Sorry for messing up the standardese in this, but I hope my meaning is clear)? Richard ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 20:06 ` Rich Townsend 2006-05-26 20:16 ` Richard Edgar @ 2006-05-26 20:28 ` Richard Maine 2006-05-30 19:20 ` Robert A Duff 1 sibling, 1 reply; 314+ messages in thread From: Richard Maine @ 2006-05-26 20:28 UTC (permalink / raw) Rich Townsend <rhdt@barVOIDtol.udel.edu> wrote: > Yes. Without an explicit INTENT(), Fortran defaults to INTENT(inout) That is not true. Fortran has 4 distinct intent() values - in, out, inout, and unspecified. All 4 are different. The unspecified case is sort of like inout, but it is *NOT* the same. The unspecified case is largely historical. It means what it has to for compatibility with old codes, which turns out to be a bit messy. I doubt that anyone would come up with such a thing from scratch today... but they didn't. I might describe the unspecified case as "who knows?" The argument might be for input. It might be for output. It might be both, and it might play different roles at different times. A particular difference between unspecified and inout is that the actual argument must be definable for inout. Mostly that means it has to be a variable - not an expression. For unspecified, the actual argument only has to be definable if the dummy gets defined or redefined on that particular invocation; yes, it can be different on different invocations. The distinction is important to the case in question. Using a DO index variable as an actual argument for an intent(inout) dummy is at least questionable. I don't recall whether or not it is prohibited, but it is at least something worth a warning anyway. Using a DO index variable as an actual argument for an unspecified intent dummy is normal, widespread practice in old codes - after all, that is the only kind of intent there was, and passing DO index variables as actual arguments is often useful. -- Richard Maine | Good judgement comes from experience; email: last name at domain . net | experience comes from bad judgement. 
domain: summertriangle | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 20:28 ` Richard Maine @ 2006-05-30 19:20 ` Robert A Duff 2006-06-05 13:50 ` Richard Edgar 0 siblings, 1 reply; 314+ messages in thread From: Robert A Duff @ 2006-05-30 19:20 UTC (permalink / raw) nospam@see.signature (Richard Maine) writes: > Rich Townsend <rhdt@barVOIDtol.udel.edu> wrote: > > > Yes. Without an explicit INTENT(), Fortran defaults to INTENT(inout) > > That is not true. Fortran has 4 distinct intent() values - in, out, > inout, and unspecified. All 4 are different. The unspecified case is > sort of like inout, but it is *NOT* the same. > > The unspecified case is largely historical. ... [snip] Thanks for the good explanation. It seems that for newly-written Fortran, one would want a compiler option that requires the INTENT to be specified. And passing a DO-loop index variable as a parameter should require INTENT(in). In that case, Fortran would be pretty much equivalent to Ada in this regard. - Bob ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-30 19:20 ` Robert A Duff @ 2006-06-05 13:50 ` Richard Edgar 0 siblings, 0 replies; 314+ messages in thread From: Richard Edgar @ 2006-06-05 13:50 UTC (permalink / raw) Robert A Duff wrote: >>> Yes. Without an explicit INTENT(), Fortran defaults to INTENT(inout) >> That is not true. Fortran has 4 distinct intent() values - in, out, >> inout, and unspecified. All 4 are different. The unspecified case is >> sort of like inout, but it is *NOT* the same. >> >> The unspecified case is largely historical. ... [snip] > > Thanks for the good explanation. > > It seems that for newly-written Fortran, one would want a compiler > option that requires the INTENT to be specified. And passing a DO-loop > index variable as a parameter should require INTENT(in). In that case, > Fortran would be pretty much equivalent to Ada in this regard. I'm not sure... requiring INTENT would catch a lot of cases, but I don't think all. If I've understood the discussion, in Ada, loop counters are created specially for the loop? In Fortran, they are just ordinary variables. In particular, the counter could be a 'global' variable (I'll defer a detailed discussion about the meaning of 'global variable' in Fortran :-) ). This means that the loop counter could be modified through the 'global' reference, rather than the reference passed as an argument. I suppose an extremely sophisticated compiler/linker might catch that, but in practice, I think it requires a runtime check. Apologies to those who write the standards for my loose use of terminology, but I hope my point is clear ;-) Richard ^ permalink raw reply [flat|nested] 314+ messages in thread
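[Editorial note: the 'global variable' scenario described above might look like the following sketch; the module and all names are hypothetical. The DO variable is modified through module association rather than through any argument, so even mandatory INTENT declarations on dummy arguments would not expose it, and catching it statically would require whole-program analysis.]

```fortran
MODULE state
  INTEGER :: I                ! module ('global') variable used as a DO index
END MODULE state

SUBROUTINE bump()
  USE state
  I = I + 1                   ! modifies the DO variable via the module,
END SUBROUTINE bump           ! not via any dummy argument

PROGRAM main
  USE state
  DO I = 1, 10                ! violates the standard's rule, but no
    CALL bump()               ! argument-intent check can detect it
  END DO
END PROGRAM main
```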
* Re: Bounds Check Overhead 2006-05-26 16:18 ` Richard Maine 2006-05-26 17:30 ` Nasser Abbasi @ 2006-05-26 20:27 ` glen herrmannsfeldt 2006-05-26 20:41 ` Richard Maine 1 sibling, 1 reply; 314+ messages in thread From: glen herrmannsfeldt @ 2006-05-26 20:27 UTC (permalink / raw) Richard Maine wrote: (snip) > Likewise almost all compilers have the capability of detecting > most array bounds violations. That one tends to be mostly done at > run-time (some simple cases are caught at compile-time, but that's not > the usual cases), I was not so long ago working on a Fortran program with the ever popular dimension (1) assumed size arrays. (I believe it was written in 1990.) It seems the IBM XLF compiler, even with runtime bounds checking off, does compile time bounds checking. A constant subscript of 2 is not allowed with an assumed size array dimensioned (1). -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
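[Editorial note: the old idiom described above looks roughly like this sketch, with hypothetical names. The array is declared with extent 1 but treated as arbitrarily long by callers; a compiler that takes the declaration literally can flag a constant subscript of 2 at compile time.]

```fortran
SUBROUTINE clear2(W)
  DOUBLE PRECISION W(1)   ! old-style 'assumed size' declaration
  W(2) = 0.0D0            ! constant subscript exceeds the declared bound;
END SUBROUTINE clear2     ! some compilers reject this at compile time
```

The standard-conforming way to express the same intent is W(*), an explicit assumed-size array, for which no upper bound is declared and no such compile-time check applies.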
* Re: Bounds Check Overhead 2006-05-26 20:27 ` glen herrmannsfeldt @ 2006-05-26 20:41 ` Richard Maine 0 siblings, 0 replies; 314+ messages in thread From: Richard Maine @ 2006-05-26 20:41 UTC (permalink / raw) glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote: > Richard Maine wrote: > > > Likewise almost all compilers have the capability of detecting > > most array bounds violations. That one tends to be mostly done at > > run-time (some simple cases are caught at compile-time, but that's not > > the usual cases), > > I was not so long ago working on a Fortran program with the ever popular > dimension (1) assumed size arrays. (I believe it was written in 1990.) > It seems the IBM XLF compiler, even with runtime bounds checking off, > does compile time bounds checking. A constant subscript of 2 is not > allowed with an assumed size array dimensioned (1). That's exactly one of the simple cases I was talking about that are sometimes caught. I've run into that one with old code. Might have even been the same code, as I ran into it with a fairly widely used free library code (perhaps fftpack, though I'm not sure of that). I was using a different compiler though. -- Richard Maine | Good judgement comes from experience; email: last name at domain . net | experience comes from bad judgement. domain: summertriangle | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-25 22:08 ` Bounds Check Overhead Simon Wright 2006-05-25 22:27 ` Brooks Moses @ 2006-05-27 14:29 ` robin 2006-05-27 14:56 ` Ludovic Brenta 1 sibling, 1 reply; 314+ messages in thread From: robin @ 2006-05-27 14:29 UTC (permalink / raw) "Simon Wright" <simon@pushface.org> wrote in message news:m2mzd569l9.fsf@grendel.local... > Bob Lidral <l1dralspamba1t@comcast.net> writes: > > > However, the performance hit of including explicit bounds checking > > can be significant -- especially for code with extremely short loops > > that are executed a lot of times. > > In Ada one should where possible use the 'Range attribute: Ideally yes, but in practice, such as in sorting and averaging, the loop is often one or two short of the number of elements in the array. > for I in Some_Array'Range loop > Process (Some_Array (I)); > end loop; > > where I _can't_ exceed the bounds, so it would be surprising if a > compiler inserted bounds checks. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-27 14:29 ` robin @ 2006-05-27 14:56 ` Ludovic Brenta 2006-05-27 15:53 ` jimmaureenrogers 2006-05-27 15:59 ` Dmitry A. Kazakov 0 siblings, 2 replies; 314+ messages in thread From: Ludovic Brenta @ 2006-05-27 14:56 UTC (permalink / raw) "robin" writes: > "Simon Wright" writes: >> In Ada one should where possible use the 'Range attribute: > > Ideally yes, but in practice, such as in sorting and averaging, > the loop is often one or two short of the number of elements in the array. No problem: type Some_Array_Type is array (Positive range <>) of Something; procedure Walk (A : in Some_Array_Type) is begin for J in A'First .. A'Last - 1 loop ... end loop; end Walk; -- Ludovic Brenta. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-27 14:56 ` Ludovic Brenta @ 2006-05-27 15:53 ` jimmaureenrogers 2006-05-27 15:59 ` Dmitry A. Kazakov 1 sibling, 0 replies; 314+ messages in thread From: jimmaureenrogers @ 2006-05-27 15:53 UTC (permalink / raw) Ludovic Brenta wrote: > "robin" writes: > > "Simon Wright" writes: > >> In Ada one should where possible use the 'Range attribute: > > > > Ideally yes, but in practice, such as in sorting and averaging, > > the loop is often one or two short of the number of elements in the array. > > No problem: > > type Some_Array_Type is array (Positive range <>) of Something; > > procedure Walk (A : in Some_Array_Type) is > begin > for J in A'First .. A'Last - 1 loop > ... > end loop; > end Walk; A somewhat more general syntax is also available. Ada allows indexing of arrays using any discrete type. Discrete types include integer types and enumeration types. Enumeration types have no arithmetic operators. generic type Element_Type is private; type Index_Type is (<>); type Some_Array_Type is array (Index_Type range <>) of Element_Type; procedure Walk (A : in Some_Array_Type) is begin for J in A'First .. Index_Type'Pred(A'Last) loop ... end loop; end Walk; The 'Pred attribute evaluates to the value preceding the value specified by its actual parameter. The effect is to iterate from the first index value to one before the last index value. Jim Rogers ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-27 14:56 ` Ludovic Brenta 2006-05-27 15:53 ` jimmaureenrogers @ 2006-05-27 15:59 ` Dmitry A. Kazakov 2006-05-27 17:52 ` Tom LINDEN 1 sibling, 1 reply; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-05-27 15:59 UTC (permalink / raw) On Sat, 27 May 2006 16:56:58 +0200, Ludovic Brenta wrote: > "robin" writes: >> "Simon Wright" writes: >>> In Ada one should where possible use the 'Range attribute: >> >> Ideally yes, but in practice, such as in sorting and averaging, >> the loop is often one or two short of the number of elements in the array. > > No problem: > > type Some_Array_Type is array (Positive range <>) of Something; > > procedure Walk (A : in Some_Array_Type) is > begin > for J in A'First .. A'Last - 1 loop > ... > end loop; > end Walk; I don't think it is the same. A range expression of A'First and A'Last is not statically required to be a subrange of A'Range (or maybe empty). So the compiler had to prove that, if it would wish to omit checks. In your example it is trivially provable (if nobody had played with "-"), though I am not sure if, say, GNAT really does it. Somebody had mentioned OCaml, I really doubt that inference is the answer. The programmer knows far more than the compiler (or peer reviewer) could infer. If J has to be in A'Range, I prefer that it could be specified rather than deduced. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-27 15:59 ` Dmitry A. Kazakov @ 2006-05-27 17:52 ` Tom LINDEN 2006-05-28 6:28 ` Dave Weatherall 0 siblings, 1 reply; 314+ messages in thread From: Tom LINDEN @ 2006-05-27 17:52 UTC (permalink / raw) WHY NOT STOP CROSS POSTING TO COMP.LANG.PL1, PLEASE? On Sat, 27 May 2006 08:59:42 -0700, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Sat, 27 May 2006 16:56:58 +0200, Ludovic Brenta wrote: > >> "robin" writes: >>> "Simon Wright" writes: >>>> In Ada one should where possible use the 'Range attribute: >>> >>> Ideally yes, but in practice, such as in sorting and averaging, >>> the loop is often one or two short of the number of elements in the >>> array. >> >> No problem: >> >> type Some_Array_Type is array (Positive range <>) of Something; >> >> procedure Walk (A : in Some_Array_Type) is >> begin >> for J in A'First .. A'Last - 1 loop >> ... >> end loop; >> end Walk; > > I don't think it is the same. A range expression of A'First and A'Last is not > statically required to be a subrange of A'Range (or maybe empty). So the > compiler had to prove that, if it would wish to omit checks. In your > example it is trivially provable (if nobody had played with "-"), though > I > am not sure if, say, GNAT really does it. > > Somebody had mentioned OCaml, I really doubt that inference is the > answer. > The programmer knows far more than the compiler (or peer reviewer) could > infer. If J has to be in A'Range, I prefer that it could be specified > rather than deduced. > -- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/ ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-27 17:52 ` Tom LINDEN @ 2006-05-28 6:28 ` Dave Weatherall 0 siblings, 0 replies; 314+ messages in thread From: Dave Weatherall @ 2006-05-28 6:28 UTC (permalink / raw) On Sat, 27 May 2006 17:52:03 UTC, "Tom LINDEN" <tom@kednos.com> wrote: > WHY NOT STOP CROSS POSTING TO COMP.LANG.PL1, PLEASE? > And there's me wondering what you're doing over here :-) It probably got cross-posted 'cos somebody brought PL/I into the discussion some days ago. -- Cheers - Dave. PS sorry to anybody in other groups if this offends. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 12:09 ` Dr. Adrian Wrigley ` (2 preceding siblings ...) 2006-05-25 16:25 ` Bounds Check Overhead [was: Re: Ada vs Fortran for scientific applications] Bob Lidral @ 2006-05-26 2:58 ` robin 2006-05-29 12:21 ` Jan Vorbrüggen 4 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-05-26 2:58 UTC (permalink / raw) "Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> wrote in message news:pan.2006.05.25.12.11.52.919554@linuxchip.demon.co.uk.uk.uk... > (note: IIRC, the Ariane 5 launch failure was linked to disabling a > range check after careful analysis... of Ariane 4 trajectory) Actually, an overflow check on converting from float to 16-bit integer. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 12:09 ` Dr. Adrian Wrigley ` (3 preceding siblings ...) 2006-05-26 2:58 ` Ada vs Fortran for scientific applications robin @ 2006-05-29 12:21 ` Jan Vorbrüggen 2006-05-29 13:47 ` Dr. Adrian Wrigley 4 siblings, 1 reply; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-29 12:21 UTC (permalink / raw) > The adverse consequences of exceeding bounds can be seen to > outweigh the (usually) modest costs in code size and performance that > even mature code should ship with checks enabled, IMO. I am of the opinion that the Ariane 5 experience shows that this is not true in general. Had that exception been caught and dismissed by a last-chance exception handler, the flight would have succeeded. The point is that while some exceptions could be generated, there was no clear way of handling them in any useful way, so ignoring them at least gives a chance of success in such a situation, while just shutting down by default will guarantee failure. An operational weather forecast is a similar situation: I'd rather have at least some results _now_, instead of restarting the prediction (after the bug has been fixed) and getting them just after the storm surge has drowned a lot of people. Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-29 12:21 ` Jan Vorbrüggen @ 2006-05-29 13:47 ` Dr. Adrian Wrigley 2006-05-29 14:17 ` Jan Vorbrüggen 0 siblings, 1 reply; 314+ messages in thread From: Dr. Adrian Wrigley @ 2006-05-29 13:47 UTC (permalink / raw) On Mon, 29 May 2006 14:21:16 +0200, Jan Vorbrüggen wrote: >> The adverse consequences of exceeding bounds can be seen to >> outweigh the (usually) modest costs in code size and performance that >> even mature code should ship with checks enabled, IMO. > > I am of the opinion that the Ariane 5 experience shows that this is not > true in general. Had that exception been caught and dismissed by a > last-chance exception handler, the flight would have succeeded. The point is > that while some exceptions could be generated, there was no clear way of > handling them in any useful way, so ignoring them at least gives a chance > of success in such a situation, while just shutting down by default will > guarantee failure. An operational weather forecast is a similar situation: > I'd rather have at least some results _now_, instead of restarting the > prediction (after the bug has been fixed) and getting them just after the > storm surge has drowned a lot of people. I think you're saying code shouldn't check for serious errors if the system shuts down when they're found! Surely this is application dependent? Banking software users might prefer the program to be stopped, while critical flight control software users might prefer to pray. I am comparing code compiled with array bounds and range checks against code with no such checks. With no checks, reading and writing of completely unrelated data sometimes occurs causing unbounded errors. With checks, exceptions can be raised, and the failure is bounded. Usually, a system can be designed to do something better than scribbling over unrelated memory! You seem to be comparing different ways of handling unanticipated exceptions. Shutting down the system vs. 
dismissal by a last-chance handler. I agree with you on this point, for certain applications. Making a generalisation on error detection from Ariane 5 seems a bit rash though. Most software is not fail-deadly. And if it is, it gets some testing at the task in hand. And if it can't be tested, execution errors are checked and handled usefully. Clearly, Ariane 5's case is not representative of the vast bulk of real-world code. -- Adrian ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-29 13:47 ` Dr. Adrian Wrigley @ 2006-05-29 14:17 ` Jan Vorbrüggen 2006-05-29 14:52 ` Dmitry A. Kazakov 2006-05-29 15:47 ` Dr. Adrian Wrigley 0 siblings, 2 replies; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-29 14:17 UTC (permalink / raw) > Clearly, Ariane 5's case is not representative of the vast > bulk of real-world code. Quite to the contrary - almost all of the world's code is in embedded systems, Winwoes notwithstanding. But I believe you are overinterpreting what I said. What I wanted to say is that error detection without corrective action is not the panacea it is sometimes made out to be. In the case of Ariane 501, the correct approach IMO would have been to have a test mode (with detection) and a flight mode, which turns on the "let's hope and pray" handling of errors and is reserved for use only on actual launches. In other cases - e.g., my SAT receiver or similar system - the best approach probably is to do a warm reboot and, after a certain number of recurrences, a cold reboot. That, again, can lead to a denial-of-service attack: Some older Siemens mobile phones have a bug in parsing the header data of SMS (the code assumes that violations of the standardized format cannot happen), which allows a perpetrator to send you an SMS that will disable your phone until you manage to delete the offending SMS on a phone that does not have the bug. In summary, error detection is similar to asking a girl out on a date: you should think beforehand of what you'll do if she says "yes". Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-29 14:17 ` Jan Vorbrüggen @ 2006-05-29 14:52 ` Dmitry A. Kazakov 2006-05-29 15:08 ` Jan Vorbrüggen 2006-05-31 14:58 ` robin 2006-05-29 15:47 ` Dr. Adrian Wrigley 1 sibling, 2 replies; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-05-29 14:52 UTC (permalink / raw) On Mon, 29 May 2006 16:17:06 +0200, Jan Vorbrüggen wrote: >> Clearly, Ariane 5's case is not representative of the vast >> bulk of real-world code. > > Quite to the contrary - almost all of the world's code is in embedded > systems, Winwoes notwithstanding. > > But I believe you are overinterpreting what I said. What I wanted to say > is that error detection without corrective action is not the panacea it > is sometimes made out to be. I think one should clarify what was an error and what was a bug. Properly detected, but improperly handled errors are bugs. Bugs cannot be handled. > In the case of Ariane 501, the correct approach > IMO would have been to have a test mode (with detection) and a flight mode, > which turns on the "let's hope and pray" handling of errors and is reserved > for use only on actual launches. I don't think so. The problem (bug) wasn't in an inappropriate handling of an error. It was a false positive in error detection. Handling was correct, detection was wrong. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-29 14:52 ` Dmitry A. Kazakov @ 2006-05-29 15:08 ` Jan Vorbrüggen 2006-05-29 17:03 ` Dmitry A. Kazakov 2006-05-31 14:58 ` robin 1 sibling, 1 reply; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-29 15:08 UTC (permalink / raw) >>In the case of Ariane 501, the correct approach >>IMO would have been to have a test mode (with detection) and a flight mode, >>which turns on the "let's hope and pray" handling of errors and is reserved >>for use only on actual launches. > I don't think so. The problem (bug) wasn't in an inappropriate handling of > an error. It was a false positive in error detection. Handling was correct, > detection was wrong. If any error had been foreseen, I might agree. But the problem lay in handling the unforeseen error: that handling, in itself, led to failure. The approach taken just wasn't tolerant of errors in the programming. Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-29 15:08 ` Jan Vorbrüggen @ 2006-05-29 17:03 ` Dmitry A. Kazakov 2006-05-30 7:11 ` Jan Vorbrüggen 2006-05-31 14:58 ` robin 0 siblings, 2 replies; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-05-29 17:03 UTC (permalink / raw) On Mon, 29 May 2006 17:08:27 +0200, Jan Vorbrüggen wrote: >>>In the case of Ariane 501, the correct approach >>>IMO would have been to have a test mode (with detection) and a flight mode, >>>which turns on the "let's hope and pray" handling of errors and is reserved >>>for use only on actual launches. >> I don't think so. The problem (bug) wasn't in an inappropriate handling of >> an error. It was a false positive in error detection. Handling was correct, >> detection was wrong. > > If any error had been foreseen, I might agree. But the problem lay in handling > the unforeseen error: that handling, in itself, led to failure. The approach > taken just wasn't tolerant of errors in the programming. Ah, but an unforeseen error is a bug. One cannot be bug-tolerant, it is self-contradictory, after all. Programming error (bug) means that the system's state is not adequate to the physical system. Which could be the rocket falling right onto the control tower. But that's no matter, because there is no way for the program to know anything about that. Once you start to judge about such undesired program states (even purely statistically), and change the program, they automatically become *foreseen*. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-29 17:03 ` Dmitry A. Kazakov @ 2006-05-30 7:11 ` Jan Vorbrüggen 2006-05-30 8:29 ` Dmitry A. Kazakov 2006-05-31 14:58 ` robin 1 sibling, 1 reply; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-30 7:11 UTC (permalink / raw) > Ah, but an unforeseen error is a bug. One cannot be bug-tolerant, it is > self-contradictory, after all. Programming error (bug) means that the > system's state is not adequate to the physical system. Which could be the > rocket falling right onto the control tower. But that's no matter, because > there is no way for the program to know anything about that. Once you start > to judge about such undesired program states (even purely statistically), > and change the program, they automatically become *foreseen*. I don't consider that distinction helpful - it's like saying that economically important algorithms are NP-complete and thus unsolvable, while experience tells you that almost all practical problems turn out to be solvable with polynomial algorithms or at least reach approximations to the optimal solution that are economically indistinguishable. It's similar to approaches people have in driving cars. On the one hand, you can drive "defensively", i.e., you take into account that other participants in traffic might not behave in an optimal way. Or you can drive aggressively, with the expectation that should somebody else suffer a momentary lapse of concentration, say, you will have a crash. Incidentally, I consider "graceful degradation" the hallmark of good engineering, and the Ariane 501 design was anything but that. The control system of your body, on the other hand, is the best known example of a system showing this property. Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-30 7:11 ` Jan Vorbrüggen @ 2006-05-30 8:29 ` Dmitry A. Kazakov 2006-05-31 14:58 ` robin 0 siblings, 1 reply; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-05-30 8:29 UTC (permalink / raw) On Tue, 30 May 2006 09:11:39 +0200, Jan Vorbrüggen wrote: >> Ah, but an unforeseen error is a bug. One cannot be bug-tolerant, it is >> self-contradictory, after all. Programming error (bug) means that the >> system's state is not adequate to the physical system. Which could be the >> rocket falling right onto the control tower. But that's no matter, because >> there is no way for the program to know anything about that. Once you start >> to judge about such undesired program states (even purely statistically), >> and change the program, they automatically become *foreseen*. > > I don't consider that distinction helpful - it's like saying that economically > important algorithms are NP-complete and thus unsolvable, while experience > tells you that almost all practical problems turn out to be solvable with > polynomial algorithms or at least reach approximations to the optimal solution > that are economically indistinguishable. Good example. You cannot solve NP, but you can solve a practical [sub]problem. Exactly so, you cannot write a bug-tolerant program, but you can write a fault-tolerant one. > It's similar to approaches people have in driving cars. On the one hand, you > can drive "defensively", i.e., you take into account that other participants > in traffic might not behave in an optimal way. Or you can drive aggressively, > with the expectation that should somebody else suffer a momentary lapse of > concentration, say, you will have a crash. > > Incidentally, I consider "graceful degradation" the hallmark of good > engineering, and the Ariane 501 design was anything but that. The control system > of your body, on the other hand, is the best known example of a system showing > this property. I agree with this. 
But these aren't examples of bug-tolerant design. Service quality degradation is a result of proper behavior of a system in response to some exceptional state. But all inputs are still valid, as well as the system state. So the human body can handle, say, a change in outside temperature. But it cannot handle electron mass as a temperature. That would be an "unforeseen" error, if God decided to swap both in the Universe. My point is that a false positive cannot be handled. When you calculate sin(0.5) while actually meaning sin(0.1), there is nothing sine could do about it. It is not an error, it is a bug on your side. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-30 8:29 ` Dmitry A. Kazakov @ 2006-05-31 14:58 ` robin 0 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-05-31 14:58 UTC (permalink / raw) [-- Attachment #1: Type: text/plain, Size: 1347 bytes --] "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:u02hetbmsmkk$.18b13ph0ab35n.dlg@40tude.net... > On Tue, 30 May 2006 09:11:39 +0200, Jan Vorbrüggen wrote: > > >> Ah, but an unforeseen error is a bug. One cannot be bug-tolerant, it is > >> self-contradictory, after all. Programming error (bug) means that the > >> system's state is not adequate to the physical system. Which could be the > >> rocket falling right onto the control tower. But that's no matter, because > >> there is no way for the program to know anything about that. Once you start > >> to judge about such undesired program states (even purely statistically), > >> and change the program, they automatically become *foreseen*. > > > > I don't consider that distinction helpful - it's like saying that economically > > important algorithms are NP-complete and thus unsolvable, while experience > > tells you that almost all practical problems turn out to be solvable with > > polynomial algorithms or at least reach approximations to the optimal solution > > that are economically indistinguishable. > > Good example. You cannot solve NP, but you can solve a practical [sub]problem. > Exactly so, you cannot write a bug-tolerant program, You can have a bug-tolerant program. > but you can write a fault-tolerant one. And you can have a fault-tolerant one too. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-29 17:03 ` Dmitry A. Kazakov 2006-05-30 7:11 ` Jan Vorbrüggen @ 2006-05-31 14:58 ` robin 2006-05-31 15:42 ` Dmitry A. Kazakov 1 sibling, 1 reply; 314+ messages in thread From: robin @ 2006-05-31 14:58 UTC (permalink / raw) [-- Attachment #1: Type: text/plain, Size: 1120 bytes --] "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:1j8hpasemtm7n$.1l1wrnukz7ewf$.dlg@40tude.net... > On Mon, 29 May 2006 17:08:27 +0200, Jan Vorbrüggen wrote: > > >>>In the case of Ariane 501, the correct approach > >>>IMO would have been to have a test mode (with detection) and a flight mode, > >>>which turns on the "let's hope and pray" handling of errors and is reserved > >>>for use only on actual launches. > >> I don't think so. The problem (bug) wasn't in an inappropriate handling of > >> an error. It was a false positive in error detection. Handling was correct, > >> detection was wrong. > > > > If any error had been foreseen, I might agree. But the problem lay in handling > > the unforeseen error: that handling, in itself, led to failure. The approach > > taken just wasn't tolerant of errors in the programming. > > Ah, but an unforeseen error is a bug. One cannot be bug-tolerant, it is > self-contradictory, after all. A program can be bug-tolerant. Standard error-handling techniques can deal with them. Such programs containing error handling are called fail-safe programs. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-31 14:58 ` robin @ 2006-05-31 15:42 ` Dmitry A. Kazakov 2006-05-31 15:54 ` Gordon Sande 0 siblings, 1 reply; 314+ messages in thread From: Dmitry A. Kazakov @ 2006-05-31 15:42 UTC (permalink / raw) On Wed, 31 May 2006 14:58:44 GMT, robin wrote: > "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message > news:1j8hpasemtm7n$.1l1wrnukz7ewf$.dlg@40tude.net... >> Ah, but an unforeseen error is a bug. One cannot be bug-tolerant, it is >> self-contradictory, after all. > > A program can be bug-tolerant. Indeed. I know so many programs exceptionally tolerant to their bugs... -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-31 15:42 ` Dmitry A. Kazakov @ 2006-05-31 15:54 ` Gordon Sande 0 siblings, 0 replies; 314+ messages in thread From: Gordon Sande @ 2006-05-31 15:54 UTC (permalink / raw) On 2006-05-31 12:42:09 -0300, "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> said: > On Wed, 31 May 2006 14:58:44 GMT, robin wrote: > >> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message >> news:1j8hpasemtm7n$.1l1wrnukz7ewf$.dlg@40tude.net... > >>> Ah, but an unforeseen error is a bug. One cannot be bug-tolerant, it is >>> self-contradictory, after all. >> >> A program can be bug-tolerant. > > Indeed. I know so many programs exceptionally tolerant to their bugs... They often produce wrong answers which keep their users happy. When (or if) they are corrected and produce different answers the users are sometimes not so happy. :-( ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-29 14:52 ` Dmitry A. Kazakov 2006-05-29 15:08 ` Jan Vorbrüggen @ 2006-05-31 14:58 ` robin 2006-05-31 18:07 ` Marc A. Criley 1 sibling, 1 reply; 314+ messages in thread From: robin @ 2006-05-31 14:58 UTC (permalink / raw) [-- Attachment #1: Type: text/plain, Size: 1366 bytes --] "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:10qtgfusyium5.1fe6t8kirrzbf$.dlg@40tude.net... > On Mon, 29 May 2006 16:17:06 +0200, Jan Vorbrüggen wrote: > > >> Clearly, Ariane 5's case is not representative of the vast > >> bulk of real-world code. > > > > Quite to the contrary - almost all of the world's code is in embedded > > systems, Winwoes notwithstanding. > > > > But I believe you are overinterpreting what I said. What I wanted to say > > is that error detection without corrective action is not the panacea it > > is sometimes made out to be. > > I think one should clarify what was an error and what was a bug. Properly > detected, but improperly handled errors are bugs. Bugs cannot be handled. Bugs can be handled in many cases. Standard error handling can deal with them. > > In the case of Ariane 501, the correct approach > > IMO would have been to have a test mode (with detection) and a flight mode, > > which turns on the "let's hope and pray" handling of errors and is reserved > > for use only on actual launches. > > I don't think so. The problem (bug) wasn't in an inappropriate handling of > an error. It was a false positive in error detection. Handling was correct, > detection was wrong. ?? There was no handling of the unprotected error in the Ariane 5. The response was to shut down the processor. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-31 14:58 ` robin @ 2006-05-31 18:07 ` Marc A. Criley 0 siblings, 0 replies; 314+ messages in thread From: Marc A. Criley @ 2006-05-31 18:07 UTC (permalink / raw) robin wrote: > Bugs can be handled in many cases. > Standard error handling can deal with them. Depends on what you mean by "deal with them". If it means releasing whatever resources you believe you have control over and then exiting, okay. I take a paranoid approach to bugs--I have no way of knowing what that bug may have corrupted by the time it gets detected. So I'm extremely leery of bug handling responses that simply stop the propagation of the fault, and then try to resume execution. My reasoning is that since it's a bug, some or all of its triggering context is inherently unknowable (if it was fully knowable, it would be a known possible error, e.g., network failure, and could be accommodated via error handling). The application has therefore lost confidence in its internal state, and proceeding onward with anything short of an exit/verify externalities/restart strategy is proceeding at risk. My 2c :-) -- Marc A. Criley -- McKae Technologies -- www.mckae.com -- DTraq - XPath In Ada - XML EZ Out ^ permalink raw reply [flat|nested] 314+ messages in thread
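Marc's exit/verify/restart policy, as opposed to stopping the fault and resuming, can be sketched in a few lines. This is only an illustration, not anyone's actual code; the names `run_step` and `release_resources` are invented, and Python stands in for whatever language the application is written in:

```python
import sys

def release_resources(resources):
    # Best-effort cleanup: a bug may have corrupted anything, so we
    # release only what we believe we still control, and never let
    # the cleanup itself raise.
    for r in resources:
        try:
            r.close()
        except Exception:
            pass

def run_step(step, resources):
    """Run one processing step. Foreseen errors (here, a network
    failure) are accommodated and execution continues. Anything
    unforeseen is treated as a bug: confidence in the internal state
    is lost, so release resources and exit rather than resume."""
    try:
        return step()
    except ConnectionError:
        # Known possible error with a known recovery.
        return None
    except Exception as exc:
        release_resources(resources)
        sys.exit(f"internal error; restart required: {exc}")
```

The deliberate asymmetry is the point of Marc's argument: only errors whose triggering context is knowable get a resume path; everything else falls through to the exit branch.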
* Re: Ada vs Fortran for scientific applications 2006-05-29 14:17 ` Jan Vorbrüggen 2006-05-29 14:52 ` Dmitry A. Kazakov @ 2006-05-29 15:47 ` Dr. Adrian Wrigley 1 sibling, 0 replies; 314+ messages in thread From: Dr. Adrian Wrigley @ 2006-05-29 15:47 UTC (permalink / raw) On Mon, 29 May 2006 16:17:06 +0200, Jan Vorbrüggen wrote: >> Clearly, Ariane 5's case is not representative of the vast >> bulk of real-world code. > > Quite to the contrary - almost all of the world's code is in embedded > systems, Winwoes notwithstanding. The unrepresentative combination of features of Ariane 5's code I was referring to were: 1) Hard real-time, termination highly likely to result in catastrophe 2) Never tested in anticipated context before use 3) Lack of designed resilience against bugs Thanks for your clarification - makes better sense now! -- Adrian ^ permalink raw reply [flat|nested] 314+ messages in thread
[parent not found: <pan.2006.05.25.12.11.52.919554@linuxchip.demon.co.uk.u <2006052514574816807-gsande@worldnetattnet>]
* Re: Checking for Undefined [was Re: Ada vs Fortran for scientific applications] [not found] ` <pan.2006.05.25.12.11.52.919554@linuxchip.demon.co.uk.u <2006052514574816807-gsande@worldnetattnet> @ 2006-05-25 18:30 ` Thomas Koenig 2006-05-25 18:34 ` Gordon Sande 0 siblings, 1 reply; 314+ messages in thread From: Thomas Koenig @ 2006-05-25 18:30 UTC (permalink / raw) Gordon Sande <g.sande@worldnet.att.net> wrote: >They even set INTENT ( IN ) variables to undefined so a >user can set the undefined attribute/(bit configuration) themselves >if they are doing an internal storage allocation themselves. Neat! I don't understand this. Do you mean INTENT (OUT)? >Initialization to signaling NANs is a quick and effective approximation. >Needs help from first the loader and then the storage allocator. >Until you have used it and it has saved a goodly block of time it >is a feature that many just shrug and say "That's nice". They do not >realize what they are missing. Oh, yes. I have used this feature on MIPS (-trapuv), and have sorely missed it on the compilers I've used since. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined [was Re: Ada vs Fortran for scientific applications] 2006-05-25 18:30 ` Checking for Undefined [was Re: Ada vs Fortran for scientific applications] Thomas Koenig @ 2006-05-25 18:34 ` Gordon Sande 0 siblings, 0 replies; 314+ messages in thread From: Gordon Sande @ 2006-05-25 18:34 UTC (permalink / raw) On 2006-05-25 15:30:42 -0300, Thomas Koenig <Thomas.Koenig@online.de> said: > Gordon Sande <g.sande@worldnet.att.net> wrote: > >> They even set INTENT ( IN ) variables to undefined so a >> user can set the undefined attribute/(bit configuration) themselves >> if they are doing an internal storage allocation themselves. Neat! > > I don't understand this. Do you mean INTENT (OUT)? Oops! You have the correct version. >> Initialization to signaling NANs is a quick and effective approximation. >> Needs help from first the loader and then the storage allocator. > >> Until you have used it and it has saved a goodly block of time it >> is a feature that many just shrug and say "That's nice". They do not >> realize what they are missing. > > Oh, yes. I have used this feature on MIPS (-trapuv), and have > sorely missed it on the compilers I've used since. ^ permalink raw reply [flat|nested] 314+ messages in thread
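The `-trapuv` idea being discussed (the loader plants signaling NaNs in fresh storage so any use of an uninitialized value traps) can be approximated in a short sketch. Python has no trap-on-use of a signaling NaN, so this hypothetical version plants quiet NaNs and makes the check explicit at each point of use; the names `fresh` and `use` are invented for illustration:

```python
import math

UNDEF = float("nan")   # stands in for the signaling NaN -trapuv would plant

def fresh(n):
    # Mimic the loader/allocator support: new storage starts "undefined".
    return [UNDEF] * n

def use(x):
    # Python cannot trap on use, so the validity check is explicit here.
    if isinstance(x, float) and math.isnan(x):
        raise RuntimeError("use of an undefined value")
    return x

a = fresh(4)
a[0] = 1.5
use(a[0])      # fine: a[0] was assigned before use
# use(a[1])    # would raise: a[1] was never assigned
```

As Gordon notes in the thread, the real mechanism needs cooperation from the loader and the storage allocator; this sketch only shows why a distinguishable "undefined" bit pattern makes the detection cheap.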
* Re: Ada vs Fortran for scientific applications 2006-05-25 6:04 ` Richard Maine ` (2 preceding siblings ...) [not found] ` <pan.2006.05.25.12.11.52.919554@linuxchip.demon.co.uk.u <2006052514574816807-gsande@worldnetattnet> @ 2006-05-26 2:58 ` robin [not found] ` <pan.2006.05.25.12.11.52.919554@linuxchip.demon.co.uk.u <m2r72h69vz.fsf@grendel.local> ` (4 subsequent siblings) 8 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-05-26 2:58 UTC (permalink / raw) "Richard Maine" <nospam@see.signature> wrote in message news:1hfv5wb.1x4ab1tbdzk7eN%nospam@see.signature... > Nasser Abbasi <nma@12000.org> wrote: > > > I just did this simple test, declare an array and go overbound and see if we > > get a run-time error: > ... > > $ g95 f.f90 > ... > > $ <------------------- NO runtime ERROR > > This part of the thread has started drifting away from relevance to much > of anything, but that particular sample is just drifting yet further. It > illustrates neither much about subscript bounds rules being part of the > language nor about bounds checking being part of the language, Oh indeed it does. Subscript checking is not part of the Fortran language. That a particular Fortran compiler provides subscript checking or does not provide it is irrelevant. > As with most compilers, g95 does have a bounds check option; it just > isn't enabled by default. That's irrelevant to the discussion. It isn't something that's defined in the language. > Gives me: > > At line 4 of file clf.f90 > Traceback: not available, compile with -ftrace=frame or -ftrace=full Actually now that you mention it, a traceback is another thing that PL/I provides, via the SNAP source option. (It's part of the language) ^ permalink raw reply [flat|nested] 314+ messages in thread
[parent not found: <pan.2006.05.25.12.11.52.919554@linuxchip.demon.co.uk.u <m2r72h69vz.fsf@grendel.local>]
* Re: Checking for Undefined [not found] ` <pan.2006.05.25.12.11.52.919554@linuxchip.demon.co.uk.u <m2r72h69vz.fsf@grendel.local> @ 2006-05-26 7:54 ` Dirk Craeynest 2006-05-26 15:04 ` Dick Hendrickson 0 siblings, 1 reply; 314+ messages in thread From: Dirk Craeynest @ 2006-05-26 7:54 UTC (permalink / raw) >Gordon Sande <g.sande@worldnet.att.net> writes: >> I am getting the impression from the silence of the cross postings >> that undefined checking has only shown up in Fortran systems. [...] Simon Wright <simon@pushface.org> wrote: >The pro version of GNAT (I don't know about the FSF version) has >optional initialization with out-of-range values and checking even in >places where it normally would be omitted because the compiler would >assume it had already done the checks. Pragma Initialize_Scalars together with improved control over validity checking was introduced in GNAT in the 2001-2002 time frame. As such, *early* versions are included already in GNAT 3.15p [1], which was released in October 2002. The implementation has been fine-tuned and further improved in later GNAT releases, i.e. the GNAT Pro, GNAT Academic, and GNAT GPL editions [2], as well as in the FSF version [3]. >This only works if there _are_ out-of-range values, so Integer can't >be checked. Normally the recommendation is to define types appropriate >to the application, so checks are possible. True, but even for types without out-of-range values, there's help. With GNAT, you can control the value used for initializing scalar objects. Apart from using invalid values (where possible), you can also choose to use high or low values, or with a specified bit pattern. Running your application tests with various such settings and checking for differences in the results helps to detect the use of uninitialized variables. 
For much more about uninitialized variables in Ada code, the following paper might be useful: "Exposing Uninitialized Variables: Strengthening and Extending Run-Time Checks in Ada" [4], Robert Dewar, Olivier Hainque, Dirk Craeynest, and Philippe Waroquiers, In "Proceedings of the 7th International Conference on Reliable Software Technologies - Ada-Europe 2002" [5], Vienna, Austria, June 17-21, 2002, Johan Blieberger and Alfred Strohmeier (Eds.), volume 2361 of Lecture Notes in Computer Science, pages 193-204, Springer-Verlag, 2002. The GNAT manuals provide more information on GNAT's pragma Initialize_Scalars [6] and on enhanced validity checking [7]. Reference [6] mentions: ---start-quote--- Note that pragma Initialize_Scalars is particularly useful in conjunction with the enhanced validity checking that is now provided in GNAT, which checks for invalid values under more conditions. Using this feature (see description of the -gnatV flag in the users guide) in conjunction with pragma Initialize_Scalars provides a powerful new tool to assist in the detection of problems caused by uninitialized variables. ---end-quote--- We can assure everyone that from a developer's and tester's point of view the combination of Initialize_Scalars and enhanced validity checking is indeed "particularly useful". 
References: [1] <ftp://ftp.cs.kuleuven.be//pub/Ada-Belgium/mirrors/gnu-ada/3.15p> [2] <https://libre2.adacore.com/dynamic/comp_chart.html> [3] <http://gcc.gnu.org/cvs.html> [4] <http://www.cs.kuleuven.be/~dirk/papers/ae02cfmu-paper.pdf> [5] <http://www.springeronline.com/3-540-43784-3> [6] <http://www.adacore.com/wp-content/files/auto_update/gnat-unw-docs/html/gnat_rm_2.html#SEC48> [7] <http://www.adacore.com/wp-content/files/auto_update/gnat-unw-docs/html/gnat_ugn_4.html#SEC47> Dirk Dirk.Craeynest@cs.kuleuven.be (for Ada-Belgium/-Europe/SIGAda/WG9 mail) +-------------/ Home: http://www.cs.kuleuven.be/~dirk/ada-belgium |Ada-Belgium / FTP: ftp://ftp.cs.kuleuven.be/pub/Ada-Belgium |on Internet/ E-mail: ada-belgium-board@cs.kuleuven.be +----------/ Maillist: ada-belgium-info-request@cs.kuleuven.be *** 11th Intl.Conf.on Reliable Software Technologies - Ada-Europe'2006 *** June 5-9, 2006 ** Porto, Portugal ** http://www.ada-europe.org *** ^ permalink raw reply [flat|nested] 314+ messages in thread
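The testing methodology Dirk describes (run the application under several different Initialize_Scalars fill values and diff the results) can be demonstrated with a tiny sketch. This is an illustration of the idea only, in Python rather than Ada, with an invented `compute` function whose bug is a read of an "uninitialized" scalar:

```python
def compute(fill):
    # 'fill' plays the role of GNAT's configurable Initialize_Scalars
    # pattern: every scalar the program forgot to initialize starts
    # out holding this value.
    total = fill           # BUG: 'total' should have been set to 0
    for v in [1, 2, 3]:
        total += v
    return total

# Run the same test under two different fill patterns. Any difference
# in the results betrays a read of an uninitialized variable, even for
# types like Integer that have no invalid bit patterns to trap on.
uninitialized_read_detected = compute(0) != compute(10**6)
```

This is why the technique still helps for types without out-of-range values: it does not detect the read directly, it makes the read's effect visible by varying the garbage deterministically.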
* Re: Checking for Undefined 2006-05-26 7:54 ` Checking for Undefined Dirk Craeynest @ 2006-05-26 15:04 ` Dick Hendrickson 2006-05-27 14:29 ` robin 0 siblings, 1 reply; 314+ messages in thread From: Dick Hendrickson @ 2006-05-26 15:04 UTC (permalink / raw) This isn't really relevant to the post I'm replying to, but my newsreader has lost the one I wanted to reply to. Sorry. There's a deep problem in Fortran with detecting usage of undefined variables. The problem is that an error condition during input causes all input items to become undefined. Given a statement like read (unit, err = 100) j, x(j), j, x(j) What's supposed to happen? If the error occurs after the second read of "j", how does the processor go back and flag the first instance of x([old value of]j) as undefined when "j" no longer has the old value? Sure, in this example it's easy enough for the processor to keep a little list of variables. But, what if the input list is inside a do-loop that goes from 1 to some huge number. There's no practical way to keep the list of potentially undefined values. You can't spray the array with "undefined" flags before the input, since not all values are necessarily changed. You can't make a shadow copy of the array and look for changed elements, because input doesn't have to change the value in an element and making a shadow copy of a million word array is impractical. I think there are cases in Fortran where usage of what the language calls undefined variables is not detectable in any practical sense of the word. Dick Hendrickson ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined 2006-05-26 15:04 ` Dick Hendrickson @ 2006-05-27 14:29 ` robin 2006-05-27 15:08 ` Gordon Sande 0 siblings, 1 reply; 314+ messages in thread From: robin @ 2006-05-27 14:29 UTC (permalink / raw) "Dick Hendrickson" <dick.hendrickson@att.net> wrote in message news:2OEdg.167448$eR6.128849@bgtnsc04-news.ops.worldnet.att.net... > This isn't really relevant to the post I'm replying to, but > my newsreader has lost the one I wanted to reply to. Sorry. > > There's a deep problem in Fortran with detecting usage of > undefined variables. The problem is that an error condition > during input causes all input items to become undefined. > Given a statement like > read (unit, err = 100) j, x(j), j, x(j) > What's supposed to happen? If the error occurs after the > second read of "j", how does the processor go back and > flag the first instance of x([old value of]j) as undefined > when "j" no longer has the old value? Sure, in this > example it's easy enough for the processor to keep a little > list of variables. But, what if the input list is inside > a do-loop that goes from 1 to some huge number. There's > no practical way to keep the list of potentially undefined > values. By keeping a shadow array of bits, any undefined value can be flagged. > You can't spray the array with "undefined" flags > before the input, since not all values are necessarily > changed. You can't make a shadow copy of the array and > look for changed elements, because input doesn't have to > change the value in an element and making a shadow copy > of a million word array is impractical. It isn't impractical, because only 3% of additional storage is required (see above). > I think there are cases in Fortran where usage of what > the language calls undefined variables is not detectable > in any practical sense of the word. See above. ^ permalink raw reply [flat|nested] 314+ messages in thread
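Robin's shadow-bit-array suggestion can be made concrete in a short sketch. This is a hypothetical illustration (the class name is invented): one "defined" bit per element, set on every store and checked on every load, which is where the roughly 3% figure comes from for word-sized data (1 bit per 32-bit word):

```python
class ShadowArray:
    """Array with a shadow bitmap recording which elements have been
    defined. One bit per element: for 32-bit words that is ~3% extra
    storage, as robin notes."""

    def __init__(self, n):
        self.data = [0.0] * n
        self.defined = bytearray((n + 7) // 8)   # 1 bit per element

    def __setitem__(self, i, v):
        # Every store marks the element as defined.
        self.data[i] = v
        self.defined[i // 8] |= 1 << (i % 8)

    def __getitem__(self, i):
        # Every load checks the shadow bit first.
        if not (self.defined[i // 8] >> (i % 8)) & 1:
            raise RuntimeError(f"element {i} used while undefined")
        return self.data[i]
```

Dick's objection still stands in one respect: on an input error the runtime would have to *clear* bits for elements it can no longer identify, and this sketch, like the compilers discussed, only tracks stores it actually sees.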
* Re: Checking for Undefined 2006-05-27 14:29 ` robin @ 2006-05-27 15:08 ` Gordon Sande 2006-05-28 0:56 ` robin 0 siblings, 1 reply; 314+ messages in thread From: Gordon Sande @ 2006-05-27 15:08 UTC (permalink / raw) On 2006-05-27 11:29:29 -0300, "robin" <robin_v@bigpond.com> said: > "Dick Hendrickson" <dick.hendrickson@att.net> wrote in message > news:2OEdg.167448$eR6.128849@bgtnsc04-news.ops.worldnet.att.net... >> This isn't really relevant to the post I'm replying to, but >> my newsreader has lost the one I wanted to reply to. Sorry. >> >> There's a deep problem in Fortran with detecting usage of >> undefined variables. The problem is that an error condition >> during input causes all input items to become undefined. >> Given a statement like >> read (unit, err = 100) j, x(j), j, x(j) >> What's supposed to happen? If the error occurs after the >> second read of "j", how does the processor go back and >> flag the first instance of x([old value of]j) as undefined >> when "j" no longer has the old value? Sure, in this >> example it's easy enough for the processor to keep a little >> list of variables. But, what if the input list is inside >> a do-loop that goes from 1 to some huge number. There's >> no practical way to keep the list of potentially undefined >> values. The practical man is likely to say that the problem here is the GENERATION of the undefined values. That is when the execution should be flagged as erroneous. Letting them be and tracking them is awkward but seems to be an invented problem that is more the unintended result of the particular wording. Assign the problem to a thesis student and let the real world get on with doing real things. If undefined variable checking were to be part of the standard then this is an issue that might deserve better wording but I would not expect it to make any realistic checklist. Since the program starts with all variables undefined, testing for their use is a practical thing to want. 
Otherwise all that is being asked for is to initialize everything, which seems to ignore the real issue of real programming errors. If one wants to be fussy one might say that a divide by zero just generates an undefined value; it is no problem unless the result is used. That is exactly what NANs do, so it has considerable utility. > By keeping a shadow array of bits, any undefined > value can be flagged. > >> You can't spray the array with "undefined" flags >> before the input, since not all values are necessarily >> changed. You can't make a shadow copy of the array and >> look for changed elements, because input doesn't have to >> change the value in an element and making a shadow copy >> of a million word array is impractical. > > It isn't impractical, because only 3% of additional storage > is required (see above). > >> I think there are cases in Fortran where usage of what >> the language calls undefined variables is not detectable >> in any practical sense of the word. > > See above. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined 2006-05-27 15:08 ` Gordon Sande @ 2006-05-28 0:56 ` robin 2006-05-28 1:04 ` glen herrmannsfeldt 2006-05-28 13:46 ` Gordon Sande 0 siblings, 2 replies; 314+ messages in thread From: robin @ 2006-05-28 0:56 UTC (permalink / raw) "Gordon Sande" <g.sande@worldnet.att.net> wrote in message news:2006052712085316807-gsande@worldnetattnet... > The practical man is likly to say that the problem here is the > GENERATION of the undefined values. That is when the execution should > be flagged as erroneous. Letting them be and tracking them is > awkward but seems to be an invented problem that is more the > unintended result of the particular wording. Assign the problem > to a thesis student and let the real world gt on with doing real > things. If undefined variable checking were to be part of the > standard then this is an issue that might deserve better wording > but I would not expect it make any realistic checklist. Checking undefined variables has been around for a long time (more than 3 decades, maybe 4), and successfully in notable compilers such as WATFOR/WATFIV and PL/C. You will find that it's offered in some current Fortran compilers. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined 2006-05-28 0:56 ` robin @ 2006-05-28 1:04 ` glen herrmannsfeldt 2006-05-28 13:46 ` Gordon Sande 1 sibling, 0 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-05-28 1:04 UTC (permalink / raw) robin wrote: (snip) > Checking undefined variables has been around for a long time > (more than 3 decades, maybe 4), and successfully in notable compilers > such as WATFOR/WATFIV and PL/C. > You will find that it's offered in some current Fortran compilers. Note, though, that it doesn't work well for small variables, such as CHARACTER*1 in WATFIV with the value 'a'. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Checking for Undefined 2006-05-28 0:56 ` robin 2006-05-28 1:04 ` glen herrmannsfeldt @ 2006-05-28 13:46 ` Gordon Sande 1 sibling, 0 replies; 314+ messages in thread From: Gordon Sande @ 2006-05-28 13:46 UTC (permalink / raw) On 2006-05-27 21:56:06 -0300, "robin" <robin_v@bigpond.com> said: > "Gordon Sande" <g.sande@worldnet.att.net> wrote in message > news:2006052712085316807-gsande@worldnetattnet... > >> The practical man is likely to say that the problem here is the >> GENERATION of the undefined values. That is when the execution should >> be flagged as erroneous. Letting them be and tracking them is >> awkward but seems to be an invented problem that is more the >> unintended result of the particular wording. Assign the problem >> to a thesis student and let the real world get on with doing real >> things. If undefined variable checking were to be part of the >> standard then this is an issue that might deserve better wording >> but I would not expect it make any realistic checklist. > > Checking undefined variables has been around for a long time > (more than 3 decades, maybe 4), and successfully in notable compilers > such as WATFOR/WATFIV and PL/C. > You will find that it's offered in some current Fortran compilers. All of these were listed earlier but you have chosen to ignore that. Judging from other threads that is to be expected in your case. The response was to a particular issue relating to the wording of the processing of I/O lists in the presence of errors. ^ permalink raw reply [flat|nested] 314+ messages in thread
[parent not found: <pan.2006.05.25.12.11.52.919554@linuxchip.demon.co.uk.u <e574mu$rrj$1@scrotar.nss.udel.edu>]
* Re: Bounds Check Overhead [not found] ` <pan.2006.05.25.12.11.52.919554@linuxchip.demon.co.uk.u <e574mu$rrj$1@scrotar.nss.udel.edu> @ 2006-05-26 18:53 ` Thomas Koenig 2006-05-26 19:07 ` Richard Maine 0 siblings, 1 reply; 314+ messages in thread From: Thomas Koenig @ 2006-05-26 18:53 UTC (permalink / raw) Rich Townsend <rhdt@barVOIDtol.udel.edu> wrote: >in particular, if the called subroutines have INTENT() on their arguments, then >its pretty cut-and-dry whether I will be modified or not. How, exactly, will you be modified by having an intentional argument? SCNR, Thomas ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Bounds Check Overhead 2006-05-26 18:53 ` Bounds Check Overhead Thomas Koenig @ 2006-05-26 19:07 ` Richard Maine 0 siblings, 0 replies; 314+ messages in thread From: Richard Maine @ 2006-05-26 19:07 UTC (permalink / raw) Thomas Koenig <Thomas.Koenig@online.de> wrote: > Rich Townsend <rhdt@barVOIDtol.udel.edu> wrote: > > >in particular, if the called subroutines have INTENT() on their > >arguments, then its pretty cut-and-dry whether I will be modified or not. > > How, exactly, will you be modified by having an intentional argument? Well, if I intentionally debate a point, the result of that debate might modify my opinion. :-) -- Richard Maine | Good judgement comes from experience; email: last name at domain . net | experience comes from bad judgement. domain: summertriangle | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
[parent not found: <pan.2006.05.25.12.11.52.919554@linuxchip <20060712.7A4E6E0.D028@mojaveg.lsan.sisna.com>]
[parent not found: <20060712.7A4E6E0.D028@mojaveg.lsan.sisna.com>]
* Re: Ada vs Fortran for scientific applications [not found] ` <20060712.7A4E6E0.D028@mojaveg.lsan.sisna.com> @ 2006-07-14 6:08 ` Bob Lidral 2006-07-14 6:17 ` Richard Maine 0 siblings, 1 reply; 314+ messages in thread From: Bob Lidral @ 2006-07-14 6:08 UTC (permalink / raw) Everett M. Greene wrote: > "robin" <robin_v@bigpond.com> writes: > >>glen herrmannsfeldt wrote in message ... >> >>>robin wrote: >>> >>>>glen herrmannsfeldt wrote >>> >>>>>For signed integer types, most, if not all, allow twos complement, >>>>>ones complement, or sign magnitude representation. >>> >>>>Virtually all use twos complement for negative values. >>>>Few ever used ones complement anyway. They were a PITA. >>> >>>CDC used ones complement, >> >>Yes, and they were a PITA. > > > In what regard? Ones complement vs. twos complement is > rarely of significance to a HLL programmer. > Depends on the compiler, instruction set, and application. Years ago, I did a lot of programming on CDC CYBERS. There were some operations that could result in either positive or negative zero. It was also difficult to do character comparisons. Bob Lidral lidral at alum dot mit dot edu ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-07-14 6:08 ` Ada vs Fortran for scientific applications Bob Lidral @ 2006-07-14 6:17 ` Richard Maine 2006-07-17 12:44 ` ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) robin 0 siblings, 1 reply; 314+ messages in thread From: Richard Maine @ 2006-07-14 6:17 UTC (permalink / raw) Bob Lidral <l1dralspamba1t@comcast.net> wrote: > Years ago, I did a lot of programming on CDC CYBERS. There were some > operations that could result in either positive or negative zero. It > was also difficult to do character comparisons. Me too. On the other hand, in other situations it was nice that you never had to worry about range asymmetry. Sometimes one form is better. Sometimes the other. Sometimes it makes little difference. I consider claims that either form is universally and obviously superior to be indicative of a narrow viewpoint. But then, if I read the citations correctly, this is not new news. :-( -- Richard Maine | Good judgement comes from experience; email: last name at domain . net | experience comes from bad judgement. domain: summertriangle | -- Mark Twain ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) 2006-07-14 6:17 ` Richard Maine @ 2006-07-17 12:44 ` robin 0 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-07-17 12:44 UTC (permalink / raw) Richard Maine wrote in message <1hifsfw.11gje3142w54vN%nospam@see.signature>... >Bob Lidral <l1dralspamba1t@comcast.net> wrote: > >> Years ago, I did a lot of programming on CDC CYBERS. There were some >> operations that could result in either positive or negative zero. It >> was also difficult to do character comparisons. > >Me too. On the other hand, in other situations it was nice that you >never had to worry about range asymmetry. But you did! The size of a CDC Cyber integer was 60 bits. But if you used integer multiply, the operands had to be 48 significant bits maximum, otherwise the hardware interpreted the operands as floating-point, and did a floating-point multiplication, with catastrophically wrong results. >Sometimes one form is better. Sometimes the other. Sometimes it makes >little difference. Some implementations of ones complement used two forms of zero (zero and all ones), which required two tests. Such implementations were inferior. >I consider claims that either form is universally and obviously superior >to be indicative of a narrow viewpoint. The proof of the pudding is in the eating. How many machines now offer ones complement arithmetic? The forms that have persisted are twos complement for integers, and signed magnitude (for floating-point). It needs to be pointed out that on (early) serial machines, ones complement was clearly inferior because of the need for an additional cycle for the end-around-carry (not to mention the additional hardware to carry that out). Then there would have been the need to cater for the two forms of zero, requiring more hardware. I can think of at least two machines where ones complement would have been completely impractical. ^ permalink raw reply [flat|nested] 314+ messages in thread
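[Editor's note: robin's multiply caveat above can be expressed as a programmer-side guard. This is a hypothetical sketch; the function name and the exception are mine, and the 48-bit limit is taken from robin's description rather than from any CDC documentation.]

```python
COEF_BITS = 48  # per robin: integer multiply operands had to fit in 48 significant bits

def cyber_safe_imul(a, b):
    """Multiply only when both operands fit in 48 significant bits.

    On the machine robin describes, larger operands would be interpreted
    by the multiply hardware as floating-point, giving wrong results, so
    a careful programmer would range-check before multiplying.
    """
    if abs(a) >= 1 << COEF_BITS or abs(b) >= 1 << COEF_BITS:
        raise OverflowError("operand exceeds 48 significant bits")
    return a * b
```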
[parent not found: <20060712.7A4E6E0.D028@mojaveg.lsan.sisna <20060714.7A4E988.A30D@mojaveg.lsan.sisna.com>]
[parent not found: <20060714.7A4E988.A30D@mojaveg.lsan.sisna.com>]
* Re: Ada vs Fortran for scientific applications [not found] ` <20060714.7A4E988.A30D@mojaveg.lsan.sisna.com> @ 2006-07-16 9:07 ` glen herrmannsfeldt 2006-07-18 1:48 ` ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) robin 0 siblings, 1 reply; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-16 9:07 UTC (permalink / raw) Everett M. Greene wrote: (snip) > Doing multi-precision arithmetic on a ones complement > machine can be a real challenge. Well, you want unsigned arithmetic to do multiple precision, which is harder if a machine only supplies ones complement. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) 2006-07-16 9:07 ` Ada vs Fortran for scientific applications glen herrmannsfeldt @ 2006-07-18 1:48 ` robin 2006-07-18 18:41 ` ONES COMPLEMENT glen herrmannsfeldt 0 siblings, 1 reply; 314+ messages in thread From: robin @ 2006-07-18 1:48 UTC (permalink / raw) glen herrmannsfeldt wrote in message ... >Everett M. Greene wrote: >(snip) > >> Doing multi-precision arithmetic on a ones complement >> machine can be a real challenge. > >Well, you want unsigned arithmetic to do multiple precision, >which is harder if a machine only supplies ones complement. In what way? The arithmetic is done using fewer bits than word. You have to do that with twos complement also. The arithmetic required is addition; and all values manipulated are positive. Results are the same in machines supporting ones or twos complement representation for negative values. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-18 1:48 ` ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) robin @ 2006-07-18 18:41 ` glen herrmannsfeldt 2006-07-19 13:41 ` robin 2006-07-22 0:09 ` Richard Steiner 0 siblings, 2 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-18 18:41 UTC (permalink / raw) robin wrote: > glen herrmannsfeldt wrote in message ... (snip) >>Well, you want unsigned arithmetic to do multiple precision, >>which is harder if a machine only supplies ones complement. > In what way? > The arithmetic is done using fewer bits than word. > You have to do that with twos complement also. The bits resulting in an unsigned addition or subtraction operation are the same as for a twos complement operation, but the detection of overflow (or carry/borrow) is different. If you can properly detect carry/borrow you can do it with all the bits in the word. > The arithmetic required is addition; and all values > manipulated are positive. Results are the same in > machines supporting ones or twos complement > representation for negative values. Hmm. If you can detect the end around carry you can generate the correct unsigned result, and so still use all the bits. This is not usually easy in an high-level language, but should be provided to the assembly programmer. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-18 18:41 ` ONES COMPLEMENT glen herrmannsfeldt @ 2006-07-19 13:41 ` robin 2006-07-22 0:09 ` Richard Steiner 1 sibling, 0 replies; 314+ messages in thread From: robin @ 2006-07-19 13:41 UTC (permalink / raw) glen herrmannsfeldt wrote in message ... >robin wrote: >> glen herrmannsfeldt wrote in message ... > >(snip) > >>>Well, you want unsigned arithmetic to do multiple precision, >>>which is harder if a machine only supplies ones complement. > >> In what way? >> The arithmetic is done using fewer bits than word. >> You have to do that with twos complement also. > >The bits resulting in an unsigned addition or subtraction operation are >the same as for a twos complement operation, but the detection of >overflow (or carry/borrow) is different. If you can properly detect >carry/borrow you can do it with all the bits in the word. The CDC cybers did not provide that. >> The arithmetic required is addition; and all values >> manipulated are positive. Results are the same in >> machines supporting ones or twos complement >> representation for negative values. > >Hmm. If you can detect the end around carry The CDC cyber did not provide that. >you can generate >the correct unsigned result, and so still use all the bits. >This is not usually easy in an high-level language, Naturally. > but should >be provided to the assembly programmer. The CDC Cyber did not provide that. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-18 18:41 ` ONES COMPLEMENT glen herrmannsfeldt 2006-07-19 13:41 ` robin @ 2006-07-22 0:09 ` Richard Steiner 1 sibling, 0 replies; 314+ messages in thread From: Richard Steiner @ 2006-07-22 0:09 UTC (permalink / raw) Just a related link for interested folks (from the UNIVAC 110x world): http://www.fourmilab.ch/documents/univac/minuszero.html The current Unisys Clearpath IX/Dorado mainframe line still uses the same architecture, so it's still an issue. :-) -- -Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Mableton, GA USA Mainframe/Unix bit twiddler by day, OS/2+Linux+DOS hobbyist by night. WARNING: I've seen FIELDATA FORTRAN V and I know how to use it! The Theorem Theorem: If If, Then Then. ^ permalink raw reply [flat|nested] 314+ messages in thread
[parent not found: <20060712.7A4E6E0.D028@mojaveg.lsan.sisna <20060717.7A4ADD0.10B1A@mojaveg.lsan.sisna.com>]
* Re: ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) [not found] ` <20060712.7A4E6E0.D028@mojaveg.lsan.sisna <20060717.7A4ADD0.10B1A@mojaveg.lsan.sisna.com> @ 2006-07-18 13:05 ` Shmuel (Seymour J.) Metz 2006-07-19 11:18 ` ONES COMPLEMENT Peter Flass ` (2 more replies) [not found] ` <20060717.7A4ADD0.10B1A@mojaveg.lsan.sisna.com> 1 sibling, 3 replies; 314+ messages in thread From: Shmuel (Seymour J.) Metz @ 2006-07-18 13:05 UTC (permalink / raw) In <20060717.7A4ADD0.10B1A@mojaveg.lsan.sisna.com>, on 07/17/2006 at 06:26 PM, mojaveg@mojaveg.lsan.sisna.com (Everett M. Greene) said: >"robin" <robin_v@bigpond.com> writes: >> Some implementations of ones complement used two forms of >> zero FSVO "some" equal to "all". >>which required two tests. That makes no sense. >A properly designed 1s complement machine would not >generate -0 in normal arithmetic operations. 0+(-0)? >Thus, no need to perform a program check for -0. I never saw a need to test for it regardless. Every 1s complement machine I know of lets you test for zero in a single instruction. >> How many machines now offer ones complement arithmetic? Before the S/360 captured the market there were large numbers of machines in the lines started with the CDC 160, CDC 6600 and the UNIVAC 1107. The shift seemed to be a copycat issue more than a technological one. -- Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel> Unsolicited bulk E-mail subject to legal action. I reserve the right to publicly post or ridicule any abusive E-mail. Reply to domain Patriot dot net user shmuel+news to contact me. Do not reply to spamtrap@library.lspace.org ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-18 13:05 ` ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) Shmuel (Seymour J.) Metz @ 2006-07-19 11:18 ` Peter Flass 2006-07-19 17:14 ` glen herrmannsfeldt 2006-07-19 13:41 ` robin 2006-07-19 17:11 ` ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) Everett M. Greene 2 siblings, 1 reply; 314+ messages in thread From: Peter Flass @ 2006-07-19 11:18 UTC (permalink / raw) Shmuel (Seymour J.) Metz wrote: > In <20060717.7A4ADD0.10B1A@mojaveg.lsan.sisna.com>, on 07/17/2006 > at 06:26 PM, mojaveg@mojaveg.lsan.sisna.com (Everett M. Greene) > said: > >>A properly designed 1s complement machine would not >>generate -0 in normal arithmetic operations. > > > 0+(-0)? > > >>Thus, no need to perform a program check for -0. > > > I never saw a need to test for it regardless. Every 1s complement > machine I know of lets you test for zero in a single instruction. > What happens on a test for negative? Did the machines just test the sign bit, making -0 a negative number? ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-19 11:18 ` ONES COMPLEMENT Peter Flass @ 2006-07-19 17:14 ` glen herrmannsfeldt 2006-07-19 23:43 ` robin 0 siblings, 1 reply; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-19 17:14 UTC (permalink / raw) Peter Flass wrote: > Shmuel (Seymour J.) Metz wrote: (snip on negative zero) >> I never saw a need to test for it regardless. Every 1s complement >> machine I know of lets you test for zero in a single instruction. > What happens on a test for negative? Did the machines just test the > sign bit, making -0 a negative number? Well, consider the 704, which is actually sign magnitude but has the same problem. (The easiest way to do sign magnitude arithmetic is to convert to ones complement first, and convert back later.) The 704 has a three way test instruction recently discussed in some newsgroup, with branch destinations for negative, zero, and positive. Any questions about the origin of the arithmetic IF statement? -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-19 17:14 ` glen herrmannsfeldt @ 2006-07-19 23:43 ` robin 0 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-07-19 23:43 UTC (permalink / raw) glen herrmannsfeldt wrote in message <5qednejm5MHL9SPZnZ2dnUVZ_oSdnZ2d@comcast.com>... >Peter Flass wrote: > >> Shmuel (Seymour J.) Metz wrote: > >(snip on negative zero) > >>> I never saw a need to test for it regardless. Every 1s complement >>> machine I know of lets you test for zero in a single instruction. > >> What happens on a test for negative? Did the machines just test the >> sign bit, making -0 a negative number? > >Well, consider the 704, which is actually sign magnitude but has the >same problem. (The easiest way to do sign magnitude arithmetic is >to convert to ones complement first, and convert back later.) It doesn't follow. Twos complement for neg. values is just as easy. In fact, for a serial machine, not only is it [twos complement] easier, it is done in less time when the ensuing arithmetic is taken into account! >The 704 has a three way test instruction recently discussed in some >newsgroup, with branch destinations for negative, zero, and positive. >Any questions about the origin of the arithmetic IF statement? This is off-topic, and completely irrelevant. It has nothing to do with ones complement. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-18 13:05 ` ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) Shmuel (Seymour J.) Metz 2006-07-19 11:18 ` ONES COMPLEMENT Peter Flass @ 2006-07-19 13:41 ` robin 2006-07-19 17:11 ` ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) Everett M. Greene 2 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-07-19 13:41 UTC (permalink / raw) Shmuel (Seymour J.) Metz wrote in message <44bceab6$29$fuzhry+tra$mr2ice@news.patriot.net>... >In <20060717.7A4ADD0.10B1A@mojaveg.lsan.sisna.com>, on 07/17/2006 > at 06:26 PM, mojaveg@mojaveg.lsan.sisna.com (Everett M. Greene) >said: > >>"robin" <robin_v@bigpond.com> writes: > >>> Some implementations of ones complement used two forms of >>> zero > >FSVO "some" equal to "all". Not all. >>>which required two tests. > >That makes no sense. One test for all bits zero, the other required for all ones. >>A properly designed 1s complement machine would not >>generate -0 in normal arithmetic operations. > >0+(-0)? > >>Thus, no need to perform a program check for -0. > >I never saw a need to test for it regardless. Every 1s complement >machine I know of lets you test for zero in a single instruction. (-0) looked like a negative value on some machines. >>> How many machines now offer ones complement arithmetic? > >Before the S/360 captured the market there were large numbers of >machines in the lines started with the CDC 160, CDC 6660 and the >UNIVAC 1107. The shift seemed to be a copycat issue more than a >technological one. ^ permalink raw reply [flat|nested] 314+ messages in thread
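[Editor's note: the two-pattern zero problem robin and Shmuel are debating can be sketched as follows. The word size and helper names are mine; the behavior is a generic ones complement model, not any specific machine.]

```python
BITS = 16
MASK = (1 << BITS) - 1          # all ones: the ones complement -0 pattern

def is_zero(word):
    # A single-pattern test catches only +0. A correct test (what an
    # instruction like the 1108's JZ does in one step) must accept both
    # encodings of zero.
    return word == 0 or word == MASK

def is_negative(word):
    # Testing only the sign bit, as Peter Flass asks about, classifies
    # -0 (all ones) as a negative number.
    return bool(word >> (BITS - 1))
```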
* Re: ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) 2006-07-18 13:05 ` ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) Shmuel (Seymour J.) Metz 2006-07-19 11:18 ` ONES COMPLEMENT Peter Flass 2006-07-19 13:41 ` robin @ 2006-07-19 17:11 ` Everett M. Greene 2006-07-19 19:40 ` Shmuel (Seymour J.) Metz 2 siblings, 1 reply; 314+ messages in thread From: Everett M. Greene @ 2006-07-19 17:11 UTC (permalink / raw) "Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> writes: > mojaveg@mojaveg.lsan.sisna.com (Everett M. Greene) said: > >"robin" <robin_v@bigpond.com> writes: > > >> Some implementations of ones complement used two forms of > >> zero > > FSVO "some" equal to "all". > > >>which required two tests. > > That makes no sense. > > >A properly designed 1s complement machine would not > >generate -0 in normal arithmetic operations. > > 0+(-0)? A subtractive adder. > >Thus, no need to perform a program check for -0. > > I never saw a need to test for it regardless. Every 1s complement > machine I know of lets you test for zero in a single instruction. Try the Univac mainframes. I once got bagged by a Univac-designed machine that did 0 x n = -0 for any negative value of n. This wasn't the way it should work, but it worked that way to be compatible with an earlier machine. ["It's a mistake, but we are consistently wrong."] > >> How many machines now offer ones complement arithmetic? > > Before the S/360 captured the market there were large numbers of > machines in the lines started with the CDC 160, CDC 6600 and the > UNIVAC 1107. The shift seemed to be a copycat issue more than a > technological one. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) 2006-07-19 17:11 ` ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) Everett M. Greene @ 2006-07-19 19:40 ` Shmuel (Seymour J.) Metz 2006-07-20 16:46 ` Everett M. Greene 0 siblings, 1 reply; 314+ messages in thread From: Shmuel (Seymour J.) Metz @ 2006-07-19 19:40 UTC (permalink / raw) In <20060719.79A3E90.87CE@mojaveg.lsan.sisna.com>, on 07/19/2006 at 09:11 AM, mojaveg@mojaveg.lsan.sisna.com (Everett M. Greene) said: >A subtractive adder. How so? 0+(-0) is a straight addition of 0...0 and 7...7, requiring no complementation. >Try the Univac mainframes. Which ones? the 490 and the 1107 lines let you test for zero in a single instruction. >I once got bagged by one of a Univac-designed machine that did 0 x n >= -0 for any negative value of n. And that was a problem because? -- Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel> Unsolicited bulk E-mail subject to legal action. I reserve the right to publicly post or ridicule any abusive E-mail. Reply to domain Patriot dot net user shmuel+news to contact me. Do not reply to spamtrap@library.lspace.org ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) 2006-07-19 19:40 ` Shmuel (Seymour J.) Metz @ 2006-07-20 16:46 ` Everett M. Greene 2006-07-20 21:47 ` Shmuel (Seymour J.) Metz 0 siblings, 1 reply; 314+ messages in thread From: Everett M. Greene @ 2006-07-20 16:46 UTC (permalink / raw) "Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> writes: > mojaveg@mojaveg.lsan.sisna.com (Everett M. Greene) said: > > >A subtractive adder. > > How so? 0+(-0) is a straight addition of 0...0 and 7...7, requiring > no complementation. Context was deleted so I have no idea what you're trying to say above. > >Try the Univac mainframes. > > Which ones? the 490 and the 1107 lines let you test for zero in a > single instruction. They test for +0 in one instruction. If you want to reliably test for zero, you have to add 0 first. > >I once got bagged by one of a Univac-designed machine that did 0 x n > >= -0 for any negative value of n. > > And that was a problem because? Because the ensuing test for zero wasn't working reliably. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) 2006-07-20 16:46 ` Everett M. Greene @ 2006-07-20 21:47 ` Shmuel (Seymour J.) Metz 2006-07-21 17:23 ` ONES COMPLEMENT glen herrmannsfeldt ` (3 more replies) 0 siblings, 4 replies; 314+ messages in thread From: Shmuel (Seymour J.) Metz @ 2006-07-20 21:47 UTC (permalink / raw) In <20060720.79BD230.8698@mojaveg.lsan.sisna.com>, on 07/20/2006 at 08:46 AM, mojaveg@mojaveg.lsan.sisna.com (Everett M. Greene) said: >"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> >writes: > mojaveg@mojaveg.lsan.sisna.com (Everett M. Greene) said: > >> >A subtractive adder. >> >> How so? 0+(-0) is a straight addition of 0...0 and 7...7, requiring >> no complementation. >Context was deleted so I have no idea what you're trying to say >above. Robin wrote "A properly designed 1s complement machine would not generate -0 in normal arithmetic operations." I responded "0+(-0)?". You responded "A subtractive adder." I'm trying to say that there's nothing about 0+(-0) requiring a subtractive adder. >They test for +0 in one instruction. If you want to reliably test >for zero, you have to add 0 first. From JZ in UP-4053, UNIVAC 1108 PROCESSOR AND STORAGE: Program control is transferred to U if the (A) are all zeros or all ones. I repeat, which UNIVAC mainframe are you talking about? It's clearly not the 1108. -- Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel> Unsolicited bulk E-mail subject to legal action. I reserve the right to publicly post or ridicule any abusive E-mail. Reply to domain Patriot dot net user shmuel+news to contact me. Do not reply to spamtrap@library.lspace.org ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-20 21:47 ` Shmuel (Seymour J.) Metz @ 2006-07-21 17:23 ` glen herrmannsfeldt 2006-07-21 18:04 ` John W. Kennedy 2006-07-23 0:26 ` robin 2006-07-21 20:05 ` glen herrmannsfeldt ` (2 subsequent siblings) 3 siblings, 2 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-21 17:23 UTC (permalink / raw) Shmuel (Seymour J.) Metz wrote: (snip) > Robin wrote "A properly designed 1s complement machine would not > generate -0 in normal arithmetic operations." > I responded "0+(-0)?". > You responded "A subtractive adder." > I'm trying to say that there's nothing about 0+(-0) requiring a > subtractive adder. There are adder designs that reduce the generation of -0. In any case, Seymour Cray, who probably knows more about fast processor design than anyone reading this newsgroup, designed ones complement machines for many years. There is also a lot of literature on processor design. Fortran and C definitely allow ones complement arithmetic, so a standard conforming program for those languages has to allow for it. (I didn't look it up for PL/I or ADA, but most likely they do, too. Java is the only language that I know doesn't allow it.) Unless someone wants to go to the literature and find the known answers, there doesn't seem to be much point in arguing it here. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-21 17:23 ` ONES COMPLEMENT glen herrmannsfeldt @ 2006-07-21 18:04 ` John W. Kennedy 2006-07-23 0:26 ` robin 2006-07-23 0:26 ` robin 1 sibling, 1 reply; 314+ messages in thread From: John W. Kennedy @ 2006-07-21 18:04 UTC (permalink / raw) glen herrmannsfeldt wrote: > (I didn't look it up for PL/I or ADA, but most likely they do, too. > Java is the only language that I know doesn't allow it.) Apart from Java's requirement of 2's-complement, the only case I can recall is that Ada goes out of its way to /permit/ 2's-complement. PL/I certainly doesn't have any particular requirement. -- John W. Kennedy "The blind rulers of Logres Nourished the land on a fallacy of rational virtue." -- Charles Williams. "Taliessin through Logres: Prelude" ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-21 18:04 ` John W. Kennedy @ 2006-07-23 0:26 ` robin 0 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-07-23 0:26 UTC (permalink / raw) John W. Kennedy wrote in message ... >glen herrmannsfeldt wrote: >> (I didn't look it up for PL/I or ADA, but most likely they do, too. >> Java is the only language that I know doesn't allow it.) > >Apart from Java's requirement of 2's-complement, the only case I can >recall is that Ada goes out of its way to /permit/ 2's-complement. PL/I >certainly doesn't have any particular requirement. PL/I was designed at a time when ones and twos complement machines were in use. There's no reason why PL/I won't run on either architecture. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-21 17:23 ` ONES COMPLEMENT glen herrmannsfeldt 2006-07-21 18:04 ` John W. Kennedy @ 2006-07-23 0:26 ` robin 1 sibling, 0 replies; 314+ messages in thread From: robin @ 2006-07-23 0:26 UTC (permalink / raw) glen herrmannsfeldt wrote in message ... >Shmuel (Seymour J.) Metz wrote: > >> I'm trying to say that there's nothing about 0+(-0) requiring a >> subtractive adder. > >There are adder designs that reduce the generation of -0. > >In any case, Seymour Cray, who probably knows more about fast >processor design than anyone reading this newsgroup, designed >ones complement machines for many years. That doesn't make it [ones complement] a good design. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-20 21:47 ` Shmuel (Seymour J.) Metz 2006-07-21 17:23 ` ONES COMPLEMENT glen herrmannsfeldt @ 2006-07-21 20:05 ` glen herrmannsfeldt 2006-07-21 21:00 ` Dick Hendrickson 2006-07-22 15:12 ` ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) Everett M. Greene 2006-07-23 0:26 ` ONES COMPLEMENT robin 3 siblings, 1 reply; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-21 20:05 UTC (permalink / raw) Shmuel (Seymour J.) Metz wrote: (snip) > Robin wrote "A properly designed 1s complement machine would not > generate -0 in normal arithmetic operations." > I responded "0+(-0)?". > You responded "A subtractive adder." > I'm trying to say that there's nothing about 0+(-0) requiring a > subtractive adder. Well, even more, the way I understand the logic of some machines, as long as no arguments are negative zero they won't generate a negative zero. 0+(-0) doesn't satisfy that case. For those who care, consider that ones complement arithmetic will normally generate -0 instead of +0. Adding x and -x generates all ones with no end around carry. If you add the ones complements of the two operands you get the complement of the result, and still negative zero for zero sum. The complement of that will be the correct sum, and won't be negative zero unless both of the arguments are -0. In addition, note that ECL supplies a signal and its complement without any extra logic needed. In any case, a high level language has to get it right even if it takes extra instructions. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
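[Editor's note: glen's complement-the-operands scheme above can be checked in a quick simulation. This is my own sketch of the idea on 16-bit words, not a model of any specific machine's adder.]

```python
BITS = 16
MASK = (1 << BITS) - 1

def comp(w):
    return w ^ MASK              # ones complement negation within the word

def oc_add(a, b):
    s = a + b
    return ((s & MASK) + (s >> BITS)) & MASK   # fold the end-around carry back in

def oc_add_avoiding_minus_zero(a, b):
    # glen's scheme: add the complements of both operands, then complement
    # the result. A zero sum now comes out as +0 unless both inputs were -0.
    return comp(oc_add(comp(a), comp(b)))

x = 7
assert oc_add(x, comp(x)) == MASK                      # plain adder: x + (-x) = -0
assert oc_add_avoiding_minus_zero(x, comp(x)) == 0     # complemented form: +0
assert oc_add_avoiding_minus_zero(MASK, MASK) == MASK  # -0 + (-0) still yields -0
```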
* Re: ONES COMPLEMENT 2006-07-21 20:05 ` glen herrmannsfeldt @ 2006-07-21 21:00 ` Dick Hendrickson 2006-07-22 6:19 ` glen herrmannsfeldt 0 siblings, 1 reply; 314+ messages in thread From: Dick Hendrickson @ 2006-07-21 21:00 UTC (permalink / raw) glen herrmannsfeldt wrote: > Shmuel (Seymour J.) Metz wrote: > (snip) > >> Robin wrote "A properly designed 1s complement machine would not >> generate -0 in normal arithmetic operations." > > >> I responded "0+(-0)?". > > >> You responded "A subtractive adder." > > >> I'm trying to say that there's nothing about 0+(-0) requiring a >> subtractive adder. > > > Well, even more, the way I understand the logic of some machines, > as long as no arguments are negative zero they won't generate a > negative zero. 0+(-0) doesn't satisfy that case. > > For those who care, consider that ones complement arithmetic > will normally generate -0 instead of +0. Adding x and -x generates > all ones with no end around carry. If you add the ones complements > of the two operands you get the complement of the result, and still > negative zero for zero sum. The complement of that will be the correct > sum, and won't be negative zero unless both of the arguments are -0. It's been a long time, but my recollection is that the CDC machines used a subtractor rather than an adder. This was done to mostly eliminate the -0 problem. Rather than do X + Y, they did X - (-Y). Then, if Y happened to be -X the subtractor saw X + (-X) as X - (--X) which worked out to X - X and then to zero. I think the only problem was 0 + 0 which became 0 - (-0) and then -0 . > > In addition, note that ECL supplies a signal and its complement without > any extra logic needed. Maybe, but the CDC 1604s used discrete parts. Transistors, resistors, on a 3 inch by 3 inch circuit board. I'm pretty sure that some design decisions were heavily influenced by parts count. > > In any case, a high level language has to get it right even > if it takes extra instructions. Absolutely. 
Dick Hendrickson > > -- glen > ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-21 21:00 ` Dick Hendrickson @ 2006-07-22 6:19 ` glen herrmannsfeldt 0 siblings, 0 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-22 6:19 UTC (permalink / raw) Dick Hendrickson wrote: (snip, I wrote) >> In addition, note that ECL supplies a signal and its complement without >> any extra logic needed. > Maybe, but the CDC 1604s used discrete parts. Transistors, > resistors, on a 3 inch by 3 inch circuit board. I'm pretty > sure that some design decisions were heavily influenced by > parts count. Well, the IBM 360/91 used ASLT, which is pretty much ECL built from discrete transistors glued on a ceramic substrate. Most 360's were SLT, which is pretty much discrete TTL. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) 2006-07-20 21:47 ` Shmuel (Seymour J.) Metz 2006-07-21 17:23 ` ONES COMPLEMENT glen herrmannsfeldt 2006-07-21 20:05 ` glen herrmannsfeldt @ 2006-07-22 15:12 ` Everett M. Greene 2006-07-23 0:26 ` ONES COMPLEMENT robin 3 siblings, 0 replies; 314+ messages in thread From: Everett M. Greene @ 2006-07-22 15:12 UTC (permalink / raw) "Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> writes: > mojaveg@mojaveg.lsan.sisna.com (Everett M. Greene) said: > >"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> > >> >A subtractive adder. > >> > >> How so? 0+(-0) is a straight addition of 0...0 and 7...7, requiring > >> no complementation. > > >Context was deleted so I have no idea what you're trying to say > >above. > > Robin wrote "A properly designed 1s complement machine would not > generate -0 in normal arithmetic operations." > > I responded "0+(-0)?". > > You responded "A subtractive adder." > > I'm trying to say that there's nothing about 0+(-0) requiring a > subtractive adder. You are correct -- there's nothing /requiring/ a subtractive adder. But using such a design precludes certain cases of -0 from being generated. > >They test for +0 in one instruction. If you want to reliably test > >for zero, you have to add 0 first. > > From JZ in UP-4053, UNIVAC 1108 PROCESSOR AND STORAGE: > > Program control is transferred to U if the (A) are all zeros or > all ones. > > I repeat, which UNIVAC mainframe are you talking about? It's clearly > not the 1108. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-20 21:47 ` Shmuel (Seymour J.) Metz ` (2 preceding siblings ...) 2006-07-22 15:12 ` ONES COMPLEMENT (was: Ada vs Fortran for scientific applications) Everett M. Greene @ 2006-07-23 0:26 ` robin 3 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-07-23 0:26 UTC (permalink / raw) Shmuel (Seymour J.) Metz wrote in message <44c00809$1$fuzhry+tra$mr2ice@news.patriot.net>... >In <20060720.79BD230.8698@mojaveg.lsan.sisna.com>, on 07/20/2006 > at 08:46 AM, mojaveg@mojaveg.lsan.sisna.com (Everett M. Greene) >said: > >>"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> >>writes: > mojaveg@mojaveg.lsan.sisna.com (Everett M. Greene) said: > >>> >A subtractive adder. >>> >>> How so? 0+(-0) is a straight addition of 0...0 and 7...7, requiring >>> no complementation. > >>Context was deleted so I have no idea what you're trying to say >>above. > >Robin wrote "A properly designed 1s complement machine would not >generate -0 in normal arithmetic operations." They are not my words. From earlier posts, it looks like E. Green's post. ^ permalink raw reply [flat|nested] 314+ messages in thread
[parent not found: <20060717.7A4ADD0.10B1A@mojaveg.lsan.sisna.com>]
* Re: ONES COMPLEMENT [not found] ` <20060717.7A4ADD0.10B1A@mojaveg.lsan.sisna.com> @ 2006-07-19 6:54 ` glen herrmannsfeldt 2006-07-19 12:47 ` Tom Linden 2006-07-19 14:35 ` robin 0 siblings, 2 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-07-19 6:54 UTC (permalink / raw) Everett M. Greene wrote: (snip) > A properly designed 1s complement machine would not > generate -0 in normal arithmetic operations. Thus, > no need to perform a program check for -0. As far as I know, they are designed either to not generate -0 under normal conditions, or to compare -0 equal to +0, or both. One solution is to do subtraction as the basic operation, such that subtracting two of the same number will generate +0, and addition as subtraction of the complement. -0 can still result if -0 is used as an operand, though. For a parallel adder, as all machines have had for about 40 years now, no extra hardware is needed, nor is more time needed. The "add with carry" operation that some machines have is not available, though. Twos complement machines might require extra hardware to handle the overflow on negate or absolute value, which can't overflow on ones complement machines. For multiply and divide it might be that ones complement is a little easier, though the difference should be small. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-19 6:54 ` glen herrmannsfeldt @ 2006-07-19 12:47 ` Tom Linden 2006-07-19 14:35 ` robin 1 sibling, 0 replies; 314+ messages in thread From: Tom Linden @ 2006-07-19 12:47 UTC (permalink / raw) On Tue, 18 Jul 2006 23:54:39 -0700, glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote: > One solution is to do subtraction as the basic operation, > such that subtracting two of the same number will generate +0, > and addition as subtraction of the complement. > -0 can still result if -0 is used as an operand, though. or mask the sign bit for a zero compare instruction ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: ONES COMPLEMENT 2006-07-19 6:54 ` glen herrmannsfeldt 2006-07-19 12:47 ` Tom Linden @ 2006-07-19 14:35 ` robin 1 sibling, 0 replies; 314+ messages in thread From: robin @ 2006-07-19 14:35 UTC (permalink / raw) glen herrmannsfeldt wrote in message ... >Everett M. Greene wrote: > >(snip) > >> A properly designed 1s complement machine would not >> generate -0 in normal arithmetic operations. Thus, >> no need to perform a program check for -0. > >As far as I know, they are designed either to not generate >-0 under normal conditions, or to compare -0 equal to +0, >or both. Or none. In some cases, -0 is treated as a negative number, and +0 as a positive, and -0 is not equal to +0. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 3:40 ` robin 2006-05-25 5:04 ` Nasser Abbasi @ 2006-05-25 11:02 ` Dan Nagle 2006-05-25 11:23 ` Gareth Owen ` (3 more replies) 1 sibling, 4 replies; 314+ messages in thread From: Dan Nagle @ 2006-05-25 11:02 UTC (permalink / raw) Hello, robin wrote: > "Dick Hendrickson" <dick.hendrickson@att.net> wrote in message <snip> >> But, if "it" refers to Fortran, subscript bounds rules >> ARE a feature of the language. > > Subscript bounds checking is not part of the Fortran language. Can Robin provide an example of a Fortran compiler available today for Windows, MacOSX or Linux that does _not_ provide a bounds checking option? >> You are NEVER allowed to >> execute an out-of-bounds array reference in a Fortran >> program. In practice, the historical run-time cost of >> checking bounds was [thought to be] too high, so compilers >> either didn't do it, or did it under some sort of command >> line option control. > > But in some languages [PL/I included] bounds checking > is part of the language, and can be controlled by the programmer. > > Subscript checking is an important part of any program. What's the difference between a programmer controlling a check, and a programmer setting a compiler option? -- Cheers! Dan Nagle Purple Sage Computing Solutions, Inc. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 11:02 ` Ada vs Fortran for scientific applications Dan Nagle @ 2006-05-25 11:23 ` Gareth Owen 2006-05-26 2:58 ` robin 2006-05-25 14:33 ` glen herrmannsfeldt ` (2 subsequent siblings) 3 siblings, 1 reply; 314+ messages in thread From: Gareth Owen @ 2006-05-25 11:23 UTC (permalink / raw) Dan Nagle <dannagle@verizon.net> writes: > What's the difference between a programmer controlling a check, > and a programmer setting a compiler option? There is a difference between "X is available on every compiler" and "X is part of the language". To most of us, it's a meaningless difference, but it *is* a difference. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 11:23 ` Gareth Owen @ 2006-05-26 2:58 ` robin 2006-05-26 7:13 ` Gareth Owen 2006-05-26 14:53 ` Dick Hendrickson 0 siblings, 2 replies; 314+ messages in thread From: robin @ 2006-05-26 2:58 UTC (permalink / raw) From: "Gareth Owen" <usenet@gwowen.freeserve.co.uk> Sent: Thursday, May 25, 2006 9:23 PM > Dan Nagle <dannagle@verizon.net> writes: > > > What's the difference between a programmer controlling a check, > > and a programmer setting a compiler option? > > There is a difference between > "X is available on every compiler" and "X is part of the language". > > To most of us, it's a meaningless difference, but it *is* a difference. As it's not available on every compiler, there is a difference. But, as I pointed out, being part of the language means that subscript checking can be applied to an entire program, a procedure, a statement, a block of statements, etc. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-26 2:58 ` robin @ 2006-05-26 7:13 ` Gareth Owen 2006-05-26 14:53 ` Dick Hendrickson 1 sibling, 0 replies; 314+ messages in thread From: Gareth Owen @ 2006-05-26 7:13 UTC (permalink / raw) "robin" <robin_v@bigpond.com> writes: > > There is a difference between > > "X is available on every compiler" and "X is part of the language". > > > > To most of us, it's a meaningless difference, but it *is* a difference. > > As it's not available on every compiler, there is a difference. I was actually agreeing with you. <washes hands> ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-26 2:58 ` robin 2006-05-26 7:13 ` Gareth Owen @ 2006-05-26 14:53 ` Dick Hendrickson 2006-05-26 20:18 ` glen herrmannsfeldt 1 sibling, 1 reply; 314+ messages in thread From: Dick Hendrickson @ 2006-05-26 14:53 UTC (permalink / raw) robin wrote: > From: "Gareth Owen" <usenet@gwowen.freeserve.co.uk> > Sent: Thursday, May 25, 2006 9:23 PM > > >>Dan Nagle <dannagle@verizon.net> writes: >> >> >>>What's the difference between a programmer controlling a check, >>>and a programmer setting a compiler option? >> >>There is a difference between >>"X is available on every compiler" and "X is part of the language". >> >>To most of us, it's a meaningless difference, but it *is* a difference. > > > As it's not available on every compiler, there is a difference. > > But, as I pointed out, being part of the language means > that subscript checking can be applied to an entire program, > a procedure, a statement, a block of statements, etc. > > Not really, if it's "part of the language", then the language defines where, when, and how it can be applied. There's no particular reason why a language definition couldn't specify a global application, or a block-by-block application of bounds checking, or couldn't limit it only to statements that have a prime number of keystrokes. It's a design choice between need and efficiency. Dick Hendrickson ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-26 14:53 ` Dick Hendrickson @ 2006-05-26 20:18 ` glen herrmannsfeldt 2006-05-26 21:50 ` Björn Persson 0 siblings, 1 reply; 314+ messages in thread From: glen herrmannsfeldt @ 2006-05-26 20:18 UTC (permalink / raw) Dick Hendrickson wrote: > robin wrote: (snip) >> But, as I pointed out, being part of the language means >> that subscript checking can be applied to an entire program, >> a procedure, a statement, a block of statements, etc. > Not really, if it's "part of the language", then the > language defines where, when, and how it can be applied. Yes, but it depends on what you mean by "can"... > There's no particular reason why a language definition > couldn't specify a global application, or a block-by-block > application of bounds checking, or couldn't limit it only to > statements that have a prime number of keystrokes. It's a > design choice between need and efficiency. Having it part of the language allows the language to specify it at the block or statement level, but does not require that ability. Not having it part of the language doesn't preclude it, but makes it difficult. How would you specify it at the statement level as a compiler command line option? You could specify the statement number, but that would work only for numbered statements. Also, most Fortran compilers can compile more than one routine with a single invocation, and the statement number could be duplicated in different routines. It could be specified as a line number, but that would make editing inconvenient. Also, at least for Java and PL/I, bounds checking is part of the exception model. 
To get away from the PL/I discussion, consider the Java code:

    public class BoundsDemo {
        static public void main(String args[]) {
            int x[] = new int[10];  /* allocate a 10 element array */
            try {
                /* index the array with the first command line argument */
                x[Integer.parseInt(args[0])] = 3;
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Oops: " + e);
            }
        }
    }

This program indexes an array with its first command line argument, so it can easily be tested. The program can then go do whatever is needed after detecting the bad subscript. In this case the try{} block is only one statement, but it can be much larger. One difference between the Java and PL/I exception model is that in many cases PL/I allows one to correct and resume the statement in error, where Java does not allow that. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-26 20:18 ` glen herrmannsfeldt @ 2006-05-26 21:50 ` Björn Persson 0 siblings, 0 replies; 314+ messages in thread From: Björn Persson @ 2006-05-26 21:50 UTC (permalink / raw) glen herrmannsfeldt wrote: > Also, at least for Java and PL/I, bounds checking is part of the > exception model. That's true for Ada too. (If a language has both exceptions and bounds checking it would be rather silly not to use the exception model for the bounds checking.) -- Björn Persson PGP key A88682FD omb jor ers @sv ge. r o.b n.p son eri nu ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 11:02 ` Ada vs Fortran for scientific applications Dan Nagle 2006-05-25 11:23 ` Gareth Owen @ 2006-05-25 14:33 ` glen herrmannsfeldt 2006-05-26 2:58 ` robin 2006-05-26 20:03 ` JA 3 siblings, 0 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-05-25 14:33 UTC (permalink / raw) Dan Nagle wrote: (snip) > What's the difference between a programmer controlling a check, > and a programmer setting a compiler option? PL/I allows bounds checking to be turned on or off statement by statement or procedure by procedure, if desired. Discussions of Java/JVM, which requires subscript checking, mention that some implementations may be able to move tests outside a loop, or even omit them altogether when it can be determined at compile time (or JIT time) that a problem can't occur. For example, a loop over the length of an array can't go out of bounds (unless the array is redefined). If the compiler didn't detect it, but the programmer knew some loops couldn't exceed array bounds, it could be turned off for those statements. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
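[Editor's sketch of glen's point above, with hypothetical method names: a loop bounded by the array's own length is provably in bounds, so a checking implementation may hoist or drop the test; a loop bounded by an outside count must keep it.]

```java
// Contrast between a check-free and a checked loop (illustration only).
public class CheckElision {
    static long sumAll(int[] a) {
        long s = 0;
        for (int i = 0; i < a.length; i++)   // provably in bounds
            s += a[i];
        return s;
    }

    static long sumFirst(int[] a, int n) {
        long s = 0;
        for (int i = 0; i < n; i++)          // n is external, so each
            s += a[i];                       // a[i] still needs a check
        return s;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        System.out.println(sumAll(a));       // 10
        System.out.println(sumFirst(a, 2));  // 3
    }
}
```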
* Re: Ada vs Fortran for scientific applications 2006-05-25 11:02 ` Ada vs Fortran for scientific applications Dan Nagle 2006-05-25 11:23 ` Gareth Owen 2006-05-25 14:33 ` glen herrmannsfeldt @ 2006-05-26 2:58 ` robin 2006-05-26 20:03 ` JA 3 siblings, 0 replies; 314+ messages in thread From: robin @ 2006-05-26 2:58 UTC (permalink / raw) "Dan Nagle" <dannagle@verizon.net> wrote in message news:S8gdg.9145$ix2.6702@trnddc03... > >> You are NEVER allowed to > >> execute an out-of-bounds array reference in a Fortran > >> program. In practice, the historical run-time cost of > >> checking bounds was [thought to be] too high, so compilers > >> either didn't do it, or did it under some sort of command > >> line option control. > > > > But in some languages [PL/I included] bounds checking > > is part of the language, and can be controlled by the programmer. > > > > Subscript checking is an important part of any program. > > What's the difference between a programmer controlling a check, > and a programmer setting a compiler option? When the check is part of the language, as in PL/I, the check can be applied to any section of the code - even to an individual statement - or disabled for any section of the code. Or the entire program. Or any procedure. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-25 11:02 ` Ada vs Fortran for scientific applications Dan Nagle ` (2 preceding siblings ...) 2006-05-26 2:58 ` robin @ 2006-05-26 20:03 ` JA 3 siblings, 0 replies; 314+ messages in thread From: JA @ 2006-05-26 20:03 UTC (permalink / raw) Dan Nagle wrote: an example of a Fortran compiler > available today for Windows, MacOSX or Linux that does _not_ > provide a bounds checking option? Most current Fortran compilers cannot detect array bounds errors on assumed size arrays. I doubt that would be true if bounds checking were "part of the language". -- John Appleyard ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-24 15:19 ` Dick Hendrickson ` (2 preceding siblings ...) 2006-05-25 3:40 ` robin @ 2006-05-25 11:32 ` Martin Krischik 3 siblings, 0 replies; 314+ messages in thread From: Martin Krischik @ 2006-05-25 11:32 UTC (permalink / raw) Dick Hendrickson wrote: > In practice, the historical run-time cost of > checking bounds was [thought to be] too high, so compilers > either didn't do it, or did it under some sort of command > line option control. Which is not that different from Ada - only in Ada it tends to be a "don't do the check" compiler option or pragma statement. That's because high cost has a different meaning in Ada. There is that famous: pragma suppress(numeric_error, horizontal_veloc_bias); which has cost ESA € 860.000.000. Of course they would have found that bug if the ESA management had decided to rerun the test suite before deploying old software on a new rocket. Martin -- mailto://krischik@users.sourceforge.net Ada programming at: http://ada.krischik.com ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 8:25 ` Jean-Pierre Rosen 2006-05-23 11:40 ` Dan Nagle 2006-05-23 17:09 ` Dick Hendrickson @ 2006-05-24 6:46 ` Jan Vorbrüggen 2 siblings, 0 replies; 314+ messages in thread From: Jan Vorbrüggen @ 2006-05-24 6:46 UTC (permalink / raw) > Is it possible in Fortran to define three *incompatible* types Length, > Time, and Speed, and define a "/" operator between Length and Time that > returns Speed? Yes. > I need to be educated about Fortran's kind, but can you use it to > specify that you want a type with guaranteed 5 digits accuracy? Yes. > Because Fortran has no fixed points, the scientific community sees > floating point as the only way to model real numbers. Ah nonsense. You often need the range, and the constant relative precision of FP is what is often - but not always, of course - needed in physical simulation. > Ada's accuracy requirement is independent from any hardware (or > software) implementation of floating points, and are applicable even for > non IEEE machines. The same is true for Fortran. > More convenient to write: > Mat1 := Mat2 * Mat3; While that is true, it is much less convenient to read - see the separate thread on the issue. >> How is Ada's bounds checking better or worse than Fortran's? > > I may miss something on the Fortran side, but Ada's very precise typing > allows to define variables whose bounds are delimited. If these > variables are later used to index an array (and if the language features > are properly used), the compiler statically knows that no out-of-bound > can occur. In short, most of the time, an Ada compiler is able to prove > that bounds checking is not necessary, and corresponding checks are not > generated. It's a little bit more difficult for a Fortran compiler, but an equivalent program - one that, directly or indirectly, refers to the array bounds in the loop parameters - should be optimizable in the same way. 
With array expressions, Fortran has an additional and idiomatic way of doing this. Jan ^ permalink raw reply [flat|nested] 314+ messages in thread
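[Editor's sketch of the Length/Time/Speed exchange above, in Java terms with hypothetical class names: distinct wrapper types make the units incompatible, and the only route from a Length and a Time to a Speed is the single operation provided. Ada gets the same effect natively with derived numeric types and an overloaded "/"; Fortran 90+ can approximate it with derived types and a user-defined operator.]

```java
// Dimension-safe wrapper types (illustration only).
public class Units {
    static final class Length { final double m;  Length(double m)  { this.m  = m;  } }
    static final class Time   { final double s;  Time(double s)    { this.s  = s;  } }
    static final class Speed  { final double ms; Speed(double ms)  { this.ms = ms; } }

    // Plays the role of "/" : Length / Time -> Speed.
    static Speed per(Length d, Time t) { return new Speed(d.m / t.s); }

    public static void main(String[] args) {
        Speed v = per(new Length(100.0), new Time(8.0));
        System.out.println(v.ms);                    // 12.5
        // per(new Time(8.0), new Length(100.0));    // would not compile
    }
}
```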
* Re: Ada vs Fortran for scientific applications 2006-05-22 13:02 ` Jean-Pierre Rosen 2006-05-22 15:23 ` Dan Nagle @ 2006-05-24 5:26 ` robin 1 sibling, 0 replies; 314+ messages in thread From: robin @ 2006-05-24 5:26 UTC (permalink / raw) "Jean-Pierre Rosen" <rosen@adalog.fr> wrote in message news:epcs4e.qb8.ln@hunter.axlog.fr... > Nasser Abbasi a écrit : > > What are the technical language specific reasons why Fortran would be > > selected over Ada? > > > Some immediate reasons: > 1) Packaging. Packages allow better organization of software, which is > good for any kind of application. Modules? > 9) Generics. Stop rewriting these damn sorting routines 1000 times. Generic procedures are available in Fortran. > 10) Default parameters. Makes complex subprograms (simplex...) much > easier to use. > > 11) Operators on any types, including arrays. Define a matrix product as > "*"... Well, you can do that too in Fortran with UDOs (for another symbol). But there's no need to, with MATMUL (a built-in matrix multiplication procedure). ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 4:54 Ada vs Fortran for scientific applications Nasser Abbasi ` (5 preceding siblings ...) 2006-05-22 13:02 ` Jean-Pierre Rosen @ 2006-05-23 8:34 ` gautier_niouzes 2006-05-23 13:33 ` Nasser Abbasi ` (3 subsequent siblings) 10 siblings, 0 replies; 314+ messages in thread From: gautier_niouzes @ 2006-05-23 8:34 UTC (permalink / raw) Nasser Abbasi: # I like to discuss the technical reasons why Ada is not used as much as # Fortran for scientific and number crunching type applications? The main reason is: - Fortran started in 1953 - Ada started in 1983 During the 30 years before Ada *tons* of scientific software were written, and the Fortran compiler technology probably still has a solid lead over the "newcomers" in terms of numerical efficiency. Generations of scientists and engineers learned and then taught the language. Add to it that Fortran is readable and fully bracketed, which makes it *on that point* as good as Ada. # To make the discussion more focused, let's assume you want to start # developing a large scientific application in the domain where Fortran is # commonly used. Say you want to develop a new large Finite Elements Methods # program or large computational physics simulation system. Assume you can # choose either Ada or Fortran. If you have to program your FEM/CFD program from scratch, I suggest Ada (successful experience); but most of the time projects will reuse existing code, then, it depends... The advantages of a transition to Ada do not appear immediately: probably you'll have to interface to Fortran routines, which is easy but not as easy as not having to interface. 
Only then the advantages begin to appear:
- the compiler finds plenty of nasty bugs, including in "established" pieces of Fortran code you pass through f2a
- you prevent lots of bugs like bad element numbering by using subtypes of Integer (Positive, arrays' ranges)
- you add much clarity by using enumerated types for coding boundary conditions, methods, formats, options, etc. and let the compilation of "case" statements find forgotten cases
- you can use the same code on several machines (no language dialect, compatible I/O, you can select the floating-point type in one "subtype Real is ..." for the whole project)
- you can use the same code for filling several matrix types (sparse, band) through generics
A few other advantages may have been addressed in Fortran 9x+, possibly by breaking compatibility with previous versions - please comment!
- you prevent bugs and add readability with "for i in A'Range(1) loop" and not having to pass dimensions as parameters
- no possibility of bugs like passing matrix,vector instead of vector,matrix as parameters and other type mismatches
- no problem like real_variable = (integer expression) putting fuzzy contents as floating-point data
- no problem of undefined variables (ijk written as jik): Ada has an implicit overall "implicit none"
- no need to explain for each subroutine parameter its dimensions and base type (Ada has user-defined types)
- no problem with several formats for floating-point literals (1.3e7, 4.2d5)
IMHO all the advantages concern essentially reliability and an important programmer-time reduction. Gautier ______________________________________________________________ Ada programming -- http://www.mysunrise.ch/users/gdm/gsoft.htm NB: For a direct answer, e-mail address on the Web site! ^ permalink raw reply [flat|nested] 314+ messages in thread
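[Editor's sketch of gautier's point about enumerated types and forgotten cases, with hypothetical names. Java 14+ is assumed: a switch *expression* over an enum must cover every constant, so adding a new kind makes every unhandled use fail to compile, much like Ada's full-coverage "case".]

```java
// Exhaustive handling of an enumerated option code (illustration only).
public class Boundary {
    enum Kind { DIRICHLET, NEUMANN, ROBIN }

    static String describe(Kind k) {
        // Adding a constant to Kind breaks this compile until it is handled.
        return switch (k) {
            case DIRICHLET -> "fixed value";
            case NEUMANN   -> "fixed gradient";
            case ROBIN     -> "mixed";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(Kind.NEUMANN));  // fixed gradient
    }
}
```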
* Re: Ada vs Fortran for scientific applications 2006-05-22 4:54 Ada vs Fortran for scientific applications Nasser Abbasi ` (6 preceding siblings ...) 2006-05-23 8:34 ` gautier_niouzes @ 2006-05-23 13:33 ` Nasser Abbasi 2006-05-23 14:00 ` Michael Metcalf ` (2 more replies) 2006-05-23 17:57 ` Aldebaran ` (2 subsequent siblings) 10 siblings, 3 replies; 314+ messages in thread From: Nasser Abbasi @ 2006-05-23 13:33 UTC (permalink / raw) I personally think that one way to make Ada popular for scientific use is to publish a version of the Numerical recipes book in Ada. I have been waiting for this for long time. This should show the advantages of using Ada for scientific/numerical applications. If we can have the numerical recipes book in C,C++,Pascal, F77 and F90 for crying out loud, why can't we have it in Ada? (I do not know how anyone in their right mind would think that writing numerical applications in C or even C++ would be better/more robust than in Ada.... but I digress here) Nasser ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 13:33 ` Nasser Abbasi @ 2006-05-23 14:00 ` Michael Metcalf 2006-05-23 15:04 ` beliavsky 2006-05-23 19:29 ` Gautier 2 siblings, 0 replies; 314+ messages in thread From: Michael Metcalf @ 2006-05-23 14:00 UTC (permalink / raw) "Nasser Abbasi" <nma@12000.org> wrote in message news:YaEcg.91032$dW3.74205@newssvr21.news.prodigy.com... >I personally think that one way to make Ada popular for scientific use is >to publish a version of the Numerical recipes book in Ada. > > I have been waiting for this for long time. > > This should show the advantages of using Ada for scientific/numerical > applications. > > If we can have the numerical recipes book in C,C++,Pascal, F77 and F90 for > crying out loud, why can't we have it in Ada? > Probably because none of the NR team knows Ada, or maybe the market is not judged to be that large. The suggestion to do an f90 version fell on fertile ground because a) f90 was clearly better than f77 and b) one of the team was already heavily involved in Thinking Machines' version of Fortran, which was quite close to the f90 array language. In the right place at the right time. Regards, Mike Metcalf ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 13:33 ` Nasser Abbasi 2006-05-23 14:00 ` Michael Metcalf @ 2006-05-23 15:04 ` beliavsky 2006-05-23 18:09 ` glen herrmannsfeldt 2006-05-23 22:38 ` Jeffrey Creem 2006-05-23 19:29 ` Gautier 2 siblings, 2 replies; 314+ messages in thread From: beliavsky @ 2006-05-23 15:04 UTC (permalink / raw) Nasser Abbasi wrote: > I personally think that one way to make Ada popular for scientific use is to > publish a version of the Numerical recipes book in Ada. > > I have been waiting for this for long time. > > This should show the advantages of using Ada for scientific/numerical > applications. You cannot do this unless you get the authors of Numerical Recipes on board, which would probably be difficult. I think a literal translation of their Fortran or C code to Ada could not be posted online without their permission. The GNU Scientific Library (written in C) http://www.gnu.org/software/gsl/ is open source. You could write an open source version in Ada. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 15:04 ` beliavsky @ 2006-05-23 18:09 ` glen herrmannsfeldt 2006-05-23 22:38 ` Jeffrey Creem 1 sibling, 0 replies; 314+ messages in thread From: glen herrmannsfeldt @ 2006-05-23 18:09 UTC (permalink / raw) beliavsky@aol.com wrote: > Nasser Abbasi wrote: >>I personally think that one way to make Ada popular for scientific use is to >>publish a version of the Numerical recipes book in Ada. (snip) > You cannot do this unless you get the authors of Numerical Recipes on > board, which would probably be difficult. I think a literal translation > of their Fortran or C code to Ada could not be posted online without > their permission. The rules are complicated and country dependent. One thing, though: you most likely don't want a literal translation, as that likely negates any advantage that Ada might have. If it is not a literal translation then it is likely a different expression of the ideas, and so not infringing on the copyright. You probably can't call it "Numerical Recipes" if that is trademarked, but that is a different question. (You should probably reference NR, though.) As usual, IANAL, but most likely the code could be posted. One would then need to buy the book to get the text to go along with the code, thus increasing sales of the book. -- glen ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 15:04 ` beliavsky 2006-05-23 18:09 ` glen herrmannsfeldt @ 2006-05-23 22:38 ` Jeffrey Creem 1 sibling, 0 replies; 314+ messages in thread From: Jeffrey Creem @ 2006-05-23 22:38 UTC (permalink / raw) beliavsky@aol.com wrote: > Nasser Abbasi wrote: > >>I personally think that one way to make Ada popular for scientific use is to >>publish a version of the Numerical recipes book in Ada. >> >>I have been waiting for this for long time. >> >>This should show the advantages of using Ada for scientific/numerical >>applications. > > > You cannot do this unless you get the authors of Numerical Recipes on > board, which would probably be difficult. I think a literal translation > of their Fortran or C code to Ada could not be posted online without > their permission. > > The GNU Scientific Library (written in C) > http://www.gnu.org/software/gsl/ is open source. You could write an > open source version in Ada. > The problem is that GSL is GPL which is fine if your intent is to "Make free software" popular but at times this is in conflict with making "Ada Popular" in some application. They are probably both worthy goals, but pretty much everything I do for Open Source Ada I use GPL + linking/generics exception (GMGPL). Any direct translation of the GSL is likely to result in the Ada version being pure GPL and thus limit its usefulness in some areas. ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 13:33 ` Nasser Abbasi 2006-05-23 14:00 ` Michael Metcalf 2006-05-23 15:04 ` beliavsky @ 2006-05-23 19:29 ` Gautier 2006-05-23 19:34 ` Rich Townsend 2 siblings, 1 reply; 314+ messages in thread From: Gautier @ 2006-05-23 19:29 UTC (permalink / raw) Nasser Abbasi: > I personally think that one way to make Ada popular for scientific use is to > publish a version of the Numerical recipes book in Ada. > > I have been waiting for this for long time. It won't happen spontaneously... The best way is to help it happen. The good news is that you can translate the whole Pascal version through a recent version of P2Ada (it works!); there is some manual rework to make it compile and more to take advantage of Ada constructs like unconstrained arrays (instead of arrays with fixed dimension in Pascal!), standardized floating-points, I/O or modularity, but it is definitely doable. More, here: http://www.mysunrise.ch/users/gdm/gsoft.htm#p2ada Good luck! Gautier __ NB: For a direct answer, e-mail address on the Web site! ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 19:29 ` Gautier @ 2006-05-23 19:34 ` Rich Townsend 0 siblings, 0 replies; 314+ messages in thread From: Rich Townsend @ 2006-05-23 19:34 UTC (permalink / raw) Gautier wrote: > Nasser Abbasi: > >> I personally think that one way to make Ada popular for scientific use >> is to publish a version of the Numerical recipes book in Ada. >> >> I have been waiting for this for long time. > > > It won't happen spontaneously... > The best way is to help this happening. > Good news are that, you can translate the whole Pascal version through a > recent version of P2Ada (it works!); there is some manual rework to make > it compile and more to take advantage of Ada constructs like > unconstrained arrays (instead of arrays with fixed dimension in > Pascal!), standardized floating-points, I/O or modularity, but it is > definitely doable. > I heard they did the C version with f2c. Hehehe. cheers, Rich ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 4:54 Ada vs Fortran for scientific applications Nasser Abbasi ` (7 preceding siblings ...) 2006-05-23 13:33 ` Nasser Abbasi @ 2006-05-23 17:57 ` Aldebaran 2006-05-23 22:14 ` Dr Ivan D. Reid 2006-05-23 23:01 ` John 2006-05-27 5:01 ` Nasser Abbasi 10 siblings, 1 reply; 314+ messages in thread From: Aldebaran @ 2006-05-23 17:57 UTC (permalink / raw) El Mon, 22 May 2006 04:54:42 +0000, Nasser Abbasi escribió: > I like to discuss the technical reasons why Ada is not used as much as > Fortran for scientific and number crunching type applications? > > And what about OCaml? Aldebaran from Taurus ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-23 17:57 ` Aldebaran @ 2006-05-23 22:14 ` Dr Ivan D. Reid 0 siblings, 0 replies; 314+ messages in thread From: Dr Ivan D. Reid @ 2006-05-23 22:14 UTC (permalink / raw) On Tue, 23 May 2006 17:57:45 GMT, Aldebaran <albatani23@hotmail.com> wrote in <pan.2006.05.23.17.58.15.418797@hotmail.com>: > El Mon, 22 May 2006 04:54:42 +0000, Nasser Abbasi escribió: >> I like to discuss the technical reasons why Ada is not used as much as >> Fortran for scientific and number crunching type applications? > And what about OCaml? Wasn't that a song by Neil Sedaka? -- Ivan Reid, Electronic & Computer Engineering, ___ CMS Collaboration, Brunel University. Ivan.Reid@[brunel.ac.uk|cern.ch] Room 40-1-B12, CERN KotPT -- "for stupidity above and beyond the call of duty". ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 4:54 Ada vs Fortran for scientific applications Nasser Abbasi ` (8 preceding siblings ...) 2006-05-23 17:57 ` Aldebaran @ 2006-05-23 23:01 ` John 2006-05-27 5:01 ` Nasser Abbasi 10 siblings, 0 replies; 314+ messages in thread From: John @ 2006-05-23 23:01 UTC (permalink / raw) Nasser Abbasi wrote: > I like to discuss the technical reasons why Ada is not used as much as > Fortran for scientific and number crunching type applications? > I have some historical observations to add to the discussion. I can tell you how Ada was discussed as a candidate for number crunching in one very large application domain. Quite a few (20?) years ago there was a conference in Albuquerque to describe Ada's potential in the Department of Energy national labs. Then, as now, the labs used large number-crunching computers to compute complex physics models. There were a couple of mostly unbiased papers circulated and discussed that compared languages. I presented some numerical modelling work in Ada that I was doing at LLNL. As I recall, Ada was praised on technical and management criteria and scarcely a discouraging word was heard. But I think that might have been the high-water mark of Ada's visibility to that community, where intensely numeric computation takes place. My sense is that the outcome (that Fortran is the principal language for scientific modelling) comes from sound engineering judgement. "If it ain't broke, don't replace it." If there are gains that Ada might bring (and I believe there probably are), the difficulty and uncertainty of a changeover overwhelm the benefits. Ada did continue to make substantial contributions to several LLNL programs, however not in the numerical computation domain. John Woodruff retired software engineer ^ permalink raw reply [flat|nested] 314+ messages in thread
* Re: Ada vs Fortran for scientific applications 2006-05-22 4:54 Ada vs Fortran for scientific applications Nasser Abbasi ` (9 preceding siblings ...) 2006-05-23 23:01 ` John @ 2006-05-27 5:01 ` Nasser Abbasi 2006-05-27 7:36 ` Pascal Obry 2006-05-27 11:18 ` Björn Persson 10 siblings, 2 replies; 314+ messages in thread From: Nasser Abbasi @ 2006-05-27 5:01 UTC (permalink / raw) Googling around, I found an interesting thread on this very same subject. Search (under Google Groups, not Google web search) for the string "Computational scientists ignoring and ignored by Ada". It is funny that the thread was back in 1993. Here we are 13 years later. Have things improved for Ada in this specific field? One of those responding in the above thread complained about Ada (then) being 1.5-2 times slower than Fortran in heavy number crunching. I think this is no longer the case. Is it possible to do some standard tests to compare the speed of Ada vs Fortran specifically for numerical work? From what I have been reading, it seems that parallelizing compilers are now becoming more important (maybe because it will soon be common to be able to buy 4, 8 or even 16 quad CPUs for workstations and PCs). Is Fortran ahead of Ada in this regard? Clearly we are talking about the compiler being able to do things like array subscript dependency analysis, and the ability of the compiler to decide when to parallelize loops (I think the technical term is loop transformation, but I'm not sure). We are not talking here about the users doing this themselves using user-level threads or Ada tasks, which I think would be too heavyweight for this sort of thing. A document I found for Fortran 90 on 'decomposing loops for parallel processing' sheds light on this.
http://www.helsinki.fi/atk/unix/dec_manuals/df90au52/dfum026.htm I also found this interesting note about some research done at IBM on parallelizing Ada for numerical work: "Parallelism in scientific applications can most often be found at the loop level. Although Ada supports parallelism via the task construct, its coarseness renders it unsuitable for this light-weight parallelism." The rest is here: http://www.research.ibm.com/people/h/hind/tr585abs.html From what I see, it seems that Fortran is ahead when it comes to parallelizing compilers for scientific high-performance work. Is it possible to have a parallelizing GNAT compiler, for example, producing super-fast parallel code, yet preserving all the Ada language semantics? I hope so. And how soon can we get one? :) Nasser ^ permalink raw reply [flat|nested] 314+ messages in thread
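[The dependency analysis mentioned above boils down to one question per loop: does any iteration read a value that another iteration writes? A small illustrative sketch, in Python rather than Fortran or Ada and not taken from the thread, of the two cases a parallelizing compiler must distinguish:]

```python
def scale(a):
    # No loop-carried dependence: each result depends only on a[i],
    # so the iterations may run in any order, or in parallel.
    return [2 * x for x in a]

def prefix_sum(a):
    # Loop-carried dependence: out[i] depends on out[i-1], so a naive
    # iteration-by-iteration parallelization would compute wrong results.
    out, total = [], 0
    for x in a:
        total += x
        out.append(total)
    return out

print(scale([1, 2, 3]))       # [2, 4, 6]
print(prefix_sum([1, 2, 3]))  # [1, 3, 6]
```

[The first loop is what a compiler's subscript analysis proves safe to transform; the second needs a specialized parallel algorithm (e.g. a parallel scan), not a simple loop split.]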
* Re: Ada vs Fortran for scientific applications 2006-05-27 5:01 ` Nasser Abbasi @ 2006-05-27 7:36 ` Pascal Obry 2006-05-27 11:18 ` Björn Persson 1 sibling, 0 replies; 314+ messages in thread From: Pascal Obry @ 2006-05-27 7:36 UTC (permalink / raw) To: Nasser Abbasi Nasser Abbasi a écrit : > I also found this interesting note about some research done at IBM for > paralalizing Ada for numerical work > > "Parallelism in scientific applications can most often be found at the loop > level. Although Ada supports parallelism via the task construct, its > coarseness renders it unsuitable for this light-weight parallelism." And this is talking only about a small part of the total application. In some applications I know of where OpenMP is used for parallel computing, the gain is 20% or 30%. Why? Because most of the application is not doing vector computing! That is often only a small part of an application. The application needs to read data, prepare it, do some computation, eventually communicate with some other applications, do some more computations... write the data, do some 2D/3D display... At this level of parallelism Ada tasking is a really nice solution. The fork/join model of OpenMP is not that efficient and certainly not a general-purpose model. We are no longer in a world of vector-oriented architectures and vectorizing compilers (Cray, Fujitsu) but in one of multi-processor or dual-core clusters/grids. I'm not sure IBM would say the same thing today... Pascal. -- --|------------------------------------------------------ --| Pascal Obry Team-Ada Member --| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE --|------------------------------------------------------ --| http://www.obry.net --| "The best way to travel is by means of imagination" --| --| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595 ^ permalink raw reply [flat|nested] 314+ messages in thread
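[The modest 20-30% gains described above are what Amdahl's law predicts when only a small fraction of the runtime is parallelizable loop work: with fraction p parallelized over n processors, the overall speedup is at most 1 / ((1 - p) + p / n). A back-of-the-envelope sketch, in Python and purely illustrative:]

```python
def amdahl_speedup(p, n):
    """Overall speedup when fraction p of the runtime parallelizes over n CPUs."""
    return 1.0 / ((1.0 - p) + p / n)

# If only 30% of an application's runtime is vectorizable loop work:
print(amdahl_speedup(0.30, 4))      # ~1.29x on 4 CPUs, i.e. a ~29% gain
print(amdahl_speedup(0.30, 10**6))  # ~1.43x even with effectively unlimited CPUs
```

[This is why parallelizing the I/O, data preparation, and communication phases with coarser-grained tasking can matter as much as vectorizing the inner loops.]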
* Re: Ada vs Fortran for scientific applications 2006-05-27 5:01 ` Nasser Abbasi 2006-05-27 7:36 ` Pascal Obry @ 2006-05-27 11:18 ` Björn Persson 1 sibling, 0 replies; 314+ messages in thread From: Björn Persson @ 2006-05-27 11:18 UTC (permalink / raw) Nasser Abbasi skrev: > Is it possible to do some standard tests to compare speed of Ada vs Fortran > specifically for numerical work? Well, there are the Computer Language Shootout Benchmarks. You can for example compare G95 to Gnat in Debian on AMD or in Gentoo on Intel: http://shootout.alioth.debian.org/debian/benchmark.php?test=all&lang=g95&lang2=gnat http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=g95&lang2=gnat (The Debian side also has a Fortran compiler from Intel.) I don't know how relevant those benchmarks are to heavy number crunching, but you could always write some new benchmarks and try to get them accepted. http://shootout.alioth.debian.org/gp4/faq.php?#newbench -- Björn Persson PGP key A88682FD omb jor ers @sv ge. r o.b n.p son eri nu ^ permalink raw reply [flat|nested] 314+ messages in thread
end of thread, other threads:[~2006-11-21 18:39 UTC | newest] Thread overview: 314+ messages (download: mbox.gz / follow: Atom feed)