From: helink@sandia.gov (Hamilton Link)
Subject: Re: C is 'better' than Ada because...
Newsgroups: comp.lang.ada,comp.lang.c
Organization: Sandia National Labs
Date: 1996/07/18
References: <31e02c32.342948604@netline-fddi.jpl.nasa.gov> <4rr961$hdk@btmpjg.god.bel.alcatel.be>

Going back to the bit about Ada vs. assembly -- Kevin says (to
paraphrase; pardon me if it's not completely accurate) that

  1) programming well is just as easy in assembly as in Ada, and
  2) the lines of code produced per programmer per day are a constant.

Now, I realize that he's talking about programming in C with small
amounts of assembly to optimize the really small, time-critical
sections, but even so I think I have reason to be just a tiny bit
dubious.

When I program, I sit at my computer and punch in Ada for days at a
time (pausing briefly to doze off). Before I program, however, I do
research on what algorithms and data structures would best suit my
purpose. A good algorithm written in whatever language you want, with a
good compiler, will be almost as fast as any other implementation of
the same algorithm, with very few exceptions and a very small spread in
run times. If you don't believe me, read "The Zen of Assembly
Language". The author makes the same point, and this is coming from a
really awesome assembly coder.

But for this very slight percentage speedup, what has been sacrificed?
Let's make a list:

  Readability -- shot to hell and flushed down the toilet, for the vast
    majority of cases

  Portability -- unless you're fortunate enough to be porting between
    virtually identical machines, you're going to have to totally
    rewrite your code (or -- shudder -- someone else is)

  Modifiability -- want to change a BST into a balanced red/black BST?
    You're once again going to have to start over, rather than modify
    your data structure and add a little to your code (there's a sketch
    of what I mean after my sig)

I personally don't think the sacrifices are worth the extra 10%
speedup, unless you're really hurting for it.

The other thing that really got me was that Kevin prefers spectacular
failures to mild ones. I prefer OS/2 to DOS because when something goes
wrong I can kill the process rather than reboot my machine. Why?
Because OS/2 is a more advanced design. Likewise, I like Ada's
exception-handling mechanism (which even in the worst case lets me exit
gracefully) over the random core dumps that gcc seems to be so fond of.

just my 2 cents,
hamilton
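
P.S. To make the modifiability point concrete, here's a rough Ada
sketch of the kind of data-structure change I mean when going from a
plain BST to a red/black one -- the type names and the Integer key are
placeholders made up for illustration, not code from any real project:

   type Color_Kind is (Red, Black);

   type Node;
   type Node_Access is access Node;

   type Node is record
      Key         : Integer;
      Left, Right : Node_Access;
      Color       : Color_Kind := Red;  -- new field for red/black balancing
   end record;

In Ada, most of the tree code survives a change like this (you add the
rebalancing routines); in hand-written assembly, every routine that
touches the node layout has to be revisited.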
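
P.P.S. And on the exception-handling point, a minimal sketch (the
procedure and exception names are made up) of how even an unexpected
failure can be caught and turned into a graceful exit:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Graceful_Exit is
      Something_Broke : exception;  -- stand-in for whatever actually fails
   begin
      raise Something_Broke;        -- pretend the worst has happened
   exception
      when others =>
         -- even in the worst case we get a chance to clean up
         Put_Line ("unexpected error -- shutting down cleanly");
   end Graceful_Exit;

That's a better fate than a core dump, in my book.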