From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00 autolearn=ham
	autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: fc89c,97188312486d4578
X-Google-Attributes: gidfc89c,public
X-Google-Thread: 109fba,baaf5f793d03d420
X-Google-Attributes: gid109fba,public
X-Google-Thread: 1014db,6154de2e240de72a
X-Google-Attributes: gid1014db,public
X-Google-Thread: 103376,97188312486d4578
X-Google-Attributes: gid103376,public
From: sam0001@ibm.net (Sam B. Siegel)
Subject: Re: What's the best language to start with? [was: Re: Should I learn C or Pascal?]
Date: 1996/08/18
Message-ID: <4v62h3$33c4@news-s01.ny.us.ibm.net>
X-Deja-AN: 174838301
References: <31FBC584.4188@ivic.qc.ca> <01bb8342$88cc6f40$32ee6fcf@timhome2>
	<4u7grn$eb0@news1.mnsinc.com> <01bb83ad$29c3cfa0$87ee6fce@timpent.airshields.com>
	<4u89c4$p7p@solutions.solon.com> <01bb83f5$923391e0$87ee6fce@timpent.airshields.com>
	<01bb8534$b2718bc0$87ee6fce@timpent.airshields.com>
Newsgroups: comp.lang.c,comp.lang.c++,comp.unix.programmer,comp.lang.ada
Date: 1996-08-18T00:00:00+00:00
List-Id: 

I have to agree with Tim on this one. A good working knowledge of
assembler and machine architecture makes it painfully obvious how
memory and data are manipulated.

Sam

"Tim Behrendsen" wrote:

>Dan Pop wrote in article
>...
>> In <01bb83f5$923391e0$87ee6fce@timpent.airshields.com> "Tim Behrendsen"
>> writes:
>>
>> >The problem is that we *can't* think purely abstractly,
>> >otherwise we end up with slow crap code.
>>
>> Care to provide some concrete examples?
>
>Look at the code-bloated and slow-software world we live in,
>particularly on desktop platforms. I think this is caused by
>people not truly understanding what's *really* going on.
>
>For example, look at OOP. Very naive implementations of OOP
>used a huge amount of dynamic memory allocation, which can cause
>severe performance problems. That's why I don't use C++ for my
>products; to do it right you have to do a very careful analysis
>of how your classes are going to fit together efficiently.
>
>I've spoken to enough people who have had C++ disasters to
>convince me that the more abstraction there is, the more
>danger there is of inefficient code. This shouldn't be that
>hard to believe; any time you abstract away details, you are
>giving up knowledge of what is going to be efficient.
>
>I alluded to this in another post, but a good example is Motif
>and X11. A programmer who only understands Motif, but does not
>understand X11, is going to write slow crap, period.
>
>> >It is simply not
>> >possible to ignore the way code is structured, and completely
>> >depend on the compiler to save us.
>>
>> This doesn't make any sense to me. Could you be a little bit more
>> explicit?
>>
>> The compiler definitely won't save my ass if I choose to use bubblesort
>> instead of quicksort on a large dataset, but the selection between the
>> two algorithms is made based on an abstraction (algorithm analysis), not
>> on how the compiler generates code for one or the other. It's very likely
>> that quicksort will be better, no matter the compiler and the underlying
>> platform.
>>
>> Once you put micro-optimizations based on knowledge about the
>> compiler and/or hardware into the code, you impair both the
>> readability/maintainability/portability of the code and the opportunities
>> of another compiler, on another platform, to generate optimal code.
>> There are situations when this _has_ to be done, but they are isolated
>> exceptions, not the norm.
>
>I gave this example in another post, but nobody responded. I think
>it was too good. :-) I'll try again ...
>
>I can prove that your statement is wrong.
>
>Let's say I have the perfect optimizer that takes C code and
>produces the absolute most efficient translation possible. Given
>that's the case, it won't degrade an O(n) algorithm into an O(n^2)
>algorithm.
>
>Now, that should mean that I can order my C code into any
>algorithmically valid sequence and end up with exactly the same
>running time, because the optimizer is always perfect.
>
>Now, we know that this does not reflect the real world. The
>question is, how does a programmer learn to tell the efficient
>implementations, which the optimizer can deal with effectively,
>from the boneheaded ones?
>
>Here's an example:
>
>int a[50000], b[50000], c[50000], d[50000], e[50000];
>
>void test1()
>{
>    int i, j;
>
>    for (j = 0; j < 10; ++j) {
>        for (i = 0; i < 50000; ++i) {
>            ++a[i]; ++b[i]; ++c[i]; ++d[i]; ++e[i];
>        }
>    }
>}
>
>void test2()
>{
>    int i, j;
>
>    for (j = 0; j < 10; ++j) {
>        for (i = 0; i < 50000; ++i) ++a[i];
>        for (i = 0; i < 50000; ++i) ++b[i];
>        for (i = 0; i < 50000; ++i) ++c[i];
>        for (i = 0; i < 50000; ++i) ++d[i];
>        for (i = 0; i < 50000; ++i) ++e[i];
>    }
>}
>
>On my AIX system, test1 runs in 2.47 seconds, and test2
>runs in 1.95 seconds using maximum optimization (-O3). The
>reason I knew the second would be faster is that I know
>to limit the amount of context the optimizer has
>to deal with in the inner loops, and I know to keep memory
>accesses localized.
>
>Now I submit that if I showed the average C programmer
>both programs, they would guess that test1 is faster because
>it has "less code", and that is where abstraction,
>ignorance, and naivete begin to hurt.
>
>-- Tim Behrendsen (tim@airshields.com)
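
For anyone who wants to try the comparison on their own machine, below is
one way to wrap the two routines in a self-contained timing harness. The
harness itself (the clock()-based time_it() helper, the REPS count, and
main()) is an addition for illustration and was not part of the original
post; the absolute numbers will differ from the AIX figures quoted above,
and which version wins depends on the compiler, the optimization flags,
and the cache/TLB organization of the machine.

/*
 * Sketch of a timing harness around the test1/test2 example.
 * Only the arrays and the two test functions come from the post;
 * time_it(), REPS, and main() are illustrative additions.
 */
#include <stdio.h>
#include <time.h>

#define REPS 100   /* arbitrary repetition count; raise it if the times are tiny */

int a[50000], b[50000], c[50000], d[50000], e[50000];

/* One fused loop touching all five arrays per iteration. */
void test1(void)
{
    int i, j;
    for (j = 0; j < 10; ++j)
        for (i = 0; i < 50000; ++i) {
            ++a[i]; ++b[i]; ++c[i]; ++d[i]; ++e[i];
        }
}

/* The same work split into five separate loops (loop fission),
 * so each pass streams through only one array at a time. */
void test2(void)
{
    int i, j;
    for (j = 0; j < 10; ++j) {
        for (i = 0; i < 50000; ++i) ++a[i];
        for (i = 0; i < 50000; ++i) ++b[i];
        for (i = 0; i < 50000; ++i) ++c[i];
        for (i = 0; i < 50000; ++i) ++d[i];
        for (i = 0; i < 50000; ++i) ++e[i];
    }
}

/* Run fn() REPS times and return the elapsed CPU time in seconds. */
static double time_it(void (*fn)(void))
{
    clock_t start = clock();
    int r;
    for (r = 0; r < REPS; ++r)
        fn();
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("test1: %.2f s\n", time_it(test1));
    printf("test2: %.2f s\n", time_it(test2));
    return 0;
}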