Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!seismo!ut-sally!utah-cs!shebs
From: shebs@utah-cs.UUCP (Stanley Shebs)
Newsgroups: comp.lang.ada,comp.lang.misc
Subject: Re: Software Reuse -- do we really know what it is ?
Message-ID: <4661@utah-cs.UUCP>
Date: Tue, 23-Jun-87 13:28:26 EDT
Article-I.D.: utah-cs.4661
Posted: Tue Jun 23 13:28:26 1987
Date-Received: Thu, 25-Jun-87 01:54:59 EDT
References: <8706160502.AA26398@ucbvax.Berkeley.EDU> <371@dcl-csvax.comp.lancs.ac.uk> <374@sol.ARPA> <4658@utah-cs.UUCP> <378@sol.ARPA>
Reply-To: shebs@utah-cs.UUCP (Stanley Shebs)
Distribution: world
Organization: PASS Research Group
Xref: mnetor comp.lang.ada:406 comp.lang.misc:464

In article <378@sol.ARPA> crowl@rochester.UUCP (Lawrence Crowl) writes:
>>>Clearly, we need more concentration on WHAT, but we should not abandon the
>]                           ^^^^^^^^^^^^^^^^^^^^^^^^^
>>>efficiency of HOW.
>]              ^^^^^^^^^^
>]
>]QED!
>
>Let me clarify my position.  I can specify a sort as a set of predicates and
>let the system figure out how to satisfy those predicates.  This is the WHAT
>approach used in languages such as Prolog.  Another approach is to write an
>algorithm for sorting, e.g. quicksort.  This is the HOW approach used in
>languages such as Ada.  Yes, the packaging of the sort will result in some
>loss of efficiency.  However, it will be substantially faster than the WHAT
>approach.  I am advocating an approach in which a package interface describes
>WHAT and the implementation of the package describes HOW.
>
>Because I wish to keep the macro-efficiency of algorithmic languages, do not
>accuse me of wishing to keep the micro-efficiency of hand-tuned assembler.
I wasn't accusing anybody of anything, just pointing out that like any
other culture, the software culture is very pervasive, and people in it
(us) don't always recognize when we're behaving according to those
cultural norms.  For instance, if I were to post some inefficient sort
to the net, there are maybe ten computer people in the world who could
look at it and not get at least a twinge of "Gee, why not use a better
algorithm?"  As I said, it's part of the culture.  It's not completely
bad, but it frequently dominates our thinking.  Your typical serious C
hacker will get unhappy about the overhead of a function call, let alone
the extra functions involved in "packaging"!  After several years of
extensive Lisp hacking, I've managed to overcome my resistance to
defining lots of little data abstraction functions, and only feel guilty
once in a while :-).  An additional obstacle to using the "packaging" is
that in the past, some systems (such as Simula and Smalltalk) were
orders of magnitude slower than procedural languages, and there is
perhaps a lingering perception that data abstraction costs a lot in
performance.

Requiring both WHAT and HOW is a pretty controversial thing to do.  The
biggest problem to me is the potential for inconsistency, since many
things will be stated twice, but in different terms.  That's the
rationale for making the computer figure out the HOW itself, but that is
an elusive and perhaps unreachable goal.  (Doesn't stop me from trying,
though!)

>Programmers [do] this sort of program combination many times a day in the Unix
>shell.  There ARE approaches to software reuse that CAN work.  We need to
>provide this kind of capability within a language.

Unix is a counterexample; that's why I didn't mention it :-).  A closer
look is worthwhile.  Unix tools offer basically one interface - a stream
of bytes.
More complicated interfaces, such as words, nroff commands, or source
code, are less well supported; it's unusual to be able to pass C source
through many filters in the way that data files can be passed.  Also, at
least in the groups of Unix users I've observed, pipes and filtering
sequences longer than about 3 stages are relatively uncommon, and are
about as difficult for many people to compose as a C program.  Of
course, my sample is not scientific, and I'm sure there are lots of Unix
hacks who will be glad to tell me all about their 15-stage pipe
processes!  What other *working* approaches are there?

>There are technical problems.  For instance, how do you ensure that the
>parameters to a generic package are appropriate?

Use something higher-level than Ada?  This seems like an issue only for
some languages, not a major obstacle to reuse.

>Performance will always be a goal.  However, it must be balanced against cost.
>Most programming done today is in a high-level language.  In the early sixties,
>most programming was done in assembler.  So, the software attitude has changed.

This is how we language people justify our salaries.  On the other hand,
I've been wondering if the change was caused by other factors not
directly related to the level of the language, such as sites getting a
wider variety of hardware (thus making non-portable assembly language
impractical).  C is very popular for new applications nowadays, but many
consider C to be a "portable assembly language".

>Modular code also allows changes in representation that can lead to orders of
>magnitude performance improvements.  Non-modular code strongly inhibits such
>representational changes.  In short, the micro-efficiency of non-modular code
>leads to the macro-INefficiency of poor algorithms and representations.

I agree 100% (especially since my thesis relates to this!).  BUT,
flaming about the wonders of data abstraction is not going to change
anybody's programming practices.
Demonstrations are much more convincing, although a DoD project to
demonstrate reusability is probably much less influential than Jon
Bentley's CACM column.  Many recent C books have been introducing
substantial libraries for reuse, although the books' stated policies on
copying are not consistent.  More of this sort of thing would be great.

>Again, let me clarify.  I meant that the reusable software would be written by
>software houses and sold to companies which write applications.  For instance,
>Reusable Software Inc. sells a sort package to Farm Software Inc., which uses
>it in its combine scheduling program.  Farm Software Inc. need not divulge its
>algorithms or its use of packages written by Reusable Software Inc.

I hadn't thought about that.  Here's my reaction as a Farm Software Inc.
programmer - "How do I know if their stuff is good enough!? ... But the
interface isn't right! ... How can we fix the bugs without sources!"
Reuse can't happen until the seller *guarantees* critical
characteristics - they have to be able to say that their sort package
will sort M data values in N seconds, and that bugs will be fixed within
a certain number of days of being discovered, and so forth.  This is old
hat for other kinds of engineers, but software companies these days will
only promise that there is data on the disk somewhere :-(.  (I'm sure
there are places that will certify performance - anybody got good
examples?)

>[...] Modular code will cost substantially less in the long run.

I sort of believe that too, but it's hard to substantiate.  Anecdotes
are easy to come by, but this is a field where cost depends primarily on
programming skill, and we don't have a precise measure of that skill.
The other cost win is when a module can be used in some other program,
but the extra cost of making code reusable can only be justified if your
company has another program to use it in!
BTW, at Boeing I proposed an engineer-maintained library of
aerospace-related subroutines, so people wouldn't have to continually
reinvent spherical geometry formulas, Zakatov equations, and the like.
But management rejected the idea, because it would be "too difficult to
coordinate" (it was supposed to cut across individual projects).  No
mention of costs...

People interested in promoting software reuse should look at how expert
systems developed as a field.  Although the first ones appeared in the
mid-70s, it wasn't until 1980 that R1, the expert system that configured
Vaxen for DEC, was published.  DEC claimed to have saved millions of
dollars in the first year of using it.  From then on, the expert system
business grew exponentially, although it's not clear whether anybody
else has ever profited from an expert system so dramatically...

>  Lawrence Crowl		716-275-5766	University of Rochester

							stan shebs
							shebs@cs.utah.edu