From: manis@cs.ubc.ca (Vincent Manis)
Newsgroups: comp.edu,comp.lang.ada,comp.lang.misc
Subject: Re: Teaching Concurrency
Message-ID: <6218@ubc-cs.UUCP>
Date: 11 Jan 90 19:20:41 GMT
References: <7588@hubcap.clemson.edu> <602@agcsun.UUCP>
Sender: news@cs.ubc.ca
Reply-To: manis@cs.ubc.ca (Vincent Manis)
Organization: The Invisible City of Kitezh

In article <602@agcsun.UUCP> marks@agcsun.UUCP (Mark Shepherd) writes:
>Although these programming assignments evidently had some valuable lessons
>on multi-tasking, I feel that they may also inadvertently teach a
>less desirable lesson: that it is OK to use concurrent tasks for things
>that could be done much more simply with subroutines.
>
>In real life (whether industrial, business, scientific research, or whatever),
>multi-tasking applications carry substantial penalties
>in complexity and resource utilization, and should only be used when
>appropriate. (Of course, NOT using tasking when it should be used has
>equally dire consequences).

Ah, but this assumes an environment in which parallel execution really
is less `efficient' than serial execution. That is true when one is
running on a single CPU on which parallelism is merely simulated, but
what about highly parallel architectures, or even networks? It seems
clear that Mark had a particular processor model in mind when he wrote
his article. Unfortunately, that model is becoming ever less common,
with the advent both of distributed network architectures such as NCS
and ONC and of machines such as the Connection Machine. (A small
sketch at the end of this article shows the same piece of work written
once as an ordinary subroutine call and once as a task.)

In any case, what `efficiency' metric is one using? I have seen
commercial programs in which parallelism is ever so cleverly avoided
by the use of asynchronous system traps and the like, coupled with
knowledge of how long each operation is expected to take. David Parnas
even tells of a real-time control system (I think it was one of the
Navy plane systems he consulted on, but I'm not sure) in which the
code for the various tasks was interleaved at assembly time, with the
programmer expected to know when to switch instruction streams. Surely
readability, maintainability, reliability, and so on are at least as
important as raw speed. (Note that I am *not* claiming that
parallelism always enhances readability or reliability; just that it
sometimes does.)

All of this leads to two observations. First, rather than teaching
students to do particular things `because it's efficient', we ought to
teach them to look at the underlying processor architecture, as well
as the other criteria that matter in the project, before selecting
particular design techniques. Second, don't expect `efficiency' to
stay the same for any length of time; my favourite quotation from
IBM's original PL/I manual (ca. 1966) is `Do not use procedures; they
are expensive.'

Disclaimer: I have not read the article Bill Wolfe referred to, and
therefore can't comment on the appropriateness of the original
examples. Whenever I see the word `Ada', my eyes glaze over and I turn
the page.
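
For concreteness, here is a minimal sketch of the choice being argued
about, written in Ada only because that is presumably the language of
the original assignments (the names Compare, Do_Work, and Worker are
invented purely for illustration). The same piece of work is requested
once as an ordinary subroutine call and once from a task.

    with Text_IO;

    procedure Compare is

       procedure Do_Work (Label : in String) is
       begin
          Text_IO.Put_Line ("working: " & Label);
       end Do_Work;

       --  A task that performs the same work concurrently with its parent.
       task Worker;

       task body Worker is
       begin
          Do_Work ("from the task");
       end Worker;

    begin
       --  Worker is activated just before this statement part starts, so
       --  the two calls to Do_Work may run in either order, or genuinely
       --  in parallel if more than one processor is available.
       Do_Work ("from the main program");
    end Compare;

On a single processor the task version simply adds scheduling
overhead; on a multiprocessor or a network the two calls can actually
overlap, which is exactly why no single `efficiency' verdict holds.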
--
\    Vincent Manis                     "There is no law that vulgarity and
 \   Department of Computer Science     literary excellence cannot coexist."
 /\  University of British Columbia                     -- A. Trevor Hodge
/  \ Vancouver, BC, Canada V6T 1W5     (604) 228-2394