From: "Marc A. Criley"
Newsgroups: comp.lang.ada
Subject: Re: Deadlock resolution
Date: Tue, 27 Jul 2004 12:31:31 -0500
Message-ID: <2mnhrqFp0onrU1@uni-berlin.de>
References: <2mkhd9Fnpr15U1@uni-berlin.de>

"Nick Roberts" wrote:

> Right, but can I presume you were pleased on those occasions when the
> RTS did detect the deadlock?

Yeah, but the RTS (Alsys Ada) detected deadlocks only a handful of times throughout the whole development effort, because it would catch only fairly obvious occurrences, such as when the sequence of statements in the "select block" invoked one of the same task's other entries.

Far more common was a simple freeze-up. Since this was an event-driven distributed system, some event-handling chains could span multiple processes and multiple tasks within a process. Occasionally, due to a bug, the chain would ultimately circle back and invoke another entry of the originating task. To that task it just looked like some other task trying to rendezvous with it, so the entry call was blocked--which of course backed up and froze the whole chain.

> I suppose it is a question of sureness.
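A minimal Ada sketch (with hypothetical task and entry names) of the "fairly obvious" case described above: within the rendezvous for one entry, the task calls another of its own entries. No other task can ever accept that call, so the caller blocks forever; an RTS with deadlock detection can flag exactly this kind of occurrence.

```ada
--  Sketch of the self-deadlock an RTS can catch.  Names are
--  illustrative, not from the actual system under discussion.
procedure Self_Deadlock is

   task Server is
      entry Start;
      entry Stop;
   end Server;

   task body Server is
   begin
      select
         accept Start do
            --  Deadlock: Server queues on its own Stop entry, but
            --  Server itself is the only task that could accept it,
            --  and it is stuck right here.
            Server.Stop;
         end Start;
      or
         accept Stop;
      end select;
   end Server;

begin
   Server.Start;  --  The caller freezes inside this rendezvous too.
end Self_Deadlock;
```

The buggy event chains were the distributed version of the same thing: the entry call circling back arrived from another process, so no single RTS could see the cycle.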
> Would I be wrong in suggesting that for some kinds of software it
> won't be possible to be 100% sure that deadlock cannot occur?

I suppose that for the general case of all possible usable concurrent architectures the answer would be "no"; however, by suitably constraining the architecture and design it is certainly possible to eliminate the possibility of deadlock. Coarse-grained concurrency is one of the factors in doing just that.

> To my mind, parallelism is unimportant except for software which is
> interacting with the outside world (either human beings or peripheral
> devices, or an external network). Of course, software which is
> interacting with software that is interacting with the outside world
> can get caught up in this, and so much of an overall system can be.

Go down enough levels of indirection and interfacing and pretty much all software interacts with the outside world, so questions of parallelism/concurrency can (and probably ought to) be considered in the development of most any software system.

> I'm all for eliminating unnecessary parallelism. It is certainly
> inefficient and it usually makes a piece of software more complex
> than it needs to be.

Concurrency for the sake of using concurrency is certainly aggravating, and I've walked into projects where there was no other excuse for its presence. But the _improper_ use of any particular architecture, design, or language feature is always aggravating, inefficient, and a source of unnecessary complexity. Concurrency is no more inherently problematic than inheritance or polymorphism. They're all sexy techniques that can be fun and effective to use, and can get you into a world of hurt when misapplied.

> But I think parallelism is often necessary or desirable. Nothing
> comes for free, and the price to pay for parallelism, as you rightly
> point out, is the increased danger of deadlocks.

Actually, I don't believe that was something I pointed out.
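As a sketch of the kind of design constraint I mean (again with hypothetical names): if entry calls flow strictly one way--clients into a coarse-grained server task, never back out--then no cycle of blocked rendezvous can form, and this structure cannot deadlock.

```ada
with Ada.Text_IO;

--  One-way entry-call flow: clients call Event_Handler, and the
--  handler's accept bodies never call back into any client, so a
--  rendezvous cycle is impossible by construction.
procedure One_Way_Flow is

   task Event_Handler is
      entry Post (Event : Integer);
      entry Shut_Down;
   end Event_Handler;

   task body Event_Handler is
      Done : Boolean := False;
   begin
      while not Done loop
         select
            accept Post (Event : Integer) do
               Ada.Text_IO.Put_Line
                 ("Handling event" & Integer'Image (Event));
            end Post;
         or
            accept Shut_Down do
               Done := True;
            end Shut_Down;
         end select;
      end loop;
   end Event_Handler;

begin
   for I in 1 .. 3 loop
      Event_Handler.Post (I);
   end loop;
   Event_Handler.Shut_Down;
end One_Way_Flow;
```

Verifying that property is a matter of inspecting the static call/entry graph for cycles, which a suitably constrained architecture keeps tractable.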
Concurrency is subject to the risk of deadlock in the same sense that numerical analysis is subject to the risk of division-by-zero. I would argue that concurrency, when correctly and appropriately employed, can be done with zero risk of deadlock.

> For safety critical (or otherwise critical) software, one might
> decide that the danger is too great, and opt for the safety of a
> non parallel (or less parallel) design.

I'm an unabashed fan of concurrency, especially with Ada tasking; the thing I keep harping on is "appropriate use". Coarse-grained chunking of functionality into concurrent entities, and well-defined interfaces (entries), lead to effective implementations. In other words, the software engineering 101 concepts of "highly cohesive and loosely coupled".

Marc A. Criley
McKae Technologies
www.mckae.com