Date: 25 Nov 91 04:04:23 GMT
From: cis.ohio-state.edu!pacific.mps.ohio-state.edu!linac!uwm.edu!ogicse!milton!mfeldman@ucbvax.Berkeley.EDU (Michael Feldman)
Subject: Re: Ada Tasking problem
Message-ID: <1991Nov25.040423.4383@milton.u.washington.edu>

In article <25883@sdcc6.ucsd.edu> cs171wag@sdcc5.ucsd.edu (Bob Kitzberger) writes:
>mfeldman@milton.u.washington.edu (Michael Feldman) writes:
>
>>Once your main program called the task's entry, causing the
>>task to get control, it simply kept it, finished its loop,
>>then gave the CPU back to MAIN. The standard Ada recommendation
>>to prevent this from happening is to put a brief DELAY in each loop
>>iteration, forcing (in your case) MAIN and the task to ping-pong
>>control. You need to add, say, DELAY 0.1 just before both END LOOP
>>statements.
>>
>>This is a portable solution: it forces the processor
>>to be shared even in the absence of time-slicing.
>
>Many Ada runtimes will 'special case' a delay of 0.0, resulting in an
>immediate context switch to another task of equal priority (i.e. round-robin
>to the next available task in the same priority slot in the ready queue.)
>Of course, this is not very portable, but it doesn't have the overhead of
>delay queue insertion and expiry. Your tradeoff mileage may vary ;-)
>
You bet it's not portable. There are (or were, at least) runtimes that
optimized DELAY 0.0 away altogether, "no-op"-ed it, so there'd be no
context switch at all. That would, of course, defeat the purpose. Are there
any runtimes that still do that? That's where the advice to use a "small
delay" rather than a "zero delay" originated, if my recollection of the
history is correct.

Tell the truth, Bob - just how much overhead is involved in the short delay
described above (on a typical processor)? Is this micro-efficiency concern
big enough to trade portability for?

I'd feel more comfortable if there were an AI (Ada Interpretation) that
_recommended_ the optimization you describe, or if it were widely enough
adopted by implementers to be "quasi-portable". Is this the case?

Mike
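
P.S. For anyone following along at home, here is a minimal sketch of the
idiom under discussion. The names (Ping_Pong, Worker) are mine, invented
for illustration. Without time-slicing, the DELAY 0.1 just before each
END LOOP is what forces MAIN and the task to alternate:

    with Text_IO;
    procedure Ping_Pong is

       task Worker is
          entry Start;
       end Worker;

       task body Worker is
       begin
          accept Start;               -- rendezvous with the main program
          for I in 1 .. 5 loop
             Text_IO.Put_Line ("Worker iteration" & Integer'Image (I));
             delay 0.1;               -- brief delay: give the CPU back
          end loop;
       end Worker;

    begin
       Worker.Start;                  -- after this, Worker holds the CPU
       for I in 1 .. 5 loop
          Text_IO.Put_Line ("Main iteration" & Integer'Image (I));
          delay 0.1;                  -- same trick in MAIN's loop
       end loop;
    end Ping_Pong;

Replace each 0.1 with 0.0 and you're relying on exactly the special-casing
Bob describes - an immediate switch on some runtimes, a no-op on others.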