comp.lang.ada
* Real time and concurrency (was: What is real-time?)
@ 1992-10-24 13:01 pipex!warwick!uknet!yorkohm!minster!mjl-b
From: pipex!warwick!uknet!yorkohm!minster!mjl-b @ 1992-10-24 13:01 UTC


In article <1992Oct22.153813.15179@mksol.dseg.ti.com> mccall@mksol.dseg.ti.com 
(fred j mccall 575-3539) writes:
>In <719335132.5132@minster.york.ac.uk> mjl-b@minster.york.ac.uk writes:
>>The Burns and Wellings book writes off C fairly early on, because it has no
>>features for concurrency. Real-time and concurrency go hand in hand, and the
>>authors provide a fairly convincing argument.
>
>How many languages (other than Ada) actually provide support for
>concurrency (as opposed to getting it from the OS)?

The book covers Ada, Modula, Modula-2 and occam, all of which provide
concurrency features. Other concurrent languages get a mention, but the
above are the "main" languages used.

>Does this mean
>that there was no such thing as real time software until Ada was
>available? 

Of course not. My point is that a concurrent language makes the expression
and maintenance of real-time programs far easier. Real-time software is most
often inherently concurrent, and the job becomes much harder when those
concurrent tasks have to be forced into a sequential form.

Not only do you lose the inherent structure of the design in the
implementation, but the expression of other real-time concerns, such as
deadline constraints on a particular task, becomes much harder.

For example, suppose you have three tasks that must run concurrently: P, Q
and R. With a concurrent language, you simply map each to the language's
unit of concurrency. In a sequential language, you must split P, Q and R
into, say, three pieces each, so that you can interleave their execution:

P1;
Q1;
R1;
P2;
Q2;
R2;
P3;
Q3;
R3;

Now try to express the constraint that P must complete its execution within
2.0 seconds of when it started... not impossible, but the program is already
difficult to understand and maintain.
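
In a concurrent language, by contrast, the structure survives intact. A
minimal Ada sketch (the task names and null bodies are placeholders for
P, Q and R's real work, not anything from the book):

procedure Control is

   task P;
   task Q;
   task R;

   task body P is
   begin
      -- P's work goes here; its 2.0-second deadline stays a local
      -- property of P, instead of being smeared across an interleaving
      null;
   end P;

   task body Q is
   begin
      null;  -- Q's work
   end Q;

   task body R is
   begin
      null;  -- R's work
   end R;

begin
   null;  -- P, Q and R are activated here and run concurrently
end Control;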

The trade-off is between manually interleaving P, Q and R at the coding
stage and letting the language do it at run time. I know which one I'd
rather maintain :-)

>An awful lot of real-time software has been written in
>LANGUAGES that provide no support for concurrency (like assembly).

Yes, it has. But how easy was it to write, debug and maintain?

>There seems to be something a bit off about this line of reasoning.

The reasoning is better explained in Burns and Wellings' book -- read it and
then make your judgement.

>Fred.McCall@dseg.ti.com - I don't speak for others and they don't speak for me.

Mat

| Mathew Lodge                      | "A conversation with you, Baldrick,     |
| mjl-b@minster.york.ac.uk          |  and somehow death loses its sting..."  |
| Langwith College, Uni of York, UK |  -- Blackadder II                       |


* Re: Real time and concurrency (was: What is real-time?)
@ 1992-10-27  4:06 Michael Feldman
From: Michael Feldman @ 1992-10-27  4:06 UTC


In article <719931680.25559@minster.york.ac.uk> mjl-b@minster.york.ac.uk
(Mathew Lodge) writes:
>>
>>How many languages (other than Ada) actually provide support for
>>concurrency (as opposed to getting it from the OS)?
>
>The book covers Ada, Modula, Modula-2 and occam, all of which provide
>concurrency features. Other concurrent languages get a mention, but the
>above are the "main" languages used.
>
>>Does this mean
>>that there was no such thing as real time software until Ada was
>>available? 
>
>Of course not. My point is that a concurrent language makes the expression
>and maintenance of real-time programs far easier. Real-time software is most
>often inherently concurrent, and the job becomes much harder when those
>concurrent tasks have to be forced into a sequential form.
>
[good stuff deleted]

Not only was lots of real-time software written without tasking constructs
(indeed, still is), lots of good software has been written in Fortran
(with no record types or recursion), Basic (no procedure parameters,
except in certain dialects), and, indeed, assembler.

It is fundamental computer science that we can solve any computable
problem using only machine language; indeed, using only Turing machines
with binary alphabets. But why would we want to?

The point here is that over the decades we have added constructs to
languages in order to support application needs, particularly the
_human_ needs we describe as all the -ilities (maintainability,
readability, etc. etc.). Tasking is really no different from all
the other higher-level constructs in that regard. 

To the question "Does Ada really need tasking?" (or, better, "Does
real-time programming really need tasking?"), the only honest answer
is "of course not!" On the other hand, those of us who favor building
language support for things, instead of doing everything at
machine-language level, would argue that it sure helps.

That R/T folks find tasking "not fast enough" to meet their needs in
specific projects gives me a sense of deja vu. In the 70s, when
"structured programming" was the rage, using procedures to break up
programs - mainly to improve _human_ productivity - was often blasted 
as hopelessly academic, because compilers couldn't deliver the goods
efficiently. Remember those days?

In the end, compilers did a better job, computers sped up,
and hardware architects gave us stack machines to support procedure calls. 
Not many people would give strong arguments against procedures these days.

Undoubtedly there are some functional holes in the tasking model; some 
are plugged by Ada-9X. But the basic model is, IMHO, sound, and if we
actually use it, instead of just complaining about it all the time, our
compiler folks and machine architects will find more efficient ways
to support it. The Rational machine (on the hardware side) and POSIX
threads (on the software side) are examples of how this is already
happening.

Before rejecting this or that language feature on a project, I argue 
as I always have, with the following aphorisms (I didn't make these up):

1. It's easier to make a correct program fast than a fast program correct.

2. Fast enough is fast enough.

3. If it's not, optimize from the inside out. Find the 10% of the
   code that does the 90% of the work, optimize it till it's fast
   enough, and forget about the rest.

4. Measure (or at least analyze openmindedly) before deciding. 

5. You gotta do what you gotta do. But make sure you gotta do it.

Cheers -

Mike Feldman
------------------------------------------------------------------------
Michael B. Feldman
co-chair, SIGAda Education Committee

Professor, Dept. of Electrical Engineering and Computer Science
School of Engineering and Applied Science
The George Washington University
Washington, DC 20052 USA
(202) 994-5253 (voice)
(202) 994-5296 (fax)
mfeldman@seas.gwu.edu (Internet)

"Americans wants the fruits of patience -- and they want them now."
------------------------------------------------------------------------


* Re: Real time and concurrency (was: What is real-time?)
@ 1992-10-27 17:16 Larry Howard
From: Larry Howard @ 1992-10-27 17:16 UTC


In article <1992Oct27.040609.24215@seas.gwu.edu>, mfeldman@seas.gwu.edu
(Michael Feldman) writes:
|>
|> Before rejecting this or that language feature on a project, I argue 
|> as I always have, with the following aphorisms (I didn't make these up):
|> 
|> 1. It's easier to make a correct program fast than a fast program correct.
|> 
|> 2. Fast enough is fast enough.
|> 
|> 3. If it's not, optimize from the inside out. Find the 10% of the
|>    code that does the 90% of the work, optimize it till it's fast
|>    enough, and forget about the rest.
|> 
|> 4. Measure (or at least analyze openmindedly) before deciding. 
|> 
|> 5. You gotta do what you gotta do. But make sure you gotta do it.
|> 

While I don't disagree with what Dr. Feldman has said, I think his aphorisms
focus on "fast" at the expense of what really matters in real-time systems --
predictability.  In systems where the time when computations are performed is
as important as the results of the computations, performance optimization is
_not_ merely a matter of increasing the speed of certain computations.  Local
efficiency optimizations are _not_ local issues in real-time systems; they
relate directly to global scheduling decisions and to the analysis framework
used to predict the performance of the whole from attributes of the parts.

We have found design strategies that permit the selection of language
features as part of global design to be an effective way to make early
predictions of system performance.  We do this by understanding the global
design (architecture) in terms of a relatively few design elements which
embody global coordination decisions (communication, activation,
synchronization) in sets of mechanisms associated with the elements.  The
partitioning consists entirely of instances of these elements.  The mechanisms
each design element provides can be mapped to programming language features.
Instances of the elements inherit these decisions by means of software
templates associated with the elements.
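
To make that concrete, here is one purely illustrative shape such a
template could take in Ada -- the names (Periodic_Element, Agent,
Do_Work, Read_Sensors) are invented for this sketch, not our actual
templates.  The generic fixes the activation mechanism and leaves the
computation as a parameter:

with Calendar;
generic
   Period : in Duration;
   with procedure Do_Work;
package Periodic_Element is
   task Agent;
end Periodic_Element;

package body Periodic_Element is
   task body Agent is
      use Calendar;
      Next_Time : Time := Clock;
   begin
      loop
         Do_Work;                        -- instance-specific computation
         Next_Time := Next_Time + Period;
         delay Next_Time - Clock;        -- fixed-rate periodic activation
      end loop;
   end Agent;
end Periodic_Element;

A partitioning then consists of instantiations, e.g.

   package Sensor_Poll is new Periodic_Element (Period  => 0.1,
                                                Do_Work => Read_Sensors);

so that every instance inherits the coordination decisions embodied in
the template.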

We can analyze such a system early in its development by using specifications
of the design element instances to perform design simulation.  The latter
reconstructs the system as synthetic instances of design elements sized
according to their specifications.  The simulation is then studied on the
target virtual machine.  As we consider alternatives for mapping design
element mechanisms to language features in software templates, we can
investigate the run-time implications of these decisions in the context of the
global design.  Of course, this is an example of global optimization --
decisions by the few affecting the many.
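
For instance, a synthetic instance of a periodic element can be as
simple as a task that burns its specified execution time and then idles
out its period.  The numbers below are made-up specification values,
and the busy-wait merely stands in for the real load:

with Calendar;
procedure Design_Simulation is
   WCET   : constant Duration := 0.050;  -- from the element's spec form
   Period : constant Duration := 0.250;

   task Synthetic_P;

   task body Synthetic_P is
      use Calendar;
      Start : Time;
   begin
      loop
         Start := Clock;
         while Clock - Start < WCET loop
            null;                          -- consume the specified time
         end loop;
         delay Period - (Clock - Start);   -- idle out the rest of the period
      end loop;
   end Synthetic_P;

begin
   null;  -- further synthetic instances would be declared alongside
end Design_Simulation;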

We can analyze local optimizations at the global (system) level through
performance variables associated with the instances of the design elements.
Values for these variables (worst-case execution time, period, order,
coherence) are communicated to system designers using specification forms.  As
described above, these performance variables are used to size synthetic
instances of design elements for performance studies using design simulation.
In this way, the effects of local optimizations (as changes in timing) can be
related to the analysis of system run-time behavior.
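
In Ada terms, one could imagine a specification form reduced to a record
such as the following -- the field types are only my assumptions about
how these variables might be encoded:

   type Performance_Spec is
      record
         WCET      : Duration;  -- worst-case execution time
         Period    : Duration;
         Order     : Positive;  -- position in a required precedence ordering
         Coherence : Positive;  -- group of instances that must be scheduled
                                -- together (assumed encoding)
      end record;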

Finally, I support those who believe that when rejecting Ada tasking we should
be careful not to "throw the baby out with the bath water."  Lots of good work
(for example by the RMARTS project at SEI) has shown that preemptive,
priority-based scheduling with Ada-83 tasks can be used (within certain
constraints) predictably and efficiently for real-time systems, with
advantages over non-preemptive approaches.  The laggards will warm up to it.



 Larry Howard  (lph@sei.cmu.edu)                       |
 Software Engineering Institute, Carnegie Mellon Univ. | Vera pro gratiis.
 Pittsburgh, PA 15213-3890   (412) 268-6397/5857 (fax) | 


* Re: Real time and concurrency (was: What is real-time?)
@ 1992-10-29 16:55 cis.ohio-state.edu!zaphod.mps.ohio-state.edu!darwin.sura.net!jvnc.net!yal
From: cis.ohio-state.edu!zaphod.mps.ohio-state.edu!darwin.sura.net!jvnc.net!yal @ 1992-10-29 16:55 UTC


In <1992Oct27.040609.24215@seas.gwu.edu> mfeldman@seas.gwu.edu
(Michael Feldman) writes:

>The point here is that over the decades we have added constructs to
>languages in order to support application needs, particularly the
>_human_ needs we describe as all the -ilities (maintainability,
>readability, etc. etc.). Tasking is really no different from all
>the other higher-level constructs in that regard. 

This starts to become an issue of how much belongs in the language,
how much should be kept at the OS or 'environment' level, and how much
would be better served by outside tools that can evolve faster than a
language spec can.

>That R/T folks find tasking "not fast enough" to meet their needs in
>specific projects gives me a sense of deja vu. In the 70s, when
>"structured programming" was the rage, using procedures to break up
>programs - mainly to improve _human_ productivity - was often blasted 
>as hopelessly academic, because compilers couldn't deliver the goods
>efficiently. Remember those days?

Vividly.  In fact, at times we are STILL in those days -- it all
depends on just what the constraints are.

>In the end, compilers did a better job, computers sped up,
>and hardware architects gave us stack machines to support procedure calls. 
>Not many people would give strong arguments against procedures these days.

The only argument against them is if they keep you from meeting the
requirements.  While rarer than it used to be, this CAN still happen
to you. 

>Undoubtedly there are some functional holes in the tasking model; some 
>are plugged by Ada-9X. But the basic model is, IMHO, sound, and if we
>actually use it, instead of just complaining about it all the time, our
>compiler folks and machine architects will find more efficient ways
>to support it. The Rational machine (on the hardware side) and POSIX
>threads (on the software side) are examples of how this is already
>happening.

But you can't use it if you can't meet the requirements.  Note that
several people here have been talking about using global shared memory
WITH ADA in order to get sufficient speed.  This is the same way this
sort of thing is done in every other language, so I'm not sure why Ada
would be such a big win in such a case.
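
What that looks like in practice is something on the order of the
following sketch (names invented; note that Ada 83's pragma SHARED only
applies to scalar and access objects, which is part of the problem):

package Fast_Channel is
   Sample : Integer := 0;
   Fresh  : Boolean := False;
   pragma Shared (Sample);
   pragma Shared (Fresh);
end Fast_Channel;

One task writes Sample and then sets Fresh; another polls Fresh and
reads Sample.  No rendezvous, so it's fast -- and it's exactly the kind
of code you'd write in C or assembler, with the same hazards.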

>Before rejecting this or that language feature on a project, I argue 
>as I always have, with the following aphorisms (I didn't make these up):

>1. It's easier to make a correct program fast than a fast program correct.

True, within limits.  However, sometimes you can't get there from
here, at which point it's time to look for a faster algorithm and code
arrangement. 

>2. Fast enough is fast enough.

When you can get there from here.

>3. If it's not, optimize from the inside out. Find the 10% of the
>   code that does the 90% of the work, optimize it till it's fast
>   enough, and forget about the rest.

Sometimes the only way to get there is to tear out great whacking
chunks, rework the algorithm, and write a monolithic mess with shared
sections and global variables.

>4. Measure (or at least analyze openmindedly) before deciding. 

ALWAYS a good idea.

>5. You gotta do what you gotta do. But make sure you gotta do it.

EXACTLY.  And when you gotta, just tell all the various breeds of
bigots to get out of the way and let you get the job done.

-- 
"Insisting on perfect safety is for people who don't have the balls to live
 in the real world."   -- Mary Shafer, NASA Ames Dryden
------------------------------------------------------------------------------
Fred.McCall@dseg.ti.com - I don't speak for others and they don't speak for me.

