* Re: Teaching Concurrency
1990-01-10 22:47 ` Richard Pattis
@ 1990-01-11 13:04 ` Robert Firth
1990-01-11 19:27 ` Vincent Manis
1990-01-12 19:02 ` Peter da Silva
1990-01-11 16:09 ` Michael Meissner
` (4 subsequent siblings)
5 siblings, 2 replies; 21+ messages in thread
From: Robert Firth @ 1990-01-11 13:04 UTC (permalink / raw)
In article <10330@june.cs.washington.edu> pattis@cs.washington.edu (Richard Pattis) writes:
>OK, so this begs the question: what is the "smallest" assignment that can
>use concurrency fruitfully. I would like to teach a bit about tasking in
>one of my classes, but I don't want students to get "wrong" ideas from the
>example I use. Anyone out there have such an assignment? Is there some prime
>example out there of a good use of multi-tasking that is amenable to
>classroom instruction?
In the hope that it helps, I'll describe briefly the exercises I used
to teach concurrency as part of a real-time systems design course.
These are the one-page programs; the next step up was device drivers,
a simple message switch, &c. A warning upfront: the tasking features
of Ada are inadequate to solve some of these exercises.
(1) delay. the program contains one task, which simply prints
"I'm still here" every five seconds. explains: task structure,
task creation and termination, delay.
(2) interleaf. the program consists of six tasks, which do this
print "." every second
print "5" every five seconds
print "6" every six seconds
print "7" every five seconds
print "8" every six seconds
print a newline every 30 seconds
explains: several tasks; reentrant use of task code (by middle 4),
interleaving of task output; control of interleaving by priority.
(3) veryslowsort. the program reads a list of integers and creates
one task per integer, whose priority is given by the integer and
whose sole action is to print the integer. the integers are
therefore output in task priority order, ie they are sorted.
illustrates: dynamic task creation and priority assignment, use
of task parameters.
(4) chain gang. the program creates a reasonably large number of
identical tasks (eg 100), each with a parameter that identifies
itself. Each task waits for a message from (task-1), prints
"This is (task) and I have the message", and passes the message
to (task+1). Except that the last task sends the message to
a master task. The master creates the tasks, sends a message to
the first, and waits for it to go all around the chain.
illustrates: array of tasks, message passing, task parameters
and dynamic task names, task blocking on message read.
(5) sampler. one task computes the value of pi by an excruciatingly
slowly converging series; the other task samples the computed
value every couple of seconds or so. illustrates: simple shared
variable, preemption of low-priority task.
(6) quantum mouse. The program reads in a small maze, with an entry
point and an exit point. it creates a mouse (a task) at the
entry point. the task traverses the maze, creating a copy of
itself at every branch point. the traversal lays a trail through
the maze, consisting of the letters ABCDE... in order, ie letter
'J' is placed in a cell first reached by a mouse (any mouse) 10
steps from the entry. A mouse that can move only to a marked cell
dies. When the last mouse dies, the maze is printed out; every
cell is marked with its minimum distance from the entry.
illustrates: shared data, dynamic task replication with state
inheritance, wait for termination of task set.
(Incidentally, the tasking features used in these exercises were
contained in a set of subroutines described in a six-page handout and
implemented on a bare machine in less than 200 lines of code.)
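For readers without that runtime, exercise (2) can be sketched with ordinary threads. The following is a loose modern-Python illustration, not the course's code: the intervals are scaled down so a run finishes in under a second, and only three of the six tasks are shown.

```python
import threading
import time

def ticker(symbol, interval, stop, out, lock):
    # Reentrant task body: several tasks share this code, differing
    # only in their parameters, as in the "interleaf" exercise.
    while not stop.is_set():
        time.sleep(interval)
        if stop.is_set():
            break
        with lock:
            out.append(symbol)

def run_interleaf(duration=0.4, base=0.05):
    # base stands in for "one second"; duration bounds the whole run.
    out, lock, stop = [], threading.Lock(), threading.Event()
    tasks = [threading.Thread(target=ticker, args=(s, n * base, stop, out, lock))
             for s, n in [(".", 1), ("5", 5), ("6", 6)]]
    for t in tasks:
        t.start()
    time.sleep(duration)
    stop.set()
    for t in tasks:
        t.join()
    return "".join(out)

print(run_interleaf())   # e.g. "....5.6..." -- exact interleaving varies
```

Note there is no priority control here: Python threads leave the interleaving to the host scheduler, which is exactly the point the follow-ups below argue about.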
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: Teaching Concurrency
1990-01-11 13:04 ` Robert Firth
@ 1990-01-11 19:27 ` Vincent Manis
1990-01-13 7:34 ` Peter G Ludemann
1990-01-12 19:02 ` Peter da Silva
1 sibling, 1 reply; 21+ messages in thread
From: Vincent Manis @ 1990-01-11 19:27 UTC (permalink / raw)
In article <5598@bd.sei.cmu.edu> firth@sei.cmu.edu (Robert Firth) writes:
>(3) veryslowsort. the program reads a list of integers and creates
> one task per integer, whose priority is given by the integer and
> whose sole action is to print the integer. the integers are
> therefore output in task priority order, ie they are sorted.
Aaargh! I feel like I'm flogging a dead horse, posting two articles in
succession on the subject of underlying processor models.
There is absolutely no reason why this should work. Not all schedulers
guarantee strict ordering of execution on a priority basis. Process
aging is one of many factors which could produce incorrect output.
Similarly, the scheduler might ignore priorities (the multi-task kernel
I wrote, for example!).
However, veryslowsort is an excellent example of how *not* to use
parallelism.
Apology dept: My last article contained a slighting remark about Ada. I
didn't notice that comp.lang.ada was included in the Newsgroups: line.
Mindful of Usenet protocol, I apologise for posting this to
comp.lang.ada.
--
\ Vincent Manis <manis@cs.ubc.ca> "There is no law that vulgarity and
\ Department of Computer Science literary excellence cannot coexist."
/\ University of British Columbia -- A. Trevor Hodge
/ \ Vancouver, BC, Canada V6T 1W5 (604) 228-2394
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: Teaching Concurrency
1990-01-11 19:27 ` Vincent Manis
@ 1990-01-13 7:34 ` Peter G Ludemann
0 siblings, 0 replies; 21+ messages in thread
From: Peter G Ludemann @ 1990-01-13 7:34 UTC (permalink / raw)
In my experience, most "multi-tasking" turns out to be nothing
but coroutining with message-passing for communication.
On an AS/400 (for example), a task switch is only slightly more expensive
than a subroutine call (basically the cost of saving the 16 registers
plus a return address), so the additional clarity of writing with
multi-tasking is well worth it.
Furthermore, in some dialects of Prolog (e.g., NU-Prolog), a
coroutining style can be used to vastly INCREASE the efficiency
of programs (the programs are written in a multi-tasking style but
are actually run as coroutining).
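The cheap-switch point is easy to see with a Python generator, where each send() both delivers a message and transfers control at roughly subroutine-call cost. This is an analogy, not the AS/400 mechanism:

```python
def accumulator():
    # A coroutine "task": it waits for a message, adds it to a running
    # total, and replies with the new total at each yield.
    total = 0
    while True:
        total += yield total

acc = accumulator()
next(acc)            # "task startup": advance to the first yield
print(acc.send(3))   # -> 3
print(acc.send(4))   # -> 7
```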
- peter ludemann pgl@cup.portal.com
--- my opinions are my own responsibility, of course ---
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: Teaching Concurrency
1990-01-11 13:04 ` Robert Firth
1990-01-11 19:27 ` Vincent Manis
@ 1990-01-12 19:02 ` Peter da Silva
1990-01-15 13:30 ` Robert Firth
1990-01-17 15:40 ` Kurt Luoto
1 sibling, 2 replies; 21+ messages in thread
From: Peter da Silva @ 1990-01-12 19:02 UTC (permalink / raw)
I believe that the following two problems are inappropriate, because
they are subject to race conditions related to task startup time,
quantum sizes, and so on:
> (3) veryslowsort.
> (6) quantum mouse.
--
_--_|\ Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>.
/ \
\_.--._/ Xenix Support -- it's not just a job, it's an adventure!
v "Have you hugged your wolf today?" `-_-'
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: Teaching Concurrency
1990-01-12 19:02 ` Peter da Silva
@ 1990-01-15 13:30 ` Robert Firth
1990-01-17 15:40 ` Kurt Luoto
1 sibling, 0 replies; 21+ messages in thread
From: Robert Firth @ 1990-01-15 13:30 UTC (permalink / raw)
In article <=K01QVDxds13@ficc.uu.net> peter@ficc.uu.net (Peter da Silva) writes:
>I believe that the following two problems are inappropriate, because
>they are subject to race conditions related to task startup time,
>quantum sizes, and so on:
>
>> (3) veryslowsort.
>> (6) quantum mouse.
Thanks, Peter; my note omitted a couple of key points. Please
let me clarify:
(a) the exercises were intended to teach concurrency on ONE
processor, as preparation for building a small local
operating system. (multiprocessor concurrency, and
concurrency models transparent to the number of processors,
were covered elsewhere)
(b) the underlying model had, and relied upon, a scheduler
whose operation was defined precisely. the examples
in part illustrated this. however, the students were
emphatically taught that, in real life, one does NOT
rely on priority to effect ordering of task execution.
For example, the commentary explaining 'veryslowsort' pointed
out that
(1) the process reading input and creating tasks had the highest
priority, so all the input would be read before any of the
printing tasks ran
(2) the scheduler used strict priorities, so indeed the printing
tasks would be run in the right order to print a sorted list
I didn't think it necessary to add that this was a pretty silly
way to sort numbers - the name alone seemed hint enough. The
fact that the program in execution exhibits no concurrency at all
is one that the students were able to deduce for themselves.
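Given those two guarantees the run is fully deterministic, so it can even be modelled without threads at all. A sketch of the strict-priority dispatch (a simulation, taking "priority order" as highest first; this is not the course's runtime):

```python
import heapq

def veryslowsort(ints):
    # The highest-priority reader task runs to completion first, so
    # every task exists before any runs; strict-priority dispatch then
    # fires the print actions in priority order.
    ready = []
    for n in ints:                      # task creation, priority = n
        heapq.heappush(ready, (-n, n))  # negated: heapq is a min-heap
    out = []
    while ready:
        _, n = heapq.heappop(ready)     # dispatch highest priority next
        out.append(n)                   # the task's sole action
    return out

print(veryslowsort([3, 1, 4, 1, 5]))    # -> [5, 4, 3, 1, 1]
```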
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: Teaching Concurrency
1990-01-12 19:02 ` Peter da Silva
1990-01-15 13:30 ` Robert Firth
@ 1990-01-17 15:40 ` Kurt Luoto
1 sibling, 0 replies; 21+ messages in thread
From: Kurt Luoto @ 1990-01-17 15:40 UTC (permalink / raw)
In article <=K01QVDxds13@ficc.uu.net> peter@ficc.uu.net (Peter da Silva) writes:
>I believe that the following two problems are inappropriate, because
>they are subject to race conditions related to task startup time,
>quantum sizes, and so on:
>
>> (3) veryslowsort.
>> (6) quantum mouse.
>--
> _--_|\ Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>.
Problems that are subject to startup effects and race conditions
make excellent material for learning about task synchronization
and coordination issues. All the more reason to include them.
--
----------------
Kurt W. Luoto kurtl@fai.com or ...!sun!fai!kurtl
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: Teaching Concurrency
1990-01-10 22:47 ` Richard Pattis
1990-01-11 13:04 ` Robert Firth
@ 1990-01-11 16:09 ` Michael Meissner
1990-01-14 12:33 ` Re^2: " Kim Shearer
1990-01-11 18:50 ` Tom Griest
` (3 subsequent siblings)
5 siblings, 1 reply; 21+ messages in thread
From: Michael Meissner @ 1990-01-11 16:09 UTC (permalink / raw)
In article <10330@june.cs.washington.edu> pattis@cs.washington.edu (Richard Pattis) writes:
| OK, so this begs the question: what is the "smallest" assignment that can
| use concurrency fruitfully. I would like to teach a bit about tasking in
| one of my classes, but I don't want students to get "wrong" ideas from the
| example I use. Anyone out there have such an assignment? Is there some prime
| example out there of a good use of multi-tasking that is amenable to
| classroom instruction?
When I was at Data General, and we wanted to convey how to use the
task facility of the operating systems (RDOS, AOS, AOS/VS, but not
Unix), we would use a cut-down version of the tape archiver as an
example. In the simplest case, you want to copy a file, overlapping
the input and output portions: you would have a producer task and a
consumer task, with n buffers between the two, and then go on to the
various strategies of inter-task communication and buffer management.
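A minimal sketch of that simplest case, in modern Python rather than the DG task facility (names illustrative), using one bounded queue as the pool of n buffers:

```python
import queue
import threading

def copy_file_overlapped(blocks, nbufs=4):
    # One producer (the read side) and one consumer (the write side)
    # overlap their work through n buffers, as in the tape-archiver demo.
    bufs = queue.Queue(maxsize=nbufs)   # the n intermediate buffers
    written = []

    def producer():
        for block in blocks:
            bufs.put(block)             # blocks when all buffers are full
        bufs.put(None)                  # end-of-input marker

    def consumer():
        while True:
            block = bufs.get()
            if block is None:
                break
            written.append(block)       # the "write to tape" step

    t_in = threading.Thread(target=producer)
    t_out = threading.Thread(target=consumer)
    t_in.start(); t_out.start()
    t_in.join(); t_out.join()
    return written

print(copy_file_overlapped([b"aa", b"bb", b"cc"]))  # -> [b'aa', b'bb', b'cc']
```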
The full-blown tape archiver was a little bit more complicated (it had
tasks which did the equivalent of opendir, readdir, closedir, and
fstat on the files, tasks which read in the files, and an output task
which ordered things appropriately and wrote to the tape). It took all
of that parallelism to make most tapes stream when doing a
through-the-filesystem backup.
Another fun multitasking example was to use two tasks to write an
arcade-style game with terminal graphics (one task for terminal input,
and the other task for output).
Finally, there is the example of a multi-tasked server, or transaction
processing system, but that is perhaps too complicated for a classroom
setting.
The problem with real-world examples like the above is that if your
operating system does not have real tasks (threads, etc.) and merely
simulates them, you don't get any benefits from tasking, and it is
better to write the stuff single-threaded.
IMHO, the usual task examples like slowsort, or what have you, are
not so useful in real-world programming, and may in fact
encourage bad habits.
--
Michael Meissner email: meissner@osf.org phone: 617-621-8861
Open Software Foundation, 11 Cambridge Center, Cambridge, MA
Catproof is an oxymoron, Childproof is nearly so
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re^2: Teaching Concurrency
1990-01-11 16:09 ` Michael Meissner
@ 1990-01-14 12:33 ` Kim Shearer
0 siblings, 0 replies; 21+ messages in thread
From: Kim Shearer @ 1990-01-14 12:33 UTC (permalink / raw)
meissner@osf.org (Michael Meissner) writes:
>In article <10330@june.cs.washington.edu> pattis@cs.washington.edu (Richard Pattis) writes:
>| OK, so this begs the question: what is the "smallest" assignment that can
>| use concurrency fruitfully. I would like to teach a bit about tasking in
>| one of my classes, but I don't want students to get "wrong" ideas from the
>| example I use. Anyone out there have such an assignment? Is there some prime
>| example out there of a good use of multi-tasking that is amenable to
>| classroom instruction?
Another simple, real-world-ish example would be a centrally controlled
distributed monitoring system. This is actually pinched from a
computer-controlled irrigation monitoring system I worked on in Ada.
The idea is that you have a number of fairly dumb monitoring tasks which
report to a central controlling program. The central system accepts
input from the monitoring tasks and sends a simple reply. The input and
output could be very simple and very synthetic. The real tasking work
comes in as the student would need to produce input and output routines
to coordinate the input from, and output to, multiple monitors.
The students could be provided with a simple monitor program to set up
n times, and asked to write the central controller that conforms to
some simple protocol.
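A rough sketch of that setup, using Python threads for the monitors (hypothetical names; the real system was Ada tasks). The dumb monitors push (id, reading) reports onto one shared queue; the central controller consumes them and sends a simple acknowledgement back on each monitor's private reply queue:

```python
import queue
import threading

def run_monitors(readings_by_monitor):
    reports = queue.Queue()                               # monitors -> controller
    replies = {m: queue.Queue() for m in readings_by_monitor}  # controller -> monitor
    acks = {m: 0 for m in readings_by_monitor}

    def monitor(mid, readings):
        # A fairly dumb monitoring task: report, then wait for the reply.
        for r in readings:
            reports.put((mid, r))
            replies[mid].get()          # block until the controller's ack
            acks[mid] += 1

    threads = [threading.Thread(target=monitor, args=(m, rs))
               for m, rs in readings_by_monitor.items()]
    for t in threads:
        t.start()
    total = sum(len(rs) for rs in readings_by_monitor.values())
    log = []
    for _ in range(total):              # the central controller's main loop
        mid, reading = reports.get()
        log.append((mid, reading))
        replies[mid].put("ok")          # the simple reply of the protocol
    for t in threads:
        t.join()
    return log, acks

log, acks = run_monitors({"a": [1, 2], "b": [7]})
print(acks)   # -> {'a': 2, 'b': 1}
```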
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: Teaching Concurrency
1990-01-10 22:47 ` Richard Pattis
1990-01-11 13:04 ` Robert Firth
1990-01-11 16:09 ` Michael Meissner
@ 1990-01-11 18:50 ` Tom Griest
1990-01-11 20:38 ` Brian L. Stuart
` (2 subsequent siblings)
5 siblings, 0 replies; 21+ messages in thread
From: Tom Griest @ 1990-01-11 18:50 UTC (permalink / raw)
Regarding the postings on teaching Ada tasking:
The important thing to consider is that tasks should be used
for operations which are really likely to be physically concurrent.
(Or in some cases to serialize concurrent access to devices.)
For example, embedded applications frequently have interactions
with real-world events which can happen concurrently. An
automotive processor which must monitor oxygen content of fuel
combustion while monitoring velocity is a case where two activities
are occurring concurrently and it fits more naturally into a tasking
model than a subprogram that polls to achieve the same effect.
Another case where tasking can be used is to achieve higher performance
by allowing additional processors to work on a single program's problem.
This is only meaningful when you have a system which has more than one
CPU and software that supports the distribution of the load. In these
cases, the Ada task provides a convenient method for designers to
specify where it is "reasonable" to divide the work load. The "reasonable"
decision must be made based on the overhead associated with communication
and scheduling among the cooperating processors. This is similar to
any project where decisions must be made about how to divide up tasks
among workers. LabTek Corp. uses this approach to divide up work to do rocket
trajectory calculations. Since the calculations are independent between
different rockets, we can divide up the work to get the job done sooner.
Our technique involves using a dynamic array of tasks which is dimensioned
during elaboration based on information provided by an underlying
distributed runtime. (The system has some fault tolerance and can
reconfigure itself based on how many processors are available.)
The interesting part is being able to reduce the overhead when only a
few (or one) processor is available. Obviously having a separate task per
rocket would consume considerable overhead in a case where there was
only 1 available processor and 20 rockets. Each "guidance" task therefore
has the ability to accept a work list which is dynamically constrained,
so only one rendezvous is needed if only one processor is
available. When additional processors are available, the work list is
effectively "sliced" up to divide the work among additional "guidance"
tasks. The tricky part is keeping track of persistent information
and maintaining a consistent work list when multiple CPUs are involved.
What I mean by this is that as rockets are launched and destroyed, the
"guidance" tasks must get the same rockets each time. This is because
there is a substantial amount of "track history" related to each rocket
used in the calculations to implement the trajectory planner. Since
the application is distributed across a network, transferring the data
would be additional overhead which would probably be intolerable.
We implemented a rather simple allocation routine to make sure that
once a rocket was assigned to a processor, it remained there.
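The allocation idea itself is tiny. A hedged sketch in Python, with invented names and least-loaded tie-breaking standing in for whatever LabTek's routine actually does:

```python
def make_allocator(processors):
    # Sticky allocation: once a rocket is assigned to a processor it
    # stays there, so its track history never moves over the network.
    assigned = {}                       # rocket -> processor, permanent
    counts = {p: 0 for p in processors}

    def allocate(rocket):
        if rocket not in assigned:
            proc = min(counts, key=counts.get)   # least-loaded processor
            assigned[rocket] = proc
            counts[proc] += 1
        return assigned[rocket]

    return allocate

alloc = make_allocator(["p1", "p2"])
print(alloc("r1"), alloc("r2"), alloc("r1"))   # -> p1 p2 p1
```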
Unless you want to teach your students to be rocket scientists, this
is probably overkill for a teaching application (although "Missile Command"
used to be a popular video game). What I recommend is building a
simple game using a graphics display and a mouse (or joystick). One
task moves a symbol on the screen while another accepts data from the
mouse and plots the cursor on the screen. A third task might determine
if the symbol is "hit" by the cursor and does something in response to
this event. You will probably need a fourth task to ensure mutual exclusion
on the graphics interface.
One problem you might have is getting a low cost Ada system that is capable
of true tasking (interrupts, etc) and suitable for this application.
Typical DOS-based compilers have problems with concurrency because of
the lack of reentrancy in the BIOS.
If you're interested and need help with the graphics/mouse interface
send me a message.
Tom Griest
LabTek Corporation
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: Teaching Concurrency
1990-01-10 22:47 ` Richard Pattis
` (2 preceding siblings ...)
1990-01-11 18:50 ` Tom Griest
@ 1990-01-11 20:38 ` Brian L. Stuart
1990-01-12 0:47 ` Robert Steigerwald x2468
1990-01-15 11:10 ` Stavros Macrakis
5 siblings, 0 replies; 21+ messages in thread
From: Brian L. Stuart @ 1990-01-11 20:38 UTC (permalink / raw)
In article <10330@june.cs.washington.edu> pattis@cs.washington.edu (Richard Pattis) writes:
>
>OK, so this begs the question: what is the "smallest" assignment that can
>use concurrency fruitfully. I would like to teach a bit about tasking in
>one of my classes, but I don't want students to get "wrong" ideas from the
>example I use. Anyone out there have such an assignment? Is there some prime
>example out there of a good use of multi-tasking that is amenable to
>classroom instruction?
>
>Rich Pattis
Here at Purdue, in CS503 (Operating Systems), we have often given the
dining philosophers problem as a first assignment to teach concurrency.
Here, each philosopher is implemented as a separate task (well, a process
actually; we are using XINU). It's a nice little problem that lets
the students get a feel for working with XINU before diving into its
guts.
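For reference, a compact version of the assignment's core in Python threads rather than XINU processes. This shows one standard solution, ordering the fork acquisition so the circular wait (and hence deadlock) is impossible; the actual assignment may of course ask for a different mechanism:

```python
import threading

def dine(n=5, helpings=3):
    # Each philosopher is a thread; each fork is a lock shared with a
    # neighbour. Picking up the lower-numbered fork first breaks the
    # circular wait, so the run cannot deadlock.
    forks = [threading.Lock() for _ in range(n)]
    eaten = [0] * n

    def philosopher(i):
        left, right = forks[i], forks[(i + 1) % n]
        first, second = (left, right) if i < (i + 1) % n else (right, left)
        for _ in range(helpings):
            with first:
                with second:
                    eaten[i] += 1       # eating, with both forks held

    threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return eaten

print(dine())   # -> [3, 3, 3, 3, 3]
```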
Brian L. Stuart
Department of Computer Science
Purdue University
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: Teaching Concurrency
1990-01-10 22:47 ` Richard Pattis
` (3 preceding siblings ...)
1990-01-11 20:38 ` Brian L. Stuart
@ 1990-01-12 0:47 ` Robert Steigerwald x2468
1990-01-15 11:10 ` Stavros Macrakis
5 siblings, 0 replies; 21+ messages in thread
From: Robert Steigerwald x2468 @ 1990-01-12 0:47 UTC (permalink / raw)
>
->OK, so this begs the question: what is the "smallest" assignment that can
->use concurrency fruitfully. I would like to teach a bit about tasking in
->one of my classes, but I don't want students to get "wrong" ideas from the
->example I use. Anyone out there have such an assignment? Is there some prime
->example out there of a good use of multi-tasking that is amenable to
->classroom instruction?
->
->Rich Pattis
A good example of a small multi-tasking problem is the producer-consumer
problem with an intermediate buffer. It is covered well in Gehani's
book, Ada: An Advanced Introduction (Prentice-Hall, 1984), pages
158-161.
Bob Steigerwald
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: Teaching Concurrency
1990-01-10 22:47 ` Richard Pattis
` (4 preceding siblings ...)
1990-01-12 0:47 ` Robert Steigerwald x2468
@ 1990-01-15 11:10 ` Stavros Macrakis
5 siblings, 0 replies; 21+ messages in thread
From: Stavros Macrakis @ 1990-01-15 11:10 UTC (permalink / raw)
In article <650@ajpo.sei.cmu.edu>, griest@ajpo.sei.cmu.edu (Tom Griest) writes:
> The important thing to consider is that tasks should be used
> for operations which are really likely to be physically concurrent.
I strongly disagree. Tasking is a structuring mechanism as much as it
is a concurrency mechanism. There are applications of tasks where
concurrency is a happy accident. The standard name for this is "coroutines".
Applications for coroutines include simulation and data structure
transformation (cf. in particular the Jackson method). Modules in producer-
consumer relations (cf. Unix pipes) are a good candidate -- of course
in this case there is often the possibility of concurrency.
For many years, assembly language programmers were the only ones to
benefit from coroutine structures because no mainstream programming
languages supported them. Ada's support for coroutines is an important
step forward.
Of course, in order to benefit fully from this structuring mechanism, the
implementation should be efficient. As far as I know, there are no Ada
compilers today where coroutine call is comparable to subroutine call in
its cost (as it can be in principle).
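The structuring point is visible even without any concurrency: each stage below reads as a separate pass over the data, yet control bounces between stages one record at a time, Unix-pipe style (plain Python generators, standing in for coroutine tasks):

```python
def split_lines(text):
    # Stage 1: break a flat structure into records (a producer).
    for line in text.split("\n"):
        yield line

def number_lines(source):
    # Stage 2: transform each record as it arrives (a filter).
    for i, line in enumerate(source, 1):
        yield f"{i}: {line}"

# The stages compose like a pipe; neither ever holds the whole
# data structure, and no concurrency is involved at all.
print(list(number_lines(split_lines("foo\nbar"))))  # -> ['1: foo', '2: bar']
```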
-s
^ permalink raw reply [flat|nested] 21+ messages in thread