From: "PSI%SNMSN1::PSI%DSAVX2::WILLY@slb-test.CSNET"
Newsgroups: comp.lang.ada
Subject: Stream management in Ada
Date: Sun, 15-Mar-87 06:41:00 EST
Organization: The ARPA Internet
Forwarded for: Hever Bitteur (BITTEUR%M_SFMVX1@SLB-DOLL.CSNET)

I would be very interested to hear from anyone who has already worked on this problem, especially if neat solutions taking advantage of the high-level concept of the Ada rendezvous have already been found.

The application consists of various tasks. A task uses some data as input and produces some data as output, which can in turn be used, partially or totally, by another task, and so on. A feature common to all these pieces of data is that they behave like flows of data: typically, sampled raw data comes from sensors and is passed through filters to give engineering data, which in turn can be used to compute derived variables, and so on.

Of course, transmission of data can be achieved with direct rendezvous between a producer task and a consumer task. The problem is that we rapidly end up with a mess of fully interconnected tasks, where any modification is painful and where it is difficult to add a new task, since the new task has to be connected to every relevant producer task and every relevant consumer task. A further problem is that this design does not ensure that all the data used by a task at a given time is consistent: one variable may be out of phase with another.

A better approach seems to be to organize the task activities around one or several "Stream Managers", whose function is to store written data, to forward it to the tasks interested in it, and to handle data consistency in a centralized manner. The design then looks like a data bus onto which tasks (writers and/or readers) are plugged. Any new task just has to know which bus(es) it has to be plugged into.

I have heard of implementations of this concept in "classical" real time, using queues, events, and shared memory. Since this is to be integrated into a multi-tasking Ada application, I am looking for a pure Ada solution. Of course, I would like to organize such a (generic) stream manager as a pure server, which could be used efficiently by the other Ada tasks; a rough sketch of what I have in mind follows at the end of this message.

Thank you for any help.
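
P.S. To make the idea a little more concrete, here is a rough, untested sketch of the kind of generic stream manager I have in mind. It is only a simplified bounded buffer serving a single stream of items through rendezvous; the names (Stream_Manager, Bus, Item, Size, Write, Read) are purely illustrative, and it does not yet address forwarding to several readers or the consistency handling described above.

    -- Rough, untested sketch: a generic "stream manager" task acting as a
    -- simple data bus (bounded FIFO) between producer and consumer tasks.
    -- The item type, buffer size and all names are illustrative only.
    generic
       type Item is private;
       Size : in Positive := 100;
    package Stream_Manager is

       task Bus is
          entry Write (Data : in  Item);   -- called by producer tasks
          entry Read  (Data : out Item);   -- called by consumer tasks
       end Bus;

    end Stream_Manager;

    package body Stream_Manager is

       task body Bus is
          Buffer     : array (1 .. Size) of Item;
          Head, Tail : Positive := 1;
          Count      : Natural  := 0;
       begin
          loop
             select
                when Count < Size =>
                   -- accept a new item as long as the buffer is not full
                   accept Write (Data : in Item) do
                      Buffer (Tail) := Data;
                   end Write;
                   Tail  := Tail mod Size + 1;
                   Count := Count + 1;
             or
                when Count > 0 =>
                   -- hand out the oldest item as long as the buffer is not empty
                   accept Read (Data : out Item) do
                      Data := Buffer (Head);
                   end Read;
                   Head  := Head mod Size + 1;
                   Count := Count - 1;
             or
                terminate;
             end select;
          end loop;
       end Bus;

    end Stream_Manager;

The guarded select with a terminate alternative is what makes it a pure server: it only reacts to Write and Read calls from its clients and goes away quietly when they are all finished.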