From: dennison@telepath.com
Newsgroups: comp.lang.ada
Subject: Stream venting (long)
Date: 1998/12/28
Message-ID: <768sng$6r9$1@nnrp1.dejanews.com>

I have this situation where I need a general-purpose method for a task
to save off whatever data it chooses in real time, and possibly read it
back in, also in real time, later. This seems like a natural for
streams, right? Well, I've run into a few complications that make
streams not so helpful.

1. Stream 'Writes and 'Reads are inherently task unsafe.

Possible solution 1a: You can of course try to implement your stream in
a task-safe manner. But as a stream implementor, all you will see is a
series of Write calls. A single 'Write from the client will cause an
indeterminate number of these, and there's no way to tell whether a
Write or Read call is the first or last in a series. If you need to
make the entire composite type's 'Write or 'Read atomic, you're SOL.

1b: You can pick task priorities for the reader and writer tasks, and a
task scheduling policy, that together make such operations atomic.
E.g., if you are using strict priority scheduling, giving every
involved task a different priority will do the job. However, if your
scheduling policy ever has to change (e.g., a code port, a
multiprocessor system, etc.), the code will break in a very
hard-to-detect and hard-to-fix way.

1c: Have the client explicitly lock the stream before using it and
unlock it afterwards. This doesn't have the above drawbacks, but it
makes what should be a simple one-liner for the client into a
three-step process (see the sketch below).
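For concreteness, here's a rough Ada 95-ish sketch of the kind of
wrapper I'm picturing for 1c. It also carries the "how much data is
there to read" hook I'll get to in 2c. All the names here
(Safe_Streams, Safe_Stream, Available, Mutex_Type) are made up, and the
bodies are omitted; they'd just forward to the backing stream while
holding the lock.

   with Ada.Streams;  use Ada.Streams;

   package Safe_Streams is

      --  Wraps an existing stream, adding the explicit locking of
      --  option 1c plus an "is there enough data yet?" hook (see 2c).

      type Safe_Stream (Backing : access Root_Stream_Type'Class) is
         new Root_Stream_Type with private;

      procedure Lock   (Stream : in out Safe_Stream);
      procedure Unlock (Stream : in out Safe_Stream);

      --  Number of stream elements currently buffered and readable.
      function Available (Stream : Safe_Stream) return Stream_Element_Count;

      --  Required overrides; the bodies (not shown) just pass the call
      --  through to Backing.all while the lock is held.
      procedure Read
        (Stream : in out Safe_Stream;
         Item   :    out Stream_Element_Array;
         Last   :    out Stream_Element_Offset);

      procedure Write
        (Stream : in out Safe_Stream;
         Item   : in     Stream_Element_Array);

   private

      protected type Mutex_Type is
         entry     Seize;     --  barrier "when not Held" in the body
         procedure Release;
      private
         Held : Boolean := False;
      end Mutex_Type;

      type Safe_Stream (Backing : access Root_Stream_Type'Class) is
         new Root_Stream_Type with record
            Guard : Mutex_Type;
         end record;

   end Safe_Streams;

And the client side is then the three-step dance I was complaining
about (here S is an aliased Safe_Stream wrapped around the real stream,
and Data is whatever record the task wants to save off):

   Safe_Streams.Lock (S);
   My_Record_Type'Write (S'Access, Data);
   Safe_Streams.Unlock (S);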
2. Streams don't support 'Read when all the data may not have arrived
yet.

Possible solution 2a: Have the clients wait until the rest of the data
comes in. This is unacceptable for real-time systems, as it causes
really nasty priority inversion. You just can't have a 60 Hz task
waiting for data to get read in from the hard disk.

2b: Have the clients handle the exception when the end of the stream is
reached, and try again later. This won't work because in a composite
'Read, some of the component reads may have completed already. Thus
the client's target data object is now corrupt (and they don't even
know where!).

2c: Provide a hook for clients to check the amount of data in the
stream before doing their 'Read (something like the Available function
in the sketch above). This looks promising, but unfortunately there's
no way for the client to know how much data they need ahead of time.
There's nothing stopping a client from writing their own 'Write/'Read
that transmits a *variable* amount of data, based on the values of
internal fields. In fact, I suspect the default 'Write for variant
records works in just that way.

2d: Have the clients attempt their 'Reads into temp variables, and copy
the data over into the working location only if no exception occurs.
The problem with this is that some of the fields in that record of
theirs may be of "limited private" types. 'Read and 'Write have ways
to handle those fields safely, but a limited object can't be assigned
at all, so the copy-back isn't even legal.

2e: Provide the client with a hook to write their own data-checking
routine that verifies whether the stream is safe for them to read. The
problem is that if they just want to use the default 'Read and 'Write
for everything, they have no way to know how to write the checking
routine (other than trying 2d above).

--
T.E.D.