From: Samuel Mize
Subject: Re: Q: Protected types and entries (long)
Date: 1999/02/19
Message-ID: <7akelp$2cbt@news1.newsguy.com>
References: <36C98180.98296205@siemens.at>
Organization: ImagiNet Communications, Ltd.
Newsgroups: comp.lang.ada

Erik Margraf wrote:
> entry put (data : in buffer)
> when data'size + items_in_buffer <= internal_buffer'size is
> begin ...
> - Can someone tell me WHY this limitation in the barrier exists?

Because the barrier expression applies to the whole entry queue, not to
each call in the queue individually.

Why?  Remember that "a protected object is designed to be a very
efficient conditional critical region[1]."  There are two ways that the
reference to "data" could be interpreted: it could mean to look at the
call at the head of the queue, or it could mean to examine each call in
the queue individually and see whether it can execute.

If it looks only at the call at the head of the queue, one task trying
to add a large item might block any number of tasks trying to add items
that would fit, depending on whether or not it got there first.  This
kind of race condition makes the system's behavior much less
predictable, which is a bad thing in hard-deadline real-time systems.

If it looks at all entries in the queue, then the amount of time it
takes to evaluate the barriers can grow without limit, also a bad thing
in a hard real-time environment.

So the most efficient construct is the one chosen for Ada.
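To make the rule concrete, here is a minimal sketch (illustrative names, not code from the original question) showing the barrier the questioner wanted, and the closest legal form.  A barrier may read only the protected object's own components (and globals), never the entry's parameters, so any per-call test has to move into the entry body:

```ada
protected type Bounded_Buffer (Capacity : Positive) is
   --  Wanted, but illegal -- a barrier cannot name the parameter:
   --     entry Put (Data : in String)
   --        when Data'Length <= Capacity - Count is ...
   entry Put (Data : in String);
private
   Contents : String (1 .. Capacity);
   Count    : Natural := 0;
end Bounded_Buffer;

protected body Bounded_Buffer is
   --  The legal barrier can only ask about the protected data,
   --  e.g. "is there any room at all?"
   entry Put (Data : in String) when Count < Capacity is
   begin
      --  The per-call test happens here, in the body, after the
      --  barrier has already opened.
      if Data'Length <= Capacity - Count then
         Contents (Count + 1 .. Count + Data'Length) := Data;
         Count := Count + Data'Length;
      end if;
      --  Otherwise: requeue, as discussed below.
   end Put;
end Bounded_Buffer;
```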
However, sometimes one does need to have each queue element examined in
turn, and that's where Ada's "requeue" comes into play, as you thought.
However, your solution has two flaws.

First, if the object fits into the buffer, it won't be added -- you
don't do anything in the body of "put".  That's a nit, and you can fix
it by always requeueing in "put".[2]

Second, and much more important, consider the case where two calls are
queued on "put", the first with a larger "data" than the second, and
both larger than your current free space.  The first will queue up on
"p_put", then the second will queue up on "p_put", and "data_size" will
show the second (smaller) size request.  Then a consumer task frees up
just that much space, and the first call is free to execute without
enough space in the internal buffer.

So, how do you solve your problem?

1. If you want the entries to be processed in order of arrival, you can
   prevent a second "put" call from resetting "data_size" by guarding
   "put" with an initially-true boolean -- call it "data_size_unset".
   Then, in the body of "put", you set "data_size", set
   "data_size_unset" to false, and requeue on "p_put".  "p_put" would
   set "data_size_unset" back to true.

2. If you want the entries to be scanned, and any that are small enough
   processed, do this.  Have "put" add the item to the internal buffer
   if there is room.  Otherwise it sets a boolean to false and requeues
   the call on an internal entry, whose barrier is that boolean.  When
   your consumer task frees up some space, it sets that boolean to
   true.  The internal entry just requeues on "put".  Thus, each time
   some space is freed up, everybody tries "put" again and either
   succeeds or gets requeued once more.

You can fancy up that second approach.
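Before fancying it up, that second approach might look something like this.  This is a minimal sketch with illustrative names (a String buffer with a simplistic one-character consumer), not code from the original question:

```ada
protected type Bounded_Buffer (Capacity : Positive) is
   entry Put (Data : in String);
   entry Get (Item : out Character);  -- simplistic consumer entry
private
   entry Retry (Data : in String);    -- internal holding queue
   Contents    : String (1 .. Capacity);
   Count       : Natural := 0;
   Space_Freed : Boolean := False;
end Bounded_Buffer;

protected body Bounded_Buffer is
   entry Put (Data : in String) when True is
   begin
      if Data'Length <= Capacity - Count then
         Contents (Count + 1 .. Count + Data'Length) := Data;
         Count := Count + Data'Length;
      else
         Space_Freed := False;
         requeue Retry;   -- wait until a consumer makes room
      end if;
   end Put;

   entry Retry (Data : in String) when Space_Freed is
   begin
      requeue Put;        -- space was freed: go around again
   end Retry;

   entry Get (Item : out Character) when Count > 0 is
   begin
      Item := Contents (1);
      Contents (1 .. Count - 1) := Contents (2 .. Count);
      Count := Count - 1;
      Space_Freed := True;  -- reopen the Retry barrier
   end Get;
end Bounded_Buffer;
```

Note one subtlety: the first retried call that fails sets "Space_Freed" back to false, closing the gate before every waiter has had a look.  That is one of the queue-behavior details you have to think through carefully in a design like this.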
For instance, "p_put" might save the largest data size that would fit,
and then requeue everybody on "p_p_put", which would requeue one
request of that size to "put" and everybody else back to "p_put".  (Or
something like that -- you need to think carefully through the queue
behavior in a design like this, and I haven't done so for this
example.)

The point is, you can create arbitrarily complex scanning behavior
with interacting queues.  And you aren't paying task-switching
overhead to do it, because of the way that protected objects operate.
But you *are* creating a lot more overhead than they wanted to put
into the base design of Ada, so you have to code it manually.

Best,
Sam Mize

[1] Ada 95 Rationale, 9.1

[2] Or, you can duplicate the code from "p_put" in the body of "put",
which would give precedence to callers with smaller "data" buffers.
If you do this, you should encapsulate the shared code in a procedure,
and call that procedure from both entries.

-- 
Samuel Mize -- smize@imagin.net (home email) -- Team Ada
Fight Spam: see http://www.cauce.org/ \\\ Smert Spamonam