From: Brad Moore
Newsgroups: comp.lang.ada
Subject: Re: STM32F4 Discovery, communication and libraries
Date: Mon, 01 Sep 2014 10:15:52 -0600

On 2014-08-31 10:15 AM, Dmitry A. Kazakov wrote:
> On Sun, 31 Aug 2014 09:44:26 -0600, Brad Moore wrote:
>
>> On 2014-08-31 1:02 AM, Dmitry A. Kazakov wrote:
>>
>>> When I evaluated Ravenscar for our middleware (long ago), the concern
>>> was publisher/subscriber services, with the I/O queue viewed as one of
>>> them. I didn't consider a solution like yours because the requirement
>>> was that more than one task could wait for the same I/O event. You
>>> reserve the event for a single task, and other publisher/subscriber
>>> services (e.g. the data logger, network data server, health monitor,
>>> etc.) may not use it because of Max_Protected_Entries = 1. The event
>>> cannot propagate because of No_Requeue_Statements. Tasks could flood
>>> the queue with their requests/events, but they cannot do that for
>>> more than one queue.
>>
>> I don't see this as an obstacle for Ravenscar. The clients could
>> register their interest in an event type by passing in a reference to
>> their IO_Response_T object, and when an event of that type occurs, the
>> server could call Set on the list of all registered IO_Response_T
>> objects associated with that I/O event type.
>
> You mean one request queued in several queues? That would have a race
> condition, and there would also be no guarantee that no event is lost.
> The scheme has a procedure to pulse the event, as the entry is already
> spent. So the stuff will leak.

No, I mean one request queued in one queue, where the processing of that
request involves calling registered callbacks in some callback list for
other clients that are interested in hearing about the servicing of that
request.
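The registration scheme described above could be sketched in Ravenscar-compatible Ada roughly as follows. This is a minimal illustrative sketch, not code from the thread: the names Registry, Register, Set_All, Max_Clients, and the use of suspension objects inside IO_Response_T are all my assumptions. It works around Max_Protected_Entries and No_Requeue_Statements by using only protected procedures plus per-client suspension objects (Set_True is not a potentially blocking operation, so it is legal inside a protected action).

```ada
--  Sketch only: a server-side registry of clients interested in one
--  I/O event type.  Each client owns an IO_Response_T and suspends on
--  its own suspension object; the server pulses all of them with a
--  single protected procedure call.  No entries, no requeue.

with Ada.Synchronous_Task_Control; use Ada.Synchronous_Task_Control;

package IO_Events is

   --  One per client; the client waits via
   --  Suspend_Until_True (R.Signal) after registering.
   type IO_Response_T is limited record
      Signal : Suspension_Object;
   end record;

   type Response_Access is access all IO_Response_T;

   Max_Clients : constant := 8;  --  static bound, per Ravenscar style

   protected Registry is
      procedure Register (R : Response_Access);
      procedure Set_All;  --  called by the server when the event occurs
   private
      Clients : array (1 .. Max_Clients) of Response_Access;
      Count   : Natural := 0;
   end Registry;

end IO_Events;

package body IO_Events is

   protected body Registry is

      procedure Register (R : Response_Access) is
      begin
         if Count < Max_Clients then
            Count := Count + 1;
            Clients (Count) := R;
         end if;
      end Register;

      procedure Set_All is
      begin
         --  Set_True on a suspension object is a protected-safe
         --  operation, so notifying every registered client here
         --  stays within the Ravenscar restrictions.
         for I in 1 .. Count loop
            Set_True (Clients (I).Signal);
         end loop;
      end Set_All;

   end Registry;

end IO_Events;
```

A client task would register once at startup and then loop on Suspend_Until_True; the server's single request queue calls Registry.Set_All as part of servicing each request, which matches the "one queue plus callback list" arrangement described in the reply.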