Newsgroups: comp.lang.ada
Subject: Re: RFC: Prototype for a user threading library in Ada
From: rieachus@comcast.net
Date: Sat, 25 Jun 2016 18:34:06 -0700 (PDT)

On Saturday, June 25, 2016 at 2:29:12 AM UTC-4, Dmitry A. Kazakov wrote:
> On 2016-06-24 02:38, rieachus@comcast.net wrote:
>
> It is not a detail. The caller of Write does not know how much data the
> transport layer is ready to accept. That is the nature of non-blocking
> I/O. Write takes as much data it can and tells through Last where the
> caller must continue *later*...

Thanks for the correction. I'm used to doing that with an asynchronous pipe abstraction managed as an array of buffers. (In Ada it would be a protected object with Put, Get and Free operations.) Put fills a buffer descriptor--a pointer to the data and a size--rather than copying anything. Get is called from the disk manager, which may be one of a number of storage managers but in any case runs on different hardware. A single Get call can copy out all the pending descriptors, and a single Free call can release multiple buffers. Once you have this mechanism coded or included in your massively parallel application, tuning is needed to choose the number of buffers and how often the storage manager comes by to collect output. You can do this truly asynchronously, but pushing large amounts of data into the MPI fabric is considered bad form--pulling is much more efficient.
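
Something along these lines--just a rough sketch. Only the Put/Get/Free operations come from the description above; the package name, the Descriptor record, the fixed Pool_Size and the ring bookkeeping are made up for illustration.

with System;

package Buffer_Pipe is

   --  One descriptor per buffer: a pointer to the caller's data and a
   --  size, so Put copies nothing.
   type Descriptor is record
      Data : System.Address := System.Null_Address;
      Size : Natural := 0;
   end record;

   type Descriptor_Array is array (Positive range <>) of Descriptor;

   Pool_Size : constant := 16;  --  tuning knob: how many buffers in flight

   protected Ring is
      --  Producer side: claim the next free slot; blocks only when
      --  every slot is in use.
      entry Put (Data : System.Address; Size : Natural);
      --  Consumer side (the storage manager): copy out every descriptor
      --  filled so far in one call.
      procedure Get (Batch : out Descriptor_Array; Count : out Natural);
      --  Release several buffers at once after the consumer is done.
      procedure Free (How_Many : Natural);
   private
      Slots    : Descriptor_Array (1 .. Pool_Size);
      First    : Positive := 1;   --  oldest slot still reserved or filled
      Reserved : Natural  := 0;   --  handed to the consumer, not yet freed
      Filled   : Natural  := 0;   --  written by producers, not yet fetched
   end Ring;

end Buffer_Pipe;

package body Buffer_Pipe is

   protected body Ring is

      entry Put (Data : System.Address; Size : Natural)
        when Reserved + Filled < Pool_Size
      is
         Slot : constant Positive :=
           (First + Reserved + Filled - 1) mod Pool_Size + 1;
      begin
         Slots (Slot) := (Data => Data, Size => Size);
         Filled := Filled + 1;
      end Put;

      procedure Get (Batch : out Descriptor_Array; Count : out Natural) is
      begin
         Count := Natural'Min (Filled, Batch'Length);
         for I in 1 .. Count loop
            Batch (Batch'First + I - 1) :=
              Slots ((First + Reserved + I - 2) mod Pool_Size + 1);
         end loop;
         Reserved := Reserved + Count;  --  still owned until Free
         Filled   := Filled - Count;
      end Get;

      procedure Free (How_Many : Natural) is
      begin
         --  Caller promises How_Many <= Reserved.
         First    := (First + How_Many - 1) mod Pool_Size + 1;
         Reserved := Reserved - How_Many;
      end Free;

   end Ring;

end Buffer_Pipe;

Note that only Put can block, and only when all the buffers are in use; Get and Free never block, which fits the pull model: the storage manager comes by, drains whatever descriptors are there, and frees them in bulk when it has written them out.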