From: "Dmitry A. Kazakov"
Reply-To: mailbox@dmitry-kazakov.de
Organization: cbb software GmbH
Newsgroups: comp.lang.ada
Subject: Re: File output and buffering
Date: Thu, 21 Aug 2008 11:24:01 +0200

On Thu, 21 Aug 2008 00:10:52 -0700 (PDT), Maciej Sobczak wrote:

> On 20 Sie, 17:39, "Dmitry A. Kazakov" wrote:
>
>> Buffering is used to make I/O in an asynchronous and/or conveyered way.
>
> No, it is not asynchronous. Nothing happens in the background, the
> operations are only grouped. The group is (usually) transmitted in the
> synchronous fashion.
>
> I do not know what is "conveyered".

Pipelined processing. When you refer to throughput, it is increased only
because of hidden conveyors (pipelines), which ultimately always boil down
to some asynchronously working elements.

>> That does not make I/O faster in terms of latencies.
>
> It does make it faster in terms of throughput.
>
> Note: I do not imply that throughput is more valuable for optimization
> than latency - these can be different goals and usually are.
>
>> Any language buffer on top of numerous layered buffers, typical for an
>> OS, adds nothing, but overhead.
>
> It can reduce the overhead that is associated with the number of
> requests. System calls are not free and there is also a significant
> latency of the medium that is better to be avoided (like network
> roundtrips or disk seek times).

Well, here we need to clarify what the I/O end point is. When you say
"system call", that presumes the end point is the driver. Let us fix it
there. Now, the next question is where the coalescing/pipelining is to
happen. See where this goes? Is the driver's interface a stream of units,
or does it also accept blocks of units?

Case A. There is no back door to the driver; you have only a stream. What
can buffering add? Nothing but overhead.

Case B. There is a back door for pushing bigger chunks of units. Then use
it in your application, and it will go *faster* than any buffered
interface on top of the same thing!

Note also that A and B usually belong to different protocol layers. It is
common to put a stream layer on top of something block-oriented beneath,
and the reverse. That stream layer is buffering, and necessarily an
overhead.

Buffering is always overhead.
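To make case B concrete, here is a minimal sketch (the file name and chunk
size are arbitrary, and whether each Write becomes exactly one request to
the driver is of course implementation-defined): the application hands the
block interface one big chunk at a time, instead of feeding single units
into a library buffer that would have to do the same thing underneath,
plus the copying.

   with Ada.Streams;           use Ada.Streams;
   with Ada.Streams.Stream_IO; use Ada.Streams.Stream_IO;

   procedure Block_Write is
      File  : File_Type;
      --  One big chunk of units; a real program would fill it with
      --  payload, here it is just zeroed storage
      Chunk : Stream_Element_Array (1 .. 64 * 1024) := (others => 0);
   begin
      Create (File, Out_File, "test.dat");
      for I in 1 .. 100 loop
         --  Case B: push the whole block at once, no intermediate
         --  application-level buffer, no per-unit calls
         Write (File, Chunk);
      end loop;
      Close (File);
   end Block_Write;

A buffered text layer on top of the same file would accumulate the very
same bytes, unit by unit, into its own buffer first, only to issue
comparable block writes in the end.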
We buy buffering only because the alternative is inaccessible, like doing
DMA transfers directly from the application. But a language library is in
the *same* position as the application, so buffering there gains nothing
*from* a performance perspective.

Ada.Text_IO is slow because of the buffering it does in order to implement
a protocol (pages) which you do not need. A classic abstraction inversion
case.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de