comp.lang.ada
From: Ted Dennison <dennison@telepath.com>
Subject: Re: Optimization Question
Date: Mon, 22 Jan 2001 15:24:53 GMT
Message-ID: <94hjbp$ks6$1@nnrp1.deja.com>
In-Reply-To: 94g431$ge3$1@nnrp1.deja.com

In article <94g431$ge3$1@nnrp1.deja.com>,
  Robert Dewar <robert_dewar@my-deja.com> wrote:
> In article <94ftfu$b59$1@nnrp1.deja.com>,
>   dvdeug@my-deja.com wrote:
>
> <<questions about speeding up code snipped>>
>
> I am *amazed* that this is only ten times slower when the
> I/O is done in such a perfectly gruesome manner (sequential
> I/O instantiated on bytes).
>
> It is elementary that you want to read big chunks of a file
> at a time. What GNAT does is to read the entire source of

As a point of reference, I'm in the process of writing a little app on
Windows NT to split files for distributing large files on floppies. My
first iteration took the dumb approach and used Direct_IO instantiated
on bytes, copying each byte from one disk to the other. Filling a whole
floppy in this manner took me about 4.5 minutes. However, I noticed
that copying the same file to the floppy using Windows Explorer takes
about 45 seconds.
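A minimal sketch of that first byte-at-a-time version (the procedure
name and the choice of Interfaces.Unsigned_8 as the element type are
illustrative, not the actual code from the app):

   with Ada.Direct_IO;
   with Interfaces;

   procedure Copy_By_Bytes (Source_Name, Dest_Name : String) is
      package Byte_IO is new Ada.Direct_IO (Interfaces.Unsigned_8);
      use Byte_IO;
      Source, Dest : File_Type;
      B            : Interfaces.Unsigned_8;
   begin
      Open   (Source, In_File,  Source_Name);
      Create (Dest,   Out_File, Dest_Name);
      while not End_Of_File (Source) loop
         Read  (Source, B);   --  one element (one byte) per call
         Write (Dest, B);
      end loop;
      Close (Source);
      Close (Dest);
   end Copy_By_Bytes;

Every Read and Write call here goes through the runtime for a single
byte, which is where all the time goes.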

For my next trick, I changed to using Ada.Streams.Stream_IO. First I
tried to copy the whole disk's worth of data into a buffer using 'Read,
then copy it to the floppy using 'Write. It still took 4.5 minutes.
That's not surprising, since 'Read on an array is defined as individual
'Reads for each element.
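That second attempt looked roughly like this sketch (buffer size and
names are illustrative); the key point is that 'Read on an array type
means a 'Read of each element in turn, so it's still element-at-a-time
I/O under the hood:

   with Ada.Streams.Stream_IO;
   with Interfaces;

   procedure Copy_Via_Attributes (Source_Name, Dest_Name : String) is
      use Ada.Streams.Stream_IO;
      --  A real program would probably heap-allocate a buffer this big.
      type Byte_Buffer is
         array (1 .. 1_440_000) of Interfaces.Unsigned_8;
      Source, Dest : File_Type;
      Buffer       : Byte_Buffer;
   begin
      Open   (Source, In_File,  Source_Name);
      Create (Dest,   Out_File, Dest_Name);
      Byte_Buffer'Read  (Stream (Source), Buffer);  --  element by element
      Byte_Buffer'Write (Stream (Dest),   Buffer);  --  element by element
      Close (Source);
      Close (Dest);
   end Copy_Via_Attributes;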

So next I changed the code to instead call Ada.Streams.Read and
Ada.Streams.Write directly for the entire amount of data that is going
on the disk (one operation each per disk). When I compiled and ran this,
a disk's worth of data copied in....(drumroll please)...45 seconds, just
like Windows Explorer.
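The fast version boils down to something like this sketch (the buffer
size is an illustrative stand-in for one floppy's worth of data). The
calls dispatch to the stream's own Read and Write, which can move the
whole array in one underlying operation:

   with Ada.Streams;            use Ada.Streams;
   with Ada.Streams.Stream_IO;  use Ada.Streams.Stream_IO;

   procedure Copy_In_One_Chunk (Source_Name, Dest_Name : String) is
      Buffer : Stream_Element_Array (1 .. 1_440_000);
      Last   : Stream_Element_Offset;
      Source, Dest : File_Type;
   begin
      Open   (Source, In_File,  Source_Name);
      Create (Dest,   Out_File, Dest_Name);
      Read  (Stream (Source).all, Buffer, Last);        --  one big read
      Write (Stream (Dest).all,   Buffer (1 .. Last));  --  one big write
      Close (Source);
      Close (Dest);
   end Copy_In_One_Chunk;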

Of course 45 seconds is a bit long to wait with no feedback, so I
changed it to write the disk's worth of data in portions, and output a
'*' character to the screen between them. Unsurprisingly, the number of
chunks used has a serious impact on how long the operation takes, so
one has to strike a balance. I found a relatively happy medium at 10
chunks per floppy. That only increases the copy time by about 10
seconds.
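The chunked version with the progress indicator might look something
like this sketch (Chunks_Per_Disk and the buffer size are illustrative
values):

   with Ada.Streams;            use Ada.Streams;
   with Ada.Streams.Stream_IO;  use Ada.Streams.Stream_IO;
   with Ada.Text_IO;

   procedure Copy_With_Progress (Source_Name, Dest_Name : String) is
      Chunks_Per_Disk : constant := 10;
      Chunk_Size      : constant := 1_440_000 / Chunks_Per_Disk;

      Buffer : Stream_Element_Array (1 .. Chunk_Size);
      Last   : Stream_Element_Offset;
      Source, Dest : File_Type;
   begin
      Open   (Source, In_File,  Source_Name);
      Create (Dest,   Out_File, Dest_Name);
      while not End_Of_File (Source) loop
         Read  (Stream (Source).all, Buffer, Last);
         Write (Stream (Dest).all,   Buffer (1 .. Last));
         Ada.Text_IO.Put ('*');   --  feedback between chunks
      end loop;
      Ada.Text_IO.New_Line;
      Close (Source);
      Close (Dest);
   end Copy_With_Progress;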

Anyway, if you want to perform large file operations, it looks like
Stream_IO with Ada.Streams.Read and Ada.Streams.Write is the way to go.
The only other way I could think to do it would be to dynamically
instantiate Sequential_IO or Direct_IO with the data size you want to
use for each I/O operation. That would be a pain.

--
T.E.D.

http://www.telepath.com/~dennison/Ted/TED.html




