From: "SteveD"
Newsgroups: comp.lang.ada
Subject: Re: fastest data structure
Date: Thu, 12 Dec 2002 04:22:21 GMT
Message-ID: <1aUJ9.318688$NH2.22572@sccrnsc01>
Organization: AT&T Broadband

I am not surprised by your results. When more time is spent in "Processing",
a smaller percentage of the overall time is spent on loop overhead. And when
the size of the records is increased, the time it takes to copy those
records increases as well.

Also: if you're doing serious timing tests, it is a good idea to run enough
iterations that the overall times are significantly longer than the
resolution of the system clock. I usually shoot for overall times of at
least a few seconds. (A rough sketch of what I mean is at the end of this
post, after the quoted message.)

Since you seem to be focused on optimizing, I would like to quote from
Meilir Page-Jones, "The Practical Guide to Structured Systems Design" (an
old book, but many of the principles still hold). Jackson gives two rules
for determining when to optimize:

1. Don't do it.
2. Don't do it yet.

Optimize only the parts of a system worth optimizing. One of the old
systems proverbs, the 90-10 rule, says: in a typical application, 90
percent of the total run time is devoted to executing only 10 percent of
the code.

With the added corollary (I can't quote the source): it's easier to
optimize a working system than it is to make an optimized system work.

Steve (The Duck)

"Etienne Baudin" wrote in message news:at7k2o$9fb$1@news-reader10.wanadoo.fr...
> Thanks for the answers and the test program. I just modified My_Type to be
> a record like this:
>
>    type My_Type is record
>       Un     : Integer;
>       Deux   : Integer;
>       Trois  : Integer;
>       Quatre : Integer;
>       Cinq   : Integer;
>       Six    : Integer;
>    end record;
>
> and the processing:
>
>    procedure Processing (Data : My_Type) is
>       Value : Integer;
>       pragma Volatile (Value);
>    begin
>       Value := Data.Un;
>       Value := Data.Un;
>       Value := Data.Un;
>       Value := Data.Un;
>    end Processing;
>
> and I obtained some weird results:
>
> -- with all six components in My_Type
> Time for array based loop is 0.00151
> Time for linked list based loop is 0.00175
>
> -- with only 3 components in the record
> Time for array based loop is 0.00109
> Time for linked list based loop is 0.00112
>
> -- with only 1 component
> Time for array based loop is 0.00050
> Time for linked list based loop is 0.00047
>
> The processing is the same in all three tests (always reading "Un"), yet
> the running time still seems to depend on the record size...??
>
> Etienne Baudin
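P.S. Here is the rough sketch I mentioned above. It is only an illustration,
not the original test program: the procedure name Time_Array_Loop, the array
size, and the repetition count are all made up, and the My_Type and
Processing are just simplified versions of the ones in Etienne's post. The
idea is simply to repeat the whole loop enough times that the total elapsed
time is a few seconds, then divide by the repetition count.

with Ada.Calendar; use Ada.Calendar;
with Ada.Text_IO;  use Ada.Text_IO;

procedure Time_Array_Loop is

   type My_Type is record
      Un, Deux, Trois, Quatre, Cinq, Six : Integer;
   end record;

   procedure Processing (Data : My_Type) is
      Value : Integer;
      pragma Volatile (Value);  --  keep the read from being optimized away
   begin
      Value := Data.Un;
   end Processing;

   --  The array size and repetition count are arbitrary; make the
   --  repetition count large enough that the total run is a few seconds.
   Data        : array (1 .. 10_000) of My_Type :=
                   (others => (others => 0));
   Repetitions : constant := 10_000;

   Start, Stop : Time;
   Total       : Duration;

begin
   Start := Clock;
   for Rep in 1 .. Repetitions loop
      for I in Data'Range loop
         Processing (Data (I));
      end loop;
   end loop;
   Stop  := Clock;
   Total := Stop - Start;

   --  Report the time for one pass over the array, not the total.
   Put_Line ("Time for array based loop is"
             & Duration'Image (Total / Repetitions));
end Time_Array_Loop;

Crank Repetitions up until Total is well past the clock's tick (often on the
order of 10 ms on desktop systems of this vintage); otherwise the per-pass
numbers are mostly noise.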