From: tmoran@acm.org
Newsgroups: comp.lang.ada
Subject: Re: task-safe hash table?
Date: Sat, 03 Jun 2006 01:50:49 -0500

>Functions on protected objects could potentially run in parallel on a
>multi-core CPU.  Does anybody know if GNAT does this?
>...
>What about a window in which each task makes, say, n searches.  n is
>the window size.  All misses are postponed till the window end.  At
>that point

I tried a simple approach: Lookup is unsynchronized, Insert is
Protected - that is, any number of tasks can search concurrently, but
insertions are serialized through a protected object.  I ran the test
data using GNAT 3.15p on a dual-core Pentium, watching CPU usage in
the Task Manager.  With a small number of large buckets in the hash -
so a lookup scans a rather long list and takes a long time - CPU usage
is around 75% (of the total two CPUs), so there is genuine overlap.
With a more reasonable hash of a large number of small buckets, the
Protected overhead is larger relative to the "useful work", and CPU
usage drops to more like 40%.  Since a single busy CPU would show as
50%, I presume the missing 10% is tallied as System time spent task
switching rather than as app time.  Since by far the best speed comes
from a large number of small buckets, this approach to doing the job
concurrently is a losing proposition.  Perhaps a windowing or batching
approach such as you suggest would distribute the Protected overhead
over more useful work and make the concurrent approach faster.  I'll
look into that this weekend.
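For concreteness, here is a minimal sketch of that scheme.  The names
(Hash_Tables, Guard, Bucket_Count and so on) are invented for
illustration - this is not my actual test program - and it assumes
Insert prepends with a single store to the bucket head, so a racing
unsynchronized Lookup sees either the old head or the new one:

--  Sketch only: Insert is serialized through a protected object;
--  Lookup reads the buckets with no locking at all.
package Hash_Tables is
   procedure Insert (Key : Natural; Value : Integer);
   procedure Lookup (Key   : Natural;
                     Value : out Integer;
                     Found : out Boolean);
end Hash_Tables;

package body Hash_Tables is

   type Node;
   type Node_Access is access Node;
   type Node is record
      Key   : Natural;
      Value : Integer;
      Next  : Node_Access;
   end record;

   Bucket_Count : constant := 1024;
   type Bucket_Array is array (0 .. Bucket_Count - 1) of Node_Access;
   --  Ask for atomic head pointers so a reader never sees a
   --  half-written access value (assumes the target supports it).
   pragma Atomic_Components (Bucket_Array);

   Buckets : Bucket_Array := (others => null);

   protected Guard is
      procedure Insert (Key : Natural; Value : Integer);
   end Guard;

   protected body Guard is
      procedure Insert (Key : Natural; Value : Integer) is
         H : constant Natural := Key mod Bucket_Count;
      begin
         --  Prepend: the only write a reader can race with is this
         --  single assignment to the bucket head.
         Buckets (H) := new Node'(Key, Value, Buckets (H));
      end Insert;
   end Guard;

   procedure Insert (Key : Natural; Value : Integer) is
   begin
      Guard.Insert (Key, Value);
   end Insert;

   --  Unprotected: any number of tasks may run this in parallel,
   --  and in this scheme also in parallel with Insert.
   procedure Lookup (Key   : Natural;
                     Value : out Integer;
                     Found : out Boolean) is
      P : Node_Access := Buckets (Key mod Bucket_Count);
   begin
      while P /= null loop
         if P.Key = Key then
            Value := P.Value;
            Found := True;
            return;
         end if;
         P := P.Next;
      end loop;
      Value := 0;
      Found := False;
   end Lookup;

end Hash_Tables;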
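And a rough sketch of the windowing idea as I understand it, using the
same package body as above (Window_Size, Batch_Guard and the rest are
again invented names): each task buffers the keys that missed during a
window of lookups, then hands the whole batch to one protected call,
so the protected-action overhead is paid once per window instead of
once per miss.

   Window_Size : constant := 100;
   subtype Batch_Count is Natural range 0 .. Window_Size;
   type Key_Array   is array (1 .. Window_Size) of Natural;
   type Value_Array is array (1 .. Window_Size) of Integer;

   protected Batch_Guard is
      procedure Insert_Batch (Keys   : Key_Array;
                              Values : Value_Array;
                              Count  : Batch_Count);
   end Batch_Guard;

   protected body Batch_Guard is
      procedure Insert_Batch (Keys   : Key_Array;
                              Values : Value_Array;
                              Count  : Batch_Count) is
         H : Natural;
      begin
         --  One protected action covers a whole window's misses.
         for I in 1 .. Count loop
            H := Keys (I) mod Bucket_Count;
            Buckets (H) := new Node'(Keys (I), Values (I), Buckets (H));
         end loop;
      end Insert_Batch;
   end Batch_Guard;

Each task would keep its own Key_Array/Value_Array of misses, append
to it on each failed Lookup, and call Batch_Guard.Insert_Batch once at
the end of each window.  Whether one protected call per hundred misses
buys back enough of the overhead is what I want to measure.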