From mboxrd@z Thu Jan  1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.3 required=5.0 tests=BAYES_00,INVALID_MSGID autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,5642b5e0552e165e
X-Google-Attributes: gid103376,public
From: Ted Dennison
Subject: Re: Performance and Thread Safety of the Booch Components
Date: 2000/04/24
Message-ID: <39047E41.F28C2ECD@telepath.com>#1/1
X-Deja-AN: 615038975
Content-Transfer-Encoding: 7bit
References: <1e9flrw.13c8d0t190e9v6N%herwin@gmu.edu> <39006687.9341ACAA@telepath.com>
X-Accept-Language: en,pdf
Content-Type: text/plain; charset=us-ascii
X-Complaints-To: Abuse Role , We Care
X-Trace: monger.newsread.com 956595476 216.14.8.12 (Mon, 24 Apr 2000 12:57:56 EDT)
Organization: Telepath Systems (telepath.com)
MIME-Version: 1.0
NNTP-Posting-Date: Mon, 24 Apr 2000 12:57:56 EDT
Newsgroups: comp.lang.ada
Date: 2000-04-24T00:00:00+00:00
List-Id:

Simon Wright wrote:
> Ted Dennison writes:
>
> > subprogram-level performance analyzer. It turned out that even the
> > *bounded* maps (which you'd figure wouldn't do dynamic allocation)
> > were dynamically allocating an iterator every lookup.
>
> Unfortunately, "new" on VxWorks is a lot slower. Most VxWorks users, I
> expect, allocate memory at startup and deprecate "new" at other times.

The problem isn't just the speed (although in this case that was the main
problem). There's also the unpredictability factor. Certain parts of a
simulation rely on operations occurring at the same time every iteration;
otherwise things can start to jitter or go unstable. If you throw VxWorks
heap allocations and deallocations in front of one of these calculations,
what happens to the timing? I could ask WindRiver to be sure, but since
they weren't bragging to us about their great real-time allocation scheme,
I can be pretty sure that it's a typical one.
That means there's going to be some heap-coalescing algorithm in allocation
or deallocation that will take a varying amount of time depending on the
configuration the heap happens to be in at that particular moment. Now, our
scheduler does have a certain amount of protection for the most
hard-real-time of its tasks (e.g., the interface with the visual), but it's
still best to be safe. That is the main reason real-time programmers try to
avoid runtime heap usage as a general rule, and that is why we want to use
the bounded components, despite the waste of memory and its attendant cache
slowdowns.

So speed alone isn't the issue. I'd much prefer to see *no* dynamic
allocation when I'm dealing with a bounded structure. If there is some, I'd
like to see it identified in the spec for the offending routine, so
allocation-phobic users know to avoid it after initialization.

> > I fixed that, then found that it was still quite slow due to the fact
> > that the (20 or so byte) result of the lookup was getting passed as a
> > "return" value through 3 successive functions' return calls. That
> > resulted in each object in the map getting copied 4 times at 60Hz.
> > You'd think that there'd be some way for the compiler to optimize that
> > down to one copy at the assignment on the outside, but I couldn't find
> > it. So I just wrote a procedure version, and that did the trick.
>
> Perhaps "pragma Inline" might have helped. At the expense of bloat,
> perhaps.

That's what I thought too. No dice, though. :-(

--
T.E.D.

Home - mailto:dennison@telepath.com
Work - mailto:dennison@ssd.fsi.com
WWW  - http://www.telepath.com/dennison/Ted/TED.html
ICQ  - 10545591