comp.lang.ada
From: Ted Dennison <dennison@telepath.com>
Subject: Re: Performance and Thread Safety of the Booch Components
Date: 2000/04/24
Date: 2000-04-24T00:00:00+00:00	[thread overview]
Message-ID: <39047E41.F28C2ECD@telepath.com> (raw)
In-Reply-To: x7v8zy6vbxz.fsf@pogner.demon.co.uk

Simon Wright wrote:

> Ted Dennison <dennison@telepath.com> writes:
>
> > subprogram-level performance analyzer. It turned out that even the *bounded*
> > maps (which you'd figure wouldn't do dynamic allocation) were dynamically
> > allocating an iterator every lookup.
>
> Unfortunately, "new" on VxWorks is a lot slower. Most VxWorks users, I
> expect, allocate memory at startup and deprecate "new" at other times.

The problem isn't just the speed (although in this case that was the main problem).
There's also the unpredictability factor. Certain parts of a simulation rely on
operations occurring at the same time every iteration; otherwise things can start to
jitter or go unstable. If you throw VxWorks heap allocations and deallocations in
front of one of these calculations, what happens to the timing? I could ask Wind River
to be sure, but since they weren't bragging to us about a great real-time allocation
scheme, I can be pretty sure it's a typical one. That means there's going to be some
heap-coalescing algorithm in allocation or deallocation that takes a varying amount of
time depending on the configuration the heap happens to be in at that particular
moment.

Now our scheduler does have a certain amount of protection for the most hard-real-time
of its tasks (e.g. the interface with the visual). But it's still best to be safe. That
is the main reason real-time programmers avoid runtime heap usage as a general rule,
and it is why we want to use the bounded components, despite the waste of memory and
the attendant cache slowdowns.
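The preallocate-at-startup pattern Simon describes can be sketched like this (a
hypothetical example, not code from our simulation):

```ada
--  Hypothetical sketch: all allocation happens once, at elaboration
--  time, so no call to "new" (and thus no heap coalescing of
--  unpredictable duration) ever occurs inside the 60Hz frame loop.
package Frame_Buffers is
   type State_Vector is array (1 .. 20) of Float;
   type State_Access is access State_Vector;

   --  Allocated at elaboration.  After startup, tasks only read and
   --  write through these accesses; they never allocate.
   Current_Frame  : constant State_Access := new State_Vector'(others => 0.0);
   Previous_Frame : constant State_Access := new State_Vector'(others => 0.0);
end Frame_Buffers;
```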

So speed alone isn't the issue. I'd much prefer to see *no* dynamic allocation when
I'm dealing with a bounded structure. If there is some, I'd like to see it identified
in the specs for the offending routine so allocation-phobic users know to avoid it
after initialization.
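The sort of spec annotation I have in mind would look something like this (a made-up
spec, not actual Booch Components code):

```ada
generic
   type Key_Type is private;
   type Item_Type is private;
   Maximum_Size : Positive;
package Bounded_Maps is
   type Map is limited private;

   --  WARNING: heap-allocates an internal iterator on every call.
   --  Allocation-phobic users should avoid this after initialization.
   function Item_Of (M : Map; K : Key_Type) return Item_Type;

   --  Performs no dynamic allocation; safe in hard-real-time loops.
   procedure Find (M     : Map;
                   K     : Key_Type;
                   Item  : out Item_Type;
                   Found : out Boolean);

private
   type Map is limited null record;  --  details elided for the sketch
end Bounded_Maps;
```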

> >                                      I fixed that, then found that it was
> > still quite slow due to the fact that the (20 or so byte) result of the
> > lookup was getting passed as a "return" value through three successive
> > functions' return calls. That resulted in each object in the map getting
> > copied four times at 60Hz. You'd think that there'd be some way for the
> > compiler to optimize that down to one copy at the assignment on the outside,
> > but I couldn't find it. So I just wrote a procedure version and that did the
> > trick.
>
> Perhaps "pragma Inline" might have helped. At the expense of bloat,
> perhaps.

That's what I thought too. No dice, though. :-(
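For the record, the workaround looked roughly like this (simplified; the names are
hypothetical and the real record is around 20 bytes):

```ada
type Nav_Data is record          --  stand-in for the ~20-byte payload
   Lat, Lon, Alt : Float;
   Frame_Count   : Integer;
end record;

--  Function form: the Nav_Data result got copied at each of the three
--  nested "return"s, then once more at the final assignment.
function Item_Of (M : Map; K : Key_Type) return Nav_Data;

--  Procedure form: the caller supplies its own object, so the compiler
--  can fill it in place rather than copying it up the call chain.
procedure Get_Item (M : Map; K : Key_Type; Item : out Nav_Data);
```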

--
T.E.D.

Home - mailto:dennison@telepath.com  Work - mailto:dennison@ssd.fsi.com
WWW  - http://www.telepath.com/dennison/Ted/TED.html  ICQ  - 10545591


Thread overview: 4+ messages
2000-04-21  0:00 Performance and Thread Safety of the Booch Components Harry Erwin
2000-04-21  0:00 ` Ted Dennison
2000-04-22  0:00   ` Simon Wright
2000-04-24  0:00     ` Ted Dennison [this message]