* Re: Wanted: Performance Analysis Tools on PCs Info
1991-05-16 20:32 Wanted: Performance Analysis Tools on PCs Info Del Gordon
@ 1991-05-23  6:33 ` Bob Kitzberger
1991-05-23 19:46 ` Steve Vestal
1991-05-24 18:21 ` John Goodenough
1 sibling, 1 reply; 4+ messages in thread
From: Bob Kitzberger @ 1991-05-23 6:33 UTC (permalink / raw)
[This is something that folks in comp.realtime can probably help out on,
so I'm cross-posting there]
Del Gordon writes:
> project where Ada and PCs are required, and run-time performance
> analysis is required to show that a given program does not use more
> than a given percentage of the CPU or memory at any time.
"at any time" requires clarification. For any given instruction cycle, your
program's CPU use will be either 0% (running in a slack task) or 100% (running in
application code). Of course, what is probably intended is "a given program
does not use more than a given percentage of the CPU over any 1 second span"
or somesuch. This introduces a complexity -- where to measure the beginning
and ending of the 1 second span? If you measure at every integral second
(i.e. 0 seconds, 1 sec, 2 secs, etc) then you may miss an overload during
the 0.5 seconds through 1.5 seconds interval, for example. Getting clear on
this type of issue is essential before you start measuring things.
If you have cyclic tasks, then a reasonable place to measure unspent
time is at the period boundaries of the lowest frequency task.
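A toy calculation makes the window-placement pitfall concrete (Python purely for illustration; the 0.1 second slots and the load pattern are invented, not from the original post):

```python
# Busy time recorded in 0.1 s slots over 3 seconds: the load is 100%
# from t = 0.5 s to t = 1.5 s and 0% elsewhere.
slot = 0.1
busy = [1.0 if 0.5 <= i * slot < 1.5 else 0.0 for i in range(30)]

def window_load(start_slot, nslots):
    """Fraction of CPU used over nslots consecutive slots."""
    return sum(busy[start_slot:start_slot + nslots]) / nslots

# Measuring only at integral seconds (0-1 s, 1-2 s, 2-3 s):
fixed = [window_load(s, 10) for s in (0, 10, 20)]

# Sliding the 1 s window one slot at a time catches the overload:
sliding = max(window_load(s, 10) for s in range(21))
```

Each fixed window sees at most 50% load, while the sliding window peaks at 100% over the 0.5 s to 1.5 s span, exactly the overload the integral-second measurement misses.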
> a call to that vendor indicates that they
> have no performance analysis tools available for their compiler. I
> know other vendors have Ada performance analysis tools for other
> platforms such as Sun. However, this project requires PCs.
Vendor-provided performance analysis tools tend to be profilers, which are
based on periodic interrupts of the application. This may or may not
be appropriate for a given system's measurement requirements... problems
with profiling include poor measurement resolution, non-negligible
overhead, and the likelihood of missing worst-case situations.
For system tuning, a profiler is a great tool to help out in finding 'hot
spots'. For verification of system timing correctness, a profiler is
just about useless.
> Finally, if all else fails and we have to "grow our own," does
> anybody have any experience with performance analysis (or code ;)
> they'd like to share?
To find the amount of unused CPU time on a system, I've used the following:
Implement a background task, with priority lower than all other tasks in the
system. This task will 'eat up' any excess CPU time. The background task
should do something repetitious and predictable, like incrementing several
global memory locations in a tight loop. We'll call this the slack task.
Your next highest priority task should probably be the lowest frequency
task (if you are following Rate Monotonic Scheduling). At each period
boundary, it can check the value of the global variables being updated
by the slack task. If you know how quickly the slack task increases
these global counters when the system load is nil (nominal case), then you
can calculate the amount of time spent in the slack task over the
measurement period. 100% minus the time spent in the slack task is your
system load. Determination of the maximum rate at which the slack task can
update global counters is pretty straightforward, and only needs to be done
once.
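The slack-task arithmetic above can be sketched as follows (Python for illustration only; the calibration constant and function name are hypothetical):

```python
# Calibration, done once on an otherwise idle system: how many
# increments the slack task manages per measurement period.
IDLE_INCREMENTS_PER_PERIOD = 1_000_000   # assumed calibration result

def cpu_load(observed_increments):
    """Percent CPU consumed by application tasks over one period.
    The slack task only runs when nothing else is ready, so any
    shortfall in its counter is time spent in application code."""
    slack_fraction = observed_increments / IDLE_INCREMENTS_PER_PERIOD
    return 100.0 * (1.0 - slack_fraction)

# At a period boundary, the lowest-frequency task reads the counter:
cpu_load(250_000)   # slack ran 25% of the period -> 75% load
```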
This method is slightly intrusive. Non-intrusive means generally require
a logic analyzer with _deep_ measurement buffers, and the ability to do other
than simple statistical sampling.
As far as measuring the maximum memory usage of an application, some
compiler vendors provide high-water marks for heap usage... it is really
easy to implement from a vendor's perspective, and can be done with
very little runtime overhead. Stack usage measurement, on the other hand,
is expensive to implement at runtime, since each stack growth must be
burdened with code to conditionally set the high-water mark.
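A minimal sketch of the heap high-water-mark bookkeeping a vendor runtime could add to its allocator (Python for illustration; the class and byte counts are invented):

```python
class TrackingHeap:
    """Allocator wrapper that maintains a high-water mark cheaply:
    one compare and one conditional store per allocation."""
    def __init__(self):
        self.in_use = 0
        self.high_water = 0

    def allocate(self, nbytes):
        self.in_use += nbytes
        if self.in_use > self.high_water:
            self.high_water = self.in_use
        return nbytes   # stand-in for a real pointer

    def free(self, nbytes):
        self.in_use -= nbytes

heap = TrackingHeap()
heap.allocate(100)
heap.allocate(200)
heap.free(100)
heap.allocate(50)
# heap.high_water is 300: the peak came after the first two allocations
```

Note that `free` never lowers the high-water mark, which is why the scheme is so cheap; the analogous check on every stack growth is what makes stack measurement expensive.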
Hope this helps,
.Bob.
--
Bob Kitzberger Internet : rlk@telesoft.com
TeleSoft uucp : ...!ucsd.ucsd.edu!telesoft!rlk
5959 Cornerstone Court West, San Diego, CA 92121-9891 (619) 457-2700 x163
------------------------------------------------------------------------------
"Wretches, utter wretches, keep your hands from beans!" -- Empedocles
* Re: Wanted: Performance Analysis Tools on PCs Info
1991-05-16 20:32 Wanted: Performance Analysis Tools on PCs Info Del Gordon
1991-05-23  6:33 ` Bob Kitzberger
@ 1991-05-24 18:21 ` John Goodenough
1 sibling, 0 replies; 4+ messages in thread
From: John Goodenough @ 1991-05-24 18:21 UTC (permalink / raw)
In article "Wanted: Performance Analysis Tools on PCs Info" of 16 May 91
20:32:21 GMT, gordon@Stars.Reston.Unisys.COM (Del Gordon) writes:
> There's a
> project where Ada and PCs are required, and run-time performance
> analysis is required to show that a given program does not use more
> than a given percentage of the CPU or memory at any time.
Of course, if this is what your specification says you have to do, you have to
do it, but people should be aware that a measurement of overall CPU loading is
a very rough and sometimes quite misleading measure of how much extra capacity
is truly available in a system. One of our case studies of the application of
rate monotonic analysis is of a system that was idle 46% of the time and was
not meeting its deadlines. Rate monotonic analysis helped to show how to
restructure the system (by modifying a few hundred lines of application-level
code) so that all deadlines were met even though the system was still idle 46%
of the time. (In essence, the restructuring of the code ensured that the
highest rate activity was not blocked very long by lower rate activities and
the rate monotonic analysis showed which blocking times needed to be reduced
and by how much.)
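A simplified form of that analysis can be sketched with the rate monotonic utilization bound plus a blocking term (Python for illustration; the case study itself presumably used a more exact analysis, and the task parameters below are hypothetical, chosen only to echo the 46%-idle flavor):

```python
def rm_bound(n):
    """Liu & Layland worst-case schedulable utilization for n tasks."""
    return n * (2 ** (1 / n) - 1)

def schedulable(tasks):
    """tasks: (C, T, B) = worst-case exec time, period, worst-case blocking.
    Each task i is checked against the bound for the i highest-rate tasks."""
    tasks = sorted(tasks, key=lambda t: t[1])        # shortest period first
    for i in range(len(tasks)):
        util = sum(c / t for c, t, _ in tasks[:i + 1])
        util += tasks[i][2] / tasks[i][1]            # blocking term B_i/T_i
        if util > rm_bound(i + 1):
            return False
    return True

# 54% utilized (46% idle) yet unschedulable: the 10 ms-period task can
# be blocked 9 ms by lower-rate activity, so it cannot make its deadline.
schedulable([(2, 10, 9), (17, 50, 0)])   # -> False
# Cutting that blocking to 1 ms makes the same task set schedulable:
schedulable([(2, 10, 1), (17, 50, 0)])   # -> True
```

The point of the sketch is the one made above: total idle time is unchanged, and only the blocking term moves the system from failing to passing.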
A write-up of this case study might be available in a few months. In the
meantime, to get a feeling for why gross measures of system load might be
misleading and to see some examples of better measures of available capacity
in a system, you might look at a paper by Steve Vestal in the Tri-Ada '90
Proceedings, although this won't help you to make the measurements your
project requires now.
John B. Goodenough Goodenough@sei.cmu.edu
Software Engineering Institute 412-268-6391