comp.lang.ada
* Re: Ada Embedded Systems Efficiencies
From: Bob Kitzberger @ 1993-02-25 19:58 UTC


wisniews@saifr00.cfsat.honeywell.com (Joe Wisniewski) writes:

>I am concerned that "good design" macro-efficiencies are going to be "un-done"
>due to perceived net overall gain of over-implementing micro-efficiencies.

Personally, I'd avoid altering the structure of the application in order
to take advantage of small micro-efficiencies.  There are certain
micro-efficiencies that can reduce memory usage, while not altering the
structure of the application at all.  

For example, an understanding of how the compiler lays out memory helps in
identifying and avoiding hot spots, and it's surprising how often these
things are ignored, especially by developers new to the world of
embedded systems (no flames intended; this is merely an observation that
memory usage and determinism are rarely as critical in host-based systems
as they are on embedded systems).

For example, if a developer declares a large array of records, they may
not be aware that there may be a lot of 'white space' in their record
layout (inserted by the compiler for efficiency reasons), and a pragma pack
is often in order.  This is a micro-efficiency, but requires no structural
changes to the application.  
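For instance (a hypothetical sketch; the record and its fields are invented
here), packing the record type before declaring a big array of it can
recover that white space:

    type Sample is record
       Valid : Boolean;
       Flag  : Boolean;
       Value : Integer;
    end record;
    pragma Pack (Sample);   -- squeeze out alignment padding between components

    History : array (1 .. 10_000) of Sample;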

Also, a developer may not be aware of the impact of tasks on memory usage.
Specifically, each task usually has some fairly large default stack size.
In order to specify an alternate stack size, the task must be declared
using a task type (since the storage size clause can only be specified
on a task type, not on a task object).  Using the default stack size on
every task can add up quickly.  Again, this is a micro-efficiency, but
it requires no structural changes in order to fix it.  (Add another
thing to the Ada Rules of Thumb: always use task types, even if declaring
a single task of that type).
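Something along these lines (the names and the size are made up; pick the
number from your own stack usage analysis):

    task type Poller;
    for Poller'Storage_Size use 2_048;   -- override the large default stack

    Device_Poller : Poller;              -- still just one task object

    task body Poller is
    begin
       null;   -- periodic polling would go here
    end Poller;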

We are currently undergoing a size reduction effort on a large tightly-
coupled multiprocessor system.  The first step we are taking is to develop
tools to effectively report the actual memory usage of the application.
This includes:

	1. Compiler listings that show CODE, DATA, CONST, BSS usage

	2. Runtime hooks into memory allocation routines to get a handle
	   on dynamic memory usage

	3. Task stack usage analysis (allows tuning of stack sizes)

	4. Reporting of memory usage for each build
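As a rough illustration of item 2, allocations can be funneled through an
instrumented allocator instead of raw allocators scattered through the code
(the generic below is purely hypothetical, not our actual tool):

    generic
       type Item is private;
       type Item_Access is access Item;
    package Counted_Allocation is
       Total_Bytes : Natural := 0;            -- running tally for this type
       function Allocate return Item_Access;
    end Counted_Allocation;

    package body Counted_Allocation is
       function Allocate return Item_Access is
       begin
          Total_Bytes := Total_Bytes + Item'Size / 8;
          return new Item;
       end Allocate;
    end Counted_Allocation;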

Each developer gets the memory usage report, along with a list of deltas
in memory usage from the previous build.  It is felt that individual developers
can effectively take the first pass at reducing memory usage in the areas they
are most familiar with.  A second pass would look at things from a systems
point of view.  

	.Bob.
----------------
Bob Kitzberger          VisiCom Laboratories, Inc.
rlk@visicom.com         10052 Mesa Ridge Court, San Diego CA 92121 USA
                        +1 619 457 2111    FAX +1 619 457 0888


* Re: Ada Embedded Systems Efficiencies
From: enterpoop.mit.edu!spool.mu.edu!howland.reston.ans.net!paladin.american.ed @ 1993-02-25 20:36 UTC


In article <1993Feb24.212146.13157@saifr00.cfsat.honeywell.com> wisniews@saifr00.cfsat.honeywell.com (Joe Wisniewski) writes:
>... I
>am involved in a very large Ada project where there is currently much concern about whether
>or not we are going to 'fit in the box'. As a result as you can probably imagine, there
>is an increasing amount of attention being paid to addressing "micro-efficiencies". This
>attention takes on various forms:
>	1. Optimizing the compiler (back-end developed in-house)
>	2. Addressing issues that the compiler hasn't optimized well or can't optimize
>	   well
>		i.e. getting rid of case statements
>		     limiting the number of parameters that are passed
>		       in procedure calls
>		     pushing for the use of package spec data vs. package body data
>			accessed by procedures and functions for state data usage
>		     in-lining during design
>		     etc.
> 
>	Note: Very little prototyping was done and as a result effective early
>	      timing studies were not available or used.
>
>I am concerned that "good design" macro-efficiencies are going to be "un-done"
>due to perceived net overall gain of over-implementing micro-efficiencies.

I think you should be even more concerned that some of these "micro-efficiencies"
make no contribution to efficiency at all.  Some of them sound like
rank superstition. In particular:
(a) What advantage is obtained by getting rid of CASE statements?  The
    answer to this question should take into account how the compiler
    generates code for a case statement, and how the same compiler 
    would generate code for the alternative.
(b) What advantage is gained by moving "state data" from a package body
    to a package spec?  This affects only the visibility of those data,
    and does nothing to the way code is generated to use them.
(c) "Inlining during design" sounds like an oxymoron.  
>
>In short, many Ada object-oriented concepts are being 'questioned'; i.e. data
>encapsulation, private types usage, call-through procedures; in groups that 
>used them. 
(d) In general, when I hear that "X has been questioned," I want to
    know if the questioning was reasonable and responsible.  It isn't 
    always.  The value of pi has often been questioned.  See 
    _Mathematical Cranks_ by Underwood Dudley.
(e) What advantage is gained by not using private types?  Again, the 
    answer should be related to the way that the compiler generates
    code for operations on a private type.  I would conjecture that
    there is no difference in code generation.
(f) I don't really know what a "call-through procedure" is; if it 
    is a procedure that does nothing more than call another 
    procedure, then it can be expanded in-line.

I think that the motive force behind the objections that Mr. Wisniewski 
raises is not "fitting into the box" but upsetting the habits
of programmers.  It is regrettable that the programmers feel they
must disguise their concerns in this way.  

I see two issues raised in Mr. Wisniewski's article that may
affect efficiency in a substantial way.  One is a really messy
case statement.  Suppose that one of 5 things is to be done,
depending on the value of a variable J, range 0 .. 255; and
the relation between J and what's to be done is not simple.
It may be faster to look up the number of the correct alterna-
tive in a table and act depending on the value in that table.
If the choice depends on the values of two variables, then
constructing that table may be much better than trying
to organize an efficient scheme of cases and sub-cases.
I know of a compiler which does convert a case statement into
a table look-up if it determines that this is more efficient.
I have never heard of a compiler that would do this for a
complex structure of nested case statements.
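By hand, the idea looks roughly like this (the mapping and the result
values are invented for the sake of the example):

    type Alternative is range 1 .. 5;

    -- Hypothetical mapping from J in 0 .. 255 to one of five alternatives.
    Dispatch : constant array (Natural range 0 .. 255) of Alternative :=
      (0 .. 31    => 1,  32 .. 63   => 2,  64 .. 127  => 3,
       128 .. 191 => 4,  192 .. 255 => 5);

    procedure Act (J : in Natural; Result : out Integer) is
    begin
       case Dispatch (J) is      -- one indexed load, then a dense case
          when 1 => Result := 10;
          when 2 => Result := 20;
          when 3 => Result := 30;
          when 4 => Result := 40;
          when 5 => Result := 50;
       end case;
    end Act;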

The second issue is that of procedure calls with numerous 
parameters.  If the procedure is called frequently, and 
some of the parameters change seldom or not at all,
then efficiency is served by moving those parameters
into a package body, and having a new procedure to
change their values.  Another useful technique is to
group such values in a record structure; usually the
compiler passes a record parameter by address.  Of
course, it is clear that these techniques have to be applied
with careful judgement of the individual situation.
Just handing down an edict from on high that there
shall be no procedures with more than N parameters
is not going to help.
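For what it's worth, the record-grouping idea looks something like this
(the type and parameter names are invented):

    type Display_Settings is record
       Width, Height : Natural;
       Depth         : Natural;
       Gamma         : Float;
    end record;

    -- One composite parameter instead of four scalars on every call;
    -- a compiler will typically pass such a record by address.
    procedure Redraw (Frame : in Natural; Settings : in Display_Settings);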

Regards,
Chris Henrich


* Re: Ada Embedded Systems Efficiencies
From: Alex Blakemore @ 1993-02-26  5:59 UTC


> I am concerned that "good design" macro-efficiencies are going to be "un-done"
> due to perceived net overall gain of over-implementing micro-efficiencies.
> In short, many Ada object-oriented concepts are being 'questioned'; i.e. data
> encapsulation, private types usage, call-through procedures;

Most or all of any additional overhead associated with the
techniques you mention can be avoided by proper use of pragma
inline. Inlining large procedures might explode code size, but don't
avoid the pragma on that account here: for small subprograms that
access private type components or serve as call-throughs, there
should be little effect on code size.

If what your coworkers are doing is replacing private types with
non-private types, access routines with direct references, and
call-through procedures with the eventual call, then pragma inline
will achieve exactly the same effect.
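For example (a made-up package; the point is only the shape of the code),
with the pragma a call to Value should generate the same code as reading
the record component directly:

    package Sensors is
       type Reading is private;
       function Value (R : Reading) return Integer;
       pragma Inline (Value);    -- expand calls in place of call overhead
    private
       type Reading is record
          Raw : Integer;
       end record;
    end Sensors;

    package body Sensors is
       function Value (R : Reading) return Integer is
       begin
          return R.Raw;
       end Value;
    end Sensors;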

In the worst case, you could be expending lots of manual effort,
introducing errors, and making the system much less maintainable
with absolutely no gain.  The exact same performance result could be
obtained safely, and with much less trouble, by adding some pragmas
rather than changing the design or hurting maintainability.

If your backend doesn't implement this pragma, that would be a good
place to expend some effort.

There are a few things to watch out for. Pragma inline introduces a
dependency upon the package body that may cause circular
dependencies in a few cases, and longer recompilation times in many.
For this reason, I suggest adding the pragma to make sure you don't
generate an illegal dependency (the compiler should tell you if you
do) and then commenting it out while you are still developing the
unit body (to allow faster recompilation). Then you can uncomment
the pragma when you get into integration and test.
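In the source that is nothing more than toggling a comment, e.g. in the
made-up Sensors package above:

    function Value (R : Reading) return Integer;
    -- pragma Inline (Value);   -- disabled while the body is still changing;
                                -- restore for integration and test builds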

> Note: Very little prototyping was done and as a result effective early
>       timing studies were not available or used.

Isn't it dangerous to make radical changes based on so little
information?  Serious benchmarks are useful and take a lot of care,
but it isn't hard at all to get ballpark measurements with very
little effort.  Perhaps you could convince people to measure a few
important aspects of the system before investing a lot of time
optimizing the wrong component in the wrong way?







-- 
---------------------------------------------------
Alex Blakemore alex@cs.umd.edu   NeXT mail accepted


* Re: Ada Embedded Systems Efficiencies
From: MILLS,JOHN M. @ 1993-02-26 15:10 UTC


In article <1993Feb24.212146.13157@saifr00.cfsat.honeywell.com> wisniews@saifr00.cfsat.honeywell.com (Joe Wisniewski) writes:
[...]
> I
>am involved in a very large Ada project where there is currently much concern about whether
>or not we are going to 'fit in the box'. As a result as you can probably imagine, there
>is an increasing amount of attention being paid to addressing "micro-efficiencies". This
>attention takes on various forms:
>	1. Optimizing the compiler (back-end developed in-house)
>	2. Addressing issues that the compiler hasn't optimized well or can't optimize
>	   well
>		i.e. getting rid of case statements
>		     limiting the number of parameters that are passed
>		       in procedure calls
>		     pushing for the use of package spec data vs. package body data
>			accessed by procedures and functions for state data usage
>		     in-lining during design
>		     etc.
> 
>	Note: Very little prototyping was done and as a result effective early
>	      timing studies were not available or used.
>
>I am concerned that "good design" macro-efficiencies are going to be "un-done"
>due to perceived net overall gain of over-implementing micro-efficiencies.

I am also completing an embedded Ada project -- three 68030s -- comprising:
   266 files,
   40,000 total lines,
   17,000 "semicolons",

and running from 0.5 MB RAM in each of the three CPU cards (with plenty of
margin).

This may be a large or a small program in your experience.  Parts of it are
fast and tightly coupled; parts are fast and well buffered; parts are real
lazy.

We looked at assembly listings for some of our common data structures (such
as Booch-style managed rings for lots of types), and concluded that we would
have to work _very_ hard to improve on the [SD-Scicon] compiler's performance,
and we would (naturally) have to resort to assembly language -- we couldn't
expect to improve at all in Ada.  The compiler was making clever choices for
register and stack usage, etc., depending on context.

The old advice to "profile first and optimize after" is really the best.
Right now you don't know where your problems will be.  You may not even
have any.

We did find that some "religious issues" obstructed our code and slowed it down.
Classical encapsulation isolating internal procedures by multiple layers
of abstraction was costing us _lots_ of execution time.  We went to "Pascal
style" local variables and simpler instantiation layering, and picked up
up "a lot" of speed.  (I didn't do this part, so I can't comment on the
amount, but we tripled our throughput in critical processes.)  Even more
serious, the heavy encapsulation had so concealed our code's functionality
that it was unportable and not economically maintainable, _even_by_other_
members_of_our_team_.  Watch your dogma, but probably you can trust your
language.  Figure out for yourself if you can trust your compiler, but
don't waste time solving problems you may not have.

I'm afraid there's no substitute for good judgement, both in the design phase
and in your implementation.  If you have a poor top-level design and good
low-level functions (i.e., the code was well modularized), you may be able
to get big functional improvements with very little recoding -- this is a
_design_ issue, _not_ a language issue (and probably not a compiler issue,
either).  This comes under the heading: "I'd rather be lucky than good."

>I believe that it is possible that some of these micro-efficiencies that are
>implemented might actually do more harm than good, in the long-run.

The time to get the job re-done, and to reconstruct all your design,
documentation, test strategies and suites, etc., etc. will eat you alive, too.

That kills even more embedded projects than "not fitting in the box."
(IMHO)

>I think that this might be of interest to most readers, so I would suggest
>posting responses to the net. I will summarize all responses at a later date.

It's probably better than the 100-yrs' war [C++ vs Ada].

Regards --jmm--

-- 
John M. Mills, SRE; Georgia Tech/GTRI/TSDL, Atlanta, GA 30332
uucp: ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!jm59
Internet: john.m.mills@gtri.gatech.edu
EBENE Chocolat Noir 72% de Cacao - WEISS - 42000 St.Etienne - very fine


* Re: Ada Embedded Systems Efficiencies
From: Greg Franks @ 1993-03-02 20:10 UTC


As has been noted here, some performance problems arise because of
less-than-optimal optimizations by the compiler, excessive procedure
calls etc.  However, I am curious if there are any articles in the
literature reporting performance problems caused by rendezvous (or
remote procedure calls).  For instance, it is possible to have several
communicating processes limit each others throughput without fully
utilizing the hardware resources on which they run because the tasks
are blocked waiting for replies from remote processors.

Send email.  I will summarize.
  Thanks
--
Greg Franks   (613) 788-5726     Systems and Computer Engineering,
uunet!mitel!cunews!sce!greg      Carleton University,
greg@sce.carleton.ca             Ottawa, Ontario, Canada  K1S 5B6.
Overwhelm them with small bugs so they don't see the big ones.




* Re: Ada Embedded Systems Efficiencies
From: Tom Wicklund @ 1993-03-02 22:36 UTC


In <1993Feb24.212146.13157@saifr00.cfsat.honeywell.com> wisniews@saifr00.cfsat.honeywell.com (Joe Wisniewski) writes:

>Anyone who has worked on an Ada embedded system program knows (or should know) that if the
>system being built does not 'fit in the box', the system is not of much use to anyone. I
>am involved in a very large Ada project where there is currently much concern about whether
>or not we are going to 'fit in the box'. As a result as you can probably imagine, there
>is an increasing amount of attention being paid to addressing "micro-efficiencies". This
>attention takes on various forms:

Remember Jon Bentley's comments in his Programming Pearls columns in
CACM (and the books collecting those columns).

As a rule, most execution time is spent in a small amount of the code.
It's important to identify where the bottlenecks are and then decide
to optimize those, whether by violating encapsulating functions,
inlining code, using structures the compiler handles well, going to
assembly language, or restructuring the code.  Wholesale optimization
won't do much more than obscure things and ensure that non-time
critical code is blindingly fast.

I've worked for many years with disk drives.  It sometimes takes a
while to get people to realize that, other than for read and write,
it doesn't matter how inefficient the disk command implementations are
(within reason, of course).

There are also external factors to consider.  For example, in my work
with disk drives one must realize that timing is based on a disk
revolution.  It makes no sense to optimize code which will have to
wait a full disk revolution (12-17ms) before it can do anything.
Overlapping code with external factors (though it may sometimes
obscure the structure a bit) gives a lot of "free" optimization since
the time critical element is outside software control.





* Re: Ada Embedded Systems Efficiencies
From: Gerald Walls @ 1993-03-03 14:39 UTC


In article <1993Mar2.223618.18978@intellistor.com> wicklund@intellistor.com (Tom Wicklund) writes:
>In <1993Feb24.212146.13157@saifr00.cfsat.honeywell.com> wisniews@saifr00.cfsat.honeywell.com (Joe Wisniewski) writes:
>
>>Anyone who has worked on an Ada embedded system program knows (or should know) that if the
>>system being built does not 'fit in the box', the system is not of much use to anyone. I
>>am involved in a very large Ada project where there is currently much concern about whether
>>or not we are going to 'fit in the box'. As a result as you can probably imagine, there
>>is an increasing amount of attention being paid to addressing "micro-efficiencies". This
>>attention takes on various forms:
>
>Remember Jon Bentley's comments in his Programming Pearls columns in
>CACM (and the books collecting those columns).
>
>As a rule, most execution time is spent in a small amount of the code.
>It's important to identify where the bottlenecks are and then decide
>to optimize those, whether by violating encapsulating functions,
>inlining code, using structures the compiler handles well, going to
>assembly language, or restructuring the code.  Wholesale optimization
>won't do much more than obscure things and ensure that non-time
>critical code is blindingly fast.

The problem here is not speed efficiency where you can selectively
optimize.  The problem is size optimization where it has to be
done everywhere.  We don't have the luxury of buying more memory
but instead must make the code fit.


gerald walls (walls@saifr00.cfsat.honeywell.com) or
             (int_walls@am.eccx.umc@esu36.cfsat.honeywell.com)

         Don't blame me.  I voted Libertarian.
    How will you spend *your* middle-class tax cut?

       Any opinions expressed are my own and not
         those of Honeywell or its management.




* Re: Ada Embedded Systems Efficiencies
From: Mike Berman @ 1993-03-03 23:24 UTC


In article <1993Mar3.143900.10592@saifr00.cfsat.honeywell.com> walls@saifr00.cfsat.honeywell.com (Gerald Walls) writes:

>The problem here is not speed efficiency where you can selectively
>optimize.  The problem is size optimization where it has to be
>done everywhere.  We don't have the luxury of buying more memory
>but instead must make the code fit.

There aren't a whole lot of ways of controlling space optimizations via
"micro-efficiencies". In this arena, compilers can do amazing things
that most of us mere mortals could ever accomplish by hand. If you start
hitting brick walls, bring in your compiler vendor.

Some things you can do within the language:

	- Avoid using tasking. The overhead can be large, since the run
time support libraries get linked in no matter how much or how little
tasking your application actually does. Use implementation-provided
methods for interrupt handling.
	As an alternative, many implementations provide small run time
environments if you use a self-enforced subset of tasking capabilities.

	- Learn to love your linker toolset. You will probably have to
play with how your linker allocates code and data. Most of the real time
compiler vendors provide decent tools for doing this.

	- Keep things off of the heap and in registers, whenever
possible. Likewise with the program stack.

	- Know how your compiler allocates storage. All of the major
vendors that I am aware of provide all of the gory details about how
each possible data object is actually stored. They are generally good
about documenting known inefficiencies in their release notes.

	- pragma Optimize specifies two kinds of optimization sets: time
and space. Obviously, there are tradeoffs (larger, faster code vs.
smaller, slower code). Most vendors go at least one step further, adding
controls over many of the individual optimizations or groups of
optimizations that are performed.
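To make the last two points concrete, here is a small, purely hypothetical
sketch (the unit, type, and field names are invented): it reports the size
the compiler actually chose for a record and requests space optimization
for the enclosing unit:

	with Text_IO; use Text_IO;
	procedure Report_Layout is
	   pragma Optimize (Space);   -- favor smaller code over faster code here

	   type Status is (Idle, Busy, Failed);
	   type Channel is record
	      State : Status;
	      Count : Integer;
	   end record;
	begin
	   -- 'Size is the number of bits the compiler chose for the type.
	   Put_Line ("Channel'Size =" & Integer'Image (Channel'Size));
	end Report_Layout;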

In short, you have to know a lot about how your compilation system does
what it does. Again, a lot of analysis is needed at all stages of
development. Expect that some compromises will be necessary, but
certainly approach the design from the outset using all the good
programming and design practices that you would normally use.

BTW, these problems are by no means unique to Ada
development projects.

>gerald walls (walls@saifr00.cfsat.honeywell.com) or
>             (int_walls@am.eccx.umc@esu36.cfsat.honeywell.com)

-- 
Mike Berman
University of Maryland, Baltimore County	Fastrak Training, Inc.
berman@umbc.edu					(301)924-0050




* Re: Ada Embedded Systems Efficiencies
From: Dave Bashford @ 1993-03-04  1:50 UTC


I'm jumping in a little late with my experience, so I don't have the
original article to quote.  What I would like to address is the subject
of optimizing on the micro/macro level.

I was brought onto an R/T graphics program that had been "optimized" at
the micro level in order to implement a certain feature. The problems
though were that graphics hardware doubles in speed every 6 months
(more or less :-) and requirements change every 3.8ms. It was
eventually decided that the sacrifices made to implement that one
feature were too costly and what we really needed was higher update
rates. In order to accomplish this we needed to optimize at the highest
levels of our program which was not possible because of the tightly
coupled nature of the micro-optimizations. We were not able to take
advantage of the new hardware they kept throwing at the problem.

After many months, I convinced management that we should scrap the
system and start over with the experience and requirements that had
been learned from the past (a S/W engineer's dream !). Eight months
later (the original program had been going on for four years) we had a
new system that was running _many_ times faster, was infinitely easier
to maintain (the requirements are still changing), and now has the
original "feature" implemented without the original sacrifices.

I believe, like many others, that the compilers are much better at
micro-level optimizations; that requirements are _never_ static; and that
"optimizations" that tighten coupling are extremely costly in the long
run (if they work at all).
-- 

db
bashford@srs.loral.com (Dave Bashford, Sunnyvale, CA)




* Re: Ada Embedded Systems Efficiencies
From: agate!howland.reston.ans.net!zaphod.mps.ohio-state.edu!saimiri.primate.wi @ 1993-03-05 21:21 UTC


Some of my favorite ways to reduce ROM size in an embedded application
without hacking up your code, in order of estimated savings. (If your
problem is RAM, it's a different list.)

* Code shared generics
  Some compilers can generate shared code for generic instantiations.
  If you have many generic instantiations, this can be a big win. Some
  compilers can even do this without sacrificing execution speed.

* Enumeration image tables
  Have your compiler/linker throw these away. You probably are not
  using 'Image or 'Value anyway, and why should you pay a code penalty
  for using nice, long, readable  names instead of A1, A2, etc.

* Unused subprogram elimination
  Most modern Ada compilers support the elimination of unreferenced
  subprograms at link time. (Since some compilers also generate
  "hidden" support routines for complex data type comparison and
  assignment, if you have many structures but don't use all the
  operations, you can save a lot.)

  Almost all compilers do unreachable code elimination (but if your
  application is what I think it is, there will be no dead code.)

* Elaboration of constants & variables
  How does your compiler initialize complex constants and variables?
  If the values are known at compile time, can it just lay the data
  out directly in ROM?  Or does it generate code to copy the value
  (either element by element or as a block copy), thus wasting both ROM
  and RAM (for constants)?  If your constants are not static, can you
  tweak them so they are?  [Actually, for something initialized with
  (others => value), a block fill can save ROM at the expense of RAM.]
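  As a rough illustration (types and values invented): the first constant
  below is a static aggregate a compiler can lay out directly in ROM, while
  the second depends on a variable and will normally need elaboration code
  plus a RAM copy.

      type Gain_Table is array (0 .. 7) of Integer;

      -- Fully static: eligible to go straight into ROM.
      Gains : constant Gain_Table := (10, 20, 40, 80, 160, 320, 640, 1_280);

      Base_Gain : Integer := 10;

      -- Not static: usually elaborated at run time into RAM.
      Scaled : constant Gain_Table := (others => Base_Gain * 2);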

* Exception tables
  If you're not using them (and your compiler is generating them) have
  the linker toss 'em.

* pragma INLINE
  Yes, pragma INLINE can often reduce size as well as time.  If you
  have many small "get/put" routines to private types/objects, the
  inlined code can be smaller than the procedure call overhead. (If
  the code after pragma INLINE is not identical to direct access, it's
  time to have a *serious* talk with your compiler vendor.) For example,
  on one current program, compiling w/o inline actually *increased*
  the code size by 5%.

  Of course, to do this profitably, you must evaluate the size/speed
  effect of every inline.

* Code generation
  Most of the compilers I've worked with tend to optimize for speed
  instead of space. For example, the architecture may provide a "bit
  field extract" instruction but it may be faster to do the
  shifting and masking manually. For speed you'd prefer the latter,
  but it'll cost in space. (I chose this example 'cause a lot of
  embedded systems do a lot of bit-diddling.) Another obvious example
  is loop unrolling.

  You may have to convince your compiler vendor to allow selective
  control over what optimizations are performed if there is a
  size/speed tradeoff.

You're gonna have to get used to poring over the assembly listings
for your compiler.  Just like with the government, there may be lots
of waste, fraud, and abuse but until you examine everything carefully,
it may be hard to pick out.

