comp.lang.ada
* Heap vs Stack allocation
From: Lionel Draghi @ 2005-10-11 20:56 UTC (permalink / raw)


The paper "Java theory and practice: Urban performance legends, 
revisited - Allocation is faster than you think, and getting faster" 
(http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html?ca=dgr-lnxw07JavaUrbanLegends)
explains (as far as I understand) how JVMs optimize "on-the-fly" memory 
allocation to use stack instead of heap, by using "Escape Analysis".

My feeling is that those writing JVMs are working hard to guess the data 
lifetime and optimize memory allocation accordingly, because the 
language is unable to express this data locality.
I think that Ada semantics open much more opportunity to use the stack 
(or registers).
Am I right, or is this "Escape Analysis" something really powerful?
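
For what it's worth, here is the kind of thing I have in mind: a minimal
sketch of my own (the procedure name and everything in it are made up),
where a runtime-sized object lives on the stack simply because that is
what the declaration says.

with Ada.Text_IO;      use Ada.Text_IO;
with Ada.Command_Line; use Ada.Command_Line;

procedure Stack_Demo is
   N : constant Natural := Argument_Count;  --  size known only at run time
begin
   declare
      --  A runtime-sized object declared directly on the stack; its
      --  lifetime is the enclosing block, so no heap and no GC.
      Buffer : array (1 .. N) of Integer := (others => 0);
      Sum    : Integer := 0;
   begin
      for I in Buffer'Range loop
         Buffer (I) := I;
         Sum := Sum + Buffer (I);
      end loop;
      Put_Line ("Sum =" & Integer'Image (Sum));
   end;
end Stack_Demo;

In Java the equivalent array would be a heap object unless the JIT can
prove it never escapes the method; here the block structure already says so.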

-- 
Lionel Draghi 
http://en.wikibooks.org/wiki/Wikibooks:Book_of_the_month/September_2005




* Re: Heap vs Stack allocation
From: Randy Brukardt @ 2005-10-11 21:47 UTC (permalink / raw)


"Lionel Draghi" <Lionel.nospam.Draghi@Ada-France.org> wrote in message
news:434c2709$0$21298$626a54ce@news.free.fr...
> The paper "Java theory and practice: Urban performance legends,
> revisited - Allocation is faster than you think, and getting faster"
> (http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html?ca=dgr-lnxw07JavaUrbanLegends)
> explains (as far as I understand) how JVMs optimize "on-the-fly" memory
> allocation to use stack instead of heap, by using "Escape Analysis".
>
> My feeling is that those writing JVMs are working hard to guess the data
> lifetime and optimize memory allocation accordingly, because the
> language is unable to express this data locality.
> I think that Ada semantics open much more opportunity to use the stack
> (or registers).
> Am I right, or is this "Escape Analysis" something really powerful?

I guess I would see these as somewhat apples and oranges; the JVM is an
interpreter, and that opens up a lot of possibilities that aren't ever going
to be available to purely static analysis. On the other hand, pretty much
everything the language lets you state directly can also be recovered by
pure analysis if you're willing to spend enough time. (For instance, you
don't need strong typing to produce good code, but it certainly makes the
job a lot easier.)

That said, I think the more information the programmer can provide the
compiler about what they're doing, the better the code that can be
generated. I'm still convinced that, given a big enough budget, an Ada
compiler could produce faster and smaller programs than the compilers for
any of the other "contenders" can. But you'd need an Ada-specific optimizer
to take full advantage of the information that Ada provides, and it's
probably not cost effective to create one.
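
To give a flavour of the kind of information I mean (my own toy example,
nothing from the article): a constrained subtype and a statically bounded
array tell the compiler, up front, facts that a JVM has to discover by
analysis.

procedure Saturate is
   type Level is range 0 .. 255;                --  value range is part of the type
   type Frame is array (1 .. 1_024) of Level;   --  bounds are part of the type

   procedure Clamp (F : in out Frame; Max : Level) is
   begin
      --  The compiler knows F'Range statically and knows every element
      --  already fits in 0 .. 255, so index and range checks can be
      --  removed or hoisted without any whole-program analysis.
      for I in F'Range loop
         if F (I) > Max then
            F (I) := Max;
         end if;
      end loop;
   end Clamp;

   Data : Frame := (others => 0);
begin
   Clamp (Data, 128);
end Saturate;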

In all honesty, that doesn't matter much. The last 10% of speed or space
probably doesn't matter in the vast majority of uses (which is probably the
real point of the "urban legend" article). What does matter is a set of
issues that have little to do with the compiler -- tool support, library
support, and language capability.

                               Randy.







* Re: Heap vs Stack allocation
From: Jon Harrop @ 2005-10-12  1:38 UTC (permalink / raw)


Lionel Draghi wrote:
> The paper "Java theory and practice: Urban performance legends,
> revisited - Allocation is faster than you think, and getting faster"
>
(http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html?ca=dgr-lnxw07JavaUrbanLegends)
> explains (as far as I understand) how JVMs optimize "on-the-fly" memory
> allocation to use stack instead of heap, by using "Escape Analysis".
> 
> My feeling is that those writing JVMs are working hard to guess the data
> lifetime and optimize memory allocation accordingly, because the
> language is unable to express this data locality.
> I think that Ada semantics open much more opportunity to use the stack
> (or registers).

Note that OCaml allocates primarily on the heap rather than the stack, and
it is much faster than Java.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com




* Re: Heap vs Stack allocation
From: Jeffrey R. Carter @ 2005-10-12  5:44 UTC (permalink / raw)


Randy Brukardt wrote:

> That said, I think the more information that the programmer can provide the
> compiler about what they're doing, the better code that can be generated.
> I'm still convinced that, given a big enough budget, an Ada compiler can
> produce faster and smaller programs than that for any of the other
> "contenders". But you'd need an Ada-specific optimizer to take full
> advantage of the information that Ada provides, and it's probably not cost
> effective to create those.

There seems to be an existence proof for this: the Tartan Ada-83 compilers 
had excellent optimizers. It was a Tartan compiler that resulted in the "Ada 
Beats Assembler" article; the compiler produced smaller and faster code than 
hand-optimized assembler from a team of experts. There was also an 
interesting article on the benchmarks Tartan used to sell their C compilers: 
the Ada version was faster than the C version, and the article listed the 
Ada features that allowed this. True arrays were one such feature.

I'm speaking from memory; I don't have the articles around. They were in /Ada 
Letters/ in the late 1980s or early 1990s.
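
As best I can reconstruct the "true arrays" point, it was roughly this (my
own sketch, not the article's code): an Ada array parameter carries its
bounds with it and doesn't decay to a bare pointer, so the optimizer has
facts a C compiler has to assume away.

package Vectors is
   type Float_Vector is array (Positive range <>) of Float;

   --  V is a first-class array value: it carries its own bounds, and the
   --  language rules let the compiler assume it does not overlap other
   --  objects the way a C "float *" might, so the loop below is easy to
   --  unroll or vectorize.
   procedure Scale (V : in out Float_Vector; Factor : Float);
end Vectors;

package body Vectors is
   procedure Scale (V : in out Float_Vector; Factor : Float) is
   begin
      for I in V'Range loop
         V (I) := V (I) * Factor;
      end loop;
   end Scale;
end Vectors;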

-- 
Jeff Carter
"Go and boil your bottoms."
Monty Python & the Holy Grail
01




* Re: Heap vs Stack allocation
From: Florian Weimer @ 2005-10-12 12:35 UTC (permalink / raw)


* Lionel Draghi:

> Am I right, or is this "Escape Analysis" something really powerful?

Depends on your coding style.  The savings could be significant for
some uses of Unbounded_String, for example.  It might also be faster
to return a pointer to a heap object (which is in fact region- or
stack-allocated) than to return the unconstrained object itself, if
this saves a few copies.

All in all, I would expect that most Ada code is far less demanding
than Java code as far as the memory allocator is concerned (assuming
that you use a compiler which can allocate unconstrained objects on
the stack).
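
For example (just a sketch of mine; whether the heap is really avoided
depends on the compiler -- GNAT would place this result on its secondary
stack):

with Ada.Text_IO; use Ada.Text_IO;

procedure Return_Demo is

   --  Returns an unconstrained object by value.  With GNAT the result
   --  typically lives on the secondary stack rather than the general
   --  heap; a language that cannot express this needs a heap allocation
   --  plus escape analysis to achieve the same effect.
   function Greeting (Name : String) return String is
   begin
      return "Hello, " & Name;
   end Greeting;

begin
   Put_Line (Greeting ("world"));
end Return_Demo;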




* Re: Heap vs Stack allocation
From: Steve @ 2005-10-13  2:47 UTC (permalink / raw)


"Freejack" <freejack@nowhere.net> wrote in message 
news:pan.2005.10.13.23.44.23.670599@nowhere.net...
> On Wed, 12 Oct 2005 05:44:10 +0000, Jeffrey R. Carter wrote:
>
>> There seems to be an existence proof for this: the Tartan Ada-83
>> compilers had excellent optimizers. It was a Tartan compiler that
>> resulted in the "Ada Beats Assembler" article; the compiler produced
>> smaller and faster code than hand-optimized assembler from a team of
>> experts. There was also an interesting article on Tartan's benchmarks
>> that they used to sell their C compilers. The Ada version was faster
>> than the C version. The article listed the Ada features that allowed
>> this. True arrays was one such feature.
>>
>> I'm speaking from memory; I don't have the articles around. They were
>> in /Ada Letters/ in the late 1980s or early 1990s.
>
> Is Tartan still in existence? Are the current crop of Ada vendors making
> Ada specific optimizers? I'd be very interested in getting my hands on 
> one.
>
> Freejack
>

It appears that DDC-I acquired the Tartan compilers:

http://www.ddci.com/news_tads2000.shtml

Steve
(The Duck) 






* Re: Heap vs Stack allocation
From: Jeffrey R. Carter @ 2005-10-13  5:30 UTC (permalink / raw)


Freejack wrote:

> Is Tartan still in existence? Are the current crop of Ada vendors making
> Ada specific optimizers? I'd be very interested in getting my hands on one.

Tartan is no longer in existence. The Tartan compilers were acquired by DDC-I. I 
believe they're still available from DDC-I, but they are Ada-83 compilers. I 
don't know if they incorporated the Tartan optimizers in their Ada-95 compilers. 
It seems a shame that more compiler vendors aren't creating such optimizers, but 
I guess their customers aren't willing to pay for them.

-- 
Jeff Carter
"Many times we're given rhymes that are quite unsingable."
Monty Python and the Holy Grail
57




* Re: Heap vs Stack allocation
From: Freejack @ 2005-10-13 23:44 UTC (permalink / raw)


On Wed, 12 Oct 2005 05:44:10 +0000, Jeffrey R. Carter wrote:

> There seems to be an existence proof for this: the Tartan Ada-83 compilers had 
> excellent optimizers. It was a Tartan compiler that resulted in the "Ada Beats 
> Assembler" article; the compiler produced smaller and faster code than 
> hand-optimized assembler from a team of experts. There was also an interesting 
> article on Tartan's benchmarks that they used to sell their C compilers. The Ada 
> version was faster than the C version. The article listed the Ada features that 
> allowed this. True arrays was one such feature.
> 
> I'm speaking from memory; I don't have the articles around. They were in /Ada 
> Letters/ in the late 1980s or early 1990s.

Is Tartan still in existence? Are the current crop of Ada vendors making
Ada-specific optimizers? I'd be very interested in getting my hands on one.

Freejack






Thread overview: 8+ messages
2005-10-11 20:56 Heap vs Stack allocation Lionel Draghi
2005-10-11 21:47 ` Randy Brukardt
2005-10-12  5:44   ` Jeffrey R. Carter
2005-10-13 23:44     ` Freejack
2005-10-13  2:47       ` Steve
2005-10-13  5:30       ` Jeffrey R. Carter
2005-10-12  1:38 ` Jon Harrop
2005-10-12 12:35 ` Florian Weimer
