comp.lang.ada
From: David Marceau <davidmarceau@sympatico.ca>
Subject: Re: Memory chunks: are they much speed-up?
Date: Fri, 24 Jan 2003 21:06:03 -0500
Message-ID: <3E31F10B.FA064739@sympatico.ca>
In-Reply-To: 3e30f9a1$0$33922$bed64819@news.gradwell.net

Victor Porton wrote:
> 
> Is allocating memory in chunks by several (small) objects at once a
> significant speed-up (compared to standard allocators) in typical Ada
> impl.?
If you follow Ravenscar or SPARK, I don't think you're even allowed to
allocate dynamically, whether in small or big chunks: everything is
allocated statically, so memory leaks are not a problem in that world.
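
A minimal sketch of that static style (the package and names below are
mine, purely for illustration): the whole table exists for the lifetime
of the program, so there is nothing to allocate at run time and nothing
to leak.

   package Static_Nodes is

      Max_Nodes : constant := 1_000;

      type Node is record
         Value : Integer := 0;
      end record;

      --  All storage is reserved at elaboration time; no "new" anywhere.
      Nodes : array (1 .. Max_Nodes) of Node;

   end Static_Nodes;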

Going back to your question, though: if it isn't critical or
high-integrity code, then yes, using memory pools, each dedicated to a
particular object type and reserved in advance, is certainly much more
efficient than the standard allocator.  If one of the memory pools ever
fills up, no problem: just dynamically allocate another pool dedicated
to the object type in question.
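
In Ada you can tie such a pool to an access type through the
Storage_Pool attribute.  Here is a minimal sketch (Node_Pools and all
the names in it are mine, not from any library) of a pool that reserves
all of its storage up front and hands it out with a simple bump
allocator; a real pool would also have to worry about the buffer's own
alignment, thread safety, reuse of freed objects, and so on.

   with System;
   with System.Storage_Pools;
   with System.Storage_Elements;

   package Node_Pools is

      use System.Storage_Elements;

      --  All Size storage elements are reserved when a pool object is
      --  declared; Allocate just hands out successive slices of Data.
      type Node_Pool (Size : Storage_Count) is
        new System.Storage_Pools.Root_Storage_Pool with private;

      procedure Allocate
        (Pool                     : in out Node_Pool;
         Storage_Address          :    out System.Address;
         Size_In_Storage_Elements : in     Storage_Count;
         Alignment                : in     Storage_Count);

      procedure Deallocate
        (Pool                     : in out Node_Pool;
         Storage_Address          : in     System.Address;
         Size_In_Storage_Elements : in     Storage_Count;
         Alignment                : in     Storage_Count);

      function Storage_Size (Pool : Node_Pool) return Storage_Count;

   private

      type Node_Pool (Size : Storage_Count) is
        new System.Storage_Pools.Root_Storage_Pool with record
         Data : Storage_Array (1 .. Size);  --  reserved in advance
         Next : Storage_Count := 1;         --  next free element of Data
      end record;

   end Node_Pools;

   package body Node_Pools is

      procedure Allocate
        (Pool                     : in out Node_Pool;
         Storage_Address          :    out System.Address;
         Size_In_Storage_Elements : in     Storage_Count;
         Alignment                : in     Storage_Count)
      is
         --  Round the next free position up to the requested alignment
         --  (this assumes Data itself starts at a well-aligned address).
         Start : constant Storage_Count :=
           ((Pool.Next - 1 + Alignment - 1) / Alignment) * Alignment + 1;
      begin
         if Start + Size_In_Storage_Elements - 1 > Pool.Size then
            raise Storage_Error;  --  this pool is full
         end if;
         Storage_Address := Pool.Data (Start)'Address;
         Pool.Next := Start + Size_In_Storage_Elements;
      end Allocate;

      procedure Deallocate
        (Pool                     : in out Node_Pool;
         Storage_Address          : in     System.Address;
         Size_In_Storage_Elements : in     Storage_Count;
         Alignment                : in     Storage_Count)
      is
      begin
         null;  --  a bump allocator gives everything back at once,
                --  when the pool object itself goes away
      end Deallocate;

      function Storage_Size (Pool : Node_Pool) return Storage_Count is
      begin
         return Pool.Size;
      end Storage_Size;

   end Node_Pools;

Once an access type is bound to the pool, every allocator for that type
is served from it, for example (in some declarative part that has
Node_Pools visible):

   type Node is record
      Value : Integer;
   end record;
   type Node_Access is access Node;

   Big_Pool : Node_Pools.Node_Pool (Size => 64 * 1024);
   for Node_Access'Storage_Pool use Big_Pool;

   P : Node_Access := new Node'(Value => 42);  --  comes out of Big_Pool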

I have seen one library product aimed at C/C++ programmers in this
area: SmartHeap, by MicroQuill.  You might want to check it out.

Do memory pools impact/improve software performance?  Yes: a dedicated
pool turns a general-purpose heap search into little more than handing
out the next slot, and keeping objects of one type together also tends
to help cache locality.

I hope this helps.

Cheers,
David Marceau



Thread overview: 4+ messages
2003-01-24  8:27 Memory chunks: are they much speed-up? Victor Porton
2003-01-24 14:32 ` Mark Johnson
2003-01-24 14:37 ` Stephen Leake
2003-01-25  2:06 ` David Marceau [this message]