comp.lang.ada
From: msimonides@power.com.pl
Subject: Re: Ada.Containers and Storage_Pools
Date: 11 Aug 2006 11:22:24 -0700
Message-ID: <1155320544.504578.144080@75g2000cwc.googlegroups.com>
In-Reply-To: 44db3fb0$0$1013$39cecf19@news.twtelecom.net

Matthew Heaney wrote:
> msimonides@power.com.pl wrote:
>  >
> > Having such a pool for use by containers' internal structures would
> > enable us to choose more appropriate container types in terms of memory
> > usage (eg. we had a list of data elements in resource records but now
> > use vectors as they have less memory overhead on each element and are
> > as good performance-wise when the number of nodes is small. But there
> > is no (easy) way of measuring it).
>
> If you had a single-linked list, would you have used that instead of the
> vector?

I would consider it. For my application (1-3 elements usually,
sometimes a dozen or so) it would probably use a similar amount of
memory to a vector, and it would also be possible to apply your
previous suggestion.

> > Yes, but this is only a partial solution. Adapting it to thousands of
> > small vectors seems infeasible.
>
> Here's another idea.  For the pool, you could use a container (say, a
> list or map) of vectors, with the vector elements of the container
> ordered by their capacity.  If you need a vector, you would search the
> container for a vector having the requisite capacity, and then use the
> Move operation to move its internal array onto your vector object.
> Would that work?
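
For reference, a rough sketch of that idea, with made-up names
(Vector_Cache_Sketch, Acquire, Try_Steal are all hypothetical), could
look like this. It keeps spare vectors in a list and uses Move to hand
a cached vector's internal array to the requester:

```ada
with Ada.Containers.Vectors;
with Ada.Containers.Doubly_Linked_Lists;

procedure Vector_Cache_Sketch is
   package Int_Vectors is new Ada.Containers.Vectors
     (Index_Type => Positive, Element_Type => Integer);
   use Int_Vectors;

   package Vector_Lists is new Ada.Containers.Doubly_Linked_Lists
     (Element_Type => Vector);

   Free_Vectors : Vector_Lists.List;  --  the cache of spare vectors

   --  Give Target a cached vector's storage if one with enough
   --  capacity exists, otherwise just allocate.  A real version
   --  would keep Free_Vectors ordered by capacity; this sketch
   --  searches linearly.
   procedure Acquire (Target : in out Vector;
                      Needed : Ada.Containers.Count_Type)
   is
      use type Ada.Containers.Count_Type;
      C     : Vector_Lists.Cursor := Free_Vectors.First;
      Found : Boolean := False;

      --  Steal V's internal array if it is big enough.
      procedure Try_Steal (V : in out Vector) is
      begin
         if V.Capacity >= Needed then
            Target.Move (V);  --  O(1): Target takes over V's storage
            Found := True;
         end if;
      end Try_Steal;
   begin
      while Vector_Lists.Has_Element (C) loop
         Free_Vectors.Update_Element (C, Try_Steal'Access);
         if Found then
            Free_Vectors.Delete (C);  --  V is empty after the Move
            return;
         end if;
         Vector_Lists.Next (C);
      end loop;
      Target.Reserve_Capacity (Needed);  --  nothing suitable cached
   end Acquire;
begin
   null;
end Vector_Cache_Sketch;
```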

In this case I would probably prefer to keep a counter of the objects
in the cache: increased on each insertion, decreased on each removal.
This would probably be faster and easier to implement, and it would
make it trivial to check whether the amount of free memory has fallen
below the threshold that triggers additional cleaning.
It's basically the same idea that is used currently, only uglier - it
would be implemented outside of a storage pool, where in my opinion it
belongs.
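
A minimal sketch of that counter-based bookkeeping (all names and the
threshold value are made up; the count is only a proxy for memory use,
via an assumed average object size):

```ada
package Cache_Accounting is
   Max_Objects : constant := 10_000;  --  assumed tuning threshold

   procedure Note_Insertion;
   procedure Note_Removal;
   function Needs_Cleaning return Boolean;
end Cache_Accounting;

package body Cache_Accounting is
   Count : Natural := 0;

   procedure Note_Insertion is
   begin
      Count := Count + 1;
   end Note_Insertion;

   procedure Note_Removal is
   begin
      Count := Count - 1;
   end Note_Removal;

   function Needs_Cleaning return Boolean is
   begin
      --  Trigger additional cleaning once the cache grows too large.
      return Count > Max_Objects;
   end Needs_Cleaning;
end Cache_Accounting;
```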

> > Ideally I could just specify the maximal size for all cache structures
> > together and have the storage pool enforce it.
>
> The pool abstraction I describe above would only allow you to control
> the memory for the vectors, not the total memory for maps and their
> vector elements.  But it might be good enough if the storage for the
> elements stored in the vectors is large compared to the number of map
> elements.

It isn't required to be precise to fulfill its basic purpose, which is
preventing one module from using all available memory, so such
solutions are viable. In fact that is the current state of affairs: by
limiting the number of zone and data nodes, the size of the whole
cache is limited - we just don't know what the limit is or how much it
can vary depending on the data that is inserted.

On the other hand, it would be desirable to be able to tune the
maximum amount of memory assigned to each module of the program, so
that each module can make the most of it (without using too much).
This would be easy if the containers supported custom storage pools.
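
If the containers did take a pool, the limit could live in a pool type
derived from Root_Storage_Pool (ARM 13.11). A sketch, with made-up
names, that caps total allocation and raises Storage_Error past the
limit (it ignores Alignment and leaks freed blocks, which a real pool
must not do):

```ada
with System;
with System.Storage_Pools;
with System.Storage_Elements;

package Limited_Pools is
   use System.Storage_Elements;

   --  A pool that refuses to grow past Limit storage elements.
   type Limited_Pool (Limit : Storage_Count) is
     new System.Storage_Pools.Root_Storage_Pool with
   record
      In_Use : Storage_Count := 0;
   end record;

   overriding procedure Allocate
     (Pool                     : in out Limited_Pool;
      Storage_Address          : out System.Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count);

   overriding procedure Deallocate
     (Pool                     : in out Limited_Pool;
      Storage_Address          : System.Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count);

   overriding function Storage_Size
     (Pool : Limited_Pool) return Storage_Count;
end Limited_Pools;

package body Limited_Pools is
   type Storage_Array_Access is access Storage_Array;

   overriding procedure Allocate
     (Pool                     : in out Limited_Pool;
      Storage_Address          : out System.Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count)
   is
      pragma Unreferenced (Alignment);  --  a real pool must honour it
   begin
      if Pool.In_Use + Size_In_Storage_Elements > Pool.Limit then
         raise Storage_Error;  --  the whole point: enforce the cap
      end if;
      declare
         Block : constant Storage_Array_Access :=
           new Storage_Array (1 .. Size_In_Storage_Elements);
      begin
         Storage_Address := Block.all'Address;
         Pool.In_Use := Pool.In_Use + Size_In_Storage_Elements;
      end;
   end Allocate;

   overriding procedure Deallocate
     (Pool                     : in out Limited_Pool;
      Storage_Address          : System.Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count)
   is
      pragma Unreferenced (Storage_Address, Alignment);
   begin
      --  Bookkeeping only; the sketch leaks the block itself.  A real
      --  pool would keep the access value so it could free it.
      Pool.In_Use := Pool.In_Use - Size_In_Storage_Elements;
   end Deallocate;

   overriding function Storage_Size
     (Pool : Limited_Pool) return Storage_Count is
   begin
      return Pool.Limit;
   end Storage_Size;
end Limited_Pools;
```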

Thank you for your suggestions and ideas. I will look into them again
when we start optimizing our program (all containers will be reviewed
in the near future in order to choose the best types, so it is too
soon to work on the detailed solutions that you proposed).
-- 
Marcin Simonides



