comp.lang.ada
* Ada.Containers and Storage_Pools
@ 2006-08-09 12:04 msimonides
  2006-08-09 13:27 ` Colin Paul Gloster
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: msimonides @ 2006-08-09 12:04 UTC (permalink / raw)


I'm working on a caching DNS resolver for our project. The structures
used to hold data in the cache make use of the Ada.Containers packages
(e.g. each node stores a map of lists of resource records for each data
type, so there are a lot of individual containers).
Currently we're using Containers that come with GNAT 3.4.5 but we plan
to move to Ada 2005 containers in newer GNAT.

The problem I'm facing (and that is certain to show up in other parts
of our project) is that I need to limit the size of cache in memory. To
do this I would need to be able to have the containers allocate their
nodes from a user-specified storage pool with some limiting mechanism
(I already have such a pool and use it for objects that are allocated
by my code).

So I have a question: is there a possibility, either in GNAT 3.4.5 or
in Ada 2005, to specify a custom storage pool for use by containers? If
not, is there any other solution to limiting the size of memory used by
containers?

The only solution that I have found is using some other containers
library (I've seen that Booch components allow for specifying custom
storage pools), but I'd really prefer to stick with the standard ones.
-- 
Marcin Simonides




^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Ada.Containers and Storage_Pools
  2006-08-09 12:04 Ada.Containers and Storage_Pools msimonides
@ 2006-08-09 13:27 ` Colin Paul Gloster
  2006-08-09 16:16   ` Matthew Heaney
  2006-08-09 16:05 ` Martin Krischik
  2006-08-09 16:13 ` Matthew Heaney
  2 siblings, 1 reply; 8+ messages in thread
From: Colin Paul Gloster @ 2006-08-09 13:27 UTC (permalink / raw)


On Wed, 9 Aug 2006 Marcin Simonides posted:

"[..]

The problem I'm facing (and that is certain to show up in other parts
of our project) is that I need to limit the size of cache in memory. To
do this I would need to be able to have the containers allocate their
nodes from a user-specified storage pool with some limiting mechanism
(I already have such a pool and use it for objects that are allocated
by my code).

So I have a question: is there a possibility, either in GNAT 3.4.5 or
in Ada 2005, to specify a custom storage pool for use by containers? If
not, is there any other solution to limiting the size of memory used by
containers?

[..]"

Perhaps I have failed to notice something obvious and important, but could 
you use Storage_Size? From 13.11(15) in the Ada 2005 draft standard,
http://www.adaic.org/standards/05rm/html/RM-13-11.html#I4699 :
"Storage_Size [..] may be specified for a non-derived 
access-to-object type via an attribute_definition_clause; [..]".
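For illustration, a minimal sketch of such an attribute definition clause
(the type names here are invented, not from the resolver code):

```ada
procedure Demo is
   type Node is record
      Value : Integer;
   end record;

   type Node_Access is access Node;
   --  All "new Node" allocations through this access type now come
   --  from an implicit pool limited to 16_384 storage elements;
   --  Storage_Error is raised once that pool is exhausted.
   for Node_Access'Storage_Size use 16_384;

   P : Node_Access;
begin
   P := new Node'(Value => 42);
end Demo;
```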

Regards,
Colin Paul Gloster




* Re: Ada.Containers and Storage_Pools
  2006-08-09 12:04 Ada.Containers and Storage_Pools msimonides
  2006-08-09 13:27 ` Colin Paul Gloster
@ 2006-08-09 16:05 ` Martin Krischik
  2006-08-09 16:13 ` Matthew Heaney
  2 siblings, 0 replies; 8+ messages in thread
From: Martin Krischik @ 2006-08-09 16:05 UTC (permalink / raw)


On 09.08.2006 at 15:04, <msimonides@power.com.pl> wrote:

> The only solution that I have found is using some other containers
> library (I've seen that Booch components allow for specifying custom
> storage pools), but I'd really prefer to stick with the standard ones.

No chance. The standard container library was specifically designed to be  
simple, in the hope that more advanced features would be supplied by 3rd  
parties later.

I would indeed suggest the Booch components, which allow fine tuning of  
memory and have bounded variants as well.

Martin
-- 
Martin Krischik
krischik@users.sourceforge.net




* Re: Ada.Containers and Storage_Pools
  2006-08-09 12:04 Ada.Containers and Storage_Pools msimonides
  2006-08-09 13:27 ` Colin Paul Gloster
  2006-08-09 16:05 ` Martin Krischik
@ 2006-08-09 16:13 ` Matthew Heaney
  2006-08-10  7:01   ` msimonides
  2 siblings, 1 reply; 8+ messages in thread
From: Matthew Heaney @ 2006-08-09 16:13 UTC (permalink / raw)
  To: msimonides; +Cc: heaney, Matthew Heaney

msimonides@power.com.pl wrote:
> I'm working on a caching DNS resolver for our project. The structures
> used to hold data in the cache make use of the Ada.Containers packages
> (e.g. each node stores a map of lists of resource records for each data
> type, so there are a lot of individual containers).

OK, it sounds like you have a map container whose elements are list 
containers.  Is that right?


> The problem I'm facing (and that is certain to show up in other parts
> of our project) is that I need to limit the size of cache in memory. To
> do this I would need to be able to have the containers allocate their
> nodes from a user-specified storage pool with some limiting mechanism
> (I already have such a pool and use it for objects that are allocated
> by my code).

Is the pool bounded or unbounded?  In other words, can you run out of 
nodes?  If so, what should happen?



> So I have a question: is there a possibility, either in GNAT 3.4.5 or
> in Ada 2005, to specify a custom storage pool for use by containers? If
> not, is there any other solution to limiting the size of memory used by
> containers?

Our philosophy in the design of the Ada 2005 container library was to 
optimize for the common case.  Unfortunately, your application needs 
a custom storage pool and hence is not the common case.  So no, you can't 
specify a custom storage pool.

One possibility is to keep a list around and use it as a kind of storage 
pool for the list objects that are map elements.  You can use the Splice 
operation for lists to move list nodes between list objects (that are 
map elements) and the list object that serves as the pool.
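A minimal sketch of that technique, assuming a plain list of Integers
(all names here are invented for illustration):

```ada
with Ada.Containers.Doubly_Linked_Lists;

procedure Splice_Demo is
   package Integer_Lists is
     new Ada.Containers.Doubly_Linked_Lists (Element_Type => Integer);
   use Integer_Lists;

   Pool_List : List;  --  pre-allocated nodes acting as the "pool"
   In_Use    : List;  --  a list such as one stored in a map element
   C         : Cursor;
begin
   --  Fill the pool with spare nodes up front.
   for I in 1 .. 10 loop
      Pool_List.Append (0);
   end loop;

   --  "Allocate" a node: splice it from the pool onto the in-use list.
   --  Splice relinks the node; no element is copied or reallocated.
   C := Pool_List.First;
   In_Use.Splice (Before => No_Element, Source => Pool_List, Position => C);

   --  "Free" it again: splice it back onto the pool list.
   C := In_Use.First;
   Pool_List.Splice (Before => No_Element, Source => In_Use, Position => C);
end Splice_Demo;
```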


> The only solution that I have found is using some other containers
> library (I've seen that Booch components allow for specifying custom
> storage pools), but I'd really prefer to stick with the standard ones.

I'd like to know more about what you're doing, and about the nature of 
your storage pool.  If you don't mind, drop me some email so we can 
discuss your issues in more detail.

Thanks,
Matt




* Re: Ada.Containers and Storage_Pools
  2006-08-09 13:27 ` Colin Paul Gloster
@ 2006-08-09 16:16   ` Matthew Heaney
  0 siblings, 0 replies; 8+ messages in thread
From: Matthew Heaney @ 2006-08-09 16:16 UTC (permalink / raw)


Colin Paul Gloster wrote:
> On Wed, 9 Aug 2006 Marcin Simonides posted:
> 
> Perhaps I have failed to notice something obvious and important but could 
> you use Storage_Size? 

No, because that attribute applies to a storage pool, and storage pools 
are associated with access types in Ada.  In the container library, the 
access types and storage pools are implementation-specific details.

-Matt




* Re: Ada.Containers and Storage_Pools
  2006-08-09 16:13 ` Matthew Heaney
@ 2006-08-10  7:01   ` msimonides
  2006-08-10 14:16     ` Matthew Heaney
  0 siblings, 1 reply; 8+ messages in thread
From: msimonides @ 2006-08-10  7:01 UTC (permalink / raw)


Matthew Heaney wrote:
> msimonides@power.com.pl wrote:
> > I'm working on a caching DNS resolver for our project. The structures
> > used to hold data in the cache make use of the Ada.Containers packages
> > (e.g. each node stores a map of lists of resource records for each data
> > type, so there are a lot of individual containers).
>
> OK, it sounds like you have a map container whose elements are list
> containers.  Is that right?

Yes, that is the basic idea.
There is a tree representing DNS zones. Each zone has a map of names to
child zone accesses (to be replaced by Hashed_Sets in the future) and a
map of resource types to resource record accesses. Each resource record
has a vector of data elements and some additional information.

> > The problem I'm facing (and that is certain to show up in other parts
> > of our project) is that I need to limit the size of cache in memory. To
> > do this I would need to be able to have the containers allocate their
> > nodes from a user-specified storage pool with some limiting mechanism
> > (I already have such a pool and use it for objects that are allocated
> > by my code).
>
> Is the pool bounded or unbounded?  In other words, can you run out of
> nodes?  If so, what should happen?

This storage pool is nothing fancy. It has a maximum size supplied at
creation of the pool object; at each allocation it increases its
internal counter of used Storage_Units and raises an error when the
limit is reached. Memory allocation itself is performed with "new", so
it is really a wrapper around the default allocator.

It is also possible to query the amount of memory left in the pool;
this is used to clean the cache once a certain threshold is reached.
Two such pool objects are used, one for access-to-zone and one for
access-to-resource-record objects, which makes it possible to see which
structures use more memory.
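A hedged sketch of what such a wrapper pool might look like (all names
are invented; the project's actual pool is not shown in this thread, and
alignment handling and block bookkeeping are simplified):

```ada
with System;
with System.Storage_Elements;  use System.Storage_Elements;
with System.Storage_Pools;     use System.Storage_Pools;

package Limited_Pools is

   --  A pool with a fixed Limit that counts every allocation and
   --  raises Storage_Error once the limit would be exceeded.
   type Limited_Pool (Limit : Storage_Count) is
     new Root_Storage_Pool with record
      Used : Storage_Count := 0;
   end record;

   procedure Allocate
     (Pool                     : in out Limited_Pool;
      Storage_Address          : out System.Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count);

   procedure Deallocate
     (Pool                     : in out Limited_Pool;
      Storage_Address          : System.Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count);

   function Storage_Size (Pool : Limited_Pool) return Storage_Count;

   --  Memory still available, e.g. to decide when to clean the cache.
   function Remaining (Pool : Limited_Pool) return Storage_Count;

end Limited_Pools;

package body Limited_Pools is

   type Block_Access is access Storage_Array;

   procedure Allocate
     (Pool                     : in out Limited_Pool;
      Storage_Address          : out System.Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count)
   is
      pragma Unreferenced (Alignment);  --  default alignment assumed adequate
      Block : Block_Access;
   begin
      if Pool.Used + Size_In_Storage_Elements > Pool.Limit then
         raise Storage_Error;
      end if;
      Pool.Used := Pool.Used + Size_In_Storage_Elements;
      --  Delegate the real allocation to the default allocator.
      Block := new Storage_Array (1 .. Size_In_Storage_Elements);
      Storage_Address := Block (Block'First)'Address;
   end Allocate;

   procedure Deallocate
     (Pool                     : in out Limited_Pool;
      Storage_Address          : System.Address;
      Size_In_Storage_Elements : Storage_Count;
      Alignment                : Storage_Count)
   is
      pragma Unreferenced (Storage_Address, Alignment);
   begin
      --  The sketch only maintains the counter; a complete pool would
      --  also return the block at Storage_Address to the allocator.
      Pool.Used := Pool.Used - Size_In_Storage_Elements;
   end Deallocate;

   function Storage_Size (Pool : Limited_Pool) return Storage_Count is
   begin
      return Pool.Limit;
   end Storage_Size;

   function Remaining (Pool : Limited_Pool) return Storage_Count is
   begin
      return Pool.Limit - Pool.Used;
   end Remaining;

end Limited_Pools;
```

Such a pool is attached to an access type with
"for Node_Access'Storage_Pool use My_Pool;".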

Having such a pool for use by the containers' internal structures would
enable us to choose more appropriate container types in terms of memory
usage (e.g. we had a list of data elements in resource records but now
use vectors, as they have less memory overhead per element and are as
good performance-wise when the number of nodes is small; but there is
no (easy) way of measuring this).

> > So I have a question: is there a possibility, either in GNAT 3.4.5 or
> > in Ada 2005, to specify a custom storage pool for use by containers? If
> > not, is there any other solution to limiting the size of memory used by
> > containers?
>
[...]
>
> One possibility is to keep a list around and use it as a kind of storage
> pool for the list objects that are map elements.  You can use the Splice
> operation for lists to move list nodes between list objects (that are
> map elements) and the list object that serves as the pool.

Yes, but this is only a partial solution. Adapting it to thousands of
small vectors seems infeasible.

Ideally I could just specify the maximum size for all cache structures
together and have the storage pool enforce it.

> > The only solution that I have found is using some other containers
> > library (I've seen that Booch components allow for specifying custom
> > storage pools), but I'd really prefer to stick with the standard ones.
>
> I'd like to know more about what you're doing, and about the nature of
> your storage pool.  If you don't mind drop me some email so we can
> discuss your issues in more detail.

I hope I have included most of the important information in this post.
I will gladly provide more details if necessary.

Thank you for your help.
-- 
Marcin Simonides





* Re: Ada.Containers and Storage_Pools
  2006-08-10  7:01   ` msimonides
@ 2006-08-10 14:16     ` Matthew Heaney
  2006-08-11 18:22       ` msimonides
  0 siblings, 1 reply; 8+ messages in thread
From: Matthew Heaney @ 2006-08-10 14:16 UTC (permalink / raw)


msimonides@power.com.pl wrote:
 >
> Having such a pool for use by the containers' internal structures would
> enable us to choose more appropriate container types in terms of memory
> usage (e.g. we had a list of data elements in resource records but now
> use vectors, as they have less memory overhead per element and are as
> good performance-wise when the number of nodes is small; but there is
> no (easy) way of measuring this).

If you had a singly-linked list, would you have used that instead of the 
vector?


>> One possibility is to keep a list around and use it as a kind of storage
>> pool for the list objects that are map elements.  You can use the Splice
>> operation for lists to move list nodes between list objects (that are
>> map elements) and the list object that serves as the pool.
> 
> Yes, but this is only a partial solution. Adapting it to thousands of
> small vectors seems infeasible.

Here's another idea.  For the pool, you could use a container (say, a 
list or map) of vectors, with the vector elements of the container 
ordered by their capacity.  If you need a vector, you would search the 
container for a vector having the requisite capacity, and then use the 
Move operation to move its internal array onto your vector object. 
Would that work?
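A minimal sketch of the Move step under that scheme (names invented;
whether Move transfers the internal array, and hence the reserved
capacity, is an implementation detail, though the GNAT implementation
described above works that way):

```ada
with Ada.Containers.Vectors;

procedure Move_Demo is
   package Data_Vectors is
     new Ada.Containers.Vectors (Index_Type   => Positive,
                                 Element_Type => Integer);
   use Data_Vectors;

   Spare  : Vector;  --  a vector kept in the "pool" container
   Target : Vector;  --  the vector object that needs the storage
begin
   Spare.Reserve_Capacity (32);

   --  Move clears Target and transfers Spare's contents onto it,
   --  leaving Spare empty.  In GNAT this takes over Spare's internal
   --  array rather than reallocating, so the capacity travels with it.
   Target.Move (Source => Spare);
end Move_Demo;
```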



> Ideally I could just specify the maximum size for all cache structures
> together and have the storage pool enforce it.

The pool abstraction I describe above would only allow you to control 
the memory for the vectors, not the total memory for maps and their 
vector elements.  But it might be good enough if the storage for the 
elements stored in the vectors is large compared to the number of map 
elements.




* Re: Ada.Containers and Storage_Pools
  2006-08-10 14:16     ` Matthew Heaney
@ 2006-08-11 18:22       ` msimonides
  0 siblings, 0 replies; 8+ messages in thread
From: msimonides @ 2006-08-11 18:22 UTC (permalink / raw)


Matthew Heaney wrote:
> msimonides@power.com.pl wrote:
>  >
> > Having such a pool for use by the containers' internal structures would
> > enable us to choose more appropriate container types in terms of memory
> > usage (e.g. we had a list of data elements in resource records but now
> > use vectors, as they have less memory overhead per element and are as
> > good performance-wise when the number of nodes is small; but there is
> > no (easy) way of measuring this).
>
> If you had a singly-linked list, would you have used that instead of the
> vector?

I would consider it; for my application (1-3 elements usually,
sometimes a dozen or so) it would probably use a similar amount of
memory to a vector, and it would be possible to apply your previous
suggestion.

> > Yes, but this is only a partial solution. Adapting it to thousands of
> > small vectors seems infeasible.
>
> Here's another idea.  For the pool, you could use a container (say, a
> list or map) of vectors, with the vector elements of the container
> ordered by their capacity.  If you need a vector, you would search the
> container for a vector having the requisite capacity, and then use the
> Move operation to move its internal array onto your vector object.
> Would that work?

In this case I would probably prefer to use a counter for objects in
the cache. On each insertion it would be incremented; on each removal,
decremented. This would probably be faster and easier to implement, and
it would be trivial to check whether the amount of free memory is below
a threshold, to trigger additional cleaning.
It's basically the same idea that is used currently, but uglier: it
would be implemented outside of a storage pool (where, in my opinion,
it belongs).

> > Ideally I could just specify the maximum size for all cache structures
> > together and have the storage pool enforce it.
>
> The pool abstraction I describe above would only allow you to control
> the memory for the vectors, not the total memory for maps and their
> vector elements.  But it might be good enough if the storage for the
> elements stored in the vectors is large compared to the number of map
> elements.

It isn't required to be precise to fulfil its basic purpose, which is
preventing one module from using all available memory, so such
solutions are viable. In fact, that is the current state: by limiting
the number of zone and data nodes, the size of the whole cache is
limited; we just don't know what the limit is and how much it can vary
based on the data that is inserted.

On the other hand, it would be desirable to be able to tune the maximum
amount of memory assigned to each module of the program, to let each
module make the most of it (and not use too much).
It seems this would be easy if the containers supported custom storage
pools.

Thank you for your suggestions and ideas. I will look into them once
again when we start optimizing our program (all containers will be
reviewed in the near future in order to choose the best types, so it is
too soon now to work on the detailed solutions that you proposed).
-- 
Marcin Simonides





end of thread, other threads:[~2006-08-11 18:22 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-08-09 12:04 Ada.Containers and Storage_Pools msimonides
2006-08-09 13:27 ` Colin Paul Gloster
2006-08-09 16:16   ` Matthew Heaney
2006-08-09 16:05 ` Martin Krischik
2006-08-09 16:13 ` Matthew Heaney
2006-08-10  7:01   ` msimonides
2006-08-10 14:16     ` Matthew Heaney
2006-08-11 18:22       ` msimonides
