comp.lang.ada
* Design help
@ 2007-03-09 22:43 Carroll, Andrew
  2007-03-09 23:07 ` Simon Wright
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Carroll, Andrew @ 2007-03-09 22:43 UTC (permalink / raw)
  To: comp.lang.ada

I am trying to design (what I guess is) a database table adapter.  Yes,
it is a master's course assignment.  The current design is to use a file
and have one record per line.  My goal is to get each line to be
"serialized" so I can read the whole line in bytes and then take chunks
of it and "cast" those into objects.

On this list from a previous poster I found:

   type Byte is range 0..255;
   for Byte'Size use 8;
   package Byte_IO is new Sequential_IO(Byte);

Does that mean I could define the record like:

   type dbrecord is range 0..sum_of_sizes_of_attributes;
   for dbrecord'size use sum_of_sizes_of_attributes;
   package DBRecord_IO is new Sequential_IO(dbrecord);

My next big question....

If I had
   myrec: dbrecord;
   ...
   Dbrecord := <some data>...
   ...

   X := dbrecord(33..70);  --what type is x here?
   attribute := cast(X);  --how do I cast x to an 'class?


Am I making any sense?


Andrew Carroll
Software Services
405-744-4943
andrew.carroll@okstate.edu




* Re: Design help
  2007-03-09 22:43 Carroll, Andrew
@ 2007-03-09 23:07 ` Simon Wright
  2007-03-10  1:00 ` Jeffrey R. Carter
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Simon Wright @ 2007-03-09 23:07 UTC (permalink / raw)


I think you should look at Ada.Streams and Ada.Streams.Stream_IO. OK,
this could do most of the work for you, but that's what higher-level
languages & their libraries are for ... not sure how your professor
would react.
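
A skeleton of the read side, assuming some row type My_Record (a
stand-in for whatever your record type ends up being):

   with Ada.Streams.Stream_IO; use Ada.Streams.Stream_IO;
   ...
   declare
      F : File_Type;
      R : My_Record;
   begin
      Open (F, Mode => In_File, Name => "table.dat");
      My_Record'Read (Stream (F), R);  --  default, compiler-composed
      Close (F);
   end;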




* Re: Design help
  2007-03-09 22:43 Carroll, Andrew
  2007-03-09 23:07 ` Simon Wright
@ 2007-03-10  1:00 ` Jeffrey R. Carter
  2007-03-10  4:40 ` Steve
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Jeffrey R. Carter @ 2007-03-10  1:00 UTC (permalink / raw)


Carroll, Andrew wrote:
> 
>    myrec: dbrecord;
>    ...
>    type dbrecord is range 0..sum_of_sizes_of_attributes;

>    X := dbrecord(33..70);  --what type is x here?

Dbrecord is a type, not an object, so I presume you meant Myrec here. 
But Dbrecord is an integer type, so Myrec (33 .. 70) is meaningless. It 
will be hard for you to get a meaningful answer until we can understand 
what you're asking.

-- 
Jeff Carter
"English bed-wetting types."
Monty Python & the Holy Grail
15




* Re: Design help
  2007-03-09 22:43 Carroll, Andrew
  2007-03-09 23:07 ` Simon Wright
  2007-03-10  1:00 ` Jeffrey R. Carter
@ 2007-03-10  4:40 ` Steve
  2007-03-10 13:38 ` Ludovic Brenta
  2007-03-17 20:34 ` Michael Erdmann
  4 siblings, 0 replies; 13+ messages in thread
From: Steve @ 2007-03-10  4:40 UTC (permalink / raw)


>"Carroll, Andrew" <andrew.carroll@okstate.edu> wrote in message 
>news:mailman.116.1173480208.18371.comp.lang.ada@ada-france.org...
>I am trying to design (what I guess is) a database table adapter.  Yes,
>it is a master's course assignment.  The current design is to use a file
>and have one record per line.  My goal is to get each line to be
>"serialized" so I can read the whole line in bytes and then take chunks
>of it and "cast" those into objects.
>
>On this list from a previous poster I found:
>
>   type Byte is range 0..255;
>   for Byte'Size use 8;
>   package Byte_IO is new Sequential_IO(Byte);
>

This defines a scalar data type for bytes.  Ok so far.

>Does that mean I could define the record like:
>
>   type dbrecord is range 0..sum_of_sizes_of_attributes;
>   for dbrecord'size use sum_of_sizes_of_attributes;
>   package DBRecord_IO is new Sequential_IO(dbrecord);
>

This looks a bit odd.  Here a scalar type called "dbrecord" is defined 
that may hold values in the range 0 to sum_of_sizes_of_attributes.

So, for example, if sum_of_sizes_of_attributes is 32, objects of type 
dbrecord may be assigned values in the range 0 to 32.  The representation 
clause "for dbrecord'size use sum_of_sizes_of_attributes" says that the 
storage size for those values is 32 bits... So values in the range 0 to 
32 are being stored in 32 bits... no problem with that, but I don't think it 
is what was intended.
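
If what you were after is a row with several fields, a record type is
probably what you want; for instance (field names invented):

   type DB_Row is record
      ID     : Byte;
      Active : Boolean;
      Name   : String (1 .. 20);
   end record;
   package DB_Row_IO is new Sequential_IO (DB_Row);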

>My next big question....
>
>If I had
>   myrec: dbrecord;
>   ...
>   Dbrecord := <some data>...
>   ...
>
>   X := dbrecord(33..70);  --what type is x here?
>   attribute := cast(X);  --how do I cast x to an 'class?
>
>
>Am I making any sense?
>

Not really.

I would suggest that you pick up a book on Ada.  It is a language worth 
learning.

Regards,
Steve
(The Duck)

>
>Andrew Carroll
>Software Services
>405-744-4943
>andrew.carroll@okstate.edu
> 






* Re: Design help
  2007-03-09 22:43 Carroll, Andrew
                   ` (2 preceding siblings ...)
  2007-03-10  4:40 ` Steve
@ 2007-03-10 13:38 ` Ludovic Brenta
  2007-03-17 20:34 ` Michael Erdmann
  4 siblings, 0 replies; 13+ messages in thread
From: Ludovic Brenta @ 2007-03-10 13:38 UTC (permalink / raw)


Andrew Carroll writes:
> I am trying to design (what I guess is) a database table adapter.  Yes,
> it is a master's course assignment.  The current design is to use a file
> and have one record per line.  My goal is to get each line to be
> "serialized" so I can read the whole line in bytes and then take chunks
> of it and "cast" those into objects.

The answer depends on whether or not your file contains fixed-width
records.  If that is the case, I would simply declare a record type
and use Sequential_IO for the record type directly, e.g.:

with Interfaces;
with Ada.Sequential_IO;

type Seconds_Since_Epoch is new Interfaces.Unsigned_32;

type DB_Record is record
   Primary_Key   : Interfaces.Unsigned_32;
   Name          : String (1 .. 20);
   Address       : String (1 .. 100);
   Date_Of_Birth : Seconds_Since_Epoch;
end record;

package DB_Record_IO is
   new Ada.Sequential_IO (Element_Type => DB_Record);

But if, on the other hand, the file contains variable-width records,
such as comma-separated values (CSV), you will need more sophisticated
conversion functions.  In that case, I would use Ada.Streams.Stream_IO
directly and convert the Stream_Elements one by one.
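
A rough sketch of that loop (the buffer size and file name are
invented):

with Ada.Streams;           use Ada.Streams;
with Ada.Streams.Stream_IO; use Ada.Streams.Stream_IO;

procedure Scan_Rows is
   F      : File_Type;
   Buffer : Stream_Element_Array (1 .. 1024);
   Last   : Stream_Element_Offset;
begin
   Open (F, Mode => In_File, Name => "table.csv");
   while not End_Of_File (F) loop
      Read (F, Buffer, Last);
      --  parse Buffer (Buffer'First .. Last) here, e.g. split on commas
   end loop;
   Close (F);
end Scan_Rows;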

If you want to convert to objects of tagged types, you should provide
a constructor function for your tagged type, like so:

type T is tagged record ... end record;

function To_T (Raw_Bytes : in Ada.Streams.Stream_Element_Array)
   return T;
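
The body would then unpack the fields by hand.  A sketch, inventing a
T with just a Boolean and a Natural in it:

function To_T (Raw_Bytes : in Ada.Streams.Stream_Element_Array)
   return T
is
   use type Ada.Streams.Stream_Element;
   use type Ada.Streams.Stream_Element_Offset;
   First : constant Ada.Streams.Stream_Element_Offset := Raw_Bytes'First;
begin
   --  single-byte fields here; derive the real offsets from your schema
   return (Flag  => Raw_Bytes (First) /= 0,
           Count => Natural (Raw_Bytes (First + 1)));
end To_T;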

Note that the database file will probably not contain the tag itself.
OTOH, if it does, then you can just use streams.

HTH

-- 
Ludovic Brenta.




* Design help
@ 2007-03-13  0:50 Carroll, Andrew
  2007-03-13  2:48 ` Randy Brukardt
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Carroll, Andrew @ 2007-03-13  0:50 UTC (permalink / raw)
  To: comp.lang.ada

I hope to have the database records be variable length; however, it may
make my life easier if they are fixed length, due to the fact that some
of the record structures contain pointers.  I guess those would be
access types instead of pointers, anyway...

Here are the record structures I have defined:

   type attribute is tagged
      record
         name         : string_ptr;
         domain       : string_ptr;
         isprimarykey : boolean    := false;
         byte_start : integer;
         byte_end   : integer;
         size       : integer;
      end record;

   -----------------------------------
   --    Extended Attribute Types   --
   -----------------------------------
   type booleanattribute is new attribute with
      record
         value      : boolean := false;
      end record;
   type booleanattribute_ptr is access all booleanattribute'class;



   type integerattribute is new attribute with
      record
         value      : integer := 0;
      end record;
   type integerattribute_ptr is access all integerattribute'class;



   type stringattribute is new attribute with
      record
         value      : string_ptr;
      end record;
   type stringattribute_ptr is access all stringattribute'class;



   type dateattribute is new attribute with
      record
         year       : ada.calendar.year_number;
         month      : ada.calendar.month_number;
         day        : ada.calendar.day_number;
         value      : ada.calendar.time;
      end record;
   type dateattribute_ptr is access all dateattribute'class;



Obviously a database tuple could have any combination of these in it but
luckily I only have to worry about 5 attributes in a database table
schema.  So, I want to read a whole database tuple in as a bit string
(or something like that) and then be able to convert small sections of
it into the appropriate record structures above for each of the
attributes defined on the table.  

I can get rid of the string_ptr and set a default string size.  If I did
that I would be pretty close to a fixed-width database record and could
use something like Ludovic mentioned.  I think the drawback to this is
that I would have to have an I/O package for each type of attribute
right?  Like this:

package Attribute_IO is
   new Ada.Sequential_IO (Element_Type => Attribute);
package BooleanAttribute_IO is
   new Ada.Sequential_IO (Element_Type => BooleanAttribute);
package IntegerAttribute_IO is
   new Ada.Sequential_IO (Element_Type => IntegerAttribute);
package StringAttribute_IO is
   new Ada.Sequential_IO (Element_Type => StringAttribute);
package DateAttribute_IO is
   new Ada.Sequential_IO (Element_Type => DateAttribute);

It seems like the second method that Ludovic mentioned might be better
in my case even if there is a fixed-width database record.  For each
type I have I could define a constructor function.  Like this:

function To_BooleanAttribute
   (Raw_Bytes : in Ada.Streams.Stream_Element_Array)
   return BooleanAttribute
is
begin
   --  but I don't know what to do here.
   --  I'm assuming that because it's an array of stream_elements
   --  (bytes?) I can grab any number of them from the array,
   --  something like Raw_Bytes (offset_byte .. size),
   --  where the offset_byte index could be calculated ???
   --  and the size would be taken from the size of the attribute type.
   --
   --  This is where I need help.
   --  Even if I knew that I could do the above, I wouldn't know how to
   --  make it officially a BooleanAttribute (in this case).  Would I
   --  use a qualified expression like:
   --     return BooleanAttribute'(Raw_Bytes (offset_byte .. size));
end To_BooleanAttribute;


HTHYHM



Andrew Carroll
Software Services
405-744-4943
andrew.carroll@okstate.edu




* Re: Design help
  2007-03-13  0:50 Design help Carroll, Andrew
@ 2007-03-13  2:48 ` Randy Brukardt
  2007-03-13  8:52 ` Stuart
  2007-03-13  9:40 ` Dmitry A. Kazakov
  2 siblings, 0 replies; 13+ messages in thread
From: Randy Brukardt @ 2007-03-13  2:48 UTC (permalink / raw)


"Carroll, Andrew" <andrew.carroll@okstate.edu> wrote in message
news:mailman.120.1173747027.18371.comp.lang.ada@ada-france.org...
...
> It seems like the second method that Ludovic mentioned might be better
> in my case even if there is a fixed-width database record.  For each
> type I have I could define a constructor function.  Like this:

You need to look at the stream attributes and how you can compose them when
constructing user-defined versions. I looked in the usual places, and it
didn't seem to be covered very thoroughly (for instance, John Barnes' book
gives only 5 pages to this topic, which really isn't enough). But writing a
full tutorial on this topic is a bit much...

Anyway, you don't necessarily need to write constructors (and matching
writers): the compiler will do it for you, unless your types have embedded
pointers (then you have to do it yourself). You can use the predefined
stream attributes for the types to handle the individual components. Of
course, if you want to use a specific representation for the result, then
you'll need to do it yourself. And then Unchecked_Conversion is your friend.
;-)
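
For the embedded-pointer case, the shape is roughly like this (String_Ptr
and Rec standing in for your own types):

   type String_Ptr is access String;

   type Rec is record
      Name : String_Ptr;
   end record;

   procedure Write_Rec
     (Stream : access Ada.Streams.Root_Stream_Type'Class;
      Item   : in Rec);
   for Rec'Write use Write_Rec;

   procedure Write_Rec
     (Stream : access Ada.Streams.Root_Stream_Type'Class;
      Item   : in Rec) is
   begin
      --  stream the designated string (String'Output includes the
      --  bounds), not the access value itself
      String'Output (Stream, Item.Name.all);
   end Write_Rec;

with a matching reader for Rec'Read doing something like
Item.Name := new String'(String'Input (Stream));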

                               Randy.








* Re: Design help
  2007-03-13  0:50 Design help Carroll, Andrew
  2007-03-13  2:48 ` Randy Brukardt
@ 2007-03-13  8:52 ` Stuart
  2007-03-13  9:40 ` Dmitry A. Kazakov
  2 siblings, 0 replies; 13+ messages in thread
From: Stuart @ 2007-03-13  8:52 UTC (permalink / raw)


"Carroll, Andrew" <andrew.carroll@okstate.edu> wrote in message 
news:mailman.120.1173747027.18371.comp.lang.ada@ada-france.org...

> I hope to have the database records be variable length however it may
> make my life easier if they are fixed length due to the fact that some
> of the record structures contain pointers.  I guess that would be access
> type instead of pointer, anyway...
...
> I can get rid of the string_ptr and set a default string size.

I may have missed some earlier discussion (you mention Ludovic's previous 
comments).

Have you considered what your pointer (accessor) is actually pointing to and 
where the value contained in that object is actually held relative to your 
database?  (In particular consider the lifespan of the object value compared 
to the pointer value in the database).

Regards
-- 
Stuart 






* Re: Design help
  2007-03-13  0:50 Design help Carroll, Andrew
  2007-03-13  2:48 ` Randy Brukardt
  2007-03-13  8:52 ` Stuart
@ 2007-03-13  9:40 ` Dmitry A. Kazakov
  2007-03-13 20:18   ` Simon Wright
  2007-03-13 22:22   ` Randy Brukardt
  2 siblings, 2 replies; 13+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-13  9:40 UTC (permalink / raw)


On Mon, 12 Mar 2007 19:50:11 -0500, Carroll, Andrew wrote:

> I hope to have the database records be variable length however it may
> make my life easier if they are fixed length due to the fact that some
> of the record structures contain pointers.  I guess that would be access
> type instead of pointer, anyway...

> Here are the record structures I have defined:
> 
>    type attribute is tagged
>       record

[...] 

Because attribute is not limited the compiler would generate assignment as
a shallow copy. That might get quite dangerous with pointers. Either it has
to be limited or strings need to be by-value (or reference-counted smart
pointers).

> Obviously a database tuple could have any combination of these in it but
> luckily I only have to worry about 5 attributes in a database table
> schema.

Why don't you compose tuple types from attributes?

type Row is record
   Name : StringAttribute;
   Birth : DateAttribute;
   ...
end record;

Row could be read from stream.
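
e.g., with F an open Ada.Streams.Stream_IO file:

R : Row;
...
Row'Read (Stream (F), R);

The compiler composes Row'Read from the components' own stream
attributes.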

However there is a problem of cross-platform portability. The blob you
read/write using stream attributes would be platform-dependent. If that's
a problem you'd need to override the stream attributes or else derive all
types from some base type with Serialize/Construct abstract methods
(losing the automatic aggregation of rows shown above).

(I used the latter approach in my persistence layer; that was in Ada 95
times. With Ada 2005 limited types, a platform-independent stream solution
could become possible. It would be interesting to investigate once a
stable Ada 2005 compiler appears.)

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Design help
  2007-03-13  9:40 ` Dmitry A. Kazakov
@ 2007-03-13 20:18   ` Simon Wright
  2007-03-13 22:22   ` Randy Brukardt
  1 sibling, 0 replies; 13+ messages in thread
From: Simon Wright @ 2007-03-13 20:18 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> However there is a problem of cross-platform portability. The blob
> you read/write using stream attributes would be
> platform-dependent. If that's a problem you'd need to override the
> stream attributes or else derive all types from some base type with
> Serialize/Construct abstract methods (losing the automatic
> aggregation of rows shown above).

With GNAT, you can rebuild the RTS with platform-independent default
stream attributes.




* Re: Design help
  2007-03-13  9:40 ` Dmitry A. Kazakov
  2007-03-13 20:18   ` Simon Wright
@ 2007-03-13 22:22   ` Randy Brukardt
  1 sibling, 0 replies; 13+ messages in thread
From: Randy Brukardt @ 2007-03-13 22:22 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:1v79s2t3fjrxu.14pyinp46ngmp.dlg@40tude.net...
...
> Because attribute is not limited the compiler would generate assignment as
> a shallow copy. That might get quite dangerous with pointers. Either it
> has to be limited or strings need to be by-value (or reference-counted
> smart pointers).

Or controlled types with an appropriate Adjust routine. (This is the option
I would most likely use.) The stream attributes don't include the tag, so
they wouldn't cause any additional trouble over the untagged versions.
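
In outline (String_Ptr again standing in for the string pointer type):

   with Ada.Finalization;

   type Rec is new Ada.Finalization.Controlled with record
      Name : String_Ptr;
   end record;

   procedure Adjust (Object : in out Rec) is
   begin
      --  runs after the compiler's shallow copy: duplicate the string
      --  (a matching Finalize should free it)
      if Object.Name /= null then
         Object.Name := new String'(Object.Name.all);
      end if;
   end Adjust;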

                   Randy.






* Re: Design help
  2007-03-09 22:43 Carroll, Andrew
                   ` (3 preceding siblings ...)
  2007-03-10 13:38 ` Ludovic Brenta
@ 2007-03-17 20:34 ` Michael Erdmann
  4 siblings, 0 replies; 13+ messages in thread
From: Michael Erdmann @ 2007-03-17 20:34 UTC (permalink / raw)


Carroll, Andrew wrote:
> I am trying to design (what I guess is) a database table adapter.  Yes,
> it is a master's course assignment.  The current design is to use a file
> and have one record per line.  My goal is to get each line to be
> "serialized" so I can read the whole line in bytes and then take chunks
> of it and "cast" those into objects.

Hello,

have a look in the CVS of GNADE.  You will find in the contrib/objects
section a technology demo doing this.  Maybe you can reuse the ideas.

Michael




* Design help
@ 2007-03-26 14:56 Carroll, Andrew
  0 siblings, 0 replies; 13+ messages in thread
From: Carroll, Andrew @ 2007-03-26 14:56 UTC (permalink / raw)
  To: comp.lang.ada

Alright, here is my status.  I decided to use a fixed record length
and I have the program limping along successfully.  I kept getting
segmentation faults with GNAT and tracked them down to a missing
".all" dereference on some pointers.  I'll come back to the
segmentation fault issue in a moment.  I have some further questions
about the record structures I used and their size, with respect to
calculating a positive_count within a file for use with the stream_io
set_index procedure.
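
That is, to jump to a given tuple or attribute I do something like
(the expression is illustrative):

    Set_Index (fout, Positive_Count (values (x).byte_start));

so the byte_start/byte_end bookkeeping has to agree exactly with what
'output actually writes.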

The base types are (not in this order):

    type schema (number_of_attributes : Integer) is tagged record
        tablename     : String (1 .. max_tablename_length) :=
                          (1 .. max_tablename_length => ' ');
        attributes    : attribute_array (1 .. number_of_attributes);
        record_length : Integer := 0;
        byte_start    : Integer := 0;
        byte_end      : Integer := 0;
    end record;

    type attribute is tagged record
        name         : String (1 .. max_attributename_length) :=
                         (1 .. max_attributename_length => ' ');
        domain       : String (1 .. max_typename_length) :=
                         (1 .. max_typename_length => ' ');
        isprimarykey : Boolean := False;
        byte_start   : Integer := 0;
        byte_end     : Integer := 0;
    end record;

Extended types are:
    type booleanattribute is new attribute with record
        value : Boolean := False;
    end record;

    type integerattribute is new attribute with record
        value : Integer := 0;
    end record;

    type stringattribute is new attribute with record
        value : String (1 .. max_stringlength) :=
                  (1 .. max_stringlength => ' ');
    end record;

    type dateattribute is new attribute with record
        year  : Year_Number  := 1901;
        month : Month_Number := 1;
        day   : Day_Number   := 1;
        value : Time         := Time_Of (1901, 1, 1);
    end record;


My database file looks something like:
--------------------------------
<<Schema'output here  >><<schema.attributes'output here>>
<<attribute(1)'output>><<attribute(2)'output>>...<<attribute(n)'output>>
<<attribute(1)'output>><<attribute(2)'output>>...<<attribute(n)'output>>
.  Each of these rows is a tuple
.
.
<<attribute(1)'output>><<attribute(2)'output>>...<<attribute(n)'output>>
--------------------------------


Where attribute(1) - attribute(n) are the streamed values of the
attributes.  The schema.attributes'output is only for
descriptive/comparative purposes, so that when I open the file I can
load the schema information as a guide and a reference for the format,
count, and properties of each tuple that follows it in the file.
Tuples are not terminated with an end_of_line character in my database
file.

When inserting tuples I calculate the byte_start and byte_end values for
each attribute.  When doing this I found that I must adjust them as
follows:

            if Trim (schemainfo.attributes (x).domain, Ada.Strings.Both)
              = "BOOLEAN"
            then
                values (x).byte_end := values (x).byte_start
                  + (booleanattribute'size / 8) - 1;
                booleanattribute'output
                  (Stream (fout), booleanattribute (values (x).all));
            elsif Trim (schemainfo.attributes (x).domain, Ada.Strings.Both)
              = "STRING"
            then
                values (x).byte_end := values (x).byte_start
                  + (stringattribute'size / 8) - 8;
                stringattribute'output
                  (Stream (fout), stringattribute (values (x).all));
            elsif Trim (schemainfo.attributes (x).domain, Ada.Strings.Both)
              = "INTEGER"
            then
                values (x).byte_end := values (x).byte_start
                  + (integerattribute'size / 8) - 7;
                integerattribute'output
                  (Stream (fout), integerattribute (values (x).all));
            else -- "DATE"
                values (x).byte_end := values (x).byte_start
                  + (dateattribute'size / 8) - 11;
                dateattribute'output
                  (Stream (fout), dateattribute (values (x).all));
            end if;

I had to debug the program to get the -1, -8, -7, and -11 values.  The
adjustments were made so that the byte_start and byte_end values match
the values returned by the stream_io.index function before and after
calling 'output on each attribute.  I'm sure I don't need to go into
detail as to why the positive_count index value is important to record
in a database file, so I won't spend time explaining that.

I don't understand why I needed the -1, -8, -7, and -11 adjustments.  I
know it relates to the size of the object, but why isn't it on a byte
boundary?

Thanks for any information you can provide.



Andrew Carroll
Software Services
405-744-4943
andrew.carroll@okstate.edu



