Date: 8 Apr 93 23:29:40 GMT
From: scf16!bashford@ford-wdl1.arpa (Dave Bashford)
Subject: Summary - Intel Bit/Byte Order Conversion for Alsys Ada ?
Message-ID: <1993Apr8.232940.27030@scf.loral.com>

Oops, sorry this is sooooo late. Several weeks ago I asked for advice and
comments on some data-interoperability problems we are having and promised
to post a summary. Unfortunately, because of my frustration, I used a few
loaded phrases (e.g. "completely garbled", "pretty screwy"), and I didn't
make the different representations very clear, so most replies were not
very helpful. I guess I was looking for a magic bullet to avoid the _ugly_
conversion I subconsciously knew we would have to write, and I didn't find
it.

Another reason I posted our problem is that I was hoping to generate
discussion. I'm not sure what direction the discussion should take, except
that I'm not trying to start a flame war (I'm trying to be good :-)

What makes the conversion routine so _ugly_ is that, because of the
differences between the two storage formats, the conversion needs intimate
knowledge of how long each field is, what its data-type is, and what the
byte alignment was when the message was originally built. (The three bits
that were not specified belong to the next message - they are not spare.)
It is _not_ simply a matter of swapping bytes and/or reversing bits.

One comment that I've heard a lot lately is that the compilers chose the
most efficient way to store the records on their respective machines. The
problem I have with this statement is that it completely ignores the
intentions behind defining rep-specs. I rep-spec a record because I am
more concerned about its format than its efficiency - if I wanted
efficiency I would've let the compiler choose the format. I also think
this argument is partially wrong: the Intel processor does not have
bit-oriented instructions (e.g. bit-set, bit-test, etc.), so as far as
performance is concerned there is no difference between ascending and
descending bit numbering. Both Intel and Motorola use descending bit
numbering in their documentation.

I ended up writing two different format-conversion routines: one in Ada
using generics, and one in "another" language. The two conversions
followed different algorithms, and I spent more time optimizing the one in
"another" language. The reason I wrote the second one was that the first
was significantly slower than I had anticipated, and I couldn't think of
any way to optimize it significantly given the reason for its existence.
In other words, the only way I could see to solve the problem was to use
bit-shifts, masking, and bit-sets, which are directly available in
"another" language but in this Ada compiler are implemented with
bit-arrays whose indices are "backwards".

Anyway, the benchmarks I wrote were: 1. the Ada version; 2. the "other"
version; and 3. Ada records and rep-specs with no conversion. The
benchmarks were all run on the same machine (not the target machine) for
1_000_000 iterations.

    Test 1. The conversion in Ada took 61.8 seconds.
    Test 2. The conversion in "another" language took 8.9 seconds.
    Test 3. The no-conversion base-line took 1.3 seconds.
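If it helps to see the shape of the thing, below is a sketch of the
shift-and-mask approach translated back into Ada. Take it with a grain of
salt: it leans on an unsigned type with shift and mask operations (the
Unsigned_32, Shift_Left, and Shift_Right proposed for the Ada 9X
Interfaces package - exactly what our Ada 83 compiler lacks), the name
To_Motorola is made up, and the Intel-side shift counts are illustrative
rather than the actual Alsys layout:

    with Interfaces; use Interfaces;

    function To_Motorola (Intel_Image : Unsigned_32) return Unsigned_32 is
       --  Pull each field out of the Intel image.  Every shift count
       --  and mask encodes the width and position of one field of
       --  *this* message - which is what makes the routine unreusable.
       Field_1 : constant Unsigned_32 :=
         Intel_Image and 16#7FF#;                      -- 11 bits
       Field_2 : constant Unsigned_32 :=
         Shift_Right (Intel_Image, 11) and 3;          --  2 bits
       Field_3 : constant Unsigned_32 :=
         Shift_Right (Intel_Image, 13) and 1;          --  1 bit
       Field_4 : constant Unsigned_32 :=
         Shift_Right (Intel_Image, 14) and 16#7FFF#;   -- 15 bits
    begin
       --  Reassemble per the rep-spec read with descending bit numbers:
       --  Field_1 in bits 0..10 (the top of the word), and so on.  Bits
       --  29..31 belong to the next message, so they are left zero here.
       return Shift_Left (Field_1, 21) or
              Shift_Left (Field_2, 19) or
              Shift_Left (Field_3, 18) or
              Shift_Left (Field_4, 3);
    end To_Motorola;

Note that nothing in there is a plain byte-swap: change one field's width
and every shift count after it changes too.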
In order to avoid having to propagate the original byte-alignment of each
field and the message formats any further than necessary, I have been
advocating that the conversions take place on the Intel side when each
field is initialized or extracted. The byte-alignment of a field may not
be the same when it is initialized (given its value) as when the value is
extracted, because each of our messages is packed into a larger envelope
for transmission and these sub-messages are not a multiple of 8 bits in
length.

In some ways the problem of leaving things "implementation dependent"
reminds me of something I saw in a Verdix manual. I seem to have lost the
actual quote, so I will have to paraphrase: "If you depend on [some
language extension] your algorithm may not be portable. But since Ada
compilers are required to ignore pragmas that they don't understand, the
source code will still be portable Ada." In other words, it doesn't have
to work to be portable.

My Original Post:

>We are having a major problem sending data between an Intel processor
>and a Motorola processor both running Alsys compiled Ada. A simple
>bit-packed message gets completely garbled by the compiler on the Intel
>side even though it was completely rep-spec'd. Example:
>
>   type Message is record
>      Field_1 : Some_11_Bit_Type;
>      Field_2 : Some_02_Bit_Type;
>      Field_3 : Some_01_Bit_Type;
>      Field_4 : Some_15_Bit_Type;
>   end record;
>   for Message use record
>      Field_1 at 0 range 0..10;
>      Field_2 at 0 range 11..12;
>      Field_3 at 0 range 13..13;
>      Field_4 at 0 range 14..28;
>   end record;
>
>When this message is sent across to the Motorola processor the bit
>stream looks like this:
>
>   Least-Significant 8 Bits of Field_1,
>   Least-Significant 2 Bits of Field_4,
>   Field_3,
>   Field_2,
>   Most-Significant 3 Bits of Field_1,
>   Middle 8 Bits of Field_4,
>   Bits 29..31,
>   Most-Significant 5 Bits of Field_4
>
>This seems pretty screwy to me, but we have to find a solution!
>We obviously have a space problem - otherwise we wouldn't bit-pack.
>But we also have very tight time requirements.
>
>Does anyone have a solution to this? I would also be interested in
>editorial comments. Please e-mail your replies and I will summarize
>for the network. (I won't be able to read the news before it expires.)

--
db
bashford@srs.loral.com (Dave Bashford, Sunnyvale, CA)