From: "Warren W. Gay VE3WWG"
Newsgroups: comp.lang.ada
Subject: Re: Re-Marketing Ada (was "With and use")
Date: Fri, 21 Nov 2003 12:42:14 -0500
Organization: Bell Sympatico

David J.A. Koogler wrote:
> Warren,
>
> Let me point out one nasty feature of C include files: there are
> idiots out there that use the C preprocessor as a code generator
> (Oracle immediately comes to mind). The include file gets
> referenced multiple times within a source file. The expansion of
> the include file depends upon the setting of conditional
> variables. For instance, you include the file at the start of the
> code and it generates a block of data definitions. Right after
> the #include you #define, say, CODE_GEN. Later on in the source
> you include the file again, but this time it generates blocks of
> code.

Yes, I am familiar with this technique, and have used it myself for
many years. This problem is a bit nasty, but I think it can be
controlled (extra care is required, as you point out).

> It is a powerful technique

Well, I view it more as a hack: it is designed to free the C/C++
programmer from having to worry about which comes first, cart or
horse. The obvious reason for this is the complex set of
relationships that develops amongst the various include files.

> and people use it as a form of Ada package, especially when they
> want in-lining effects. I have seen the technique carried to an
> extreme where the include file is designed for recursive
> inclusion. It is pure hell to understand systems built in this
> way. I usually find you must reverse engineer the system using
> the compiler's -E option to see the preprocessor's output.

I have had to resort to the same technique. Sometimes the compile
error messages just don't provide enough clues to the nature of the
error, because so many complex macro expansions have occurred to
get you to the current state of affairs. No argument there.
Recursive inclusion seems very much like an abuse of the feature to
me (and people focus on gotos!).
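For anyone following the thread who hasn't bumped into this trick,
here is a bare-bones sketch of the idea (the header and the names
in it are invented for illustration; this is not lifted from any
real Oracle or Linux header):

   /* gen.h -- deliberately has no include guard; its expansion
    * depends on whether CODE_GEN is defined at inclusion time.
    */
   #ifndef CODE_GEN
   /* First inclusion: emit the data definitions. */
   int counter_total;
   int counter_errors;
   #else
   /* Second inclusion: emit code that uses those definitions. */
   void reset_counters(void)
   {
       counter_total  = 0;
       counter_errors = 0;
   }
   #endif

   /* client.c */
   #include "gen.h"          /* expands to the data block */
   #define CODE_GEN
   #include "gen.h"          /* expands to the code block */

One header, two very different expansions, and the only clue in
client.c is the #define sitting between the two #includes.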
> Thankfully there are few people who write such code and so it is
> an extreme case, but look closely at some of the Linux include
> files and you may see similar sorts of idiocy. Hence translating
> include files into Ada packages can be more challenging than
> expected.

No, I think I understand the problem well, having sorted through
many such problems before. In the end, however, the C preprocessor
and compiler are able to make sense of it all. So I try not to get
discouraged by the job at hand, because algorithms can handle
complexity, as long as the methods involved are not overly complex.

> Here is an alternative course of action for such a translator,
> which should appeal to your database outlook. I wrote a simple
> translation system based on the Progress database/language. The
> parser was quite basic and grabbed the data definitions, routine
> definitions and #defines. I took advantage of the fact that all
> names in C must be unique (i.e. there is but a single global
> namespace). I used the names as database keys. I created records
> that held the parsed C descriptions. From the C descriptions I
> generated a parallel set of Ada descriptions, linked to the
> original C records via the C name.
>
> The translator is not expected to produce a perfect translation,
> but just to transcribe enough information to give you a good
> starting point. The translator does lots ...
>
> I have not worked on my translator in several years but if you
> are interested we could discuss this topic at greater depth.

Apart from the need for a database, I think we're pretty much on
the same wavelength. If you look at my APQ project, I use a
combination of shell scripts and simple run-once C programs to
generate Ada specs that are customized for the flavour of
UNIX/Windows that the user is compiling on (I don't handle cross
compiles). A toy example of that kind of run-once generator is
appended below my signature.

The thinbind tool could follow along the lines that you and I have
talked about. The first time the tool is run in development, it is
likely to spew warnings and errors all over the place. In fact, one
approach might be to write a new hints file, with commented-out
lines that the programmer could review, uncomment, and choose
courses of action for. Feed the hints file back in for round 2, and
iterate until success. Then iterate over a number of available
platforms, so that the final hints file encodes enough information
to handle just about any platform-related issue.

I believe that what you would use a database for can all be done in
memory (as a compile-time process), with the final state of
problems and programmer-coded hints being written to some hints.new
file. This can be suppressed in the final stages of development, or
simply discarded with a "make clean". If new problems come up, then
you take the hints.new file, edit it, and replace your current
freds_fft.hints file.

Agreed that an early rendition of the project could just capture
those local #defines (that is what I do in the APQ
make/shell/C-program mess). But function prototypes are also very
important. More sophisticated releases would also address
structures, etc.

I do believe that with the correct design, you could come very
close on this. My problem is that I already have too many Ada
projects to finish. I just keep hoping that the seed of an idea
gets planted out there with someone else who is spoiling for a good
project to do. ;-)

-- 
Warren W. Gay VE3WWG
http://home.cogeco.ca/~ve3wwg
[Remove nospam from the email address: the worms made me do it!]
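As promised above, here is a toy sketch of the run-once generator
idea. This is not the actual APQ code; the file, package and
constant names are invented for illustration. The point is only the
shape of the technique: compile and run a tiny C program on the
target platform, and let it print the platform-dependent part of an
Ada spec for you.

   /* gen_spec.c -- invented example of a run-once generator.
    * Compile and run it on the target platform, redirecting its
    * output into an Ada spec file.
    */
   #include <stdio.h>

   int main(void)
   {
       printf("--  Generated by gen_spec.c; do not edit.\n");
       printf("package Platform_Info is\n");
       printf("   Size_Of_Int  : constant := %u;  --  bytes\n",
              (unsigned) sizeof(int));
       printf("   Size_Of_Long : constant := %u;  --  bytes\n",
              (unsigned) sizeof(long));
       printf("end Platform_Info;\n");
       return 0;
   }

Something like "cc -o gen_spec gen_spec.c && ./gen_spec >
platform_info.ads" in the makefile, and the Ada side never has to
guess what the C side looks like on that particular platform.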