From: Georg Bauhaus
Newsgroups: comp.lang.ada
Subject: Re: GNAT and GNU build system
Date: Thu, 5 Aug 2004 02:15:07 +0000 (UTC)
Organization: GMUGHDU
User-Agent: tin/1.5.8-20010221 ("Blue Water") (UNIX) (HP-UX/B.11.00 (9000/800))

Tapio Kelloniemi wrote:

: Building some non-autoconfized projects with custom compiler flags
: almost requires a full rewrite of their huge Makefile suites.

If you look at a program like RXP, there is very little in the Makefile
that needs to be adjusted, if anything at all.

If an Ada library isn't very system-specific, I wonder why it should test
all sorts of things using quite a few Unix-centric testing tools. If it is
a library for mass-market computers, I find it very likely that
requirements like "we need a C90 compiler and an integer size of 32 bits"
or "we need an Ada 95 compiler" are a lot more helpful than test programs
that try to find out whether some vintage C issue can be handled. Is it
helpful to write software that runs a large configure job in order to find
out things like whether some non-ANSI C Unix function works properly?
(Hey, it is 5% faster than the ANSI C function, so we want to use it even
in outer loops, or for our infrequent disk access.)

: GCC itself has a specs file which contains default include directories,
: and ld has the /etc/ld.so.conf file on Unix.
: If GNAT used these, writing these extra files (and changing them
: whenever GCC is updated) would just be more silly effort.

But ld.so.conf is not about sources? Some Ada libraries are built so they
can be used for dynamic linking. The standard include paths for the Ada
part need not be adjusted. Same as with C. But what if you want to, or
have to, use two different versions of a non-standard library? I wouldn't
want to create a mess using GCC's spec files. configure is very convenient
then, if it works. How does it work in non-standard situations?

Am I right in assuming that the average Free Software developer tries to
delegate software configuration efforts to autotools instead of trying to
make the design more amenable to platform changes?

: First, building software does not require these autotools to be
: installed; that is the whole point of them.

I have come across more than one configure script that was just wrong
because it had been built around assumptions (for example, assuming that a
command is just one word). And the point you mention is sometimes
neglected. Some things won't work with just configure.

: Secondly, configure- and automake-generated Makefiles only use the tools
: that are the most common (they don't even suppose that mkdir -p works
: correctly).

Why do configure scripts use so many sh features in the first place?
A construct like var=`cmd | sed rx` when a simple expr would do. Pipes in
the background (which require a shell with job control and Unix process
creation) when simple file redirections and simple piping would do? Or if
you look at the huge config.guess: why all this, why so many Unix-specific
processes and pipes just to guess a string like "i123-pc-linol-gna"? When
I tried to compile some Unix program on a very non-Unix system, the best
thing I could do with config.guess was to replace it with a one-liner that
just echoed "i123-pc-linol-gna".
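(For illustration, a minimal sketch of such a replacement; the triplet is
of course the made-up one from above, so put whatever your system is
actually called there:)

    #!/bin/sh
    # Hand-written stand-in for config.guess on a system it cannot handle.
    # All it has to do is print the cpu-vendor-os triple that the rest of
    # the build expects on standard output.
    echo "i123-pc-linol-gna"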
: The situations where GNU programs are required are those where all other
: implementations are silly or just incompatible with each other in a way
: that cannot be resolved easily. It is the responsibility of the package
: provider not to generate dependencies on the scripting languages you
: mentioned.

I wish there were more traces of responsibility in packaging. My
impression is that many package providers don't strive for minimal
dependence. One example is Gtk(Ada) on Debian testing (I'm not speaking
about Ludovic's efforts to provide a very good Ada development platform).
Some pixel-oriented stuff in Gtk seems to require that a lot of GNOME be
installed, somewhat more than the basic infrastructure including a broker.
If you want just the GUI portion of Gtk, bad luck.

Speaking of de facto, the current Debian testing build dependences of a
font transformation program written in C indirectly drag in Python and
OCaml via a specific version of autoconf's build dependences (IIRC). Nice
languages, but the specific versions are not easy to compile if you run
Debian stable rather than testing. (No backport available.) This is not
because the font transformation program's source code requires that the
compiler be checked. But the number of libraries needed is high, and there
is a new Debian build system that is assumed by the tools that the
maintainer of the font program has used, and recursively so. This
accumulates the transitive closure of dependences. That is rarely noticed
because developers usually work on their own machines, with everything
installed, it seems.

On what basis do developers specify the version numbers of the libraries
needed? For example, GNU awk currently depends on the GNU C library
>= 2.3.2 on Debian. And the GNU C library depends on the Berkeley DB
routines for compatibility reasons (this alone is interesting). The DB
routines in turn depend on the GNU C library >= 2.2.5... But does this
mean that GNU awk cannot be built with the GNU C library version 2? What
good is the ANSI C library if text processing tool maintainers specify,
possibly without knowing that they do, that a recent version of a huge C
library is needed? Can configure assist in avoiding this kind of
version-number and tools dependence?

:>If a multitude of C libraries cannot be built without
:>autotools, I don't see many reasons why this should be made
:>true of Ada libraries as well.
:
: I can't see why the approach, "install this to /usr/ADALIBRARY and adjust
: your ada_*_paths and environment variables, or copy your files by hand
: and oops, the paths were hardwired", would be better than running
: configure.

Can configure figure out which of the two versions of the X library and
which configuration of the Y library I wish to use for this build? After
all, if I use a standard setup, I can just as well use carefully built,
trusted binaries.
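(To make the comparison concrete, a minimal sketch of what the manual-path
approach looks like with GNAT; the library name and install prefixes are
made up, only ADA_INCLUDE_PATH, ADA_OBJECTS_PATH and the -aI/-aO switches
are real GNAT mechanisms:)

    # Two versions of a hypothetical library xlib, installed side by side.
    # Point GNAT at the one this build should use via the environment:
    export ADA_INCLUDE_PATH=/usr/local/xlib-1.2/include
    export ADA_OBJECTS_PATH=/usr/local/xlib-1.2/lib

    # ...or name the other one explicitly on the command line, per build:
    gnatmake -aI/usr/local/xlib-2.0/include -aO/usr/local/xlib-2.0/lib main.adb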
I guess configure is popular because it makes people think they have
configured their software easily. They haven't at all; they have trusted
others who, before them, have put a lot of effort into assuming and
checking. Luckily, it works on their system, too. Hmmm. The "it works" is
sometimes rephrased as "we are pragmatic" to cover an adventurous
development strategy...

Just imagine there were not thousands of people trying to configure some
piece of Free Software. Would a handful of programmers succeed in porting
the software?

: learning a separate configuration method for each lib and
: app is hell for a system administrator.

BTDT. OTOH, why do system administrators build software at all instead of
installing prebuilt binaries with heavenly ease? For example, because
there is a company style of administering software. But then, can a
library maintainer anticipate this company's style when collecting
information for auto/conf/igure?

: Somebody might like a GUI
: configuration tool and the developer will spend weeks getting it working
: on every platform,

This, I think, is a dream. A configuration tool that works on every
platform has not seen many platforms, I guess. :-)

--
Georg