From: anon@att.net
Newsgroups: comp.lang.ada
Subject: Re: Ada platforms and pricing, was: Re: a new language, designed for safety !
Date: Fri, 20 Jun 2014 09:54:58 +0000 (UTC)

First, for the client, the priority of "Safety" falls below "Data Integrity" and "Performance". And for a gamer type of client it's "Performance" and "Functionality" first, then "Data Integrity", and "Safety" last. Just to give an example of how some non-programmers feel.

Now, for a compiler: it's easier than most here think, and it is even easier if you use Ada itself.

Step 1: Remove the OOP packages, which will reduce the libraries that are needed to the bare minimum. Create and add the OOP packages back later, once the compiler is working for everything else.
And since OOP does not require any system connections, you could simply use GNAT's OOP packages during the testing phases, before writing your own set.

Note 0: All coding outside Lexical Analysis should use the BNF in "Annex P", starting with 10.1.1.

Step 2: Create the "Lexical" and "Syntactic" routines, with error reporting.

Note 1: If you write your compiler right, then the "Aggregate" routines will become the most important routines in the Semantic Analysis. This is because a lot of Ada code within parentheses can be processed by an aggregate or modified aggregate routine. An example list:

  argument_association               -- within a pragma statement
  aggregate
  qualified_expression               -- can use an aggregate
  enumeration_type_definition
  index_constraint                   -- no "choice"
  discriminant_constraint            -- choice is a simple name
  indexed_component                  -- after prefix
  slice                              -- after prefix
  parameter_association
  actual_parameter_part
  entry_declaration                  -- after prefix, no "choice"
  entry_call_statement               -- after prefix, no "choice"
  generic_actual_part
  enumeration_representation_clause  -- uses an aggregate
  code_statement                     -- uses a record aggregate

  -- modified, replacing the arrow with assignment:
  component_list
  discriminant_part
  discriminant_specification
  formal_part

  -- modified so "when" handles the choice, and the expression
  -- can be a sequence_of_statements:
  variant
  case_statement_alternative
  formal_part
  entry_declaration
  entry_call_statement
  select_alternative
  exception_handler

Step 3: Semantic Analysis should be done in two or three phases.

Phase 1: Type checking. For example, in

  type identifier is range simple_expression .. simple_expression

both simple_expressions must be integer. And in some cases you can determine whether an expression is static (required for integer types).

Phase 2: Range checking. Add code to do range checking. In the compiler I created that was a problem, since the lower and upper ranges are not defined by the hardware, but by the programmer.
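The range-checking problem described in Phase 2 can be sketched as follows. This is a minimal illustration in Python rather than Ada (Python's arbitrary-precision integers conveniently stand in for universal_integer during compile-time evaluation); the RangeType class and its names are hypothetical, not from any real compiler:

```python
# Sketch: range checking against programmer-defined bounds,
# not hardware-defined ones. Python ints are unbounded, which
# mirrors evaluating bounds as universal_integer.

class RangeType:
    def __init__(self, name, low, high):
        # Bounds come from the source text, so they can be huge.
        if low > high:
            raise ValueError(f"null range for {name}")
        self.name, self.low, self.high = name, low, high

    def check(self, value):
        # The check a compiler would emit on assignment/conversion.
        if not (self.low <= value <= self.high):
            raise ValueError(
                f"Constraint_Error: {value} not in {self.name} range "
                f"{self.low} .. {self.high}")
        return value

# A (much smaller) cousin of the "foolishly large" type below.
sample = RangeType("Sample_Integer", -(2 ** 64), 2 ** 64 - 1)
sample.check(2 ** 63)       # in range, passes
try:
    sample.check(2 ** 65)   # out of range
except ValueError as e:
    print(e)
```

The point of keeping bounds as unbounded integers is that the front end never has to clamp the programmer's declared range to a machine word; only code generation needs to pick a representation.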
In other words, you could define a number as

  -- a foolishly large integer
  type Sample_Integer is range - 2 ** ( 2 ** 32 ) .. 2 ** ( 2 ** 32 ) - 1 ;

  -- or a real; in this case the compiler generates the
  -- lower and upper bounds
  type Sample_Float is digits 32767 ;

Even though the maximum line length (RM 2.2 (15)) shall be at least 200 characters, the compiler may have limits of its own, such as setting the maximum token size to 80 characters. Forcing a limit on token size forces an internal compiler storage limit, which in turn imposes source-code limits but not programming limits. The reason I did this was to see if a compiler could truly allow universal_integer and universal_real and still be functional with only a limited performance loss.

Phase 3: Maintain a Data Dictionary instead of recalculating types in "Code Generation". It saves compile time.

Step 4: First Optimization, to remove dead code. While GNAT will generate a warning message for unreferenced variables, it does not report unreferenced routines. Here there are three options: A) do nothing, B) perform it only if "pragma Optimize (Space)" is used, or C) just automatically remove all dead code. In the two latter cases, this process can be simple.

Step 5: Expansion, where "Generic"s are evaluated and new packages are created, and complex structures are rewritten into simpler structures. In some cases "Semantic" may need to be repeated for these routines or structures.

Step 6: Second Optimization, for speed of code. For GNAT this is done by the GCC back end, which exceeds the Optimization defined in the RM. Here there are four options: A) do nothing, B) perform it only if "pragma Optimize (Time)" is used, C) use an implementation-defined pragma to control optimization, such as "pragma Optimization ( )", then perform optimization at the level requested (this could also be a command-line option), or D) automatically optimize the code.
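Options B and C of Step 6 amount to selecting a list of optimization passes from a pragma or command-line level. A minimal sketch in Python, with toy stand-in passes (the pass names and level names are illustrative, not from GNAT or any real compiler):

```python
import re

# Toy pass: fold "2 + 3"-style integer additions in the text.
def constant_fold(code):
    return re.sub(r"\b(\d+)\s*\+\s*(\d+)\b",
                  lambda m: str(int(m.group(1)) + int(m.group(2))),
                  code)

# Toy stand-in for dead-code removal (Step 4): drop marked lines.
def strip_dead(code):
    return "\n".join(line for line in code.splitlines()
                     if "-- dead" not in line)

# Which passes run for which requested level (hypothetical mapping).
PASSES = {
    "Off":   [],                           # option A: do nothing
    "Space": [strip_dead],                 # pragma Optimize (Space)
    "Time":  [strip_dead, constant_fold],  # pragma Optimize (Time)
}

def optimize(code, level="Off"):
    for run_pass in PASSES[level]:
        code = run_pass(code)
    return code

src = "X := 2 + 3;\nY := 0; -- dead"
print(optimize(src, "Time"))   # prints: X := 5;
```

The same dispatch table works whether the level comes from an implementation-defined pragma or a command-line switch, which is why option C costs little extra once option B exists.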
Initially, it is easier and faster to "do nothing", except for maybe adding a dummy routine that can be modified later, once the compiler works.

Step 7: Code Generation. The compiler generates code based on the system configuration identifiers "System_Name", "Storage_Unit" and "Memory_Size", which are located in package System and can be modified by using one or more configuration pragmas. These configuration identifiers allow the compiler to cross-compile for different CPUs and systems, such as bare board, Linux or Windows (and can include Apple OS, or other OSes), by loading libraries that set the opcodes and calls for the specific CPU/OS or bare board.

Note 2: Use the ACVC files for your test suite. It helps to make the compiler ACVC-compliant from the start.

Just a suggestion: since you will have to write your own version of the Ada libraries, such as "Ada.Text_IO", write it and then rename the new library to "Compiler.Text_IO", etc. Then use "Compiler.Text_IO" instead of "Ada.Text_IO". Later, if you want, you can spend a few minutes changing "Compiler." back to "Ada.".

As for using GNAT: the GPL only affects your code if you link to the GNU or GNAT libraries, such as GNAT's "Ada.Text_IO". Code generation does not force the code to be under the GPL, unless you use GNAT's version of "Ada.Text_IO". Otherwise, GNU would own the intellectual property of all code used by its compilers, and that's not legal.

Note 3: Using GNAT's obsolete pragma "pragma No_Run_Time" removes access to most of GNAT's libraries. So it will force you to use your own packages instead of GNAT's.

In , "Dmitry A. Kazakov" writes:
>On Wed, 18 Jun 2014 17:34:35 +0000 (UTC), Natasha Kerensikova wrote:
>
>> For some reason I'm much more frightened by parsing Ada text than by
>> code generation. I know the latter is probably not easier than the
>> former (I'm aware of LLVM vs nested functions), but who said fright is
>> rational?
>
>Oh, but parsing is really simple thing.
>You do recursive descent all the
>time. Except for expressions. For expressions you could take this:
>
>http://www.dmitry-kazakov.de/ada/components.htm#12.9
>
>Semantic analysis is a hell. Code generation, optimization is a hell within
>hell, IMO.
>
>--
>Regards,
>Dmitry A. Kazakov
>http://www.dmitry-kazakov.de
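For what it's worth, the approach Kazakov describes above, plain recursive descent with special handling for expressions, is often done with precedence climbing. A toy Python sketch over a made-up four-operator integer grammar (no error reporting, no Ada specifics; all names are illustrative):

```python
import re

# Toy tokenizer: integers and the operators/parentheses we know.
def tokenize(source):
    return re.findall(r"\d+|[-+*/()]", source)

# Binding strength of each binary operator.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def parse(tokens):
    # primary handles the "recursive descent" part: literals and
    # parenthesized sub-expressions.
    def primary():
        tok = tokens.pop(0)
        if tok == "(":
            value = expr(0)
            tokens.pop(0)          # consume ")"
            return value
        return int(tok)

    # expr handles the "except for expressions" part: precedence
    # climbing instead of one grammar rule per precedence level.
    def expr(min_prec):
        lhs = primary()
        while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
            op = tokens.pop(0)
            rhs = expr(PREC[op] + 1)   # right side binds tighter
            lhs = {"+": lhs + rhs, "-": lhs - rhs,
                   "*": lhs * rhs, "/": lhs // rhs}[op]
        return lhs

    return expr(0)

print(parse(tokenize("2 + 3 * (4 - 1)")))   # prints: 11
```

Here the sketch evaluates as it parses; a real front end would build a tree node at each `lhs = ...` step instead and leave evaluation to later phases.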