comp.lang.ada
* Re: Computer beats Kasparov
       [not found] <4g29e2$ea0$1@mhadg.production.compuserve.com>
@ 1996-02-17  0:00 ` Cordes MJ
  1996-02-24  0:00 ` Tore Joergensen
  1 sibling, 0 replies; 7+ messages in thread
From: Cordes MJ @ 1996-02-17  0:00 UTC (permalink / raw)


Stuart Gascoyne (100525.632@CompuServe.COM) wrote:
: What's this got to do with Ada, you ask? Well, it used to be 
: asserted that a computer could never beat a human being at chess. 
: Then when that was disproved it was asserted that a computer 
: could never beat the best human chess players. Wrong. 

: Recently it was asserted in this newsgroup that a computer 
: (compiler) could never best a human at writing assembly language. 
: Even those that favoured high level languages over assembler 
: conceded this point.

: Why can't a compiler produce better assembly language than a 
: human? What is so intractable about the problem of writing 
: assembly language that prevents it ever being computable?

: And what is meant by a human assembler programmer? Is it intended 
: to mean a grandmaster, a good club player, or even one at 
: schoolboy level? Or is it only the very best assembler 
: programmer?

: Look to your laurels guys.

: -- 
: Stuart Gascoyne
: gascoyne@tartan.com

Does Deep Blue come in an Ada VMS/1750A cross compilation flavor? :-)

Your point is true, but if I start to look too far ahead I'll see auto 
code generators which write "grand master" code for compilers which write 
"grand master" machine code. I can see the ad now:

  Wanted: real-time embedded Ada programmer to work night shift 
  at 7-11 store. Great benefits.

:-) Mike

---------------------------------------------------------------------
Michael J Cordes
Phone: (817) 935-3823
Fax:   (817) 935-3800
EMail: CordesMJ@lfwc.lockheed.com
---------------------------------------------------------------------





* Re: Computer beats Kasparov
       [not found] <4g29e2$ea0$1@mhadg.production.compuserve.com>
  1996-02-17  0:00 ` Computer beats Kasparov Cordes MJ
@ 1996-02-24  0:00 ` Tore Joergensen
  1996-02-26  0:00   ` Cordes MJ
  1 sibling, 1 reply; 7+ messages in thread
From: Tore Joergensen @ 1996-02-24  0:00 UTC (permalink / raw)


Stuart Gascoyne (100525.632@CompuServe.COM) wrote:
: What's this got to do with Ada, you ask? Well, it used to be 
: asserted that a computer could never beat a human being at chess. 
: Then when that was disproved it was asserted that a computer 
: could never beat the best human chess players. Wrong. 

: Recently it was asserted in this newsgroup that a computer 
: (compiler) could never best a human at writing assembly language. 
: Even those that favoured high level languages over assembler 
: conceded this point.

: Why can't a compiler produce better assembly language than a 
: human? What is so intractable about the problem of writing 
: assembly language that prevents it ever being computable?

I'm not sure if Deep Blue used a neural network or not, but let me
say a few words about neural networks used for assembly programming
(and let me say that my knowledge about neural networks is very
limited and mostly based on talking with a friend who studies
neural networks as the main topic of his master's degree).

For the moment, the biggest problem with using neural networks in
critical applications is that the people who work with neural
networks can't explain in detail why the network does what it
does. This means that you will have to wait until they can, or
find a method to validate the result. If you make machines for
hospitals or airplanes, it doesn't sound like a good idea to
say "This code is very fast, and testing seems to indicate that
it does what it is supposed to do". You may say that this is 
more or less the same thing that we can say about optimized
code made by a compiler, but at least we can understand the
optimizations and choose to use only optimizations that we are
sure work properly. It is a bit harder if the compiler does
something in one place that makes a big difference in the code
in another place, just because it seems like a good thing to
do (I can't give you an example, but that is part of the
problem :-). I guess what I'm saying is: neural networks
are, or may soon be, good enough to make fast assembly
code from higher-level languages, but as long as they aren't
fully understood I doubt they will be accepted for critical
tasks. Because of the danger of lawsuits, most commercial
programming tasks are considered critical. Deep Blue on the
other hand (as I said, I don't know if it used neural networks
or not) is research, and even though it lost big bucks to
Kasparov, that can be viewed as Kasparov's salary for 
participating in the research. What the future might bring
is hard to say though :-).
--
+-------------------------+-------------------------------------------+
| Tore B. Joergensen      | e-mail : tore@lis.pitt.edu                |
| Centre Court Villa      | web    : http://www.pitt.edu/~tojst1      |
| 5535 Centre Avenue # 6  |                                           |
| Pgh, PA 15232, USA      | Norwegian MSIS-student at Univ. of Pgh.   |
+-------------------------+-------------------------------------------+





* Re: Computer beats Kasparov
  1996-02-26  0:00   ` Cordes MJ
@ 1996-02-25  0:00     ` Robert Dewar
  1996-02-26  0:00       ` Cordes MJ
  1996-02-26  0:00       ` Ken Garlington
  0 siblings, 2 replies; 7+ messages in thread
From: Robert Dewar @ 1996-02-25  0:00 UTC (permalink / raw)


"As long as generated code is identical each time you compile the same
code, it doesn't matter if the code generator uses AI. We use an Ada
compiler to generate safety critical embedded SW and have seen
code generation errors with code generators using common optimization
techniques."

Well, the issue of whether the code generator "uses AI" (whatever the
heck that might mean) is a red herring.

The issue is whether the code generated is "reviewable" in the sense
of Annex H of the RM. Achieving reviewability may involve inhibiting
some optimizations (regardless of how they are done).
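
For anyone who has not looked at Annex H, here is a minimal sketch of how
its pragmas show up at the source level. The pragma names (Reviewable,
Inspection_Point) are from the Ada 95 RM; the procedure and variable names
are invented purely for illustration:

  pragma Reviewable;
  --  Configuration pragma: asks the compiler to produce object code
  --  that supports review, per RM Annex H.

  procedure Update_Actuator (Command : in Integer) is
     Scaled : Integer;
  begin
     Scaled := Command * 4;
     pragma Inspection_Point (Scaled);
     --  At this point Scaled must hold its source-level value, so a
     --  reviewer can relate the generated object code back to here.
     --  ... drive the hardware with Scaled ...
  end Update_Actuator;

The idea, as I understand it, is that the inspection points give the
reviewer defined anchors even when the surrounding code has been optimized.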






* Re: Computer beats Kasparov
  1996-02-25  0:00     ` Robert Dewar
@ 1996-02-26  0:00       ` Cordes MJ
  1996-02-27  0:00         ` Robert Dewar
  1996-02-26  0:00       ` Ken Garlington
  1 sibling, 1 reply; 7+ messages in thread
From: Cordes MJ @ 1996-02-26  0:00 UTC (permalink / raw)


Robert Dewar (dewar@cs.nyu.edu) wrote with deletions:

: Well, the issue of whether the code generator "uses AI" (whatever the
: heck that might mean) is a red herring.

: The issue is whether the code generated is "reviewable" in the sense
: of Annex H of the RM. Achieving reviewability may involve inhibiting
: some optimizations (regardless of how they are done).
"Reviewable" in the sense of annex H of the RM will help identify areas
in the generated code where the implementation has done something 
worthy of "inspection" (e.g., run-time checks and calls to run-time
support routines). However, it really doesn't say anything about the 
correctness of the code (i.e., generated code).

As for inhibiting optimizations, I can't afford to turn off optimizations
(e.g., code hoisting techniques) simply to make my Ada/assembly 
interspersed listing easier to read. Although supporting "reviewable"
greatly improves our ability to identify unsafe code, the proof that
a program is actually safe still requires tests on the execution of
the code.
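
To make the hoisting point concrete, here is a schematic, entirely made-up
fragment of the kind of transformation I have in mind (none of these names
come from our code):

  package Gain_Demo is
     type Sample_Array is array (1 .. 64) of Integer;
     procedure Apply_Gain (Samples     : in     Sample_Array;
                           Gain, Scale : in     Integer;
                           Output      :    out Sample_Array);
  end Gain_Demo;

  package body Gain_Demo is
     procedure Apply_Gain (Samples     : in     Sample_Array;
                           Gain, Scale : in     Integer;
                           Output      :    out Sample_Array) is
     begin
        --  As written, Gain * Scale is recomputed on every iteration.
        for I in Samples'Range loop
           Output (I) := Samples (I) * (Gain * Scale);
        end loop;
        --  A hoisting optimizer may compute Gain * Scale once, before
        --  the loop, in the object code.  The generated code then no
        --  longer matches this listing statement for statement, which
        --  is what makes the interspersed listing harder to review.
     end Apply_Gain;
  end Gain_Demo;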

In any event, your first statement is true: it does not matter whether the 
code was generated by Deep Blue, one of today's code generators, or by hand.
Reviewability (and I'll include testability in this) is the issue.


--
---------------------------------------------------------------------
Michael J Cordes
Phone: (817) 935-3823
Fax:   (817) 935-3800
EMail: CordesMJ@lfwc.lockheed.com
---------------------------------------------------------------------





* Re: Computer beats Kasparov
  1996-02-25  0:00     ` Robert Dewar
  1996-02-26  0:00       ` Cordes MJ
@ 1996-02-26  0:00       ` Ken Garlington
  1 sibling, 0 replies; 7+ messages in thread
From: Ken Garlington @ 1996-02-26  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> "As long as generated code is identical each time you compile the same
> code, it doesn't matter if the code generator uses AI. We use an Ada
> compiler to generate safety critical embedded SW and have seen
> code generation errors with code generators using common optimization
> techniques."
> 
> Well, the issue of whether the code generator "uses AI" (whatever the
> heck that might mean) is a red herring.
> 
> The issue is whether the code generated is "reviewable" in the sense
> of Annex H of the RM. Achieving reviewability may involve inhibiting
> some optimizations (regardless of how they are done).

I think what Mike was trying to say was that using extremely complex
optimization techniques -- including, possibly, using AI-type heuristics --
to try to capture the process by which experienced programmers generate
"tight" assembly code would not necessarily be a problem from a safety-critical
standpoint. Assuming that Reviewable would give you information to understand
the relationship of the generated object code to the source (which is what
I expected it to do), then such advanced optimizations may be tolerable in
safety-critical applications.

This assumes, of course, that the Ada toolset generates the same code given the
same initial conditions (a set of source code compiled in some determinable
and consistent order, I guess). The task would be more complicated if, for
example, the toolset "learns" with each compilation, such that compiling the
same code six months later generates "tighter" but possibly incorrect code.

The bottom line is that we don't usually know exactly how the toolset does
optimizations, and don't care (within some limits). We assume that we will
have to validate the resulting code, using Reviewable and other
techniques, to assure its reliability and safety regardless.

The key phrase with respect to disabling optimizations, from the Rationale:
"...some optimizations could be disabled when the pragma Reviewable is in
force, rather than enhancing the compiler to meet the requirements with full
optimization." With Ada 83, we pay to get these "enhancements," and I suspect
that we will continue to do so with Ada 95. As a result, we would not disable
optimizations to get reviewable code. After all, safe code that won't fit in
the box is a little _too_ safe for our needs!





* Re: Computer beats Kasparov
  1996-02-24  0:00 ` Tore Joergensen
@ 1996-02-26  0:00   ` Cordes MJ
  1996-02-25  0:00     ` Robert Dewar
  0 siblings, 1 reply; 7+ messages in thread
From: Cordes MJ @ 1996-02-26  0:00 UTC (permalink / raw)


Tore Joergensen (tore@lis.pitt.edu) wrote:

  <snip>

: ... If you make machines for
: hospitals or airplanes, it doesn't sound like a good idea to
: say "This code is very fast, and testing seems to indicate that
: it does what it is supposed to do". You may say that this is 
: more or less the same thing that we can say about optimized
: code made by a compiler, but at least we can understand the
: optimizations and choose to use only optimizations that we are
: sure work properly. It is a bit harder if the compiler does
: something in one place that makes a big difference in the code
: in another place, just because it seems like a good thing to
: do (I can't give you an example, but that is part of the
: problem :-). I guess what I'm saying is: neural networks
: are, or may soon be, good enough to make fast assembly
: code from higher-level languages, but as long as they aren't
: fully understood I doubt they will be accepted for critical
: tasks.  ... 

As long as generated code is identical each time you compile the same
code, it doesn't matter if the code generator uses AI. We use an Ada 
compiler to generate safety critical embedded SW and have seen 
code generation errors with code generators using common optimization
techniques. 

The bottom line is that we always test the code in order to catch
design errors, code errors, and yes, compiler errors.

--
---------------------------------------------------------------------
Michael J Cordes
Phone: (817) 935-3823
Fax:   (817) 935-3800
EMail: CordesMJ@lfwc.lockheed.com
---------------------------------------------------------------------





* Re: Computer beats Kasparov
  1996-02-26  0:00       ` Cordes MJ
@ 1996-02-27  0:00         ` Robert Dewar
  0 siblings, 0 replies; 7+ messages in thread
From: Robert Dewar @ 1996-02-27  0:00 UTC (permalink / raw)


"As for inhibiting optimizations, I can't afford to turn off optimizations
(e.g., code hoisting techniques) simply to make my Ada/assembly
interspersed listing easier to read. Although supporting "reviewable"
greatly improves our ability to identify unsafe code, the proof that
a program is actually safe still requires tests on the execution of
the code."

Fine, you prefer to keep optimizations turned on, but for a LOT of safety
critical code, the request that is constantly heard from the community
is to turn off all optimization to make the code easier to track.

Now it is always hard to know exactly what this means. If you really turn
off ALL optimization, then you make the code harder to read (for example,
I usually find GCC-generated code harder to read at -O0 than at -O2).
However, GCC does not do much in the way of elaborate global optimization
which results in scrambling your code severely, and it is this kind of
scrambling that many practitioners in the area find severely hinders
the process of careful code certification at the object level.

Note that in my original article, I only suggested that there may be
circumstances in which optimizations may need to be suppressed to get
acceptable levels of reviewability. I did not say that everyone will
want this.

So, unless you are claiming that everyone agrees with your position in
the first paragraph, you are not disagreeing with my point. If you
really think everyone agrees that one cannot afford to turn off code
hoisting, then I am sure you are wrong, since I have heard so many
people energetically insist on the opposite position.

It is indeed hard to know what reviewability means, and indeed it is
unlikely that "one size fits all" will be viable.

Note incidentally that GCC is an interesting compiler from this point of
view. It demonstrates that remarkably good code can be obtained simply
by local optimizations and simple loop optimizations. Even though GCC
lacks sophisticated global optimizations, it often turns in better
performance than compilers which have these global optimizations but
don't do such a good job of local optimization.

It may be that for many purposes, gcc's -O2 represents a good compromise
between no optimization and "excessive" global optimization. Certainly
I have not found that the -O2 code is hard to read, even though the
assembly language output by GCC is pretty poor from the point of view
of human readability (e.g., no interspersed source, and only debugging
directives to provide symbol names, etc.).
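
For anyone who wants to try this themselves, here is a tiny made-up unit
and the kind of GNAT-style commands one might use to compare the output at
each level (the file name and exact flags are illustrative; -S asks the
compiler to stop after emitting assembly):

  --  sum.adb : a deliberately trivial, made-up example unit
  function Sum (A, B, C : Integer) return Integer is
  begin
     return A + B + C;
  end Sum;

  --  Then compare what the compiler emits at each level, e.g.:
  --    gcc -S -O0 sum.adb   (unoptimized; typically longer, more loads/stores)
  --    gcc -S -O2 sum.adb   (optimized; typically shorter and easier to follow)
  --  and read the resulting sum.s files side by side.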






