comp.lang.ada
From: Georg Bauhaus <rm.dash-bauhaus@futureapps.de>
Subject: Re: Some exciting new trends in concurrency and software design
Date: Fri, 24 Jun 2011 12:32:50 +0200
Message-ID: <4e0467d3$0$6577$9b4e6d93@newsspool3.arcor-online.net>
In-Reply-To: <op.vxj3nzlnule2fv@douda-yannick>

On 6/24/11 3:27 AM, Yannick Duchêne (Hibou57) wrote:
> On Fri, 24 Jun 2011 00:17:23 +0200, Georg Bauhaus <rm.dash-bauhaus@futureapps.de> wrote:
>> Ignoring things is a pattern in programming
> Funny wording :)
>
>> whose consequences,
>> while known to turn out expensive, are consistently ignored, too.
>> A pattern, incidentally, that seems to have been strikingly absent
>> when the HOLWG collected a catalog of language requirements more
>> than 30 years ago.
> Please, tell more

That's Ada history. Find it looking for "Requirements for
High Order Computer Programming Languages".


> I was not talking about FPLs in the large, as I know only one, which is SML (the others are too different). And this one was not designed to be cute, but to be sound. It was specified prior to its first implementation (as you know, I guess), just like Ada was, and this specification was not based on The Great Cute Levels Catalog, but made of coherence proofs instead. Unfortunately, you made this error so early that a large part of what you wrote from here on is unclear due to that assumption. But you are still OK, as it is my sole fault that I was too imprecise about the meaning of “forget” (to be clarified a bit later).

My topic is the pattern of deliberately ignoring things, and
how it is related to forgetting, simplifying, or refining.

ML has one feature whose characteristics and effects were basically
ignored.  It may be that, when the feature was designed, something
essential about writing programs had just been forgotten; but
certainly the feature was not to be refined later.  On the contrary.

If you forget things, they tend to be added later, or at
least you will concede that there is a bug.  This can be
fixed.

Refining is possible only when there had been an assumption
of something to be refined.  Ignoring is different from refining.

The pattern of ignoring things is present in ML syntax.  Andrew
Appel expounds on this in his Critique of Standard ML.  In one
paragraph, he mentions that ML syntax does not get him into
trouble any more.  But in later
paragraphs, he explains at some length where and how ML syntax was
*not* given the same attention as the semantics of Standard ML, and
why this is not the best of things.

I believe that ML syntax wasn't forgotten. Just ignored. It wasn't to
be refined later, either.  It wasn't simplified for modeling
reasons.  It was just ignored.  With SML out the door, we, the
programmers,  can get our load of condescension when we complain
about inscrutable error messages when some expression gulps the
rest of the compilation unit:  "Why didn't you put the
optional semicolon there, STUPID!? If your expression awaits
another function, then the compiler is quite right to consider
the next fun definition in the file to be that function!"  Or similar.

(Do you recognize the familiar argument: Joe Bloggs, the programmer,
writes something obviously correct, but it isn't correct; a language
"feature" has caught Joe; then someone says that Joe is stupid (and
shouldn't be programming) because Joe doesn't know the language
"feature".)

Next they address us in managerial style, invoking the Practical
Programmer who would just accept things the way they are.  Anything
but have the makers of ML admit that they made a serious mistake
when they ignored the very manifestation of human-computer
interaction: the syntax.  And once the example is out, others will
copy it.

Syntax is the means by which humans structure what they wish
to express, a program in our case.  How can this be forgotten?
When does industry accept the time loss spent in training
awareness of idiosyncrasies and in tackling the effects
of things ignored?


>> So they are seen as having potential.
> Not really, as people complain that the crowd did not adopt it (true for Haskell too).

F#, Scala, R, ...


>> - when the scripting language is used to process international text
>> in a concurrent environment and fails, since char issues and concurrency
>> have been ignored.
> Not a language failure, but an application design error.

When a language is basically not thread safe, I'll call it
a language issue.


>> - when the program's data model turns out to be relational, but
>> the object-as-a-table magic does not quite work.
> Wrong analysis. No language can avoid this, as this often occurs prior to there being enough text in any language, except human languages.

Let a problem P have a relational solution R. Demand that R be
implemented using a non-relational, but acclaimed language O.
Ignore programmers who mention that the admittedly fashionable
choice O might turn out wrong, and costly in the long run.

Another instance of the pattern of ignoring things at work.  The
motive here is the relative weight of feeling the immediate advantages
of crowd decision on the one hand and concerns about long-term
technical consequences on the other.  If the crowd is the current
source of profit, the choice seems clear.



>> The puzzling issue is why don't functional programmers just copy
>> the solution from Lisp, such as CMUCL? Or from Python? Are they
>> ignoring even "their own relatives"?
>>
>> * (let ((i 4000000000))
>>     (* i i))
>>
>> 16000000000000000000
>> *
>>
>>>>> i = 4000000000
>>>>> i * i
>> 16000000000000000000L
> I could not understand that point.

This is the crucial bit.  The goal is a language based on ML, like
ATS.  But, following the pattern, the designers of ATS seem to ignore
both ML's int (which raises Overflow instead of permitting erroneous
execution) and the alternative "functional ints".  Instead,
for "practical reasons" I guess, they chose C int behavior.
Assume that the new language is out the door and is being used
in commercial projects.  Will it continue to offer only C int and
its associated semantics, or will it move towards a better-defined
int?  Will sacred backwards compatibility stand in the way of
rectifying int towards ML's, for example?
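The contrast can be sketched in Python, which (like CMUCL's Lisp
above) has arbitrary-precision integers; the c_int32 helper below is
my own illustration of 32-bit two's-complement truncation, not an
ATS or ML primitive.  ML's int would instead raise Overflow at the
point where the value no longer fits.

```python
# Contrast arbitrary-precision integers (as in CMUCL's Lisp or Python)
# with C-style 32-bit int behavior.  c_int32 is an illustrative sketch
# of two's-complement truncation, not an ATS or ML primitive.

def c_int32(n):
    """Truncate n to a signed 32-bit value, as a C int cast would."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

i = 4000000000
print(i * i)           # 16000000000000000000  (exact, arbitrary precision)
print(c_int32(i * i))  # 1983905792            (silently truncated)
```

The exact square and the truncated one differ by many orders of
magnitude, and the C-style result arrives without any diagnostic.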


>> How can a safe ATS pointer be safe in the presence of int ?
> What is the question, precisely? I am not sure I've understood the question.

Assume, for the sake of generalization, that a pointer points
safely to a sum of addresses computed from int offsets.
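To make the concern concrete: if the offsets are summed in
fixed-width machine arithmetic, a bounds check on the wrapped sum
says nothing about the address the arithmetic actually meant.  A
small Python simulation (the 32-bit wraparound is modeled by
masking; none of this is real ATS code):

```python
# Simulate summing pointer offsets in 32-bit machine arithmetic, as a
# C unsigned int would, versus exact integer arithmetic.  This is an
# illustration of the concern, not real ATS or ML code.

MASK_32 = 0xFFFFFFFF

def add_u32(a, b):
    """Add with 32-bit unsigned wraparound, as C unsigned int does."""
    return (a + b) & MASK_32

offsets = [0x80000000, 0x80000010]   # exact sum is 2**32 + 16

wrapped = 0
for off in offsets:
    wrapped = add_u32(wrapped, off)

exact = sum(offsets)
print(wrapped)   # 16          -- would pass a bounds check on a tiny buffer
print(exact)     # 4294967312  -- the offset the computation intended
```

A "safe" pointer whose safety proof talks about the ideal sum can
thus be taken to an address the wrapped machine sum never reaches,
or vice versa.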


