From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 16 May 2012 20:01:56 +0200
From: Georg Bauhaus
Newsgroups: comp.lang.ada
Subject: Re: fyi, very interesting Ada paper OOP vs. Readability
References: <1ir29foizbqv1.v9uuhpykjl3n.dlg@40tude.net> <18ct9oamzq1u1$.wh6hj9mlqxna$.dlg@40tude.net> <4faf8700$0$6635$9b4e6d93@newsspool2.arcor-online.net>
Message-ID: <4fb3eb94$0$9505$9b4e6d93@newsspool1.arcor-online.net>
Organization: Arcor

On 16.05.12 17:00, NatarovVI wrote:
>>> in the modern parallel world, functional way better than imperative.
>>> because data flows in parallel program will be clearer and more
>>> deterministic.
The "modern parallel world" is what I am asking about: that world
occupies a small niche of the modern world, namely scientific
computation, socio-economic modelling, image processing, and the like.
At large, however, any computation in the modern world includes two
mathematically sound variables in every problem statement to be solved
by programming: TIME and SPACE. Harper (at existentialtype.wordpress.com)
includes them, too; see below.

Sometimes, indeed most of the time, these notions need to be more
precise than general efficiency classes O(f(...)). The modern world
poses many problem statements that carry more specific time bounds.
Examples:

- Stop the car/train/plane/ship/oven/recorder/saw, and turn things
  1 .. M off. Now. You have 650 milliseconds for your algorithm, and
  the budget allows for at most p processors, buses, etc.

- How many traders will likely sell before tomorrow?

These examples include potentially parallel, not concurrent,
sub-computations; see below for what Harper actually says about how to
arrive at parallelism. Note that Harper mentions the "fork-join
structure" of a Quicksort example, and that Randy Brukardt mentions the
overhead that Harper delegates to "the engineers". That overhead
appears only vaguely (somewhat imprecisely, one might say) in the
following theorem quoted by Harper:

"*Theorem* A computation with work w and depth d can be implemented in
a p-processor PRAM in time O(max(w/p, d))."

My question then translates into: how precise can O be for each p?
Because the *problem* statement *requires* that we make predictions of
a certain precision, as outlined in the modern real-world examples
above.

>> In all fairness, contemporary parallelism frequently needs to handle
>> signals, and emit signals, at well defined points in time, and in an
>> order dictated by the purpose of the program. The signals are results
>> of the program's parallel computations.
>> In which ways does monadic IO simplify predictability of effects?
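To make the precision question concrete, here is a sketch, in Ada, that
evaluates Harper's bound max(w/p, d) for a few processor counts. The
figures for w and d are invented example numbers of my own, not from
Harper's post; the point is that the bound says nothing about the
constants hidden inside O, which is exactly where a 650-millisecond
kind of requirement lives.

```ada
--  Sketch only: the figures for work and depth are invented for
--  illustration; the constants hidden in O are not modelled at all.
with Ada.Text_IO; use Ada.Text_IO;

procedure PRAM_Bound is
   W : constant := 1_000_000;   --  total work (example figure)
   D : constant := 1_000;       --  depth, the critical path (example)

   --  Harper's O(max(w/p, d)), minus the hidden constants:
   function Bound (P : Positive) return Natural is
     (Natural'Max (W / P, D));
begin
   for K in 1 .. 4 loop
      Put_Line ("p =" & Integer'Image (2 ** K)
                & ",  bound =" & Integer'Image (Bound (2 ** K)));
   end loop;
end PRAM_Bound;
```

For p = 8 the bound is max(125_000, 1_000) = 125_000 abstract steps;
whether that fits into 650 milliseconds on at most p processors depends
entirely on the constants the theorem abstracts away.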
> "parallelism is not concurrency"
> look at existentialtypes.wordpress.com for terminology explanation.

That's existentialtype.wordpress.com where, worth mentioning,
Prof. Robert Harper, a well known and respected ML man, offers his
thoughts. More specifically, the terminology is explained here:

http://existentialtype.wordpress.com/2011/03/17/parallelism-is-not-concurrency/

From which I quote:

"Now I can hear you object, but isn’t concurrency required to implement
parallelism? Well, yes, it is, (...)! The point is that concurrency is
not relevant to parallelism, even if the engineers who build our
parallel computing platforms must deal with concurrency."

Paraphrased: how can we parallel programmers rid ourselves of the real
work of implementing hardware and software for our parallel programs,
and delegate the not-so-entertaining programming and computer
construction to "the concurrency engineers"?

Don't get me wrong. I rather like the idea of a language-based model of
[parallel] computation that assigns "costs to the steps of the program
we actually write". It is roughly equivalent to my question about
predictability. (Note: not determinism, but predictability of time,
storage, and effects.) And it is an ages-old dream.

Note that Ada sits between the "horrendous" programming without
language support that Harper complains about and the envisioned
language that has parallel constructs without all the concurrency
things. Ada, more than other languages, lets you define constructs,
made from the language itself, that operate in parallel (not
concurrently), without resorting to "locks, monitors, deadlock, dining
philosophers, …".

> FP <> Haskell!
> i not sayed one word about monads or damned Haskell.

OK. ML has reference types. (Why?)
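To illustrate the previous point, here is a minimal fork-join sketch in
Ada, with no explicit locks or monitors: two worker tasks each sum half
of an array, and the caller joins by rendezvous. The task and entry
names (Summer, Start, Result) are mine, chosen for illustration only;
this is a sketch of the fork-join shape, not a claim about how Harper's
envisioned language would look.

```ada
--  Fork-join sketch: fork two workers, join by rendezvous.
with Ada.Text_IO; use Ada.Text_IO;

procedure Fork_Join_Sum is
   type Vector is array (Positive range <>) of Integer;
   Data : constant Vector (1 .. 8) := (1, 2, 3, 4, 5, 6, 7, 8);

   task type Summer is
      entry Start (First, Last : in Positive);
      entry Result (Sum : out Integer);
   end Summer;

   task body Summer is
      F, L : Positive;
      S    : Integer := 0;
   begin
      accept Start (First, Last : in Positive) do
         F := First;
         L := Last;
      end Start;
      for I in F .. L loop          --  the parallel "work"
         S := S + Data (I);
      end loop;
      accept Result (Sum : out Integer) do
         Sum := S;
      end Result;
   end Summer;

   Left, Right : Summer;            --  fork: two workers
   A, B        : Integer;
begin
   Left.Start (1, 4);
   Right.Start (5, 8);
   Left.Result (A);                 --  join: wait for both results
   Right.Result (B);
   Put_Line (Integer'Image (A + B));
end Fork_Join_Sum;
```

The rendezvous serves only as the join point; no shared variable is
written by more than one task, so none of the "locks, monitors,
deadlock, dining philosophers" apparatus appears in the source text.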
Again: if they some day arrive at a language-based solution that allows
removing reference types etc., and still permits precise reasoning
about the efficiency of parallel sub-computations, and about further
processing of their results, in time, in the modern world, then:
perfect!