From: "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de>
Subject: Re: RFC: Prototype for a user threading library in Ada
Date: Tue, 21 Jun 2016 09:34:17 +0200
Message-ID: <nkaqmn$1n9n$1@gioia.aioe.org>
In-Reply-To: 0b853aa8-e542-4f10-bd55-c4e76bb7bf75@googlegroups.com
On 21/06/2016 04:40, rieachus@comcast.net wrote:
> On Friday, June 17, 2016 at 12:46:46 PM UTC-4, Dmitry A. Kazakov wrote:
>> On 2016-06-17 18:18, Niklas Holsti wrote:
>>
>> My take on this problem is that no solution can exist at the library
>> level. All these frameworks may be fun (for the developer) but useless
>> (a horror for the end user) as long as the key problem is not solved:
>> the control-flow state (as you said) and the stack of local objects
>> must both be preserved between scheduling points. IMO this can be done
>> only at the language level, as co-routines, non-preemptive,
>> cooperative, user-scheduled tasks, call them what you wish.
>
> I've been trying to understand not just the code, but the goal. I
> decided to start from a different perspective. What if I had a problem
> and wanted to distribute the solution across thousands of processors?
> Since I tend to bang my head against NP-hard or NP-complete problems, I
> would want a program structure that allowed me to start up at least one
> (Ada) task per processor, with enough data to complete, and for the job
> creation software to use a very wide tree. A similar reverse tree could
> be used to collect results if needed.
I have a similar problem. I implement network protocols. A classic
implementation starts one task per client-server connection, or two
tasks when the exchange is full duplex. An OS can typically handle a few
hundred tasks before it runs out of juice. So the solution is to share a
large number of sockets among a few tasks. And that leads straight into
the problem of I/O-event-driven design, and thus to co-routines.
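The pattern of a few tasks servicing many sockets is what an event loop
does. Not Ada, but a minimal Python sketch of the idea, using the
standard `selectors` module and a socketpair in place of a real
client-server connection (all names here are illustrative):

```python
import selectors
import socket

# One "task" (thread) multiplexes many connections via an event loop.
sel = selectors.DefaultSelector()

def serve(conn):
    """Echo handler, invoked only when its socket is readable."""
    data = conn.recv(1024)
    if data:
        conn.sendall(data.upper())

# A socketpair stands in for one of many client-server connections;
# a real server would register hundreds of sockets the same way.
server, client = socket.socketpair()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, serve)

client.sendall(b"ping")

# The event loop: one thread, arbitrarily many registered sockets.
for key, _events in sel.select(timeout=1):
    key.data(key.fileobj)   # dispatch to the handler stored at register()

reply = client.recv(1024)
print(reply.decode())       # -> PING
```

Note what is missing: `serve` must be stateless between calls, which is
exactly the problem co-routines solve for multi-step protocols.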
> But to do any of this, I head for the distributed systems annex. It
> might be nice to have a simple example of how to do that on top of MPI:
> https://computing.llnl.gov/tutorials/mpi/#MPI2-3 (To be honest, using
> the C or Fortran bindings is what I have done...)
The distributed systems annex is unusable for massively parallel systems
because it is based on RPCs, which is the same problem at its core. Such
a system requires asynchronous communication driven by I/O events. Yet
the application logic is incoherent with the logic of I/O. To bring the
two together, again, [distributed] co-routines are needed to restore the
application's view of the control flow.
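To illustrate what "restoring the application view of control flow"
means, here is a toy sketch (Python generators playing the role of
co-routines; the protocol and names are made up): the handler reads as
straight-line code, while its local state between I/O events is kept by
the suspended co-routine rather than by a hand-written state machine.

```python
def handshake():
    """Protocol logic written sequentially; each `yield` is a
    scheduling point where the co-routine waits for the next event."""
    greeting = yield              # suspend until a message arrives
    assert greeting == "HELLO"
    credentials = yield           # local state survives the suspension
    return f"logged in with {credentials}"

# The event-driven side: resume the co-routine once per incoming event.
conn = handshake()
next(conn)                        # run up to the first yield
conn.send("HELLO")                # event 1: greeting arrives
try:
    conn.send("secret")           # event 2: credentials arrive
except StopIteration as done:
    result = done.value

print(result)                     # -> logged in with secret
```

The event dispatcher sees only opaque resumptions; the sequential
"application view" lives entirely inside the co-routine body.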
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de