"Hadrien Grasland" wrote in message news:1e32c714-34cf-4828-81fc-6b7fd77e4532@googlegroups.com...
>On Sunday, June 26, 2016 at 05:09:25 UTC+2, Randy Brukardt wrote:
>> "Hadrien Grasland" wrote in message...
>> ...
>> It seems to me that the problem is with the "typical" Ada implementation
>> more than with the expressiveness of features, when it comes to highly
>> parallel implementations. Mapping tasks directly to OS threads only works
>> if the number of tasks is small. So if it hurts when you do that, then
>> DON'T DO THAT!! :-)
>
>Yes, the problem could be solved at the Ada implementation level. That
>would also help greatly with the abstraction side of things, as "natural"
>Ada abstractions could be made to work as expected (see below).
>
>However, any code which relies on this implementation characteristic would
>then become unportable, unless the standard also imposes that all
>implementations follow this path. Would that really be a reasonable
>request to make?

Performance is inherently unportable. You'll get rather different
performance characteristics from Janus/Ada and GNAT on the same machine,
even though both are Ada implementations. That's the nature of the beast --
if they really were identical, who'd need more than one implementation??
And if you depend that heavily on performance characteristics, you're
probably not portable to a different CPU or disk system anyway.

Still, the language could help, with aspects or library calls that provide
hints to the compiler/runtime. We already have features like that (Inline
and Pack come to mind).

>> I use Ada because I want it to prevent or detect all of my programming
>> mistakes before I have to debug something. (I HATE debugging!) I'd like
>> to extend that to parallel programming to the extent possible. I don't
>> want to be adding major new features that will make that harder.
>An Ada implementation which wanted to make the life of concurrent
>programmers easier could do the following things:
>
>1/ Keep the number of OS threads low (about the number of CPU cores, plus
>a few more for I/O), and map tasks to threads in an M:N fashion (many
>tasks multiplexed over a few threads).

Reasonably easy. (For Windows, stack limitations might be a problem; not a
problem on a bare target.)

>2/ Make sure that any Ada feature which blocks tasks does the right thing
>by switching to another task and taking care of waking up the blocked task
>later, instead of just blocking the underlying OS thread.

I think this follows from (1) and the Ada semantics. Blocking the
underlying thread wouldn't properly implement the semantics.

>3/ Make sure that the Ada standard library implementation behaves in a
>similarly sensible way, by replacing blocking system calls with
>nonblocking alternatives.

We didn't do that because it leads to unacceptable performance for silly
I/O, that is, stuff like:

   Write (File, Char);

It would make sense to revisit that.

>That is essentially how the Go programming language designers built their
>tasking model, so it is possible to do in a newly created programming
>language/implementation. But how hard would it be to retrofit into an
>existing Ada implementation? That I could not tell.

(1) and (2) describe how Janus/Ada works, with the obvious exception that
the number of underlying threads is 1. We didn't do (3) because it makes
silly (unbuffered) I/O quite slow. But we don't do much of that anymore
(unbuffered I/O is slow by itself -- I/O calls are themselves pretty
slow). Sometime in the 1990s I redid the I/O system so that it can tell
the difference between files (which can and should be buffered) and other
kinds of devices (for which buffering can be a nuisance or worse --
consider a buffered keyboard!). So this wouldn't be anywhere near the
problem it once was. Ergo, I don't think this is a problem at all.
But I don't think that really helps with the race and deadlock issues that
are the real problem with programming with Ada tasks. I'd like to find
some help there, too.

                                Randy.