* Looking for implementation idea @ 1999-02-07 0:00 Thomas Handler 1999-02-07 0:00 ` Paul Duquennoy ` (2 more replies) 0 siblings, 3 replies; 25+ messages in thread From: Thomas Handler @ 1999-02-07 0:00 UTC (permalink / raw)

Hi everybody!

I'm trying to migrate a bundle of applications to Ada95 (running GNAT and
Intel-Linux).  The problem I'm suffering from at the moment is that most of
the apps are heavily I/O bound, and the first one I'm going to touch has to
control typically around 100 devices via serial ports and about 40-50
socket connections.

I had already implemented a rough draft of this app using C++ and did all
of the work within a single process, using select and having a state
machine for each device and socket connection.

Anyway, I saw that using C++ for a team of developers is rather problematic
(yes, I know many teams are using C++, but for me it's too unsafe - just my
opinion, I prefer to sleep well at night ;-) and so I convinced my boss to
have a look at Ada95.

Now I'm trying to do a new draft, and I would like to use Ada tasks instead
of the state machines, so that there would be about 150 tasks around.
After doing some tests with native LinuxThreads and the FSU thread lib, I
think using the FSU rts is the better choice, since I will use the Ada
tasks as an abstraction aid rather than for real concurrency, and FSU is
much more efficient when operating this way.

My idea was to implement a kind of IO-Manager that is responsible for doing
the central select() call, since FSU threads block all threads on system
calls.  As stated in an earlier post, I wanted to use the Asynchronous
Transfer of Control (ATC) of Ada95 in the following way:

   loop
      select
         -- await any task registering new IO wishes
         Event.Wait;
         -- process the new IO wish
      then abort
         -- prepare the file descriptors and timeout value
         OS_Select(...);
         -- operate on the result of OS_Select() and inform the other tasks...
      end select;
   end loop;

The main idea behind this construct was that the IO task would get notified
immediately, via the Event, about any new task wishing to do some IO, and
it seemed efficient to me (though I have to admit I have done no tests yet,
so this is just an assumption of mine).

But after thinking about how Ada (or, more exactly, GNAT) implements ATC, I
saw some problems coming up and did some tests on the above loop.  These
tests interrupted OS_Select() very often, and I got the expected result -
after some hundred interrupts my system was unable to handle any OS_Select
even in a new context (i.e. after a logout and a new login), so it seems
the kernel had run out of resources...  Since all applications are
considered long-running apps, this approach must not be used for my
projects.

The problem is that my idea would work (really?) without ATC when using FSU
threads, since all other tasks are blocked while the IO task is inside
OS_Select().  But when using native thread support, any other task doing
some work and finding that it wants to do IO is unable to communicate this
to the IO task until the IO task has ended its OS_Select - another thing
that's not acceptable, because the function of my app would then depend
heavily on the semantics of the rts.

So at the moment I have no other idea than reverting to my state machines
(note that I have no problem with state machines, but my thought was that
using the Ada95 abstractions would make life easier for future developers
joining my team, thus avoiding annoying and hard-to-find errors) and
forgetting Ada tasks.

I hope I was able to express my problem so that anyone else is able to
follow, and hopefully this enables someone to give some hints.  I expect
that there exists an elegant Ada solution for this kind of problem, and I'm
too much of an Ada newbie to find it on my own, so any hints will be
greatly appreciated.

Thomas Handler

^ permalink raw reply	[flat|nested] 25+ messages in thread
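A standard way to get the immediate notification without aborting the select at all is the self-pipe trick: the IO-Manager adds the read end of a pipe to its select read set, and a task registering a new IO wish writes a single byte to the write end, which makes OS_Select return normally. The sketch below only illustrates that idea - IO_Manager, Register_Wish, and all the OS_* and descriptor-set subprograms are hypothetical bindings, not part of GNAT or of the original post:

```ada
--  Sketch only: every name here (OS_Pipe, OS_Select, the Read_Set
--  operations, Drain, Enqueue, ...) is an assumed binding or helper.
task body IO_Manager is
begin
   OS_Pipe (Wake_Read, Wake_Write);     -- create the wakeup pipe once
   loop
      Clear (Read_Set);
      Add (Read_Set, Wake_Read);        -- always watch the wakeup fd
      Add_Registered_Descriptors (Read_Set);
      OS_Select (Read_Set, Timeout => Forever);  -- never aborted
      if Is_Set (Read_Set, Wake_Read) then
         Drain (Wake_Read);             -- consume the wakeup byte(s)
         Collect_New_IO_Wishes;         -- pick up new registrations
      end if;
      Dispatch_Ready_Descriptors (Read_Set);
   end loop;
end IO_Manager;

--  Any task wanting IO registers its wish and pokes the pipe:
procedure Register_Wish (W : IO_Wish) is
begin
   Enqueue (W);                         -- protected queue of wishes
   OS_Write (Wake_Write, One_Byte);     -- wakes the select immediately
end Register_Wish;
```

Because the select returns normally instead of being aborted, no ATC is involved, so the kernel-resource leak seen with the ATC loop should not arise, and the scheme behaves the same under FSU threads and native LinuxThreads.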
* Re: Looking for implementation idea 1999-02-07 0:00 Looking for implementation idea Thomas Handler @ 1999-02-07 0:00 ` Paul Duquennoy 1999-02-08 0:00 ` Thomas Handler 1999-02-07 0:00 ` Corey Minyard 1999-02-07 0:00 ` Niklas Holsti 2 siblings, 1 reply; 25+ messages in thread From: Paul Duquennoy @ 1999-02-07 0:00 UTC (permalink / raw)

Thomas Handler wrote:
> Hi everybody!
>
> I'm trying to migrate a bundle of applications to Ada95 (running GNAT
> and Intel-Linux).
> The problem I'm suffering from at the moment is that most of the apps
> are heavily I/O bound and the first one I'm going to touch has to
> control typically around 100 devices via serial ports and about 40-50
> socket connections.
> <snip description>

I did something like that a few years ago.  I designed one dispatcher task
that did OS_Select to get the available data and was responsible for
assembling full messages (i.e. it handled one buffer per socket where the
message bytes were stored until a full message was obtained).  It then had
a rendezvous with a task from a task array whose index was the socket
number.  These tasks generated one answer for each message, so the
dispatcher task, which was also in charge of the writes, could tell whether
some work was in progress in the software from the difference between the
input message count and the output message count.  If work was in progress,
the OS_Select used a null time-out and a delay was used.  If no work was in
progress, the OS_Select could use an appropriate time-out.

   loop
      if Pending_Write_Count > 0 then
         select
            accept Write_Message (...);
            -- do the requested write and decrement the pending write count
         or
            delay Xx;
         end select;
         -- OS_Select with null time-out
         -- do reads; dispatch full messages, incrementing the pending write count
      else
         -- OS_Select with time-out
         -- do reads; dispatch full messages, incrementing the pending write count
      end if;
   end loop;

This method works well if it is possible for the dispatcher task to know
whether to use a time-out (blocking the program) or a delay (letting the
other tasks work).  Also, a new message from a socket could not be received
before the answer to the previous message was written.  Otherwise, incoming
messages need to be inserted in a socket-specific queue.

I hope it helps.

> Thomas Handler

Paul

^ permalink raw reply	[flat|nested] 25+ messages in thread
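In Ada95 the input and output message counts that the dispatcher compares are shared with the worker tasks, so they would most naturally live in a protected object. A minimal sketch - the names are invented for illustration, not taken from the post:

```ada
--  Sketch with hypothetical names: the dispatcher calls Message_In
--  when it dispatches a message and In_Progress on each iteration;
--  each worker calls Message_Out once its answer has been written.
protected Work_Tracker is
   procedure Message_In;
   procedure Message_Out;
   function In_Progress return Boolean;
private
   Inputs  : Natural := 0;
   Outputs : Natural := 0;
end Work_Tracker;

protected body Work_Tracker is
   procedure Message_In is
   begin
      Inputs := Inputs + 1;
   end Message_In;

   procedure Message_Out is
   begin
      Outputs := Outputs + 1;
   end Message_Out;

   function In_Progress return Boolean is
   begin
      return Inputs > Outputs;   -- some answer is still outstanding
   end In_Progress;
end Work_Tracker;
```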
* Re: Looking for implementation idea 1999-02-07 0:00 ` Paul Duquennoy @ 1999-02-08 0:00 ` Thomas Handler 0 siblings, 0 replies; 25+ messages in thread From: Thomas Handler @ 1999-02-08 0:00 UTC (permalink / raw)

Paul, thank you for your reply.

Paul Duquennoy wrote:
> I did something like that a few years ago.  I designed one dispatcher
> task that did OS_Select to get the available data and was responsible
> for assembling full messages (i.e. it handled one buffer per socket
> where the message bytes were stored until a full message was obtained).
> It then had a rendezvous with a task from a task array whose index was
> the socket number.  These tasks generated one answer for each message,
> so the dispatcher task, which was also in charge of the writes, could
> tell whether some work was in progress in the software from the
> difference between the input message count and the output message
> count.  If work was in progress, the OS_Select used a null time-out and
> a delay was used.  If no work was in progress, the OS_Select could use
> an appropriate time-out.

This approach is interesting; I didn't think of it that way.  But as far as
I can see, I still have one problem left: if the underlying task lib
doesn't block on OS calls (i.e. when using LinuxThreads), it would be
possible that one of the workers is doing some processing at the moment the
dispatcher checks whether there is some work to do.  So the counter is 0
and the dispatcher uses a timeout - and the next moment the worker task
finishes its calculation and wants to do IO...

Anyway, I got another piece for my puzzle ;-)

Ciao,
Thomas Handler

^ permalink raw reply	[flat|nested] 25+ messages in thread
* Re: Looking for implementation idea 1999-02-07 0:00 Looking for implementation idea Thomas Handler 1999-02-07 0:00 ` Paul Duquennoy @ 1999-02-07 0:00 ` Corey Minyard 1999-02-07 0:00 ` Tom Moran ` (3 more replies) 1999-02-07 0:00 ` Niklas Holsti 2 siblings, 4 replies; 25+ messages in thread From: Corey Minyard @ 1999-02-07 0:00 UTC (permalink / raw)

Thomas Handler <th@umundum.vol.at> writes:

> Hi everybody!
>
> I'm trying to migrate a bundle of applications to Ada95 (running GNAT
> and Intel-Linux).
> The problem I'm suffering from at the moment is that most of the apps
> are heavily I/O bound and the first one I'm going to touch has to
> control typically around 100 devices via serial ports and about 40-50
> socket connections.
> I had already implemented a rough draft of this app using C++ and did
> all of the work within a single process using select and having a state
> machine for each device and socket connection.
>
> Anyway I saw that using C++ for a team of developers is rather
> problematic (yes I know many teams are using C++ but for me it's too
> unsafe - just my opinion, I prefer to sleep well at night ;-) and
> so I convinced my boss to have a look at Ada95.

Wow, you must be very persuasive! :-)

Now about the tasking.  Since Ada has tasking built in, there is a great
temptation to use it even when it is not the best way to solve the problem.
So I'm going to have to jump on my tasking soapbox.

BEGIN SOAPBOX

I believe that tasking (or threading) is a heavily overused thing.  Not
just in Ada, but generally.  IMHO, using tasking will have three general
effects on your system:

  1) It will be more complex
  2) It will be less efficient
  3) It will be less reliable

Now in more detail:

1) It will be more complex - Tasking can improve the conceptual simplicity
of a problem (just have each operation handled by one task, etc.).
However, that ignores the need to coordinate shared data access and
inter-task interaction, which are much more complex in a tasking scenario.
It also makes recovery from overload much more difficult, since the load is
not in a centralized place.  Also, if the number of tasks is dynamic, you
will have to manage running out of tasking resources.

2) It will be less efficient - The overhead of context switches and
inter-task communication is higher than that of plain procedure calls.

3) It will be less reliable - Concurrency makes the system more difficult
to understand and test completely because of the asynchronous nature of the
interactions.  It is easy to miss protecting a shared variable, and that
mistake is nearly impossible to test for.

So am I anti-tasking?  Not at all.  Tasking is an extremely important
concept in the design of systems.  So why do I think tasking should be
used?

  1) Preemptability
  2) SMP performance
  3) Performance using blocking I/O
  4) The OS doesn't support waiting for multiple things in one task

Again in more detail:

1) Preemptability - If an event occurs in the system and it must be handled
in a timely fashion, tasking is the way to go.

2) SMP performance - If you want to use all the processors in an SMP box
for a single application, you will have to run multiple tasks.  One per
processor is often sufficient.  Note that this usually requires native
threads; FSU threads won't give you this.

3) Performance using blocking I/O - If your OS doesn't provide non-blocking
reading or writing, you might have to use multiple tasks to improve the
performance of your system, since the ability to have multiple I/O
operations pending at the same time improves the quality of disk scheduling
and allows work to be done while waiting for I/O.  If the OS has
non-blocking I/O, though, that is the way to go.

4) The OS doesn't support waiting for multiple things in one task - Some
just don't (OS/2 comes to mind) and you have to have a task per input
source.

So, for instance, in your system you might have multiple message sources
and message types with different priorities, and in an overload you might
want to discard the lower-priority stuff before discarding the
higher-priority.  You might also want to discard messages that are older
than a few seconds.  I would use multi-tasking in this design due to
preemptability.  You would need a higher-priority task reading input from
the system and managing queues to another task.  That task would enqueue
messages, discard old ones, and detect and manage an overload, so it would
have to be higher priority than the processing.  Another task would read
from the queues and process the items.  The only place of concurrency here
is the queues, which should be easy to manage.

If I had SMP, I might have one task per processor.  Then I have to worry
about concurrency between tasks if they update shared data.  But that is
the price to pay for using SMP.

Linux does have the ability to do non-blocking operations on the I/O ports
you are using, and it can wait for multiple I/O operations at the same
time, so the other items are not a concern for you.

END SOAPBOX

Comments on this, of course, are welcome.  Hey, I am posting it to a
newsgroup, what do I expect?

One question for the Ada experts: Ada protected types don't work on SMP
since they are task priority based, do they?  Or maybe I'm missing
something.  If they don't, maybe we should think about adding a real
semaphore to the Ada spec.

-- 
Corey Minyard                    Internet:  minyard@acm.org
Work:  minyard@nortelnetworks.com    UUCP:  minyard@wf-rch.cirr.com

^ permalink raw reply	[flat|nested] 25+ messages in thread
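The queue-between-two-tasks design described above can be sketched in Ada95. This is only an illustration - Message, Message_List, Max_Depth, and the list helpers are invented, and a real version would also time-stamp messages so old ones can be discarded:

```ada
--  Sketch: the protected queue is the single point of concurrency
--  between the high-priority reader task and the processing task.
protected Queue is
   procedure Put (M : in Message);      -- reader task enqueues
   entry Get (M : out Message);         -- processing task blocks here
private
   Buffer : Message_List;
end Queue;

protected body Queue is
   procedure Put (M : in Message) is
   begin
      if Length (Buffer) >= Max_Depth then
         Discard_Lowest_Priority (Buffer);   -- overload management
      end if;
      Append (Buffer, M);
   end Put;

   entry Get (M : out Message) when not Is_Empty (Buffer) is
   begin
      Remove_First (Buffer, M);
   end Get;
end Queue;

--  The reader preempts the processor by running at a higher priority:
task Reader is
   pragma Priority (System.Default_Priority + 1);
end Reader;
```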
* Re: Looking for implementation idea 1999-02-07 0:00 ` Corey Minyard @ 1999-02-07 0:00 ` Tom Moran 1999-02-07 0:00 ` Corey Minyard 1999-02-07 0:00 ` Larry Kilgallen ` (2 subsequent siblings) 3 siblings, 1 reply; 25+ messages in thread From: Tom Moran @ 1999-02-07 0:00 UTC (permalink / raw)

>Not just in Ada, but generally. IMHO, using tasking will have three
>general effects on your system:
> 1) It will be more complex
> 2) It will be less efficient
> 3) It will be less reliable

(1) depends on how independent the tasks are.  If they need to talk to each
other, yes, that can get complex.  If each port/socket is essentially
independent, and each can be run by a separate task that blocks on IO, the
design can be very simple.

(2) depends on the frequency of context switches.  With the compilers I
have tried, on MS Windows, switch times are in the tens of microseconds.
If you do that a lot, it's bad, but if you do it once per millisecond,
you're only talking a few percent of CPU.

(3) depends a lot on (1), complexity.  If it's simple, it's easy to design
it to be reliable, even if timing dependencies make it hard to test it to
be reliable.  If it's complex, it's harder to design in reliability, and
perhaps it is worth cutting down on timing dependencies to simplify
testing.

My $.02

^ permalink raw reply	[flat|nested] 25+ messages in thread
* Re: Looking for implementation idea 1999-02-07 0:00 ` Tom Moran @ 1999-02-07 0:00 ` Corey Minyard 0 siblings, 0 replies; 25+ messages in thread From: Corey Minyard @ 1999-02-07 0:00 UTC (permalink / raw)

tmoran@bix.com (Tom Moran) writes:

> >Not just in Ada, but generally. IMHO, using tasking will have three
> >general effects on your system:
> > 1) It will be more complex
> > 2) It will be less efficient
> > 3) It will be less reliable
>
> (1) depends on how independent the tasks are.  If they need to talk to
> each other, yes, that can get complex.  If each port/socket is
> essentially independent, and each can be run by a separate task that
> blocks on IO, the design can be very simple.
> (2) depends on the frequency of context switches.  With the compilers I
> have tried, on MS Windows, switch times are in the tens of
> microseconds.  If you do that a lot, it's bad, but if you do it once
> per millisecond, you're only talking a few percent of CPU.
> (3) depends a lot on (1), complexity.  If it's simple, it's easy to
> design it to be reliable, even if timing dependencies make it hard to
> test it to be reliable.  If it's complex, it's harder to design in
> reliability, and perhaps it is worth cutting down on timing
> dependencies to simplify testing.
> My $.02

I realized I should have spoken to this after I replied.  If the individual
tasks are completely independent, then it might make sense to split them up
(ignoring overload and efficiency concerns, of course).  Then they are
really more like multiple instances of the same application running, and
complexity would probably go down by splitting them up.  Thanks for
pointing that out.

It will almost always be less efficient, even if it only does one more
context switch; just not much less efficient.  But when you are talking
about handling thousands of messages a second, context switches can become
very expensive.  I know from experience.
-- Corey Minyard Internet: minyard@acm.org Work: minyard@nortelnetworks.com UUCP: minyard@wf-rch.cirr.com ^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea 1999-02-07 0:00 ` Corey Minyard 1999-02-07 0:00 ` Tom Moran @ 1999-02-07 0:00 ` Larry Kilgallen 1999-02-08 0:00 ` dewar 1999-02-07 0:00 ` Tucker Taft 1999-02-08 0:00 ` Thomas Handler 3 siblings, 1 reply; 25+ messages in thread From: Larry Kilgallen @ 1999-02-07 0:00 UTC (permalink / raw) In article <m2yamagkdi.fsf@wf-rch.cirr.com>, Corey Minyard <minyard@acm.org> writes: > One question for the Ada experts: Ada protected types don't work in > SMP since they are task priority based, do they? Or maybe I'm missing > something. If they don't, maybe we should think about adding a real > semaphore to the Ada spec. I cannot speak as an Ada expert, but I can say that the natural expectation of a programmer would be that if a protected type mechanism worked on a uniprocessor it would also work correctly on an SMP machine. The availability of extra processors should not have to be a coding concern. Perhaps there is something in the specification of compiler tests that allows one to pass with the caveat that "it only works on uniprocessors", but if a compiler vendor used that approach I would think they lose some of the more desirable customers (those with enough money to buy multiprocessors). Larry Kilgallen ^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea 1999-02-07 0:00 ` Larry Kilgallen @ 1999-02-08 0:00 ` dewar 1999-02-08 0:00 ` dennison 0 siblings, 1 reply; 25+ messages in thread From: dewar @ 1999-02-08 0:00 UTC (permalink / raw)

In article <1999Feb7.152252.1@eisner>, Kilgallen@eisner.decus.org.nospam wrote:

> Perhaps there is something in the specification of
> compiler tests that allows one to pass with the caveat
> that "it only works on uniprocessors", but if a compiler
> vendor used that approach I would think they lose some of
> the more desirable customers (those with enough money to
> buy multiprocessors).

Of course it would be permissible to write an Ada compiler that only worked
on uniprocessors, but as Larry says, this is unlikely, since it would not
make commercial sense on a machine for which MP configurations were
available, and certainly all current Ada 95 technologies support MP where
appropriate.

However, it should be noted that at the *programmer* level it is quite
possible to write programs that will work only on a single processor
(indeed some of the ACVCs are in this category, and are marked as being
inapplicable on multi-processors).  If a program assumes that an active
high-priority task which does not block will guarantee that low-priority
tasks make no progress at all (a guarantee that MUST be true on a
mono-processor for a compiler that fully implements the real-time annex
(*)), then you have a program which will work only on a single processor.
Whether this is a bug or not depends on whether you want your code to be
portable to multi-processors.

It should be understood that it is not necessarily terrible to make this
assumption if you know you are in an SP environment, since there are
paradigms that operate much more efficiently if this approach is taken.
Still, it is a risky trap.

(*) Of course not all compilers fully implement the real-time annex,
including the important FIFO_Within_Priorities dispatching policy that
makes this guarantee.
For example, GNATWorks (GNAT with VxWorks) does fully implement this capability, but some other Ada/95 VxWorks products do not. As usual, you have to be careful that the compiler you get has the capabilities you need! Robert Dewar Ada Core Technologies -----------== Posted via Deja News, The Discussion Network ==---------- http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own ^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea 1999-02-08 0:00 ` dewar @ 1999-02-08 0:00 ` dennison 1999-02-08 0:00 ` robert_dewar 0 siblings, 1 reply; 25+ messages in thread From: dennison @ 1999-02-08 0:00 UTC (permalink / raw)

In article <79ldf2$sc3$1@nnrp1.dejanews.com>, dewar@gnat.com wrote:

> If a program assumes that an active high priority task
> which does not block will guarantee that low priority tasks
> make no progress at all (a guarantee that MUST be true on
> a mono-processor for a compiler that fully implements the
> real time annex (*)) then you have a program which will
> work only on a single processor.

> (*) Of course not all compilers fully implement the real
> time annex, including the important FIFO_Within_Priorities
> dispatching policy that makes this guarantee. For example,

Isn't it possible that a compiler implementing that annex has a default
dispatching policy (not FIFO_Within_Priorities, of course) that doesn't
make this guarantee?  I suppose you could force it to work on such a system
(again, on a single processor) by specifying the FIFO_* policy rather than
using the default.

T.E.D.

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own

^ permalink raw reply	[flat|nested] 25+ messages in thread
* Re: Looking for implementation idea 1999-02-08 0:00 ` dennison @ 1999-02-08 0:00 ` robert_dewar 0 siblings, 0 replies; 25+ messages in thread From: robert_dewar @ 1999-02-08 0:00 UTC (permalink / raw) In article <79n3bi$6bu$1@nnrp1.dejanews.com>, dennison@telepath.com wrote: > Isn't it possible that a compiler implementing that annex > has a default dispatching policy (not > FIFO_Within_Priorities of course) that doesn't make this > guarantee? I suppose you could force it to work on said > system (again, > on a single processor), by specifing FIFO_* policy rather > than using the > default. Obviously (or at least I thought it was obvious from my post), the kind of programs I am talking about are not about to be run with a completely system dependent undefined dispatching policy (if they do this they are of course completely system dependent, and not even vaguely portable). So I am of course assuming that you include the appropriate Dispatching_Policy pragma. A program that depends on the dispatching policy and does not specify the policy is a very dubious program! Of course most programs do NOT depend on the dispatching policy and that is just fine. -----------== Posted via Deja News, The Discussion Network ==---------- http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own ^ permalink raw reply [flat|nested] 25+ messages in thread
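As the exchange above suggests, a program that relies on a particular dispatching behaviour should name the policy explicitly with the Real-Time Annex configuration pragmas. These are standard Ada95 pragma names, though whether a given compiler honors them depends on its annex support:

```ada
--  Configuration pragmas, given once for the whole partition.
--  FIFO_Within_Priorities provides the mono-processor guarantee
--  discussed above; Ceiling_Locking is the matching locking policy.
pragma Task_Dispatching_Policy (FIFO_Within_Priorities);
pragma Locking_Policy (Ceiling_Locking);
```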
* Re: Looking for implementation idea 1999-02-07 0:00 ` Corey Minyard 1999-02-07 0:00 ` Tom Moran 1999-02-07 0:00 ` Larry Kilgallen @ 1999-02-07 0:00 ` Tucker Taft 1999-02-07 0:00 ` Corey Minyard 1999-02-08 0:00 ` dennison 1999-02-08 0:00 ` Thomas Handler 3 siblings, 2 replies; 25+ messages in thread From: Tucker Taft @ 1999-02-07 0:00 UTC (permalink / raw) Corey Minyard (minyard@acm.org) wrote: : ... : One question for the Ada experts: Ada protected types don't work in : SMP since they are task priority based, do they? Ada protected types *do* work on a multi-processor. They boost the priority to prevent unbounded priority inversion, not to lock. On a mono-processor, it turns out that boosting the priority is sufficient to accomplish mutual exclusion. On a multiprocessor, the protected type will also need to acquire a (probably spin) lock. : ... Or maybe I'm missing : something. If they don't, maybe we should think about adding a real : semaphore to the Ada spec. No need. Protected types do the job, in a way that is independent of the number of physical processors. I'm curious -- where did you get the impression that protected types did not work on a multiprocessor? I'm wondering how common is this misconception... : -- : Corey Minyard Internet: minyard@acm.org : Work: minyard@nortelnetworks.com UUCP: minyard@wf-rch.cirr.com -Tuck -- -Tucker Taft stt@averstar.com http://www.averstar.com/~stt/ Technical Director, Distributed IT Solutions (www.averstar.com/tools) AverStar (formerly Intermetrics, Inc.) Burlington, MA USA ^ permalink raw reply [flat|nested] 25+ messages in thread
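To make the point concrete, here is a minimal protected object (an illustration, not code from the thread): callers get mutual exclusion on Increment whether the program runs on one processor or several, and any priority boosting or spin lock is supplied by the implementation, not by the programmer.

```ada
--  Illustrative only; in a real program this would sit in a package.
protected Counter is
   procedure Increment;               -- exclusive (read/write) access
   function  Value return Natural;    -- shared (read-only) access
private
   Count : Natural := 0;
end Counter;

protected body Counter is
   procedure Increment is
   begin
      Count := Count + 1;             -- runs under mutual exclusion
   end Increment;

   function Value return Natural is
   begin
      return Count;
   end Value;
end Counter;
```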
* Re: Looking for implementation idea 1999-02-07 0:00 ` Tucker Taft @ 1999-02-07 0:00 ` Corey Minyard 1999-02-08 0:00 ` robert_dewar 1999-02-08 0:00 ` Tucker Taft 1999-02-08 0:00 ` dennison 1 sibling, 2 replies; 25+ messages in thread From: Corey Minyard @ 1999-02-07 0:00 UTC (permalink / raw) stt@houdini.camb.inmet.com (Tucker Taft) writes: > > I'm curious -- where did you get the impression that protected > types did not work on a multiprocessor? I'm wondering how common > is this misconception... > Thanks for the reply. I would expect the average programmer would think of a protected type like a Java one, something more semaphore-like. I knew that it wasn't (and I know the reasons), but I would expect that when the average programmer puts something in a protected type they would expect it to be "protected" by mutex, which wouldn't happen on an SMP machine, but would on a uniprocessor one. So should I now put spin-locks in all my protected type operations so they will provide mutex on SMP machines (if mutex is what I am looking for, which I expect is their most common use)? -- Corey Minyard Internet: minyard@acm.org Work: minyard@nortelnetworks.com UUCP: minyard@wf-rch.cirr.com ^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea 1999-02-07 0:00 ` Corey Minyard @ 1999-02-08 0:00 ` robert_dewar 1999-02-08 0:00 ` Tucker Taft 1 sibling, 0 replies; 25+ messages in thread From: robert_dewar @ 1999-02-08 0:00 UTC (permalink / raw)

In article <m2r9s1hpn8.fsf@wf-rch.cirr.com>, minyard@acm.org wrote:

> So should I now put spin-locks in all my protected type
> operations so they will provide mutex on SMP machines (if
> mutex is what I am looking for, which I expect is their
> most common use)?

This seems very confused: spin-locks, if needed, are part of the
*implementation* of protected types, not something you need to worry about
as a programmer.

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own

^ permalink raw reply	[flat|nested] 25+ messages in thread
* Re: Looking for implementation idea 1999-02-07 0:00 ` Corey Minyard 1999-02-08 0:00 ` robert_dewar @ 1999-02-08 0:00 ` Tucker Taft 1999-02-07 0:00 ` Corey Minyard 1 sibling, 1 reply; 25+ messages in thread From: Tucker Taft @ 1999-02-08 0:00 UTC (permalink / raw) Corey Minyard (minyard@acm.org) wrote: : stt@houdini.camb.inmet.com (Tucker Taft) writes: : > : > I'm curious -- where did you get the impression that protected : > types did not work on a multiprocessor? I'm wondering how common : > is this misconception... : > : Thanks for the reply. I would expect the average programmer would : think of a protected type like a Java one, something more : semaphore-like. I knew that it wasn't (and I know the reasons), but I : would expect that when the average programmer puts something in a : protected type they would expect it to be "protected" by mutex, which : wouldn't happen on an SMP machine, but would on a uniprocessor one. I've lost you here. What "wouldn't happen on an SMP machine?" On an SMP operating system, O/S mutexes work across processors. : So should I now put spin-locks in all my protected type operations so : they will provide mutex on SMP machines (if mutex is what I am looking : for, which I expect is their most common use)? No, now I must have really confused you. The *implementation* of protected types generally uses spin locks on a multi-processor (though other mechanisms are also possible). The programmer need not worry about this at all: mutual exclusion is provided by protected types, even if there are multiple processors. : -- : Corey Minyard Internet: minyard@acm.org : Work: minyard@nortelnetworks.com UUCP: minyard@wf-rch.cirr.com -- -Tucker Taft stt@averstar.com http://www.averstar.com/~stt/ Technical Director, Distributed IT Solutions (www.averstar.com/tools) AverStar (formerly Intermetrics, Inc.) Burlington, MA USA ^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea 1999-02-08 0:00 ` Tucker Taft @ 1999-02-07 0:00 ` Corey Minyard 0 siblings, 0 replies; 25+ messages in thread From: Corey Minyard @ 1999-02-07 0:00 UTC (permalink / raw)

stt@houdini.camb.inmet.com (Tucker Taft) writes:

> : Thanks for the reply.  I would expect the average programmer would
> : think of a protected type like a Java one, something more
> : semaphore-like.  I knew that it wasn't (and I know the reasons), but I
> : would expect that when the average programmer puts something in a
> : protected type they would expect it to be "protected" by mutex, which
> : wouldn't happen on an SMP machine, but would on a uniprocessor one.
>
> I've lost you here.  What "wouldn't happen on an SMP machine?"
>
> On an SMP operating system, O/S mutexes work across processors.
>
> : So should I now put spin-locks in all my protected type operations so
> : they will provide mutex on SMP machines (if mutex is what I am looking
> : for, which I expect is their most common use)?
>
> No, now I must have really confused you.  The *implementation* of
> protected types generally uses spin locks on a multi-processor (though
> other mechanisms are also possible).  The programmer need not worry
> about this at all: mutual exclusion is provided by protected types, even
> if there are multiple processors.

Thanks.  That clears things up substantially!  I thought that protected
types were a priority-only thing, that they had no mutex built in.  They
obviously still use priority, but they are a more general-purpose mutex.  I
guess I should have looked at the RM (9.5.1); it's pretty clear there.

-- 
Corey Minyard                    Internet:  minyard@acm.org
Work:  minyard@nortelnetworks.com    UUCP:  minyard@wf-rch.cirr.com

^ permalink raw reply	[flat|nested] 25+ messages in thread
* Re: Looking for implementation idea 1999-02-07 0:00 ` Tucker Taft 1999-02-07 0:00 ` Corey Minyard @ 1999-02-08 0:00 ` dennison 1999-02-08 0:00 ` robert_dewar 1999-02-08 0:00 ` Tucker Taft 1 sibling, 2 replies; 25+ messages in thread From: dennison @ 1999-02-08 0:00 UTC (permalink / raw) In article <F6suus.n1x.0.-s@inmet.camb.inmet.com>, stt@houdini.camb.inmet.com (Tucker Taft) wrote: > Corey Minyard (minyard@acm.org) wrote: > > : ... > : One question for the Ada experts: Ada protected types don't work in > : SMP since they are task priority based, do they? > > Ada protected types *do* work on a multi-processor. > > I'm curious -- where did you get the impression that protected > types did not work on a multiprocessor? I'm wondering how common > is this misconception... They look to me like they were *designed* to be usable for synchronization in a parallel shared-memory architecture. Was that actually the case? T.E.D. -----------== Posted via Deja News, The Discussion Network ==---------- http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own ^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea 1999-02-08 0:00 ` dennison @ 1999-02-08 0:00 ` robert_dewar 1999-02-08 0:00 ` Tucker Taft 1 sibling, 0 replies; 25+ messages in thread From: robert_dewar @ 1999-02-08 0:00 UTC (permalink / raw) In article <79n2dt$5n9$1@nnrp1.dejanews.com>, dennison@telepath.com wrote: > They look to me like they were *designed* to be usable > for synchronization in a parallel shared-memory > architecture. Was that actually the case? To be fair they were really designed for bare board single processor use, hence some of the restrictions. But of course they are usable for the purpose you mention! -----------== Posted via Deja News, The Discussion Network ==---------- http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own ^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea
  1999-02-08 0:00 ` dennison
  1999-02-08 0:00 ` robert_dewar
@ 1999-02-08 0:00 ` Tucker Taft
  1999-02-09 0:00 ` robert_dewar
  1 sibling, 1 reply; 25+ messages in thread

From: Tucker Taft @ 1999-02-08 0:00 UTC (permalink / raw)

dennison@telepath.com wrote:
>
> In article <F6suus.n1x.0.-s@inmet.camb.inmet.com>,
> stt@houdini.camb.inmet.com (Tucker Taft) wrote:
> > I'm curious -- where did you get the impression that protected
> > types did not work on a multiprocessor? I'm wondering how common
> > is this misconception...
>
> They look to me like they were *designed* to be usable for synchronization in
> a parallel shared-memory architecture. Was that actually the case?

They were designed to work in whatever environment supported the Ada
run-time, be it parallel shared-memory, distributed memory,
mono-processor, hypercube, etc.

> T.E.D.

-Tuck

--
-Tucker Taft   stt@averstar.com   http://www.averstar.com/~stt/
Technical Director, Distributed IT Solutions (www.averstar.com/tools)
AverStar (formerly Intermetrics, Inc.)   Burlington, MA  USA

^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea
  1999-02-08 0:00 ` Tucker Taft
@ 1999-02-09 0:00 ` robert_dewar
  1999-02-11 0:00 ` Ehud Lamm
  0 siblings, 1 reply; 25+ messages in thread

From: robert_dewar @ 1999-02-09 0:00 UTC (permalink / raw)

In article <36BF1604.771D16C7@averstar.com>,
Tucker Taft <stt@averstar.com> wrote:
>
> They were designed to work in whatever environment
> supported the Ada run-time, be it parallel shared-memory,
> distributed memory, mono-processor, hypercube, etc.
> -Tuck

That may be the official position, but I was there during the
discussions :-) In my view, FAR too much emphasis was placed on
efficient implementation on bare-board monoprocessors, at the expense
of generalizing the capabilities and extending the abstraction. The
restrictions on protected types cannot be justified from an
abstraction point of view; they are pretty much efficiency-dictated
(I nearly wrote kludges :-)

^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea
  1999-02-09 0:00 ` robert_dewar
@ 1999-02-11 0:00 ` Ehud Lamm
  0 siblings, 0 replies; 25+ messages in thread

From: Ehud Lamm @ 1999-02-11 0:00 UTC (permalink / raw)

On Tue, 9 Feb 1999 robert_dewar@my-dejanews.com wrote:
> In my view, FAR too much emphasis was placed on efficient
> implementation on bare board monoprocessors, at the expense
> of generalizing the capabilities and extending the
> abstraction. The restrictions on protected types cannot be
> justified from an abstraction point of view, they are
> pretty much efficiency dictated (I nearly wrote kludges :-)

Can you be a little more specific about what you would have done
differently?

Ehud Lamm  mslamm@pluto.mscc.huji.ac.il

^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea
  1999-02-07 0:00 ` Corey Minyard
  ` (2 preceding siblings ...)
  1999-02-07 0:00 ` Tucker Taft
@ 1999-02-08 0:00 ` Thomas Handler
  3 siblings, 0 replies; 25+ messages in thread

From: Thomas Handler @ 1999-02-08 0:00 UTC (permalink / raw)

Corey, I also would like to thank you for your answer.

Corey Minyard wrote:
> > Anyway I saw that using C++ for a team of developers is rather
> > problematic (yes I know many teams are using C++ but for me it's to
> > unsafe - just my opinion, I prefer to sleep well during nights ;-) and
> > so I convinced my boss to have a look on Ada95.
> Wow, you must be very persuasive! :-)

I hope so ;-)

> Now about the tasking.
> Since Ada has tasking built-in, there is a great temptation to use it
> even when it is not the best way to solve the problem. So I'm going
> to have to jump on my tasking soapbox.
> BEGIN SOAPBOX
> I believe that tasking (or threading) is a heavily overused thing.
> Not just in Ada, but generally. IMHO, using tasking will have three
> general effects on your system:
> 1) It will be more complex
> 2) It will be less efficient
> 3) It will be less reliable

OK, I will take the shortcut and respond in a short manner ;-)

The first thing I have to mention is that I want to use the tasks as an
abstraction aid rather than for concurrency. The alternative would be
to write state machines for each type of device, and these state
machines would have to be written in a way that allows 'tasking' them,
i.e. every state acts for a short time and then switches back to a kind
of dispatcher written by me. So I think using the already existing
tasking scheme provided by Ada is more efficient from any point of
view ;-) (and I know what I'm talking about, since I did exactly this
within one process in C++).

So your #1 (more complex) isn't true for me, since it's easier because I
don't have to worry about the dispatcher. I assume that an rts used by
many, many people and developed by pros will be more efficient than my
own dispatcher, so #2 isn't true either. In fact, everything I have read
about Ada so far (and I already have read a lot ;-) makes me confident
that things will be implemented in the most efficient way possible on my
system. And #3 is only true if I fail to do my homework correctly ;-)

Ciao,
Thomas Handler

^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea
  1999-02-07 0:00 Looking for implementation idea Thomas Handler
  1999-02-07 0:00 ` Paul Duquennoy
  1999-02-07 0:00 ` Corey Minyard
@ 1999-02-07 0:00 ` Niklas Holsti
  1999-02-08 0:00 ` Thomas Handler
  2 siblings, 1 reply; 25+ messages in thread

From: Niklas Holsti @ 1999-02-07 0:00 UTC (permalink / raw)

Thomas Handler wrote:
>
> Hi everybody!
>
> I'm trying to migrate a bundle of applications to Ada95 (running GNAT
> and Intel-Linux).
> The problem I'm suffering from at the moment is that most of the apps
> are heavy I/O bounded and the first one I'm gonna touch has to control
> typically around 100 devices via serial ports and about 40-50 socket
> connections.
(snip)
> My idea was to implement a kind of IO-Manager that is responsible for
> doing the central select() call, since FSU Threads block all threads on
> systems calls.

If your application is really I/O *bound*, perhaps the additional CPU
load of native thread switching could be tolerated. Depends on your
response-time requirements, of course.

> As stated in an earlier post I wanted to use the Asynchronous Transfer
> of Control (ATC) of Ada95 in the following way:
>
>    loop
>       select
>          -- doing a kind of awaiting any task registering new IO wishes
>          Event.Wait;
>          -- process the new IO wish
>       then abort
>          -- preparing the file descriptors and timeout value
>          OS_Select(...);
>          -- operate on the result of OS_Select() and inform the other
>          -- tasks...
>       end select;
>    end loop;
>
> The main idea behind this construct was that the IO task will get
> notified via the Event about any new task wishing to do some IO
> immediately and it seemed efficient to me (though I have to admit I did
> no tests yet, so this is just an assumption of mine).

I don't understand: if the OS_Select blocks all tasks (threads), where
does the Event.Wait signal come from? Or is the "then" branch only
abortable in the "preparing" and "operate" parts?

> But after thinking how Ada (or more exactly GNAT) implements ATC I saw
> some problems upcoming and did some tests on the above loop. This tests
> did interrupt OS_Select() very often and I got the expected result -
> after some hundred interrupts my system was unable to handle any
> OS_Select even in a new context (i.e. after a logout and a new login),
> so it seemed the kernel has run out of ressources...

That problem sounds serious enough to report as a bug, either to ACT
or FSU.

On the general problem, I usually dislike centralising I/O in the way
you plan -- it's a single-threading, select()-style approach which is
not optimal for a more concurrent tasking design. Perhaps the group
could suggest some other structure if we knew a little more about the
data-flow topology of your application.

In another thread discussing FSU vs. native Linux threads, a claim was
made that FSU threads support per-thread blocking on the most common
I/O calls -- read() and write() were mentioned, I think. This could be
another reason to avoid the select().

If you stick with select(), I suggest replacing Event.Wait with an
additional internal port (file descriptor) that is used by tasks
contacting the I/O manager and included in the select(). In this way,
the select() would never be aborted by ATC, and thus the OS resources
should not leak. The internal port doesn't need to carry any data; it
could be used just to terminate the select(), with some real Ada
mechanism used to communicate between the contacting task and the I/O
manager, once the latter is awake.

Your application sounds a good match for Ada tasking. Please post on
your progress.

- Niklas

^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea
  1999-02-07 0:00 ` Niklas Holsti
@ 1999-02-08 0:00 ` Thomas Handler
  1999-02-09 0:00 ` Niklas Holsti
  0 siblings, 1 reply; 25+ messages in thread

From: Thomas Handler @ 1999-02-08 0:00 UTC (permalink / raw)

Niklas, many thanks for your reply.

Niklas Holsti wrote:
>
> Thomas Handler wrote:
> >
> > Hi everybody!
> >
> > I'm trying to migrate a bundle of applications to Ada95 (running GNAT
> > and Intel-Linux).
> > The problem I'm suffering from at the moment is that most of the apps
> > are heavy I/O bounded and the first one I'm gonna touch has to control
> > typically around 100 devices via serial ports and about 40-50 socket
> > connections.
> (snip)
> > My idea was to implement a kind of IO-Manager that is responsible for
> > doing the central select() call, since FSU Threads block all threads on
> > systems calls.
>
> If your application is really I/O *bound*, perhaps the additional CPU
> load of native thread switching could be tolerated. Depends on your
> response time requirements, of course.

This is something I really have to think about; unfortunately I have no
way to make any tests at the moment because I'm awaiting my 100-serial-
port hardware. But I guess you're completely right with your hint, since
serial ports aren't that CPU-intensive as peripherals go (they will be
driven at 38400 baud).

But there are some other problems I have to cope with. As an example, I
have to shut down my application in a friendly manner so that no file
corruption within the files of the used database can occur. Therefore
it's also a problem that I can't simply abort the OS_Select call via
ATC, so I came up with the solution of opening a pipe as a separate
communication channel to signal that kind of event. But doing this
within 100+ tasks and OS_Selects seems rather awkward (though I have to
admit that the solution via pipe itself isn't that pretty ;-) - that's
why I'm not sure if I like using it as a replacement for the ATC to
abort the select when necessary.

> I don't understand: if the OS_Select blocks all tasks (threads), where
> does the Event.Wait signal come from? Or is the "then" branch only
> abortable in the "preparing" and "operate" parts?

You're right: when using the FSU lib the Event.Wait signal would come
from nowhere. But what I'm trying to do is implement as close to the RM
as possible, and the RM doesn't guarantee that a blocking OS call
blocks all other tasks. So the Event.Wait is for the case when the
other tasks aren't blocked, as when using LinuxThreads.

> > But after thinking how Ada (or more exactly GNAT) implements ATC I saw
> > some problems upcoming and did some tests on the above loop. This tests
> > did interrupt OS_Select() very often and I got the expected result -
> > after some hundred interrupts my system was unable to handle any
> > OS_Select even in a new context (i.e. after a logout and a new login),
> > so it seemed the kernel has run out of ressources...
>
> That problem sounds serious enough to report as a bug, either to ACT
> or FSU.

It seems obvious to me; just imagine what will happen when you abort a
call that has requested dynamic memory - I guess this memory will be
lost. The only thing I can't understand is how ATC fits into the safety
approach of Ada, since you can do rather nasty things the easy way
without getting any hint (unless you count on the mistakes you made in
the past - that's where experience comes from ;-)

> On the general problem, I usually dislike centralising I/O in the
> way you plan -- it's a single-threading, select()-style approach
> which is not optimal for a more concurrent tasking design. Perhaps
> the group could suggest some other structure if we knew a little more
> about the data flow topology of your application.

I agree, but I think it's easier to control from the point of view of
the application. The data flow of the application isn't that easy to
describe; the application will do access control. The devices are
ticket readers that can read barcodes and contactless RFID tags. These
readers are mounted on turnstiles, and both (the reader and the
turnstile) are connected to the host via a single RS-485 line. One host
will be able to control about 128 such combinations. The problem is
that the barcode readers can be free-running but the RFID readers can
not, since several RFID readers within one area disturb each other - so
I have to use some form of synchronisation. Maybe this short overview
gives a hint?

> In another thread discussing FSU vs. native Linux threads, a claim
> was made that FSU threads support per-thread blocking on the most
> common I/O calls -- read() and write() were mentioned, I think.
> This could be another reason to avoid the select().

Then I must have missed something. Is it definitely the case that a
call to OS read() (i.e. a binding to the read() of the C library)
blocks only the calling thread? If this is true I would have to change
my mind.

> If you stick with select(), I suggest replacing Event.Wait with an
> additional internal port (file descriptor) that is used by tasks
> contacting the I/O manager and included in the select(). In this
> way, the select() would never be aborted by ATC, and thus the
> OS resources should not leak. The internal port doesn't need to
> carry any data; it could be used just to terminate the select(),
> with some real Ada mechanism used to communicate between the
> contacting task and the I/O manager, once the latter is awake.

I have already mentioned this way above, thank you.

> Your application sounds a good match for Ada tasking. Please post
> on your progress.
>
> - Niklas

Since I got that response from the group I assume that there are some
people interested in such information (?) and so I will post my
progress. Once again many thanks for your reply.

Ciao,
Thomas Handler

^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea
  1999-02-08 0:00 ` Thomas Handler
@ 1999-02-09 0:00 ` Niklas Holsti
  1999-02-10 0:00 ` Thomas Handler
  0 siblings, 1 reply; 25+ messages in thread

From: Niklas Holsti @ 1999-02-09 0:00 UTC (permalink / raw)

Thomas Handler wrote:
>
> Niklas Holsti wrote:
> >
> > Thomas Handler wrote:
(snipped loop with OS_Select abortable by ATC)
> > > But after thinking how Ada (or more exactly GNAT) implements ATC I saw
> > > some problems upcoming and did some tests on the above loop. This tests
> > > did interrupt OS_Select() very often and I got the expected result -
> > > after some hundred interrupts my system was unable to handle any
> > > OS_Select even in a new context (i.e. after a logout and a new login),
> > > so it seemed the kernel has run out of ressources...
> >
> > That problem sounds serious enough to report as a bug, either to ACT
> > or FSU.
> It seems obvious to me, just imagine what will happen, when you abort a
> call that has requested dynamic memory? I guess this memory will be
> lost.

"man select" on my Linux system does not indicate that it allocates
dynamic memory. From your description of the problem, it seems that
select() allocates some Linux kernel resources which are not returned
when the ATC occurs. This means that a user process (I assume your
application does not run as root) can kill Linux, which in my view is a
serious problem, and should interest at least the Ada on Linux Team.

> The only thing I can't understand is that ATC doesn't fit into the
> safety approach of Ada, since you can do rather nasty things the easy
> way without having any hint (if you don't count on your mistakes you
> made in the past - that's where experience comes from ;-)

Exactly. ATC abort of Linux calls should not leak memory. Now, I don't
know how hard this is to correct, but at the very least there should be
some warnings about it in the GNAT documentation (... I have to admit I
haven't read the most recent docs).

(snips)
> > In another thread discussing FSU vs. native Linux threads, a claim
> > was made that FSU threads support per-thread blocking on the most
> > common I/O calls -- read() and write() were mentioned, I think.
> > This could be another reason to avoid the select().
> Then I must have missed something. Is it definitely this way that a call
> to OS read() (i.e. a binding to the read() of the C library) issues per
> thread blocking?
> If this is true I would have to change my mind.

In fact I think the discussion I referred to was on the gnat-chat list
and not on comp.lang.ada. Perhaps you had best experiment with the FSU
functions to find out how they block; sorry I can't provide positive
info.

- Niklas

^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: Looking for implementation idea
  1999-02-09 0:00 ` Niklas Holsti
@ 1999-02-10 0:00 ` Thomas Handler
  0 siblings, 0 replies; 25+ messages in thread

From: Thomas Handler @ 1999-02-10 0:00 UTC (permalink / raw)

Niklas,

Niklas Holsti wrote:
> > > That problem sounds serious enough to report as a bug, either to ACT
> > > or FSU.
> > It seems obvious to me, just imagine what will happen, when you abort a
> > call that has requested dynamic memory? I guess this memory will be
> > lost.
> "man select" on my Linux system does not indicate that is allocates
> dynamic memory. From your description of the problem, it seems that

I also had a look into the kernel sources (though I have to admit that
I'm far, far away from being someone who knows enough about the Linux
kernel to find this problem) but there was nothing obvious that I saw.

> is a serious problem, and should interest at least the Ada on Linux Team.

I will resend my original message to the Ada on Linux Team mailing
list...

> Exactly. ATC abort of Linux calls should not leak memory. Now, I
> don't know how hard this is to correct, but at the very least there
> should be some warnings about it in the GNAT documentation (... I have
> to admit I haven't read the most recent docs).

I have had a look at several books I have about Ada (the RM, the ARM,
John Barnes's Programming in Ada 95, Concurrency in Ada, and the GNAT
docs) but I haven't found anything describing ATC with respect to OS
calls or leaking...

> FSU functions to find out how they block; sorry I can't provide
> positive info.

I have made the tests and it's correct: read() and write() operate
non-blocking even with the FSU lib, but select() seems to block.

Ciao,
Thomas Handler

^ permalink raw reply [flat|nested] 25+ messages in thread
end of thread, other threads:[~1999-02-11 0:00 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1999-02-07 0:00 Looking for implementation idea Thomas Handler
1999-02-07 0:00 ` Paul Duquennoy
1999-02-08 0:00 ` Thomas Handler
1999-02-07 0:00 ` Corey Minyard
1999-02-07 0:00 ` Tom Moran
1999-02-07 0:00 ` Corey Minyard
1999-02-07 0:00 ` Larry Kilgallen
1999-02-08 0:00 ` dewar
1999-02-08 0:00 ` dennison
1999-02-08 0:00 ` robert_dewar
1999-02-07 0:00 ` Tucker Taft
1999-02-07 0:00 ` Corey Minyard
1999-02-08 0:00 ` robert_dewar
1999-02-08 0:00 ` Tucker Taft
1999-02-07 0:00 ` Corey Minyard
1999-02-08 0:00 ` dennison
1999-02-08 0:00 ` robert_dewar
1999-02-08 0:00 ` Tucker Taft
1999-02-09 0:00 ` robert_dewar
1999-02-11 0:00 ` Ehud Lamm
1999-02-08 0:00 ` Thomas Handler
1999-02-07 0:00 ` Niklas Holsti
1999-02-08 0:00 ` Thomas Handler
1999-02-09 0:00 ` Niklas Holsti
1999-02-10 0:00 ` Thomas Handler