comp.lang.ada
From: stevenselectronicmail@gmail.com
Subject: Re: Lock-Free stack in Ada SPARK
Date: Mon, 13 Feb 2017 12:04:00 -0800 (PST)
Message-ID: <88cd0d46-d54a-46b5-8760-356d2e96335c@googlegroups.com>
In-Reply-To: <26f24676-deab-48bc-b180-87d202877053@googlegroups.com>

On Sunday, February 12, 2017 at 2:25:53 PM UTC-8, Robert Eachus wrote:
> I see some dangerous thinking going on here.  Many years ago, at Honeywell, I pointed out that the plan to grow from two to four CPUs in the DPS6 line was missing a key project-schedule issue.  The current OS had lots of deadlock-free locks, but they were not livelock-free.  The cure was simple in one sense: every place that had one lock needed three.  Pairs of CPUs would contend for one of the new locks, and once they had it, could contend for the master lock.
> 
> We were able to automate most of the work, but there were a few dozen RMW instructions that were not part of the standard lock pattern, and converting them to the (new) standard pattern took longer than dealing with the majority. Just about every data table in the OS needed these locks.
> 
> Anyway, I used a tool to create a Markov chain model that let us prove a lot of useful properties about the algorithm.  It also showed that for producer-consumer cases with a single consumer, like a disk drive controller, we didn't need to do this.  (Those waiting might get served out of order, but with no livelock or starvation.)  We only needed the two-level locks for cases where all contenders were equal.*  The dining philosophers problem is the classic case--deadlock occurs when every philosopher picks up, say, the left fork first.  This turns into livelock if the philosophers put down the first fork when they can't pick up the second.  Livelock is far less likely than deadlock, but with our OS locks it could have occurred several times a day.
> 
> Oh, and another neat trick: what if you need to grab multiple locks, and want to ensure that you do not create a deadlock?  Never contend for a lock that has a lower address than a lock you already hold.  By assigning addresses carefully you can ensure that locking errors will normally be avoided, and will be detected when, for example, the OS tries to acquire a lock for your application while your program holds a (non-OS) lock.
> 
> * Why not make the contenders non-equal?  They were running the same (shared) OS code.  I guess we could have had two different OS images, but this was back when memory was expensive.
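
That address-ordering trick is worth spelling out. A minimal sketch of it in Ada, assuming a simple blocking mutex built from a protected object (Lock and Acquire_Both are illustrative names of mine, not from any library):

with System; use type System.Address;

procedure Ordered_Locks_Demo is

   --  A simple blocking mutex; any lock with a distinct address works.
   protected type Lock is
      entry Acquire;
      procedure Release;
   private
      Held : Boolean := False;
   end Lock;

   protected body Lock is
      entry Acquire when not Held is
      begin
         Held := True;
      end Acquire;

      procedure Release is
      begin
         Held := False;
      end Release;
   end Lock;

   A, B : Lock;

   --  Acquire both locks in one global order (by address) so that no
   --  cycle of tasks waiting on each other can form.  Assumes X and Y
   --  are distinct objects.
   procedure Acquire_Both (X, Y : in out Lock) is
   begin
      if X'Address < Y'Address then
         X.Acquire;
         Y.Acquire;
      else
         Y.Acquire;
         X.Acquire;
      end if;
   end Acquire_Both;

begin
   Acquire_Both (A, B);
   --  ... critical section guarded by both locks ...
   B.Release;
   A.Release;
end Ordered_Locks_Demo;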

I think dealing with livelock would require wait-free data structures, which are significantly more difficult to implement. I am currently working on an approach that uses thread-local nodes threaded together into a linked list. The initial registration of a node, the first time through, is not wait-free, but once a node is registered the operations can iterate through the list and preserve wait-freedom for every task.
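
To make that concrete, here is roughly the shape I have in mind. It is only a sketch: Task_Registry, Register, and For_Each are my own illustrative names, and I am using the language-defined generic System.Atomic_Operations.Exchange for the compare-and-swap (on a compiler without it, the CAS would instead be a GCC __sync builtin imported with convention Intrinsic):

with System.Atomic_Operations.Exchange;

package Task_Registry is

   type Node;
   type Node_Access is access all Node with Atomic;

   type Node is limited record
      Next : Node_Access := null;  --  written once, at registration
      --  per-task announcement/hazard state would go here
   end record;

   --  Lock-free but not wait-free: a task may have to retry the CAS,
   --  but each task runs this exactly once, the first time through.
   procedure Register (N : not null Node_Access);

   --  Wait-free: nodes are never removed and Next never changes after
   --  registration, so this is a plain bounded traversal.
   procedure For_Each
     (Visit : not null access procedure (P : not null Node_Access));

end Task_Registry;

package body Task_Registry is

   package Node_CAS is
      new System.Atomic_Operations.Exchange (Node_Access);

   Head : aliased Node_Access := null;

   procedure Register (N : not null Node_Access) is
      Expected : aliased Node_Access := Head;
   begin
      loop
         N.Next := Expected;
         exit when Node_CAS.Atomic_Compare_And_Exchange
                     (Item => Head, Prior => Expected, Desired => N);
         --  On failure, Expected now holds the current Head; retry.
      end loop;
   end Register;

   procedure For_Each
     (Visit : not null access procedure (P : not null Node_Access))
   is
      P : Node_Access := Head;
   begin
      while P /= null loop
         Visit (P);
         P := P.Next;
      end loop;
   end For_Each;

end Task_Registry;

The stack operations themselves would then publish their per-operation state through each task's node, which is where the real wait-free work lies.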


Thread overview: 21+ messages
2017-02-11  3:07 Lock-Free stack in Ada SPARK stevenselectronicmail
2017-02-11  9:57 ` Hadrien Grasland
2017-02-11 10:07   ` Hadrien Grasland
2017-02-11 10:08     ` Hadrien Grasland
2017-02-11 10:14       ` Hadrien Grasland
2017-02-11 17:34         ` stevenselectronicmail
2017-02-11 19:36           ` Jeffrey R. Carter
2017-02-11 20:42             ` stevenselectronicmail
2017-02-11 22:40             ` Dmitry A. Kazakov
2017-02-11 23:02               ` stevenselectronicmail
2017-02-12  8:30                 ` Dmitry A. Kazakov
2017-02-12 12:26             ` Hadrien Grasland
2017-02-12 12:49               ` Dmitry A. Kazakov
2017-02-12 13:19                 ` Hadrien Grasland
2017-02-12 14:57                   ` Dmitry A. Kazakov
2017-02-12 13:04               ` Hadrien Grasland
2017-02-12 22:25 ` Robert Eachus
2017-02-13  8:22   ` Dmitry A. Kazakov
2017-02-13 20:04   ` stevenselectronicmail [this message]
2017-02-21  1:51     ` Robert Eachus
2017-02-21  8:44       ` Dmitry A. Kazakov