comp.lang.ada
* Question about concurrent access to array
@ 2013-02-14 20:39 john
  2013-02-14 21:29 ` Adam Beneschan
  2013-02-14 21:41 ` Niklas Holsti
  0 siblings, 2 replies; 6+ messages in thread
From: john @ 2013-02-14 20:39 UTC (permalink / raw)


I'd like several tasks to share an unpacked array of a definite discriminated record type.

1.) Is this possible?

2.) Do I need any kind of synchronization if I can guarantee that every task writes to and reads from only its own area (a distinct range of indices) of the array?

Performance matters a lot in this case, which is why I would prefer not to use a protected object or any other synchronization mechanism. I can't imagine how a problem could occur if every task only accesses its own part of the array, but perhaps there is still one?
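
Roughly what I have in mind (a minimal sketch; all names and bounds are made up, and each task touches only its own slice):

   procedure Shared_Array_Sketch is

      type Item (Valid : Boolean := False) is record
         Value : Long_Integer := 0;
      end record;

      Data : array (1 .. 1_000) of Item;

      task type Worker (First, Last : Natural);

      task body Worker is
      begin
         for I in First .. Last loop
            Data (I) := (Valid => True, Value => Long_Integer (I));
         end loop;
      end Worker;

      --  Disjoint index ranges, no synchronization between the workers.
      W1 : Worker (First => 1,   Last => 500);
      W2 : Worker (First => 501, Last => 1_000);

   begin
      null;  --  Returns only after W1 and W2 have terminated.
   end Shared_Array_Sketch;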




* Re: Question about concurrent access to array
  2013-02-14 20:39 Question about concurrent access to array john
@ 2013-02-14 21:29 ` Adam Beneschan
  2013-02-14 21:53   ` Robert A Duff
  2013-02-14 21:41 ` Niklas Holsti
  1 sibling, 1 reply; 6+ messages in thread
From: Adam Beneschan @ 2013-02-14 21:29 UTC (permalink / raw)


On Thursday, February 14, 2013 12:39:01 PM UTC-8, jo...@peppermind.com wrote:
> I'd like several tasks to share an unpacked array of a definite discriminated record type.
> 
> 1.) Is this possible?

I don't see why it wouldn't be.

> 2.) Do I need any kind of synchronization if I can guarantee that every task writes to and reads from only its own area (a distinct range of indices) of the array?
> 
> Performance matters a lot in this case, which is why I would prefer not to use a protected object or any other synchronization mechanism. I can't imagine how a problem could occur if every task only accesses its own part of the array, but perhaps there is still one?

I'd use an alignment aspect (or representation clause) to make sure that the array elements are aligned on some boundary, probably based on the largest integer that your machine can deal with.  Otherwise there's the risk that two consecutive array elements that "belong" to different tasks might share part of a "word" (or something), and the code to write into an element might involve reading the entire word, modifying part of the word, then writing it back.  If you do this, then I don't see how there could be a problem.  But that doesn't mean there isn't one.
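
Something along these lines, as a sketch (the 8-byte alignment is an assumption for a 64-bit machine; adjust for your target, and the names are made up):

   package Aligned_Slots is

      type Slot (Valid : Boolean := False) is record
         Value : Long_Integer := 0;
      end record
        with Alignment => 8;              --  Ada 2012 aspect syntax
      --  or, as a representation clause:  for Slot'Alignment use 8;

      type Slot_Array is array (Positive range <>) of Slot;

   end Aligned_Slots;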

                             -- Adam




* Re: Question about concurrent access to array
  2013-02-14 20:39 Question about concurrent access to array john
  2013-02-14 21:29 ` Adam Beneschan
@ 2013-02-14 21:41 ` Niklas Holsti
  1 sibling, 0 replies; 6+ messages in thread
From: Niklas Holsti @ 2013-02-14 21:41 UTC (permalink / raw)


On 13-02-14 22:39 , john@peppermind.com wrote:
> I'd like several tasks to share an unpacked array of a definite
> discriminated record type.
> 
> 1.) Is this possible?

Yes.

> 2.) Do I need any kind of synchronization if I can guarantee that
> every task writes to and reads from only its own area (a distinct
> range of indices) of the array?

See section 9.10 in the RM. The crucial part is this:

"If two different objects, including nonoverlapping parts of the same
object, are independently addressable, they can be manipulated
concurrently by two different tasks without synchronization."

For arrays, independent addressability can be specified with the aspect
Independent_Components; see RM C.6, 6.8/3 and 6.9/3.
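
For example (a sketch; the type and package names are invented):

   package Shared_Types is

      type Item (Valid : Boolean := False) is record
         Value : Long_Integer := 0;
      end record;

      type Shared_Items is array (Positive range <>) of Item
        with Independent_Components;

   end Shared_Types;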

This is new in Ada 2012. For earlier Ada versions, I don't know how to
specify independent addressability of array components, but the Ada 2005
RM says in 9.10 that components are "normally" independently
addressable, unless packing or Component_Size is specified. To be safe,
I would make sure that the size of an array component is a multiple of
the machine word size, perhaps by adding padding components.
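For example (a sketch, assuming a 32-bit machine word and a 16-bit payload):

   with Interfaces;
   package Padded_Items is

      type Item is record
         Value : Interfaces.Integer_16;       --  the real payload
         Pad   : Interfaces.Integer_16 := 0;  --  padding up to a full word
      end record;
      for Item'Size use 32;                   --  assumed 32-bit machine word

   end Padded_Items;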

> Performance matters a lot in this case, which is why I would prefer
> not to use a protected object or any other synchronization mechanism.
> I can't imagine how a problem could occur if every task only accesses
> its own part of the array, but perhaps there is still one?

The computation could go wrong if the program for some reason uses
memory accesses that are wider than one array component. For example, if
the array components are 16 bits (unlikely in this case), and are
aligned to 16 bits, a task that updates a given component might still
use 32-bit accesses and could therefore interfere with concurrent
updates of the adjacent component. I'm not sure if Ada standards before
2012 really forbade such interference.

From the performance point of view, if you intend to gain performance by
running tasks in parallel on several cores, you should probably ensure
(by padding) that an array component fits exactly in one or more cache
lines. Alternatively, if a cache line holds more than one component,
then the tasks updating those components should run in the same core if
possible. The goal is to avoid "false sharing" that makes the cores
compete for ownership of cache lines.
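
For example (a sketch; 64 bytes is an assumed cache-line size, and the names are invented):

   package Cache_Padded is

      Cache_Line : constant := 64;   --  bytes; check the actual target CPU

      type Padded_Item is record
         Value : Long_Integer := 0;  --  the data a task actually updates
      end record
        with Alignment => Cache_Line;
      --  With this alignment the compiler must give each array component
      --  a whole multiple of Cache_Line bytes, so components updated by
      --  different tasks do not end up in the same cache line.

      type Padded_Array is array (Positive range <>) of Padded_Item;

   end Cache_Padded;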

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
      .      @       .




* Re: Question about concurrent access to array
  2013-02-14 21:29 ` Adam Beneschan
@ 2013-02-14 21:53   ` Robert A Duff
  2013-02-14 23:31     ` Adam Beneschan
  0 siblings, 1 reply; 6+ messages in thread
From: Robert A Duff @ 2013-02-14 21:53 UTC (permalink / raw)


Adam Beneschan <adam@irvine.com> writes:

> I'd use an alignment aspect (or representation clause) to make sure
> that the array elements are aligned on some boundary, probably based
> on the largest integer that your machine can deal with.

That's not necessary for correctness.  It might be necessary to avoid
false sharing of cache lines, as Niklas Holsti mentioned.  Depends on
the machine.

>...Otherwise
> there's the risk that two consecutive array elements that "belong" to
> different tasks might share part of a "word" (or something), and the
> code to write into an element might involve reading the entire word,
> modifying part of the word, then writing it back.

That would be incorrect code generation, assuming there's no Pack
or Component_Size clause.

- Bob




* Re: Question about concurrent access to array
  2013-02-14 21:53   ` Robert A Duff
@ 2013-02-14 23:31     ` Adam Beneschan
  2013-02-15  9:11       ` cjpsimon
  0 siblings, 1 reply; 6+ messages in thread
From: Adam Beneschan @ 2013-02-14 23:31 UTC (permalink / raw)


On Thursday, February 14, 2013 1:53:57 PM UTC-8, Robert A Duff wrote:

> >...Otherwise
> > there's the risk that two consecutive array elements that "belong" to
> > different tasks might share part of a "word" (or something), and the
> > code to write into an element might involve reading the entire word,
> > modifying part of the word, then writing it back.
> 
> That would be incorrect code generation, assuming there's no Pack
> or Component_Size clause.

You're right.  I missed the clause in 9.10 that specifies that.

                                -- Adam





* Re: Question about concurrent access to array
  2013-02-14 23:31     ` Adam Beneschan
@ 2013-02-15  9:11       ` cjpsimon
  0 siblings, 0 replies; 6+ messages in thread
From: cjpsimon @ 2013-02-15  9:11 UTC (permalink / raw)


On Friday, February 15, 2013 00:31:59 UTC+1, Adam Beneschan wrote:
> On Thursday, February 14, 2013 1:53:57 PM UTC-8, Robert A Duff wrote:
> 
> > >...Otherwise
> > > there's the risk that two consecutive array elements that "belong" to
> > > different tasks might share part of a "word" (or something), and the
> > > code to write into an element might involve reading the entire word,
> > > modifying part of the word, then writing it back.

Data locality is the most important factor for good speedup.
Sometimes it is preferable that each task works on a copy of its data (on the stack, for example) in order to improve affinity with the processor and avoid false sharing in the cache.
It depends on the hardware configuration (multi-processor vs. multi-core, shared cache or not, cache levels) and on the volume of computation to be performed on each piece of data.
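
A rough sketch of the copy-then-write-back idea (names and bounds are invented):

   procedure Copy_Then_Write_Back is

      type Item is record
         Value : Long_Integer := 0;
      end record;

      type Item_Array is array (Positive range <>) of Item;

      Shared : Item_Array (1 .. 1_000);

      task type Worker (First, Last : Natural);

      task body Worker is
         --  Work on a local (stack) copy of the slice, write it back once.
         Local : Item_Array (First .. Last) := Shared (First .. Last);
      begin
         for I in Local'Range loop
            Local (I).Value := Local (I).Value + 1;
         end loop;
         Shared (First .. Last) := Local;
      end Worker;

      W1 : Worker (First => 1,   Last => 500);
      W2 : Worker (First => 501, Last => 1_000);

   begin
      null;  --  Returns only after both workers have terminated.
   end Copy_Then_Write_Back;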

> > That would be incorrect code generation, assuming there's no Pack
> > or Component_Size clause.
> 
> You're right.  I missed the clause in 9.10 that specifies that.
> 
>                                 -- Adam




