From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00 autolearn=ham autolearn_force=no version=3.4.4
X-Google-Thread: 107079,9d9cef47f5d1976c
X-Google-Thread: 103376,ec0aac9465177ad0
X-Google-Attributes: gid107079,gid103376,public
X-Google-Language: ENGLISH,ASCII-7-bit
Path: g2news2.google.com!postnews.google.com!m7g2000cwm.googlegroups.com!not-for-mail
From: "jimmaureenrogers@worldnet.att.net"
Newsgroups: sci.math.num-analysis,comp.lang.ada
Subject: Re: Multithreaded scientific programming
Date: 5 Oct 2006 18:28:42 -0700
Organization: http://groups.google.com
Message-ID: <1160098122.621642.60940@m7g2000cwm.googlegroups.com>
References: <1159978124.458442.80520@m7g2000cwm.googlegroups.com> <4523e925$1@nntp.unige.ch> <1159998421.978196.161210@c28g2000cwb.googlegroups.com> <1160007272.875375.190830@b28g2000cwb.googlegroups.com>
NNTP-Posting-Host: 69.170.65.169
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
X-Trace: posting.google.com 1160098128 22813 127.0.0.1 (6 Oct 2006 01:28:48 GMT)
X-Complaints-To: groups-abuse@google.com
NNTP-Posting-Date: Fri, 6 Oct 2006 01:28:48 +0000 (UTC)
User-Agent: G2/1.0
X-HTTP-UserAgent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.7) Gecko/20060909 Firefox/1.5.0.7,gzip(gfe),gzip(gfe)
Complaints-To: groups-abuse@google.com
Injection-Info: m7g2000cwm.googlegroups.com; posting-host=69.170.65.169; posting-account=SqOfxAwAAAAkL81YAPGH1JdBwpUXw9ZG
Xref: g2news2.google.com sci.math.num-analysis:6303 comp.lang.ada:6887
Date: 2006-10-05T18:28:42-07:00
List-Id:

David B. Chorlian wrote:
> Talk about cache, TLB, and other memory management topics which
> are the serious problems in large-scale scientific computing.

Memory management is one of the serious problems of large-scale computing in general. It is not limited to scientific computing.
Memory management for large-scale computing can be divided into several sub-topics. I will cover some of them here, and I trust others will comment on the same areas and add remarks on areas I do not discuss.

Ada provides implicit memory management for large-scale computing in the areas of data encapsulation, direct communication between tasks, communication through implicitly protected shared buffers, and communication through unprotected shared variables.

Every Ada program has at least one task. The "main" or entry-point procedure of the program runs within an "environment" task. Every other task depends on a master. Furthermore, if a task depends on a given master, it is defined to depend on the task that executes the master, and (recursively) on any master of that task.

Variables declared within a task body are normally allocated on that task's stack. Those variables are visible to subprograms and tasks defined within the scope of the task declaring them. Tasks need not be defined within the scope of other tasks: a task may also come into existence through the elaboration of an object declaration in a package depended upon by the environment task.

Tasks may communicate directly with each other using the rendezvous model. The called task declares an entry with zero or more parameters, and the calling task calls that entry. All entry calls are implicitly queued. An entry call completes only when a calling task has made the call and the called task reaches the point in its code where it accepts the entry. This mechanism provides synchronous communication between tasks.

Multiple tasks may call a given entry of a common called task. The implicit queuing mechanism ensures that the calls are handled according to the queuing policy in effect for the program. A calling task may remove itself from an entry queue at any time before its call is accepted by using a timed or conditional entry call, which is essentially an entry call that is abandoned if it is not accepted within a stated interval (or immediately, in the conditional form).
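To make the rendezvous model concrete, here is a minimal sketch. The task name Printer and its Print entry are purely illustrative. The called task accepts calls in a loop; the main (environment) task makes one ordinary entry call, which blocks until accepted, and one timed entry call, which removes itself from the queue if it is not accepted within half a second:

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Rendezvous_Demo is

   --  Called task: declares an entry with one parameter.
   task Printer is
      entry Print (Item : in Integer);
   end Printer;

   task body Printer is
   begin
      loop
         select
            accept Print (Item : in Integer) do
               --  Caller and callee are synchronized here.
               Put_Line ("Received:" & Integer'Image (Item));
            end Print;
         or
            --  Allow the task to finish when no callers remain.
            terminate;
         end select;
      end loop;
   end Printer;

begin
   --  Ordinary entry call: blocks until Printer accepts it.
   Printer.Print (1);

   --  Timed entry call: the caller leaves the entry queue
   --  if the call is not accepted within 0.5 seconds.
   select
      Printer.Print (2);
   or
      delay 0.5;
      Put_Line ("Call abandoned");
   end select;
end Rendezvous_Demo;
```

The select/delay form is the timed entry call; replacing the delay alternative with an else part gives the conditional form, which gives up immediately.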
Many problems are not best served by fully synchronous communication between tasks, so Ada also provides mechanisms for asynchronous communication. The Ada protected object provides asynchronous communication between tasks with implicit protection against inappropriate concurrent access to the shared data.

There are three kinds of access to a protected object: unconditional read/write, conditional read/write, and shared read-only. Protected procedures provide unconditional read/write access; a protected procedure has exclusive access to the protected object for the duration of its execution. Protected entries provide conditional read/write access. Like a procedure, an entry has exclusive access to the protected object during its execution; unlike a protected procedure, a protected entry executes only when its barrier condition evaluates to True. Protected functions provide read-only access, and any number of protected function calls may execute simultaneously on the same protected object.

All of these access restrictions are enforced implicitly. The programmer cannot "lock" or "unlock" a protected object explicitly.

Unprotected shared variables may also be used in Ada. Ada allows unprotected variables to be marked "atomic" or "volatile" (via pragma Atomic and pragma Volatile). Such variables have no implicit or explicit locking beyond whatever a multi-processor system provides for atomic access.

Ada concurrency is defined at a relatively high level of abstraction, much higher than that provided by Pthreads or Windows threads. Ada concurrent programs can therefore be ported across operating systems without changing the source code to accommodate threading conventions. Simply recompile the unchanged source code on the desired target platform.

Jim Rogers
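P.S. A sketch of the protected object described above, with one example of each kind of access (the names Slot, Put, Get, and Peek_Count are mine, for illustration only). The entries' barrier conditions make Put wait while the slot is full and Get wait while it is empty; no explicit locking appears anywhere:

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Protected_Demo is

   --  A single-slot buffer shared between tasks.
   protected Slot is
      entry Put (Item : in Integer);       --  conditional read/write
      entry Get (Item : out Integer);      --  conditional read/write
      function Peek_Count return Natural;  --  shared read-only
   private
      Value : Integer := 0;
      Full  : Boolean := False;
   end Slot;

   protected body Slot is
      entry Put (Item : in Integer) when not Full is
      begin
         Value := Item;
         Full  := True;
      end Put;

      entry Get (Item : out Integer) when Full is
      begin
         Item := Value;
         Full := False;
      end Get;

      function Peek_Count return Natural is
      begin
         if Full then
            return 1;
         else
            return 0;
         end if;
      end Peek_Count;
   end Slot;

   --  Producer task: blocks on Put whenever the slot is full.
   task Producer;
   task body Producer is
   begin
      for I in 1 .. 3 loop
         Slot.Put (I);
      end loop;
   end Producer;

   V : Integer;

begin
   --  The environment task consumes: blocks on Get while empty.
   for I in 1 .. 3 loop
      Slot.Get (V);
      Put_Line (Integer'Image (V));
   end loop;
end Protected_Demo;
```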