Multithreading enables you to take advantage of multiprocessors, including multicore and multithreaded processors, primarily through parallelism and scalability. Programmers should be aware of the differences between the memory models of a multiprocessor and a uniprocessor.
Note - In this manual, whenever multiprocessors are discussed, the context applies also to multicore and multithreaded processors unless noted otherwise.
Memory consistency depends on the processor that is viewing memory. For uniprocessors, memory is necessarily consistent because only one processor is viewing memory.
To improve multiprocessor performance, memory consistency is relaxed. You cannot always assume that changes made to memory by one processor are immediately reflected in the other processors' views of that memory.
You can avoid this complexity by using synchronization variables when you use shared or global variables.
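For example, a global counter can be guarded with a mutex so that every thread observes a consistent value (a minimal sketch using POSIX threads; the names are illustrative):

#include <pthread.h>

static int shared_count = 0;    /* global variable shared by all threads */
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;

void
increment_count(void)
{
    pthread_mutex_lock(&count_lock);    /* serialize access to the counter */
    shared_count++;
    pthread_mutex_unlock(&count_lock);  /* release makes the update visible */
}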
Memory barrier synchronization is sometimes an efficient way to control parallelism on multiprocessors.
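For example, Oracle Solaris exposes memory barriers directly through the membar_producer(3C) and membar_consumer(3C) routines declared in atomic.h. The following minimal publication sketch (the variable and function names are illustrative) orders a data store before the store of its ready flag:

#include <atomic.h>

static volatile int data;       /* payload published by one thread */
static volatile int ready = 0;  /* flag observed by another thread */

void
publish(int value)
{
    data = value;
    membar_producer();  /* order the data store before the flag store */
    ready = 1;
}

int
read_when_ready(void)
{
    while (!ready)
        ;               /* busy-wait until the flag is set */
    membar_consumer();  /* order the flag load before the data load */
    return data;
}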
Another multiprocessor issue is efficient synchronization when threads must wait until all threads have reached a common point in their execution.
Note - The issues discussed here are not important when the threads synchronization primitives are always used to access shared memory locations. See Chapter 4, Programming with Synchronization Objects.
Threads synchronize access to shared storage locations by using the threads synchronization routines. With this synchronization, running a program on a shared-memory multiprocessor has the same effect as running the program on a uniprocessor.
However, in many situations a programmer might be tempted to take advantage of the multiprocessor and use “tricks” to avoid the synchronization routines. As Example 9-5 and Example 9-6 show, such tricks can be dangerous.
Understanding the memory models supported by common multiprocessor architectures helps you to understand the dangers.
The major multiprocessor components are:
The processors, including cores and their threads, which run the programs
Store buffers, which connect the processors to their caches
Caches, which hold the contents of recently accessed or modified storage locations
Memory, which is the primary storage and is shared by all processors
In the simple traditional model, the multiprocessor behaves as if the processors are connected directly to memory: when one processor stores into a location and another processor immediately loads from the same location, the second processor loads what was stored by the first.
Caches can be used to speed average memory access, and the desired semantics can be achieved when the caches are kept consistent with one another.
A problem with this simple approach is that the processor must often be delayed to make certain that the desired semantics are achieved. Many modern multiprocessors use various techniques to prevent such delays, which unfortunately change the semantics of the memory model.
Two of these techniques and their effects are explained in the next two examples.
Consider the purported solution to the producer and consumer problem that is shown in Example 9-5.
Although this program works on current SPARC-based multiprocessors, the program assumes that all multiprocessors have strongly ordered memory. This program is therefore not portable.
Example 9-5 Producer and Consumer Problem: Shared Memory Multiprocessors
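A sketch of the code, assuming a circular buffer of BSIZE characters indexed by the free-running unsigned counters in and out:

char buffer[BSIZE];
unsigned int in = 0;    /* number of items produced */
unsigned int out = 0;   /* number of items consumed */

void
producer(char item)
{
    do {
        ; /* busy-wait while the buffer is full */
    } while (in - out == BSIZE);

    buffer[in % BSIZE] = item;
    in++;
}

char
consumer(void)
{
    char item;

    do {
        ; /* busy-wait while the buffer is empty */
    } while (in - out == 0);

    item = buffer[out % BSIZE];
    out++;
    return item;
}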
When this program has exactly one producer and exactly one consumer and is run on a shared-memory multiprocessor, the program appears to be correct. The difference between in and out is the number of items in the buffer.
The producer waits, by repeatedly computing this difference, until room is available in the buffer for a new item. The consumer waits until an item is present in the buffer.
Strongly ordered memory makes a modification to memory on one processor immediately available to the other processors. For strongly ordered memory, this solution is correct even taking into account that in and out will eventually overflow. The overflow is harmless as long as BSIZE is less than the largest integer that can be represented in a word.
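To see why the overflow is harmless, note that unsigned subtraction is modular: even after in wraps past the largest representable value, in - out still equals the number of items in the buffer. A small demonstration (the values are illustrative, for a 32-bit unsigned int):

#include <stdio.h>

int
main(void)
{
    unsigned int in = 3;             /* has wrapped past UINT_MAX */
    unsigned int out = 4294967293u;  /* UINT_MAX - 2 */

    /* (3 - 4294967293) mod 2^32 == 6: six items are in the buffer */
    printf("%u\n", in - out);        /* prints 6 */
    return 0;
}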
Shared-memory multiprocessors do not necessarily have strongly ordered memory. A change to memory by one processor is not necessarily available immediately to the other processors. Consider what happens when one processor makes two changes to different memory locations: the other processors do not necessarily detect the changes in the order in which they were made, because memory modifications do not take effect immediately.
First the changes are stored in store buffers that are not visible to the cache.
The processor checks these store buffers to ensure that a program has a consistent view. But, because store buffers are not visible to other processors, a write by one processor does not become visible until the processor writes to cache.
The synchronization primitives use special instructions that flush the store buffers to cache. See Chapter 4, Programming with Synchronization Objects. So, using locks around your shared data ensures memory consistency.
When memory ordering is very relaxed, Example 9-5 has a problem. The consumer might see that in has been incremented by the producer before the consumer sees the change to the corresponding buffer slot.
This memory ordering is called weak ordering because stores made by one processor can appear to another processor to happen out of order. Memory, however, is always consistent as seen from the processor that made the stores. To fix this inconsistency, the code should use mutexes to flush the cache.
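For example, Example 9-5 can be made portable by protecting in, out, and the buffer with a mutex and by replacing the busy-wait loops with condition variables (a minimal sketch using POSIX threads; Chapter 4 develops this pattern in detail):

#include <pthread.h>

#define BSIZE 64   /* illustrative buffer size */

static char buffer[BSIZE];
static unsigned int in = 0;
static unsigned int out = 0;
static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

void
producer(char item)
{
    pthread_mutex_lock(&buf_lock);
    while (in - out == BSIZE)                    /* buffer full */
        pthread_cond_wait(&not_full, &buf_lock);
    buffer[in % BSIZE] = item;
    in++;
    pthread_cond_signal(&not_empty);             /* wake a waiting consumer */
    pthread_mutex_unlock(&buf_lock);
}

char
consumer(void)
{
    char item;

    pthread_mutex_lock(&buf_lock);
    while (in - out == 0)                        /* buffer empty */
        pthread_cond_wait(&not_empty, &buf_lock);
    item = buffer[out % BSIZE];
    out++;
    pthread_cond_signal(&not_full);              /* wake a waiting producer */
    pthread_mutex_unlock(&buf_lock);
    return item;
}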
Because the trend is toward relaxing memory order, programmers must be careful to use locks around all global or shared data.
As demonstrated by Example 9-5 and Example 9-6, locking is essential.
The code in Example 9-6 is an implementation of Peterson's Algorithm, which handles mutual exclusion between two threads. This code tries to guarantee that only one thread is in the critical section. When a thread calls mut_excl(), the thread enters the critical section sometime “soon.”
An assumption here is that a thread exits fairly quickly after entering the critical section.
Example 9-6 Mutual Exclusion for Two Threads
void
mut_excl(int me /* 0 or 1 */)
{
    static int loser;
    static int interested[2] = {0, 0};
    int other; /* local variable */

    other = 1 - me;
    interested[me] = 1;
    loser = me;
    while (loser == me && interested[other])
        ;

    /* critical section */
    interested[me] = 0;
}
This algorithm works when the multiprocessor has strongly ordered memory; on machines with weaker memory ordering, it can fail, as explained below.
Some multiprocessors, including some SPARC-based multiprocessors, have store buffers. When a thread issues a store instruction, the data is put into a store buffer. The buffer contents are eventually sent to the cache, but not necessarily right away. The caches on each of the processors maintain a consistent view of memory, but modified data does not reach the cache right away.
When multiple memory locations are stored into, the changes reach the cache and memory in the correct order, but possibly after a delay. SPARC-based multiprocessors with this property are said to have total store order (TSO).
Suppose one processor stores into location A and then loads from location B, while another processor stores into location B and then loads from location A. The expectation is that either the first processor fetches the newly modified value from location B, or the second processor fetches the newly modified value from location A, or both. The case in which both processors load the old values seemingly cannot happen. However, with the delays caused by load and store buffers, the "impossible case" can happen.
What could happen with Peterson's algorithm is that two threads running on separate processors both enter the critical section. Each thread stores into its own slot of the interested array and then loads from the other slot. Both threads read the old values (0), each thread assumes that the other party is not present, and both enter the critical section. This kind of problem might not be detected when you test a program, but might occur only much later.
To avoid this problem, use the threads synchronization primitives, whose implementations issue special instructions that force the writing of the store buffers to the cache. See Chapter 4, Programming with Synchronization Objects.
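For example, the hand-rolled protocol in Example 9-6 can be replaced outright by a mutex, whose implementation issues the necessary instructions (a minimal sketch using POSIX threads):

#include <pthread.h>

static pthread_mutex_t excl_lock = PTHREAD_MUTEX_INITIALIZER;

void
mut_excl(int me /* 0 or 1 */)
{
    pthread_mutex_lock(&excl_lock);    /* acquiring the lock flushes the
                                        * store buffers as a side effect */
    /* critical section */

    pthread_mutex_unlock(&excl_lock);
}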
In many applications, especially numerical applications, part of the algorithm can be parallelized while other parts are inherently sequential, as shown in the following table. The algorithm can use barrier synchronization to coordinate the parallel and sequential portions.
Table 9-1 Multithreaded Cooperation Through Barrier Synchronization
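In outline, the cooperation the table illustrates is a loop that alternates the two kinds of work (the names are illustrative):

while (many_iterations) {
    sequential_computation;
    parallel_computation;
}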
For example, you might produce a set of matrixes with a strictly linear computation, and perform operations on the matrixes that use a parallel algorithm. You then use the results of these operations to produce another set of matrixes, operate on these matrixes in parallel, and so on.
The parallel algorithms for such a computation require little synchronization during the computation itself. However, synchronization of all the threads is required to ensure that the sequential computation is finished before the parallel computation begins.
The barrier forces all the threads that are doing the parallel computation to wait until all involved threads have reached the barrier. When all the threads have reached the barrier, the threads are released and begin computing together.
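A sketch of this structure using POSIX barriers follows (the thread count, iteration count, and computation functions are illustrative placeholders):

#include <pthread.h>

#define NTHREADS        4
#define MANY_ITERATIONS 100

extern void sequential_computation(void);   /* hypothetical serial phase */
extern void parallel_computation(int id);   /* hypothetical parallel phase */

static pthread_barrier_t barrier;           /* initialized once in main():
                                             * pthread_barrier_init(&barrier,
                                             *     NULL, NTHREADS); */

void *
worker(void *arg)
{
    int id = *(int *)arg;
    int iter;

    for (iter = 0; iter < MANY_ITERATIONS; iter++) {
        if (id == 0)
            sequential_computation();    /* one thread runs the serial part */

        /* no thread proceeds until the serial part is complete */
        pthread_barrier_wait(&barrier);

        parallel_computation(id);        /* all threads compute concurrently */

        /* regroup before the next serial part begins */
        pthread_barrier_wait(&barrier);
    }
    return NULL;
}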