author     Linus Torvalds <torvalds@linux-foundation.org>  2014-06-03 12:57:53 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2014-06-03 12:57:53 -0700
commit     776edb59317ada867dfcddde40b55648beeb0078 (patch)
tree       f6a6136374642323cfefd7d6399ea429f9018ade /Documentation
parent     59a3d4c3631e553357b7305dc09db1990aa6757c (diff)
parent     3cf2f34e1a3d4d5ff209d087925cf950e52f4805 (diff)
download   linux-776edb59317ada867dfcddde40b55648beeb0078.tar.bz2
Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into next
Pull core locking updates from Ingo Molnar:
 "The main changes in this cycle were:

   - reduced/streamlined smp_mb__*() interface that allows more usecases
     and makes the existing ones less buggy, especially in rarer
     architectures

   - add rwsem implementation comments

   - bump up lockdep limits"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits)
  rwsem: Add comments to explain the meaning of the rwsem's count field
  lockdep: Increase static allocations
  arch: Mass conversion of smp_mb__*()
  arch,doc: Convert smp_mb__*()
  arch,xtensa: Convert smp_mb__*()
  arch,x86: Convert smp_mb__*()
  arch,tile: Convert smp_mb__*()
  arch,sparc: Convert smp_mb__*()
  arch,sh: Convert smp_mb__*()
  arch,score: Convert smp_mb__*()
  arch,s390: Convert smp_mb__*()
  arch,powerpc: Convert smp_mb__*()
  arch,parisc: Convert smp_mb__*()
  arch,openrisc: Convert smp_mb__*()
  arch,mn10300: Convert smp_mb__*()
  arch,mips: Convert smp_mb__*()
  arch,metag: Convert smp_mb__*()
  arch,m68k: Convert smp_mb__*()
  arch,m32r: Convert smp_mb__*()
  arch,ia64: Convert smp_mb__*()
  ...
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/atomic_ops.txt       | 31
-rw-r--r--  Documentation/memory-barriers.txt  | 42
2 files changed, 23 insertions(+), 50 deletions(-)
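
For context on the conversion documented below: the four old calls
(smp_mb__{before,after}_atomic_{dec,inc}()) collapse into the two generic
smp_mb__{before,after}_atomic() barriers. A minimal kernel-style sketch of
the reference-counting pattern the documentation uses, assuming hypothetical
names (my_obj, my_obj_kill) that are not from the tree:

    /*
     * Illustrative sketch only: my_obj and my_obj_kill() stand in for the
     * refcount pattern described by the atomic_ops.txt hunks below.
     */
    #include <linux/atomic.h>

    struct my_obj {
            int dead;
            atomic_t ref_count;
    };

    static void my_obj_kill(struct my_obj *obj)
    {
            obj->dead = 1;
            /* Order the obj->dead store before the reference drop. */
            smp_mb__before_atomic();
            atomic_dec(&obj->ref_count);
    }

The same two barriers now also cover value-less bitops such as set_bit() and
clear_bit(), which is what the hunks below update.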
diff --git a/Documentation/atomic_ops.txt b/Documentation/atomic_ops.txt
index d9ca5be9b471..68542fe13b85 100644
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -285,15 +285,13 @@ If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:
- void smp_mb__before_atomic_dec(void);
- void smp_mb__after_atomic_dec(void);
- void smp_mb__before_atomic_inc(void);
- void smp_mb__after_atomic_inc(void);
+ void smp_mb__before_atomic(void);
+ void smp_mb__after_atomic(void);
-For example, smp_mb__before_atomic_dec() can be used like so:
+For example, smp_mb__before_atomic() can be used like so:
obj->dead = 1;
- smp_mb__before_atomic_dec();
+ smp_mb__before_atomic();
atomic_dec(&obj->ref_count);
It makes sure that all memory operations preceding the atomic_dec()
@@ -302,15 +300,10 @@ operation. In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.
-Without the explicit smp_mb__before_atomic_dec() call, the
+Without the explicit smp_mb__before_atomic() call, the
implementation could legally allow the atomic counter update visible
to other cpus before the "obj->dead = 1;" assignment.
-The other three interfaces listed are used to provide explicit
-ordering with respect to memory operations after an atomic_dec() call
-(smp_mb__after_atomic_dec()) and around atomic_inc() calls
-(smp_mb__{before,after}_atomic_inc()).
-
A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results. Here is
an example, which follows a pattern occurring frequently in the Linux
@@ -487,12 +480,12 @@ Finally there is the basic operation:
Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".
-If explicit memory barriers are required around clear_bit() (which
-does not return a value, and thus does not need to provide memory
-barrier semantics), two interfaces are provided:
+If explicit memory barriers are required around {set,clear}_bit() (which do
+not return a value, and thus do not need to provide memory barrier
+semantics), two interfaces are provided:
- void smp_mb__before_clear_bit(void);
- void smp_mb__after_clear_bit(void);
+ void smp_mb__before_atomic(void);
+ void smp_mb__after_atomic(void);
They are used as follows, and are akin to their atomic_t operation
brothers:
@@ -500,13 +493,13 @@ brothers:
/* All memory operations before this call will
* be globally visible before the clear_bit().
*/
- smp_mb__before_clear_bit();
+ smp_mb__before_atomic();
clear_bit( ... );
/* The clear_bit() will be visible before all
* subsequent memory operations.
*/
- smp_mb__after_clear_bit();
+ smp_mb__after_atomic();
There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks). These operate in the same way as their non-_lock/unlock
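
As a companion to the updated atomic_ops.txt text above, a minimal sketch of
the bit-based unlock pattern it describes, using the new barrier name. The
names my_lock_word, MY_LOCK_BIT and my_bit_unlock() are illustrative only;
in-tree code would typically reach for clear_bit_unlock(), which already has
release semantics.

    /*
     * Illustrative sketch only: my_lock_word and MY_LOCK_BIT are made-up
     * names. In-tree code would normally use clear_bit_unlock(), which
     * already carries release semantics.
     */
    #include <linux/atomic.h>
    #include <linux/bitops.h>

    #define MY_LOCK_BIT     0
    static unsigned long my_lock_word;

    static void my_bit_unlock(void)
    {
            /* Make every memory operation before this call visible
             * before the lock bit is cleared; clear_bit() itself
             * implies no barrier. */
            smp_mb__before_atomic();
            clear_bit(MY_LOCK_BIT, &my_lock_word);
    }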
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 556f951f8626..46412bded104 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1583,20 +1583,21 @@ There are some more advanced barrier functions:
insert anything more than a compiler barrier in a UP compilation.
- (*) smp_mb__before_atomic_dec();
- (*) smp_mb__after_atomic_dec();
- (*) smp_mb__before_atomic_inc();
- (*) smp_mb__after_atomic_inc();
+ (*) smp_mb__before_atomic();
+ (*) smp_mb__after_atomic();
- These are for use with atomic add, subtract, increment and decrement
- functions that don't return a value, especially when used for reference
- counting. These functions do not imply memory barriers.
+ These are for use with atomic functions (such as add, subtract, increment
+ and decrement) that don't return a value, especially when used for
+ reference counting. These functions do not imply memory barriers.
+
+ These are also used for atomic bitop functions that do not return a
+ value (such as set_bit and clear_bit).
As an example, consider a piece of code that marks an object as being dead
and then decrements the object's reference count:
obj->dead = 1;
- smp_mb__before_atomic_dec();
+ smp_mb__before_atomic();
atomic_dec(&obj->ref_count);
This makes sure that the death mark on the object is perceived to be set
@@ -1606,27 +1607,6 @@ There are some more advanced barrier functions:
operations" subsection for information on where to use these.
- (*) smp_mb__before_clear_bit(void);
- (*) smp_mb__after_clear_bit(void);
-
- These are for use similar to the atomic inc/dec barriers. These are
- typically used for bitwise unlocking operations, so care must be taken as
- there are no implicit memory barriers here either.
-
- Consider implementing an unlock operation of some nature by clearing a
- locking bit. The clear_bit() would then need to be barriered like this:
-
- smp_mb__before_clear_bit();
- clear_bit( ... );
-
- This prevents memory operations before the clear leaking to after it. See
- the subsection on "Locking Functions" with reference to RELEASE operation
- implications.
-
- See Documentation/atomic_ops.txt for more information. See the "Atomic
- operations" subsection for information on where to use these.
-
-
MMIO WRITE BARRIER
------------------
@@ -2283,11 +2263,11 @@ operations:
change_bit();
With these the appropriate explicit memory barrier should be used if necessary
-(smp_mb__before_clear_bit() for instance).
+(smp_mb__before_atomic() for instance).
The following also do _not_ imply memory barriers, and so may require explicit
-memory barriers under some circumstances (smp_mb__before_atomic_dec() for
+memory barriers under some circumstances (smp_mb__before_atomic() for
instance):
atomic_add();
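
One further hedged sketch of the "after" side described in the
memory-barriers.txt hunks above: ordering a value-less bitop before a
subsequent wakeup. my_state, MY_PENDING_BIT and my_work_done() are made-up
names; wake_up_bit() is the existing kernel helper for bit waitqueues.

    /*
     * Illustrative sketch only: my_state, MY_PENDING_BIT and my_work_done()
     * are made-up names; wake_up_bit() is the existing bit-waitqueue helper.
     */
    #include <linux/bitops.h>
    #include <linux/wait.h>

    #define MY_PENDING_BIT  0
    static unsigned long my_state;

    static void my_work_done(void)
    {
            clear_bit(MY_PENDING_BIT, &my_state);
            /* Make the cleared bit visible before the waiter is woken;
             * clear_bit() alone implies no barrier. */
            smp_mb__after_atomic();
            wake_up_bit(&my_state, MY_PENDING_BIT);
    }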