author | Andrea Parri <parri.andrea@gmail.com> | 2018-02-20 19:45:56 +0100
---|---|---
committer | Ingo Molnar <mingo@kernel.org> | 2018-02-21 10:12:29 +0100
commit | cb13b424e986aed68d74cbaec3449ea23c50e167 (patch) |
tree | 853b0d99f04da0e569ea74f016686f0b7d51800e | /arch/alpha
parent | 88e77dc6a354095ddaaae715bc0d3b55702fa3db (diff) |
locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()
Continuing the fight against smp_read_barrier_depends() [1] (or rather,
against its improper use), add an unconditional memory barrier to
cmpxchg(). This guarantees that dependency ordering is preserved even
when a dependency is headed by an unsuccessful cmpxchg(): the failed
operation still returns a value that subsequent accesses may depend on
(see the sketch after the links below). As it turns out, the change
could also enable further simplification of the LKMM, as proposed in [2].
[1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
https://marc.info/?l=linux-kernel&m=150884946319353&w=2
https://marc.info/?l=linux-kernel&m=151215810824468&w=2
https://marc.info/?l=linux-kernel&m=151215816324484&w=2
[2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2
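For context, here is a minimal litmus-style sketch of the kind of pattern the unconditional barrier protects. It is not from the patch: the names (P0, P1, x, y, r0, r1) are invented for illustration, and the code is a sketch of kernel-style usage rather than a standalone program.

```c
/* Hypothetical sketch, not from the patch: all names are invented. */

static int y;
static int *x;

/* Publisher: initialize y, then expose it through x with release
 * semantics, so a reader that observes x == &y should also observe
 * y == 1 ... provided the reader preserves dependency ordering.
 */
static void P0(void)
{
	WRITE_ONCE(y, 1);
	smp_store_release(&x, &y);
}

/* Reader: once x is non-NULL, the cmpxchg() below fails (the old
 * value no longer matches NULL), yet its return value heads an
 * address dependency into the load of *r0.  Before this patch,
 * Alpha's cmpxchg() issued its barrier only on success, so the
 * dependent load could observe a stale y == 0; with the barrier
 * made unconditional, r1 == 1 is guaranteed whenever r0 == &y.
 */
static void P1(void)
{
	int *r0;
	int r1;

	r0 = cmpxchg(&x, NULL, NULL);
	if (r0)
		r1 = READ_ONCE(*r0);
}
```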
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-alpha@vger.kernel.org
Link: http://lkml.kernel.org/r/1519152356-4804-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/alpha')
-rw-r--r-- | arch/alpha/include/asm/xchg.h | 15
1 file changed, 7 insertions(+), 8 deletions(-)
```diff
diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
index 68dfb3cb7145..e2660866ce97 100644
--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -128,10 +128,9 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
  * store NEW in MEM. Return the initial value in MEM. Success is
  * indicated by comparing RETURN with OLD.
  *
- * The memory barrier should be placed in SMP only when we actually
- * make the change. If we don't change anything (so if the returned
- * prev is equal to old) then we aren't acquiring anything new and
- * we don't need any memory barrier as far I can tell.
+ * The memory barrier is placed in SMP unconditionally, in order to
+ * guarantee that dependency ordering is preserved when a dependency
+ * is headed by an unsuccessful operation.
  */

 static inline unsigned long
@@ -150,8 +149,8 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
 	"	or	%1,%2,%2\n"
 	"	stq_c	%2,0(%4)\n"
 	"	beq	%2,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br 1b\n"
 	".previous"
@@ -177,8 +176,8 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
 	"	or	%1,%2,%2\n"
 	"	stq_c	%2,0(%4)\n"
 	"	beq	%2,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br 1b\n"
 	".previous"
@@ -200,8 +199,8 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
 	"	mov	%4,%1\n"
 	"	stl_c	%1,%2\n"
 	"	beq	%1,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br 1b\n"
 	".previous"
@@ -223,8 +222,8 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
 	"	mov	%4,%1\n"
 	"	stq_c	%1,%2\n"
 	"	beq	%1,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br 1b\n"
 	".previous"
```
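For orientation, here is how the `_u64` variant reads after the patch. This is a hedged reconstruction: only the `mov`/`stq_c`/`beq`/label lines and the barrier placement appear in the hunk above; the remaining lines (the `ldq_l`/`cmpeq` prologue and the constraint lists) are assumed from the surrounding upstream file and may differ in detail.

```c
/* Reconstructed sketch of ____cmpxchg(_u64, ...) after this patch.
 * Lines not shown in the hunk above are assumptions taken from the
 * surrounding file, not part of the diff itself.
 */
static inline unsigned long
____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
{
	unsigned long prev, cmp;

	__asm__ __volatile__(
	"1:	ldq_l %0,%5\n"		/* load-locked the current value */
	"	cmpeq %0,%3,%1\n"	/* compare it against 'old' */
	"	beq %1,2f\n"		/* mismatch: skip the store ... */
	"	mov %4,%1\n"
	"	stq_c %1,%2\n"		/* store-conditional 'new' */
	"	beq %1,3f\n"		/* lost reservation: retry */
	"2:\n"
	__ASM__MB			/* ... but never skip the barrier */
	".subsection 2\n"
	"3:	br 1b\n"
	".previous"
	: "=&r"(prev), "=&r"(cmp), "=m"(*m)
	: "r"((long) old), "r"(new), "m"(*m) : "memory");

	return prev;
}
```

The point to notice: a failed comparison branches to label 2, which previously sat after `__ASM__MB`, skipping the barrier; with the barrier moved below the label, both the success and failure paths execute it. (`__ASM__MB` expands to the `mb` instruction on SMP builds and to nothing otherwise, which is the "in SMP" in the updated comment.)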