author | Mark Rutland <mark.rutland@arm.com> | 2018-06-21 13:13:09 +0100 |
---|---|---|
committer | Ingo Molnar <mingo@kernel.org> | 2018-06-21 14:22:33 +0200 |
commit | eccc2da8c03f316bba202e15af2be4615f461900 (patch) | |
tree | 1efd0bf1e8620a33ed58cd74cacc94ffc8cf111f /arch/h8300 | |
parent | bef828204a1bc7a0fd3a24551c4265e9c2ab95ed (diff) | |
download | linux-eccc2da8c03f316bba202e15af2be4615f461900.tar.bz2 | |
atomics/treewide: Make atomic_fetch_add_unless() optional
Several architectures have a near-identical implementation based on
atomic_read() and atomic_cmpxchg(), which we can instead define in
<linux/atomic.h>, so let's do so, using something close to the existing
x86 implementation with try_cmpxchg().
Where an architecture provides its own atomic_fetch_add_unless(), it
must define a preprocessor symbol for it. The instrumented atomics are
updated accordingly.
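For reference, the generic fallback this describes has roughly the shape sketched below (a sketch based on the description above, not a verbatim copy of the patch): an #ifndef guard keyed on the preprocessor symbol lets an architecture's own definition take precedence, and the fallback loops on atomic_try_cmpxchg() until it either observes the value u or successfully performs the addition.

```c
/*
 * Sketch of the generic fallback described above; the exact code added
 * to <linux/atomic.h> may differ in detail.
 */
#ifndef atomic_fetch_add_unless
static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
	int c = atomic_read(v);

	do {
		/* Do not perform the addition if the value is already u. */
		if (unlikely(c == u))
			break;
		/* On failure, atomic_try_cmpxchg() updates c with the current value. */
	} while (!atomic_try_cmpxchg(v, &c, c + a));

	/* Return the value observed before any addition. */
	return c;
}
#endif
```

An architecture that provides its own implementation simply defines the symbol after its definition (as the h8300 hunk below does), which compiles out the generic fallback.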
Note that arch/arc's existing atomic_fetch_add_unless() had redundant
barriers, as these are already present in its atomic_cmpxchg()
implementation.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Link: https://lore.kernel.org/lkml/20180621121321.4761-7-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/h8300')
-rw-r--r-- | arch/h8300/include/asm/atomic.h | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/arch/h8300/include/asm/atomic.h b/arch/h8300/include/asm/atomic.h
index 5c856887fdf2..710364946308 100644
--- a/arch/h8300/include/asm/atomic.h
+++ b/arch/h8300/include/asm/atomic.h
@@ -106,5 +106,6 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 	arch_local_irq_restore(flags);
 	return ret;
 }
+#define atomic_fetch_add_unless	atomic_fetch_add_unless
 
 #endif /* __ARCH_H8300_ATOMIC __ */