author:    Boqun Feng <boqun.feng@gmail.com>  2021-09-09 12:59:19 +0200
committer: Peter Zijlstra <peterz@infradead.org>  2021-09-15 17:49:16 +0200
commit:    81121524f1c798c9481bd7900450b72ee7ac2eef
tree:      727cf795edb63c044e2a4cd5a01bf322be6b1e3f
parent:    616be87eac9fa2ab2dca1069712f7236e50f3bf6
locking/rwbase: Take care of ordering guarantee for fastpath reader
Readers of rwbase can lock and unlock without taking any inner lock. When
that happens, we need the ordering provided by the atomic operations
themselves to satisfy the ordering semantics of lock/unlock. Without it,
consider the following case:
    { X = 0 initially }

    CPU 0                           CPU 1
    =====                           =====
                                    rt_write_lock();
                                    X = 1;
                                    rt_write_unlock():
                                      atomic_add(READER_BIAS - WRITER_BIAS, ->readers);
                                      // ->readers is now READER_BIAS.
    rt_read_lock():
      if ((r = atomic_read(->readers)) < 0) // true
        atomic_try_cmpxchg(->readers, r, r + 1); // succeeds
    <acquire the read lock via fast path>

    r1 = X; // r1 may be 0, because nothing prevents the reordering
            // of "X = 1" and the atomic_add() on CPU 1.
Therefore, audit every use of atomic operations that may happen on a
fast path, and add the necessary barriers.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20210909110203.953991276@infradead.org