author     Paul E. McKenney <paulmck@linux.vnet.ibm.com>  2017-01-24 08:51:34 -0800
committer  Paul E. McKenney <paulmck@linux.vnet.ibm.com>  2017-01-25 12:54:22 -0800
commit     7f554a3d05bea9f6b7bf8e0b041d09447f82d74a
tree       d679d50032981ee84479dfaea34716ab1ac57d6d  /kernel/rcu
parent     418b2977b34378f67c46930c72a776f94e7bf903
srcu: Reduce probability of SRCU ->unlock_count[] counter overflow
Because there are no memory barriers between the srcu_flip() ->completed
increment and the summation of the read-side ->unlock_count[] counters,
both the compiler and the CPU can reorder the summation with the
->completed increment. If the updater is preempted long enough during
this process, the read-side counters could overflow, resulting in a
too-short grace period.
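
For orientation, here is a simplified sketch of the read-side paths as they
stood in kernel/rcu/srcu.c of this period (reconstructed for illustration;
see the file itself for the authoritative code). B and C are the barrier
labels the code comments use:

	int __srcu_read_lock(struct srcu_struct *sp)
	{
		int idx;

		idx = READ_ONCE(sp->completed) & 0x1;	/* Pick the active index. */
		__this_cpu_inc(sp->per_cpu_ref->lock_count[idx]);
		smp_mb(); /* B */  /* Keep the critical section after the increment. */
		return idx;
	}

	void __srcu_read_unlock(struct srcu_struct *sp, int idx)
	{
		smp_mb(); /* C */  /* Keep the critical section before the increment. */
		this_cpu_inc(sp->per_cpu_ref->unlock_count[idx]);
	}

Because the index a reader increments is chosen from ->completed, any delay
in readers observing the srcu_flip() increment translates directly into
extra increments landing on the old index's counters.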
This commit therefore adds a memory barrier just after the ->completed
increment, ensuring that if the summation misses an increment of
->unlock_count[] from __srcu_read_unlock(), the next __srcu_read_lock()
will see the new value of ->completed, thus bounding the number of
->unlock_count[] increments that can be missed to NR_CPUS. The actual
overflow computation is more complex due to the possibility of nesting
of __srcu_read_lock().
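
The summation referred to above happens on the updater side when checking
for pre-existing readers; a simplified sketch (again reconstructed), with A
being the barrier in srcu_readers_active_idx_check() that the new comment
names:

	static bool srcu_readers_active_idx_check(struct srcu_struct *sp, int idx)
	{
		unsigned long unlocks;

		unlocks = srcu_readers_unlock_idx(sp, idx);	/* Sum ->unlock_count[idx]. */
		smp_mb(); /* A */  /* Pairs with B. */
		/* Readers counted as locked are guaranteed to be counted as unlocked. */
		return srcu_readers_lock_idx(sp, idx) == unlocks;
	}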
Reported-by: Lance Roy <ldr709@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Diffstat (limited to 'kernel/rcu')
-rw-r--r--  kernel/rcu/srcu.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/srcu.c b/kernel/rcu/srcu.c
index 665bc9951523..e773129c8b08 100644
--- a/kernel/rcu/srcu.c
+++ b/kernel/rcu/srcu.c
@@ -320,7 +320,16 @@ static bool try_check_zero(struct srcu_struct *sp, int idx, int trycount)
  */
 static void srcu_flip(struct srcu_struct *sp)
 {
-	sp->completed++;
+	WRITE_ONCE(sp->completed, sp->completed + 1);
+
+	/*
+	 * Ensure that if the updater misses an __srcu_read_unlock()
+	 * increment, that task's next __srcu_read_lock() will see the
+	 * above counter update.  Note that both this memory barrier
+	 * and the one in srcu_readers_active_idx_check() provide the
+	 * guarantee for __srcu_read_lock().
+	 */
+	smp_mb(); /* D */  /* Pairs with C. */
 }
 
 /*
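
For reference, the resulting barrier pairing, with D being the barrier this
patch adds (the C/D pairing is stated in the patch itself; the A/B pairing
follows the same convention in the pre-existing code):

	B (__srcu_read_lock())    pairs with  A (srcu_readers_active_idx_check())
	C (__srcu_read_unlock())  pairs with  D (srcu_flip())

With D in place, a task whose __srcu_read_unlock() increment the summation
missed is guaranteed to see the new ->completed value on its next
__srcu_read_lock(), so roughly one missed increment per CPU (scaled by the
nesting depth of __srcu_read_lock()) can accumulate against the old index.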