path: root/kernel/rcutree.c
author		Paul E. McKenney <paul.mckenney@linaro.org>	2011-11-23 13:38:58 -0800
committer	Paul E. McKenney <paulmck@linux.vnet.ibm.com>	2011-12-11 10:32:04 -0800
commit		c92b131bdcf89bf79870f1631d07547241a98f6c (patch)
tree		618f38e1f4e6f71e056b240613a89915730b03f9	/kernel/rcutree.c
parent		3ad0decf98d97b9039d8ed47cee287366b929cdf (diff)
download	linux-c92b131bdcf89bf79870f1631d07547241a98f6c.tar.bz2
rcu: Remove dynticks false positives and RCU failures
Assertions in rcu_init_percpu_data() unknowingly relied on outgoing CPUs being turned off before reaching the idle loop. Unfortunately, when running under kvm/qemu on x86, CPUs really can get to idle before being shut off. These CPUs are then born in dyntick-idle mode from an RCU perspective, which results in splats in rcu_init_percpu_data() and in RCU wrongly ignoring those CPUs despite them being active. This in turn can cause RCU to end grace periods prematurely, potentially freeing up memory that the newly onlined CPUs were still using. This is most decidedly not what we need to see in an RCU implementation.

This commit therefore replaces the assertions in rcu_init_percpu_data() with code that forces RCU's dyntick-idle view of newly onlined CPUs to match reality.

Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
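The fix relies on the convention that an odd value of the per-CPU dynticks counter means the CPU is active (RCU must track it), while an even value means it is in dyntick-idle. The user-space C sketch below is illustrative only; the struct, constant, and function names are invented for this example and are not the kernel's actual definitions. It shows how clearing the low-order bit of the current counter value and adding one forces the counter odd regardless of the state the outgoing CPU left behind, which is exactly what the patch does in place of the old assertions.

#include <stdatomic.h>
#include <stdio.h>

/*
 * Simplified, user-space model of the per-CPU dyntick state.
 * Illustrative names only; not the kernel's structures.
 */
struct dynticks_model {
	long nesting;		/* models dynticks_nesting */
	atomic_int dynticks;	/* models the dynticks counter */
};

#define TASK_NESTING_MODEL 1	/* stand-in for DYNTICK_TASK_NESTING */

/*
 * Force the modeled CPU to look active: set task-level nesting and
 * make the counter odd by clearing its low-order bit and adding one.
 * An already-odd counter stays at the same odd value.
 */
static void force_cpu_active(struct dynticks_model *dt)
{
	dt->nesting = TASK_NESTING_MODEL;
	atomic_store(&dt->dynticks,
		     (atomic_load(&dt->dynticks) & ~0x1) + 1);
}

int main(void)
{
	struct dynticks_model dt = { .nesting = 0 };

	atomic_init(&dt.dynticks, 4);	/* even: CPU looks dyntick-idle */
	force_cpu_active(&dt);
	printf("dynticks = %d (odd => active)\n",
	       atomic_load(&dt.dynticks));
	return 0;
}

With this forced reset, a CPU that managed to reach the idle loop before being shut off is nevertheless seen as active by RCU when it comes back online, so RCU will not ignore it and end grace periods early.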
Diffstat (limited to 'kernel/rcutree.c')
-rw-r--r--	kernel/rcutree.c	5
1 file changed, 3 insertions, 2 deletions
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 13fab4a9f9fb..aab9ed504b17 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -2054,8 +2054,9 @@ rcu_init_percpu_data(int cpu, struct rcu_state *rsp, int preemptible)
 	rdp->qlen_last_fqs_check = 0;
 	rdp->n_force_qs_snap = rsp->n_force_qs;
 	rdp->blimit = blimit;
-	WARN_ON_ONCE(rdp->dynticks->dynticks_nesting != DYNTICK_TASK_NESTING);
-	WARN_ON_ONCE((atomic_read(&rdp->dynticks->dynticks) & 0x1) != 1);
+	rdp->dynticks->dynticks_nesting = DYNTICK_TASK_NESTING;
+	atomic_set(&rdp->dynticks->dynticks,
+		   (atomic_read(&rdp->dynticks->dynticks) & ~0x1) + 1);
 	raw_spin_unlock(&rnp->lock);		/* irqs remain disabled. */
/*