| author | Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 2010-01-04 15:09:09 -0800 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2010-01-13 09:06:05 +0100 |
| commit | 46a1e34eda805501a8b32f26394faa435149f6d1 (patch) | |
| tree | 56359434e348fce6ffc8701fb4948dee1cb4c91f /kernel/rcutree.c | |
| parent | 45f014c52eef022873b19d6a20eb0ec9668f2b09 (diff) | |
rcu: Make force_quiescent_state() start grace period if needed
Grace periods cannot be started while force_quiescent_state() is
active. This is OK in that the affected CPUs will try again
later, but it does induce needless grace-period delays. This
patch causes rcu_start_gp() to record a failed attempt to start
a grace period. When force_quiescent_state() prepares to return,
it then starts the grace period if there was such a failed
attempt.
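
The change amounts to a small handoff between the two code paths: rcu_start_gp() records that it was refused while force_quiescent_state() was active, and force_quiescent_state() starts the deferred grace period on its way out. Below is a minimal user-space sketch of that pattern, with a pthread mutex standing in for the kernel's spinlocks and the force-quiescent-state pass split into enter/exit helpers so the sequence can be shown in one thread; the names (demo_state, try_start_gp, force_qs_enter, force_qs_exit) are illustrative, not kernel symbols.

```c
/*
 * Minimal sketch of the "record a refused grace-period start, then
 * start it when force_quiescent_state() finishes" handoff.  A pthread
 * mutex stands in for the kernel's spinlocks; all names here are
 * illustrative only, not kernel symbols.
 */
#include <pthread.h>
#include <stdio.h>

struct demo_state {
	pthread_mutex_t lock;
	int fqs_active;		/* the force-QS pass is currently running */
	int fqs_need_gp;	/* a GP start was refused while fqs_active */
	int gp_in_progress;	/* a "grace period" is underway */
};

static struct demo_state ds = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* Analogue of rcu_start_gp(): refuse to start while the force-QS pass
 * runs, but record the refusal so it can be honored on exit. */
static void try_start_gp(struct demo_state *s)
{
	pthread_mutex_lock(&s->lock);
	if (s->fqs_active) {
		s->fqs_need_gp = 1;	/* remember the failed attempt */
		pthread_mutex_unlock(&s->lock);
		return;
	}
	s->gp_in_progress = 1;
	pthread_mutex_unlock(&s->lock);
}

/* Analogue of the top of force_quiescent_state(). */
static void force_qs_enter(struct demo_state *s)
{
	pthread_mutex_lock(&s->lock);
	s->fqs_active = 1;
	pthread_mutex_unlock(&s->lock);
	/* ... scan CPUs for quiescent states here ... */
}

/* Analogue of the tail of force_quiescent_state(): before returning,
 * start any grace period whose start was deferred while we ran. */
static void force_qs_exit(struct demo_state *s)
{
	pthread_mutex_lock(&s->lock);
	s->fqs_active = 0;
	if (s->fqs_need_gp) {
		s->fqs_need_gp = 0;
		s->gp_in_progress = 1;	/* start the deferred grace period */
	}
	pthread_mutex_unlock(&s->lock);
}

int main(void)
{
	force_qs_enter(&ds);	/* force_quiescent_state() begins */
	try_start_gp(&ds);	/* refused; fqs_need_gp is recorded */
	force_qs_exit(&ds);	/* ...and honored on the way out */
	printf("gp_in_progress = %d\n", ds.gp_in_progress);	/* prints 1 */
	return 0;
}
```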
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: laijs@cn.fujitsu.com
Cc: dipankar@in.ibm.com
Cc: mathieu.desnoyers@polymtl.ca
Cc: josh@joshtriplett.org
Cc: dvhltc@us.ibm.com
Cc: niv@us.ibm.com
Cc: peterz@infradead.org
Cc: rostedt@goodmis.org
Cc: Valdis.Kletnieks@vt.edu
Cc: dhowells@redhat.com
LKML-Reference: <12626465501854-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/rcutree.c')
-rw-r--r-- | kernel/rcutree.c | 8
1 file changed, 8 insertions, 0 deletions
```diff
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index d9202857d3ad..55e8f6ef8195 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -660,6 +660,8 @@ rcu_start_gp(struct rcu_state *rsp, unsigned long flags)
 	struct rcu_node *rnp = rcu_get_root(rsp);
 
 	if (!cpu_needs_another_gp(rsp, rdp) || rsp->fqs_active) {
+		if (cpu_needs_another_gp(rsp, rdp))
+			rsp->fqs_need_gp = 1;
 		if (rnp->completed == rsp->completed) {
 			spin_unlock_irqrestore(&rnp->lock, flags);
 			return;
@@ -1239,6 +1241,12 @@ static void force_quiescent_state(struct rcu_state *rsp, int relaxed)
 		break;
 	}
 	rsp->fqs_active = 0;
+	if (rsp->fqs_need_gp) {
+		spin_unlock(&rsp->fqslock); /* irqs remain disabled */
+		rsp->fqs_need_gp = 0;
+		rcu_start_gp(rsp, flags); /* releases rnp->lock */
+		return;
+	}
 	spin_unlock(&rnp->lock); /* irqs remain disabled */
 unlock_fqs_ret:
 	spin_unlock_irqrestore(&rsp->fqslock, flags);
```
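
The second hunk leans on a locking convention worth spelling out: force_quiescent_state() drops fqslock (leaving interrupts disabled) and then calls rcu_start_gp() while still holding the root rcu_node lock, and rcu_start_gp() is responsible for releasing that lock and restoring the saved flags, per the /* releases rnp->lock */ comment and the spin_unlock_irqrestore() visible in the first hunk. Here is a small sketch of that callee-releases convention, again with a pthread mutex standing in for rnp->lock and illustrative names (node_lock, start_gp_locked, finish_force_qs).

```c
/*
 * Sketch of the lock-handoff convention the new return path relies on:
 * the caller acquires the node lock, and the grace-period starter it
 * calls is responsible for releasing it.  Names are illustrative only.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t node_lock = PTHREAD_MUTEX_INITIALIZER;

/* Analogue of rcu_start_gp(): entered with node_lock held, releases it. */
static void start_gp_locked(void)
{
	printf("starting deferred grace period\n");
	pthread_mutex_unlock(&node_lock);	/* callee releases caller's lock */
}

/* Analogue of the tail of force_quiescent_state(). */
static void finish_force_qs(int need_gp)
{
	pthread_mutex_lock(&node_lock);
	if (need_gp) {
		start_gp_locked();	/* returns with node_lock already released */
		return;
	}
	pthread_mutex_unlock(&node_lock);	/* normal path unlocks here */
}

int main(void)
{
	finish_force_qs(1);	/* deferred-GP path: start_gp_locked() unlocks */
	finish_force_qs(0);	/* normal path: unlocked locally */
	return 0;
}
```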