author     Paul E. McKenney <paulmck@linux.vnet.ibm.com>    2010-04-14 17:39:26 -0700
committer  Paul E. McKenney <paulmck@linux.vnet.ibm.com>    2010-05-10 11:08:35 -0700
commit     d21670acab9fcb4bc74a40b68a6941059234c55c
tree       6a4c054bc4dbadf0524b4e221889a8da558dbdaf    /kernel/rcutree.c
parent     4a90a0681cf6cd21cd444184302aa045156486b3
download   linux-d21670acab9fcb4bc74a40b68a6941059234c55c.tar.bz2
rcu: reduce the number of spurious RCU_SOFTIRQ invocations
Lai Jiangshan noted that up to 10% of RCU_SOFTIRQ invocations are
spurious, and traced this to the fact that the current grace-period
machinery will uselessly raise RCU_SOFTIRQ when a given CPU needs to
go through a quiescent state, but has not yet done so. In this
situation, there might well be nothing for RCU_SOFTIRQ to do, and the
overhead can be worth worrying about in the ksoftirqd case. This patch
therefore avoids raising RCU_SOFTIRQ in this situation.
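To illustrate the idea outside the kernel, here is a minimal userspace
C sketch of the gating this patch introduces; it is not kernel code.
Every fake_-prefixed name is a hypothetical stand-in, and only the
qs_pending/passed_quiesc flags and the n_rp_* counters mirror fields
from the diff below.

#include <stdbool.h>
#include <stdio.h>

struct fake_rcu_data {
	bool qs_pending;     /* RCU core is waiting on this CPU for a QS */
	bool passed_quiesc;  /* this CPU has already passed through a QS */
	int  n_rp_qs_pending;
	int  n_rp_report_qs;
};

/*
 * Simplified stand-in for __rcu_pending(): returns true only when
 * RCU_SOFTIRQ would actually have something to do.
 */
static bool fake_rcu_pending(struct fake_rcu_data *rdp)
{
	if (rdp->qs_pending && !rdp->passed_quiesc) {
		/* QS still outstanding: the softirq cannot help yet. */
		rdp->n_rp_qs_pending++;
		return false;
	} else if (rdp->qs_pending && rdp->passed_quiesc) {
		/* QS ready to be reported: the softirq has real work. */
		rdp->n_rp_report_qs++;
		return true;
	}
	return false;
}

static void fake_check_callbacks(struct fake_rcu_data *rdp)
{
	/* The old code raised the softirq unconditionally here;
	 * the patch gates it on rcu_pending(). */
	if (fake_rcu_pending(rdp))
		printf("raise RCU_SOFTIRQ\n");
	else
		printf("skip spurious RCU_SOFTIRQ\n");
}

int main(void)
{
	struct fake_rcu_data rdp = { .qs_pending = true };

	fake_check_callbacks(&rdp);  /* QS not yet passed: skip */
	rdp.passed_quiesc = true;
	fake_check_callbacks(&rdp);  /* QS passed: raise */
	return 0;
}

Note that the patch moves the rcu_pending() check from the top of
rcu_check_callbacks() to just before raise_softirq(), so quiescent
states are always recorded and only the softirq itself is skipped.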
Changes since v1 (http://lkml.org/lkml/2010/3/30/122 from Lai Jiangshan):
o Omit the rcu_qs_pending() prechecks, as they aren't that
much less expensive than the quiescent-state checks.
o Merge with the set_need_resched() patch that reduces IPIs.
o Add the new n_rp_report_qs field to the rcu_pending tracing output.
o Update the tracing documentation accordingly.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Diffstat (limited to 'kernel/rcutree.c')
-rw-r--r--  kernel/rcutree.c  11
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index c60fd74e7ec9..ba6996943e28 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -1161,8 +1161,6 @@ static void rcu_do_batch(struct rcu_state *rsp, struct rcu_data *rdp)
  */
 void rcu_check_callbacks(int cpu, int user)
 {
-	if (!rcu_pending(cpu))
-		return; /* if nothing for RCU to do. */
 	if (user ||
 	    (idle_cpu(cpu) && rcu_scheduler_active &&
 	     !in_softirq() && hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
@@ -1194,7 +1192,8 @@ void rcu_check_callbacks(int cpu, int user)
 		rcu_bh_qs(cpu);
 	}
 	rcu_preempt_check_callbacks(cpu);
-	raise_softirq(RCU_SOFTIRQ);
+	if (rcu_pending(cpu))
+		raise_softirq(RCU_SOFTIRQ);
 }
 
 #ifdef CONFIG_SMP
@@ -1534,18 +1533,20 @@ static int __rcu_pending(struct rcu_state *rsp, struct rcu_data *rdp)
 	check_cpu_stall(rsp, rdp);
 
 	/* Is the RCU core waiting for a quiescent state from this CPU? */
-	if (rdp->qs_pending) {
+	if (rdp->qs_pending && !rdp->passed_quiesc) {
 
 		/*
 		 * If force_quiescent_state() coming soon and this CPU
 		 * needs a quiescent state, and this is either RCU-sched
 		 * or RCU-bh, force a local reschedule.
 		 */
+		rdp->n_rp_qs_pending++;
 		if (!rdp->preemptable &&
 		    ULONG_CMP_LT(ACCESS_ONCE(rsp->jiffies_force_qs) - 1,
 				 jiffies))
 			set_need_resched();
-		rdp->n_rp_qs_pending++;
+	} else if (rdp->qs_pending && rdp->passed_quiesc) {
+		rdp->n_rp_report_qs++;
 		return 1;
 	}
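Note how the final hunk splits the old qs_pending test in two: a
quiescent state that is still outstanding only bumps n_rp_qs_pending
(and may force a local reschedule), whereas one that has already been
passed bumps the new n_rp_report_qs counter and returns 1, because
reporting it is real work for RCU_SOFTIRQ.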