author		Josh Don <joshdon@google.com>	2022-06-08 19:55:15 -0700
committer	Peter Zijlstra <peterz@infradead.org>	2022-06-13 10:30:01 +0200
commit		792b9f65a568f48c50b3175536db9cde5a1edcc0 (patch)
tree		293519728cff7baa4abe70544f3143473e8b667c
parent		2ed81e765417ec2526f901366167a13294ef09ce (diff)
download	linux-792b9f65a568f48c50b3175536db9cde5a1edcc0.tar.bz2
sched: Allow newidle balancing to bail out of load_balance
While doing newidle load balancing, it is possible for new tasks to
arrive, such as with pending wakeups. newidle_balance() already accounts
for this by exiting the sched_domain load_balance() iteration if it
detects these cases. This is very important for minimizing wakeup
latency.
However, if we are already in load_balance(), we may stay there for a
while before returning to newidle_balance(). This is especially
pronounced if we enter a 'goto redo' loop in the LBF_ALL_PINNED case. A
very straightforward workaround to this is to adjust should_we_balance()
to bail out if we're doing a CPU_NEWLY_IDLE balance and new tasks are
detected.
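For context, the relevant control flow is roughly the following (a
simplified sketch of newidle_balance(), not the exact kernel source).
The bail-out in the domain loop already exists, but it is only
evaluated after each load_balance() call returns, which is why the
same condition is now also checked inside should_we_balance():

/* Simplified sketch of the newidle path; details omitted. */
static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
{
	int pulled_task = 0;

	for_each_domain(this_cpu, sd) {
		if (sd->flags & SD_BALANCE_NEWIDLE)
			/* may loop internally ('goto redo' when LBF_ALL_PINNED) */
			pulled_task = load_balance(this_cpu, this_rq, sd,
						   CPU_NEWLY_IDLE,
						   &continue_balancing);

		/* existing bail-out, only evaluated between domains */
		if (pulled_task || this_rq->nr_running > 0 ||
		    this_rq->ttwu_pending)
			break;
	}

	return pulled_task;
}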
This was tested with the following reproduction:
- two threads that take turns sleeping and waking each other up are
affined to two cores
- a large number of threads with 100% utilization are pinned to all
other cores
Without this patch, wakeup latency was ~120us for the pair of threads,
almost entirely spent in load_balance(). With this patch, wakeup latency
is ~6us.
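A minimal user-space sketch of this kind of reproduction is shown
below. It is hypothetical: the pipe-based ping-pong, the pinning of one
pair thread per core, and all names (pin_to_cpu, pair, spinner) are
illustrative, not the original test program. Each wakeup carries the
waker's timestamp, so the wakee can approximate the wakeup latency.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static int ab[2], ba[2];	/* pipe A->B and pipe B->A */

static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* One half of the ping-pong pair: wake the partner, then sleep until woken. */
static void *pair(void *arg)
{
	int is_a = (int)(long)arg;
	int rfd = is_a ? ba[0] : ab[0];
	int wfd = is_a ? ab[1] : ba[1];
	long long t_sent, t_recv, iters = 0;

	pin_to_cpu(is_a ? 0 : 1);
	for (;;) {
		t_sent = now_ns();
		write(wfd, &t_sent, sizeof(t_sent));	/* wake partner */
		read(rfd, &t_sent, sizeof(t_sent));	/* block (sleep) until woken */
		t_recv = now_ns();
		/* t_recv - t_sent approximates the wakeup latency */
		if (is_a && ++iters % 10000 == 0)
			printf("wakeup latency: %lld ns\n", t_recv - t_sent);
	}
	return NULL;
}

/* Pinned busy loop to keep every other core at 100% utilization. */
static void *spinner(void *arg)
{
	pin_to_cpu((int)(long)arg);
	for (;;)
		;
	return NULL;
}

int main(void)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	pthread_t t;
	long cpu;

	pipe(ab);
	pipe(ba);
	pthread_create(&t, NULL, pair, (void *)0L);
	pthread_create(&t, NULL, pair, (void *)1L);
	for (cpu = 2; cpu < ncpus; cpu++)
		pthread_create(&t, NULL, spinner, (void *)cpu);
	pause();
	return 0;
}

The pipe wakeups are just one way to recreate the pattern; the point is
that all other cores stay busy, so the newidle path on the pair's cores
has plenty of candidate groups to scan inside load_balance().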
Signed-off-by: Josh Don <joshdon@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20220609025515.2086253-1-joshdon@google.com
 kernel/sched/fair.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7d8ef01669a5..8bed75757e65 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9824,9 +9824,15 @@ static int should_we_balance(struct lb_env *env)
 	/*
 	 * In the newly idle case, we will allow all the CPUs
 	 * to do the newly idle load balance.
+	 *
+	 * However, we bail out if we already have tasks or a wakeup pending,
+	 * to optimize wakeup latency.
 	 */
-	if (env->idle == CPU_NEWLY_IDLE)
+	if (env->idle == CPU_NEWLY_IDLE) {
+		if (env->dst_rq->nr_running > 0 || env->dst_rq->ttwu_pending)
+			return 0;
 		return 1;
+	}
 
 	/* Try to find first idle CPU */
 	for_each_cpu_and(cpu, group_balance_mask(sg), env->cpus) {