author:    Tianchen Ding <dtcccc@linux.alibaba.com>  2022-11-04 10:36:01 +0800
committer: Peter Zijlstra <peterz@infradead.org>     2022-11-16 10:13:05 +0100
commit:    d6962c4fe8f96f7d384d6489b6b5ab5bf3e35991
tree:      8e5af4329e2c841b6c13462dffc1cdfdb3f59439
parent:    52b33d87b9197c51e8ffdc61873739d90dd0a16f
sched: Clear ttwu_pending after enqueue_task()
We found a long tail latency in schbench when m*t is close to nr_cpus
(e.g., "schbench -m 2 -t 16" on a machine with 32 cpus).

This is because when the wakee cpu is idle, rq->ttwu_pending is cleared
too early, and idle_cpu() will return true until the wakee task is
enqueued. This misleads the waker when selecting an idle cpu, and can
wake multiple worker threads on the same wakee cpu. The situation is
enlarged by commit f3dd3f674555 ("sched: Remove the limitation of
WF_ON_CPU on wakelist if wakee cpu is idle") because it tends to use
the wakelist.

Here is the result of "schbench -m 2 -t 16" on a VM with 32 vcpus
(Intel(R) Xeon(R) Platinum 8369B).

Latency percentiles (usec):
                  base    base+revert_f3dd3f674555    base+this_patch
  50.0000th:         9                          13                  9
  75.0000th:        12                          19                 12
  90.0000th:        15                          22                 15
  95.0000th:        18                          24                 17
 *99.0000th:        27                          31                 24
  99.5000th:      3364                          33                 27
  99.9000th:     12560                          36                 30

We also tested on unixbench and hackbench, and saw no performance
change.

Signed-off-by: Tianchen Ding <dtcccc@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Link: https://lkml.kernel.org/r/20221104023601.12844-1-dtcccc@linux.alibaba.com
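For illustration only, here is an abridged sketch (not the verbatim
kernel diff) of the idea behind the fix in
kernel/sched/core.c:sched_ttwu_pending(): rq->ttwu_pending is cleared
only after the pending wakee tasks have been enqueued, so idle_cpu()
cannot observe the cpu as idle while a wakeup is still in flight.
Details and error handling are omitted.

/* Abridged sketch of sched_ttwu_pending() with the reordered clear. */
void sched_ttwu_pending(void *arg)
{
	struct llist_node *llist = arg;
	struct rq *rq = this_rq();
	struct task_struct *p, *t;
	struct rq_flags rf;

	if (!llist)
		return;

	rq_lock_irqsave(rq, &rf);
	update_rq_clock(rq);

	/* Enqueue every task queued for remote wakeup on this cpu. */
	llist_for_each_entry_safe(p, t, llist, wake_entry.llist)
		ttwu_do_activate(rq, p,
				 p->sched_remote_wakeup ? WF_MIGRATED : 0, &rf);

	/*
	 * Clear ttwu_pending only after the wakee task(s) are enqueued.
	 * Clearing it before the loop (the old behaviour) opens a window
	 * in which idle_cpu() reports this cpu as idle even though a
	 * wakeup is still pending, so the waker may stack more tasks here.
	 */
	WRITE_ONCE(rq->ttwu_pending, 0);
	rq_unlock_irqrestore(rq, &rf);
}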