author      David Jeffery <djeffery@redhat.com>    2022-01-31 15:33:37 -0500
committer   Jens Axboe <axboe@kernel.dk>            2022-02-16 19:46:20 -0700
commit      8f5fea65b06de1cc51d4fc23fb4d378d1abd6ed7 (patch)
tree        14b56b2ad90ec08017df5ee33e983113dc89102d /block
parent      24b45e6c25173abcf8d5e82285212b47f2b0f86b (diff)
download    linux-8f5fea65b06de1cc51d4fc23fb4d378d1abd6ed7.tar.bz2
blk-mq: avoid extending delays of active hctx from blk_mq_delay_run_hw_queues
When blk_mq_delay_run_hw_queues sets an hctx to run in the future, it can
reset the delay length for an already pending delayed work run_work. This
creates a scenario where multiple hctx may have their queues set to run,
but if one runs first and finds nothing to do, it can reset the delay of
another hctx and stall the other hctx's ability to run requests.
To avoid this I/O stall, when an hctx's run_work is already pending, leave it
untouched so it runs at its current designated time rather than extending its
delay. The work will still run, which keeps closed the race that
blk_mq_delay_run_hw_queues is needed to handle, while also avoiding the I/O
stall.
Signed-off-by: David Jeffery <djeffery@redhat.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20220131203337.GA17666@redhat
Signed-off-by: Jens Axboe <axboe@kernel.dk>
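For context on where the delay extension comes from: re-running an hctx in the
future goes through the kblockd delayed-work machinery, and a
mod_delayed_work()-style re-arm re-programs the timer even when run_work is
already queued, pushing an almost-due expiry back out to a full 'msecs'. Below
is a minimal sketch of that behaviour, not the verbatim kernel path; the
kblockd_wq parameter and the sketch_delay_run_hw_queue() name are stand-ins
for illustration (the real code goes through kblockd_mod_delayed_work_on()).

#include <linux/blk-mq.h>
#include <linux/jiffies.h>
#include <linux/workqueue.h>

/* Illustration only: an unconditional re-arm extends a pending delay. */
static void sketch_delay_run_hw_queue(struct workqueue_struct *kblockd_wq,
                                      struct blk_mq_hw_ctx *hctx,
                                      unsigned long msecs)
{
        /*
         * mod_delayed_work() re-programs the timer even if run_work is
         * already pending, so an hctx that was due to run shortly is
         * pushed back to a full 'msecs' delay again.
         */
        mod_delayed_work(kblockd_wq, &hctx->run_work, msecs_to_jiffies(msecs));
}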
Diffstat (limited to 'block')
-rw-r--r--    block/blk-mq.c    8
1 file changed, 8 insertions, 0 deletions
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7ca0b47246a6..a05ce7725031 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2180,6 +2180,14 @@ void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs)
 		if (blk_mq_hctx_stopped(hctx))
 			continue;
 		/*
+		 * If there is already a run_work pending, leave the
+		 * pending delay untouched. Otherwise, a hctx can stall
+		 * if another hctx is re-delaying the other's work
+		 * before the work executes.
+		 */
+		if (delayed_work_pending(&hctx->run_work))
+			continue;
+		/*
 		 * Dispatch from this hctx either if there's no hctx preferred
 		 * by IO scheduler or if it has requests that bypass the
 		 * scheduler.
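The fix relies on ordinary delayed-work semantics: if run_work is already
armed, its timer still fires at the earlier deadline and the work still
executes, so skipping the re-arm loses nothing. A minimal sketch of the
guarded-arm pattern in generic workqueue terms (maybe_delay_run() is a
hypothetical helper for illustration, not blk-mq code):

#include <linux/workqueue.h>

/* Illustration only: arm a delayed work without extending a pending delay. */
static void maybe_delay_run(struct delayed_work *dwork, unsigned long delay)
{
        /* Leave an already-armed timer alone; it will still expire. */
        if (delayed_work_pending(dwork))
                return;

        /* Otherwise schedule the work 'delay' jiffies from now. */
        queue_delayed_work(system_wq, dwork, delay);
}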