author:    Jens Axboe <axboe@fb.com>  2017-02-03 09:48:28 -0700
committer: Jens Axboe <axboe@fb.com>  2017-02-03 09:48:28 -0700
commit:    e4d750c97794ea2bab793d4c518b1f4006364588 (patch)
tree:      8b351d08b1d81986402964a243268e70bb31a6a9 /block/mq-deadline.c
parent:    b973cb7e89fe3dcc2bc72c5b3aa7a3bfd9d0e6d5 (diff)
download:  linux-e4d750c97794ea2bab793d4c518b1f4006364588.tar.bz2
block: free merged request in the caller
If we end up doing a request-to-request merge when we have completed
a bio-to-request merge, we free the request from deep down in that
path. For blk-mq-sched, the merge path has to hold the appropriate
lock, but we don't need it for freeing the request. And in fact
holding the lock is problematic, since we are now calling the
mq sched put_rq_private() hook with the lock held. Other call paths
do not hold this lock.
Fix this inconsistency by ensuring that the caller frees a merged
request. Then we can do it outside of the lock, making it both more
efficient and fixing the blk-mq-sched problem of invoking parts of
the scheduler with an unknown lock state.
Reported-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
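The contract this change establishes can be read off the call site in the hunk below: blk_mq_sched_try_merge() now takes a third argument, a struct request **, through which the merge path hands back any request that a request-to-request merge has emptied, instead of freeing it itself under the caller's lock. A minimal sketch of that interface, inferred from this diff alone (the parameter name is an assumption; only the extra out-pointer and the bool return are confirmed by the call in dd_bio_merge()):

/*
 * Sketch inferred from the dd_bio_merge() call site below, not quoted
 * from the patch: if a bio-to-request merge triggers a follow-up
 * request-to-request merge, the now-empty request is handed back via
 * *merged_request (name assumed) rather than freed inside the merge
 * path. The caller frees it with blk_mq_free_request() after unlocking.
 */
bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
			    struct request **merged_request);

Callers therefore initialize the pointer to NULL, drop their scheduler lock, and only then free whatever was handed back, which is exactly the shape dd_bio_merge() takes in the diff below.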
Diffstat (limited to 'block/mq-deadline.c')
-rw-r--r--  block/mq-deadline.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 8f91f21e8663..d68d9c273a66 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -371,12 +371,16 @@ static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
 {
 	struct request_queue *q = hctx->queue;
 	struct deadline_data *dd = q->elevator->elevator_data;
-	int ret;
+	struct request *free = NULL;
+	bool ret;
 
 	spin_lock(&dd->lock);
-	ret = blk_mq_sched_try_merge(q, bio);
+	ret = blk_mq_sched_try_merge(q, bio, &free);
 	spin_unlock(&dd->lock);
 
+	if (free)
+		blk_mq_free_request(free);
+
 	return ret;
 }
 
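For readability, here is dd_bio_merge() as the hunk above leaves it, reconstructed from the diff (the lines come straight from the hunk; only the comment is editorial):

static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
{
	struct request_queue *q = hctx->queue;
	struct deadline_data *dd = q->elevator->elevator_data;
	struct request *free = NULL;
	bool ret;

	spin_lock(&dd->lock);
	ret = blk_mq_sched_try_merge(q, bio, &free);
	spin_unlock(&dd->lock);

	/* Freed outside dd->lock, so put_rq_private() is not invoked with it held. */
	if (free)
		blk_mq_free_request(free);

	return ret;
}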