| author | Jens Axboe <axboe@kernel.dk> | 2018-11-27 17:02:25 -0700 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2018-11-29 10:11:56 -0700 |
| commit | d666ba98f849ad44c4405ecc2180390ebe80f4f9 (patch) | |
| tree | 90c90e15f230ca59ec094620b3e08f51af9d035f /block/blk-mq.c | |
| parent | ce5b009cff1961137127edf91f44effd0eec8ffd (diff) | |
| download | linux-d666ba98f849ad44c4405ecc2180390ebe80f4f9.tar.bz2 | |
blk-mq: add mq_ops->commit_rqs()
blk-mq passes information to the hardware about whether any given request is
the last one we will issue in this sequence. The point is that the hardware
can defer costly doorbell-type writes until the last request. But if we run
into errors while issuing a sequence of requests, we may never send the
request with bd->last == true set. For that case, we need a hook that tells
the hardware that nothing else is coming right now.
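To make the batching concrete, here is a minimal driver-side sketch (not part of this patch; the mydrv_* names and the per-queue struct are purely illustrative): ->queue_rq() posts each request but only performs the expensive doorbell write when bd->last is set, and the new ->commit_rqs() hook performs that same write when blk-mq has to retract its "more is coming" hint.

```c
#include <linux/blk-mq.h>
#include <linux/io.h>

/* Hypothetical per-hw-queue state; not from the patch. */
struct mydrv_queue {
	void __iomem *doorbell;		/* MMIO doorbell register */
	u32 sq_tail;			/* next free submission queue slot */
};

static void mydrv_ring_doorbell(struct mydrv_queue *mq)
{
	/* The costly MMIO write the driver wants to batch. */
	writel(mq->sq_tail, mq->doorbell);
}

static blk_status_t mydrv_queue_rq(struct blk_mq_hw_ctx *hctx,
				   const struct blk_mq_queue_data *bd)
{
	struct mydrv_queue *mq = hctx->driver_data;

	/* Post bd->rq to the submission ring (details omitted). */
	mq->sq_tail++;

	/* Ring the doorbell only for the last request of this batch. */
	if (bd->last)
		mydrv_ring_doorbell(mq);

	return BLK_STS_OK;
}

static void mydrv_commit_rqs(struct blk_mq_hw_ctx *hctx)
{
	struct mydrv_queue *mq = hctx->driver_data;

	/*
	 * blk-mq hinted that more requests were coming but could not
	 * deliver them; make everything posted so far visible now.
	 */
	mydrv_ring_doorbell(mq);
}
```

The hook only has to repeat the doorbell write; the submission entries posted by earlier ->queue_rq() calls are already in place.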
For failures returned by the driver's ->queue_rq() hook, the driver is
responsible for flushing any pending requests, if it uses bd->last to
optimize that part. This works as before; no changes there.
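Continuing the same hypothetical sketch, that failure case stays in the driver: if ->queue_rq() fails partway through a batch, earlier requests were queued with bd->last == false, so the driver flushes them itself before returning the error. mydrv_post_request() is an assumed helper that returns false when the submission ring is full.

```c
/* Variant of the hypothetical mydrv_queue_rq() above with the failure path. */
static blk_status_t mydrv_queue_rq(struct blk_mq_hw_ctx *hctx,
				   const struct blk_mq_queue_data *bd)
{
	struct mydrv_queue *mq = hctx->driver_data;

	if (!mydrv_post_request(mq, bd->rq)) {
		/*
		 * Earlier requests in this batch were queued with
		 * bd->last == false, so the hardware has not been
		 * notified yet; flush them before reporting the error.
		 */
		mydrv_ring_doorbell(mq);
		return BLK_STS_RESOURCE;
	}

	if (bd->last)
		mydrv_ring_doorbell(mq);

	return BLK_STS_OK;
}
```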
Reviewed-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-mq.c')
-rw-r--r-- | block/blk-mq.c | 16 |
1 file changed, 16 insertions, 0 deletions
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2a1a653a8054..d8534107bb6f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1259,6 +1259,14 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 	if (!list_empty(list)) {
 		bool needs_restart;
 
+		/*
+		 * If we didn't flush the entire list, we could have told
+		 * the driver there was more coming, but that turned out to
+		 * be a lie.
+		 */
+		if (q->mq_ops->commit_rqs)
+			q->mq_ops->commit_rqs(hctx);
+
 		spin_lock(&hctx->lock);
 		list_splice_init(list, &hctx->dispatch);
 		spin_unlock(&hctx->lock);
@@ -1865,6 +1873,14 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 			blk_mq_end_request(rq, ret);
 		}
 	}
+
+	/*
+	 * If we didn't flush the entire list, we could have told
+	 * the driver there was more coming, but that turned out to
+	 * be a lie.
+	 */
+	if (!list_empty(list) && hctx->queue->mq_ops->commit_rqs)
+		hctx->queue->mq_ops->commit_rqs(hctx);
 }
 
 static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
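A driver opts in by filling the new .commit_rqs member of its blk_mq_ops; as both hunks above show, blk-mq only calls the hook when it is non-NULL, so existing drivers are unaffected. With the hypothetical mydrv_* handlers sketched earlier:

```c
static const struct blk_mq_ops mydrv_mq_ops = {
	.queue_rq	= mydrv_queue_rq,
	.commit_rqs	= mydrv_commit_rqs,	/* optional hook added by this patch */
};
```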