author      Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>    2020-10-20 16:14:04 -0700
committer   Christoph Hellwig <hch@lst.de>                     2020-10-22 15:28:02 +0200
commit      150dfb6c834c9e0e92db7794530b09fd2b9f05c8 (patch)
tree        29a45c91bc37a770898cb81c61a45021be6319f1 /drivers/nvme
parent      5e063101ffacf7c14797f5185c58a967ca83c79f (diff)
download    linux-150dfb6c834c9e0e92db7794530b09fd2b9f05c8.tar.bz2
nvmet: don't use BLK_MQ_REQ_NOWAIT for passthru
By default, the passthru request is allocated with BLK_MQ_REQ_NOWAIT, so
the allocation returns an error in the following code path and we fail
the I/O :-
nvme_alloc_request()
 blk_mq_alloc_request()
  blk_mq_queue_enter()
   if (flag & BLK_MQ_REQ_NOWAIT)
        return -EBUSY; <-- return if busy.
On some controllers, using BLK_MQ_REQ_NOWAIT results in an I/O error even
though the controller is perfectly healthy and not in a degraded state.
Block layer request allocation allows us to wait instead of returning the
error immediately when the BLK_MQ_REQ_NOWAIT flag is not used. This has
been shown to fix the I/O error problem reported under a heavy random
write workload.
Remove the BLK_MQ_REQ_NOWAIT flag from the passthru request allocation,
which resolves this issue.
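For illustration only (this snippet is not part of the patch), the
difference between the two allocation modes, as seen from
nvmet_passthru_execute_cmd(), is roughly the following; the error-handling
comment is a simplified stand-in for the status/goto in the real code:

    /* Old: non-blocking allocation. If blk_mq_queue_enter() cannot enter
     * the queue immediately, allocation fails with -EBUSY and the command
     * is completed with NVME_SC_INTERNAL even though the controller is
     * healthy.
     */
    rq = nvme_alloc_request(q, req->cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
    if (IS_ERR(rq))     /* PTR_ERR(rq) == -EBUSY under heavy load */
        ...             /* real code sets NVME_SC_INTERNAL and bails out */

    /* New: a flags value of 0 lets the block layer wait to enter the
     * queue, so transient pressure from a heavy random write workload no
     * longer turns into an I/O error.
     */
    rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY);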
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'drivers/nvme')
-rw-r--r--   drivers/nvme/target/passthru.c   2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 0814cba8298a..8ee94f056898 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -244,7 +244,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
 		q = ns->queue;
 	}
 
-	rq = nvme_alloc_request(q, req->cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+	rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY);
 	if (IS_ERR(rq)) {
 		status = NVME_SC_INTERNAL;
 		goto out_put_ns;