author     Christoph Hellwig <hch@lst.de>  2019-06-05 21:08:27 +0200
committer  Jens Axboe <axboe@kernel.dk>    2019-06-05 13:18:39 -0600
commit     cf1db7fc8c2d31222701bd5c01b9cbaf89d8e7ce
tree       f44f4db009d754ea70376bf3b8a75f8c7594b3c8
parent     bb6f59af309c69643b6b07d9372c01a1cc0792e7
mmc: also set max_segment_size in the device
If we only set the max_segment_size on the queue, an IOMMU merge might
create bigger segments again, so limit the IOMMU merges as well.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
 drivers/mmc/core/queue.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index b5b9c6142f08..92900a095796 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -377,6 +377,8 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
 	blk_queue_max_segment_size(mq->queue,
 			round_down(host->max_seg_size, block_size));
 
+	dma_set_max_seg_size(mmc_dev(host), queue_max_segment_size(mq->queue));
+
 	INIT_WORK(&mq->recovery_work, mmc_mq_recovery_handler);
 	INIT_WORK(&mq->complete_work, mmc_blk_mq_complete_work);