2021-02-10  bcache: consider the fragmentation when updating the writeback rate  (dongdong tao; 4 files, -0/+73)
The current way of calculating the writeback rate only considers the dirty sectors. This usually works fine when fragmentation is not high, but it gives an unreasonably small rate when very few dirty sectors have consumed a lot of dirty buckets. In some cases the dirty buckets can reach CUTOFF_WRITEBACK_SYNC while the dirty data (sectors) has not even reached writeback_percent; the writeback rate will still be the minimum value (4k), so all writes get stuck in a non-writeback mode because of the slow writeback. We accelerate the rate in 3 stages with different aggressiveness: the first stage starts when the dirty-bucket percentage rises above BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW (50), the second above BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID (57), the third above BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH (64). By default the first stage tries to write back the amount of dirty data in one bucket (on average) in (1 / (dirty_buckets_percent - 50)) seconds, the second stage in (1 / (dirty_buckets_percent - 57)) * 100 milliseconds, and the third stage in (1 / (dirty_buckets_percent - 64)) milliseconds. The initial rate at each stage can be controlled by 3 configurable parameters, writeback_rate_fp_term_{low|mid|high}, which default to 1, 10 and 1000; the IO throughput these values aim for is described by the paragraph above. The reason I chose those defaults is based on testing and production data, in detail: A. In the low stage we are still a fair way from the 70 threshold, so we only want to give a little push by setting the term to 1. This means the initial rate will be 170 if the fragment is 6, calculated as bucket_size/fragment; this rate is very small, but still much more reasonable than the minimum 8. For a production bcache with an unheavy workload, if the cache device is bigger than 1 TB it may take hours to consume 1% of the buckets, so it is very possible to reclaim enough dirty buckets in this stage and avoid entering the next one. B. If the dirty-bucket ratio didn't turn around during the first stage, we enter the mid stage, which needs to be more aggressive, so I chose an initial rate 10 times that of the low stage, i.e. 1700 if the fragment is 6. This is a normal rate we usually see for a typical workload when writeback happens because of writeback_percent. C. If the dirty-bucket ratio didn't turn around during the low and mid stages, we enter the third stage, the last chance to turn around and avoid the horrible cutoff-writeback-sync issue; here we go 100 times more aggressive than the mid stage, i.e. 170000 as the initial rate if the fragment is 6. This is also inferred from production: one week of writeback-rate data from a production bcache with quite heavy workloads (writeback again triggered by writeback_percent) shows the highest-rate area around 100000 to 240000, so I believe this kind of aggressiveness is reasonable for production. It should also be mostly enough, because the hint tries to reclaim 1000 buckets per second, while that heavy production environment consumed 50 buckets per second on average over the week.
The option writeback_consider_fragment controls whether this feature is on or off; it is on by default. Lastly, the performance data for all the testing results, including data from the production environment, is here: https://docs.google.com/document/d/1AmbIEa_2MhB9bqhC3rfga9tp7n9YX9PLn0jSUxscVW0/edit?usp=sharing Signed-off-by: dongdong tao <dongdong.tao@canonical.com> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
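The staged boost described above reads roughly like the following sketch, modeled on the commit description (names follow the patch, but treat the exact hunk as illustrative):

    if (dc->writeback_consider_fragment &&
        c->gc_stats.in_use > BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW) {
            int64_t fragment = div_s64(dirty_buckets * bucket_size, dirty);
            int64_t fp_term, fps;

            /* pick the term for the stage we are currently in */
            if (c->gc_stats.in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID)
                    fp_term = dc->writeback_rate_fp_term_low *
                            (c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW);
            else if (c->gc_stats.in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH)
                    fp_term = dc->writeback_rate_fp_term_mid *
                            (c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID);
            else
                    fp_term = dc->writeback_rate_fp_term_high *
                            (c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH);

            /* average dirty sectors per dirty bucket, scaled by the term */
            fps = div_s64(dirty, dirty_buckets) * fp_term;

            /* only ever boost the rate, never lower it */
            if (fragment > 3 && fps > rate)
                    rate = fps;
    }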
2021-02-04  block: remove skd driver  (Damien Le Moal; 5 files, -4010/+0)
The STEC S1220 PCIe SSD cards are EOL since 2014 and not supported by the vendor anymore. As the skd driver for this SSD is starting to cause problems with improvements to the block layer, stop supporting it in newer kernel versions. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-04  Merge branch 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-5.12/drivers  (Jens Axboe; 1 file, -1/+1)
Pull MD fix from Song. * 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md: md/raid5: cast chunk_sectors to sector_t value
2021-02-04  Merge tag 'floppy-for-5.12' of https://github.com/evdenis/linux-floppy into for-5.12/drivers  (Jens Axboe; 1 file, -15/+15)
Pull floppy fix from Denis: "Floppy patch for 5.12 - O_NDELAY/O_NONBLOCK fix for floppy from Jiri Kosina. libblkid is using O_NONBLOCK when probing devices, which leads to pollution of the kernel log with error messages from the floppy driver. The driver also fails a mount unless it has first been opened without O_NONBLOCK at least once. The patch fixes these issues." Signed-off-by: Denis Efremov <efremov@linux.com> * tag 'floppy-for-5.12' of https://github.com/evdenis/linux-floppy: floppy: reintroduce O_NDELAY fix
2021-02-04  floppy: reintroduce O_NDELAY fix  (Jiri Kosina; 1 file, -15/+15)
This issue was originally fixed in 09954bad4 ("floppy: refactor open() flags handling"). That fix, however, as a side effect introduced an issue for open(O_ACCMODE), which is used for ioctl-only opens. I wrote a fix for that, but instead of it being merged, a full revert of 09954bad4 was performed, re-introducing the O_NDELAY / O_NONBLOCK issue, and it strikes again. This is a forward-port of the original fix to the current codebase; the original submission had the changelog below: ==== Commit 09954bad4 ("floppy: refactor open() flags handling"), as a side-effect, causes open(/dev/fdX, O_ACCMODE) to fail. It turns out that this is being used by setfdprm userspace for ioctl-only open(). Reintroduce the original behavior wrt !(FMODE_READ|FMODE_WRITE) modes, while still keeping the original O_NDELAY bug fixed. Link: https://lore.kernel.org/r/nycvar.YFH.7.76.2101221209060.5622@cbobk.fhfr.pm Cc: stable@vger.kernel.org Reported-by: Wim Osterholt <wim@djo.tudelft.nl> Tested-by: Wim Osterholt <wim@djo.tudelft.nl> Reported-and-tested-by: Kurt Garloff <kurt@garloff.de> Fixes: 09954bad4 ("floppy: refactor open() flags handling") Fixes: f2791e7ead ("Revert "floppy: refactor open() flags handling"") Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Denis Efremov <efremov@linux.com>
2021-02-03  md/raid5: cast chunk_sectors to sector_t value  (Guoqing Jiang; 1 file, -1/+1)
Currently, raid5 calculates dev_sectors from chunk_sectors without a proper cast. Since chunk_sectors is a 32-bit value while dev_sectors is a sector_t, the arithmetic can truncate on large arrays, which is problematic. Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com> Signed-off-by: Song Liu <songliubraving@fb.com>
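The fix is the usual pattern of widening the 32-bit operand before the arithmetic; something along these lines (the exact expression in the patch may differ):

    /* without the cast, the mask is computed in 32 bits and truncates */
    mddev->dev_sectors &= ~((sector_t)mddev->chunk_sectors - 1);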
2021-02-02  Merge tag 'nvme-5.21-2020-02-02' of git://git.infradead.org/nvme into for-5.12/drivers  (Jens Axboe; 13 files, -144/+243)
Pull NVMe updates from Christoph: "nvme updates for 5.12: - failed reconnect fixes (Chao Leng) - various tracing improvements (Michal Krakowiak, Johannes Thumshirn) - switch the nvmet-fc assoc_list to use RCU protection (Leonid Ravich) - resync the status codes with the latest spec (Max Gurtovoy) - minor nvme-tcp improvements (Sagi Grimberg) - various cleanups (Rikard Falkeborn, Minwoo Im, Chaitanya Kulkarni, Israel Rukshin)" * tag 'nvme-5.21-2020-02-02' of git://git.infradead.org/nvme: (22 commits) nvme-tcp: use cancel tagset helper for tear down nvme-rdma: use cancel tagset helper for tear down nvme-tcp: add clean action for failed reconnection nvme-rdma: add clean action for failed reconnection nvme-core: add cancel tagset helpers nvme-core: get rid of the extra space nvme: add tracing of zns commands nvme: parse format nvm command details when tracing nvme: update enumerations for status codes nvmet: add lba to sect conversion helpers nvmet: remove extra variable in identify ns nvmet: remove extra variable in id-desclist nvmet: remove extra variable in smart log nsid nvme: refactor ns->ctrl by request nvme-tcp: pass multipage bvec to request iov_iter nvme-tcp: get rid of unused helper function nvme-tcp: fix wrong setting of request iov_iter nvme: support command retry delay for admin command nvme: constify static attribute_group structs nvmet-fc: use RCU protection for assoc_list ...
2021-02-02  nvme-tcp: use cancel tagset helper for tear down  (Chao Leng; 1 file, -10/+2)
Use nvme_cancel_tagset and nvme_cancel_admin_tagset to clean up the tear-down code path. Signed-off-by: Chao Leng <lengchao@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvme-rdma: use cancel tagset helper for tear down  (Chao Leng; 1 file, -10/+2)
Use nvme_cancel_tagset and nvme_cancel_admin_tagset to clean up the tear-down code path. Signed-off-by: Chao Leng <lengchao@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvme-tcp: add clean action for failed reconnection  (Chao Leng; 1 file, -2/+16)
If reconnection fails after the I/O queues have been started, the queues are unquiesced and new requests continue to be delivered, but the reconnection error-handling path frees the queues directly without cancelling the suspended requests. A suspended request then times out and crashes through use-after-free of the queue. Add queue synchronization and cancel the suspended requests in the reconnection error handling. Signed-off-by: Chao Leng <lengchao@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvme-rdma: add clean action for failed reconnection  (Chao Leng; 1 file, -2/+16)
A crash happens when a failed reconnection is injected. If reconnection fails after the I/O queues have been started, the queues are unquiesced and new requests continue to be delivered, but the reconnection error-handling path frees the queues directly without cancelling the suspended requests. A suspended request then times out and crashes through use-after-free of the queue. Add queue synchronization and cancel the suspended requests in the reconnection error handling. Signed-off-by: Chao Leng <lengchao@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
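A sketch of the corrected error path after the I/O queues have been started (transport-specific calls shown for nvme-tcp; treat names and ordering as illustrative):

    if (ctrl->queue_count > 1) {
            nvme_stop_queues(ctrl);         /* quiesce the queues */
            nvme_sync_io_queues(ctrl);      /* wait for running timeout handlers */
            nvme_tcp_stop_io_queues(ctrl);
            nvme_cancel_tagset(ctrl);       /* fail the suspended requests */
            nvme_tcp_destroy_io_queues(ctrl, new);
    }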
2021-02-02  nvme-core: add cancel tagset helpers  (Chao Leng; 2 files, -0/+22)
Add nvme_cancel_tagset and nvme_cancel_admin_tagset for tear down and reconnection error handling. Signed-off-by: Chao Leng <lengchao@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
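The helpers plausibly look like this: iterate the busy tags, cancel each request, then wait for the cancelled requests to complete:

    void nvme_cancel_tagset(struct nvme_ctrl *ctrl)
    {
            if (ctrl->tagset) {
                    blk_mq_tagset_busy_iter(ctrl->tagset,
                                    nvme_cancel_request, ctrl);
                    blk_mq_tagset_wait_completed_request(ctrl->tagset);
            }
    }

    void nvme_cancel_admin_tagset(struct nvme_ctrl *ctrl)
    {
            if (ctrl->admin_tagset) {
                    blk_mq_tagset_busy_iter(ctrl->admin_tagset,
                                    nvme_cancel_request, ctrl);
                    blk_mq_tagset_wait_completed_request(ctrl->admin_tagset);
            }
    }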
2021-02-02  nvme-core: get rid of the extra space  (Chaitanya Kulkarni; 1 file, -1/+1)
Remove the extra space in nvme_free_cels() when calling the xa_for_each loop; the extra space is not common practice (drivers/infiniband/core/ does it too, for unclear reasons). Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvme: add tracing of zns commands  (Johannes Thumshirn; 2 files, -1/+39)
When support for the NVMe ZNS commands was merged, tracing of these was omitted. Add nvme_cmd_zone_mgmt_send, nvme_cmd_zone_mgmt_recv as well as nvme_cmd_zone_append to the nvme driver's tracing facility. Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvme: parse format nvm command details when tracing  (Michal Krakowiak; 1 file, -0/+19)
Add detailed parsing of the Format NVM admin command to make the trace log more consistent and human-readable. Signed-off-by: Michal Krakowiak <michal.krakowiak@intel.com> Acked-by: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvme: update enumerations for status codes  (Max Gurtovoy; 1 file, -4/+20)
All the updates are mentioned in the ratified NVMe 1.4 spec. Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvmet: add lba to sect conversion helpers  (Chaitanya Kulkarni; 2 files, -5/+13)
In this preparation patch, we add helpers to convert LBAs to sectors and sectors to LBAs. This is needed to eliminate code duplication in the ZBD backend. Use these helpers in the block device backend. Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
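With a namespace block size of 1 << blksize_shift bytes and 512-byte kernel sectors, the helpers reduce to shifts; a sketch along these lines:

    static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
    {
            /* device LBA -> 512-byte kernel sectors */
            return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
    }

    static inline __le64 nvmet_sect_to_lba(struct nvmet_ns *ns, sector_t sect)
    {
            return cpu_to_le64(sect >> (ns->blksize_shift - SECTOR_SHIFT));
    }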
2021-02-02  nvmet: remove extra variable in identify ns  (Chaitanya Kulkarni; 1 file, -16/+15)
Remove the extra local struct nvmet_ns variable in nvmet_execute_identify_ns(), since req already has an ns member that can be reused; this also eliminates the explicit call to nvmet_put_namespace(), which is already present in the request completion path. Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvmet: remove extra variable in id-desclist  (Chaitanya Kulkarni; 1 file, -11/+9)
Remove the extra local struct nvmet_ns variable in nvmet_execute_identify_desclist(), since req already has an ns member that can be reused; this also eliminates the explicit call to nvmet_put_namespace(), which is already present in the request completion path. Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvmet: remove extra variable in smart log nsid  (Chaitanya Kulkarni; 1 file, -11/+9)
Remove the extra local struct nvmet_ns variable in nvmet_get_smart_log_nsid(), since req already has an ns member that can be reused; this also eliminates the explicit call to nvmet_put_namespace(), which is already present in the request completion path. Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvme: refactor ns->ctrl by request  (Minwoo Im; 1 file, -3/+3)
For the current code in nvme_cleanup_cmd() we don't need the namespace instance, only the controller instance. The controller instance can be retrieved via the namespace instance, but it can also be accessed directly through the nvme_request instance attached to the request: ctrl = nvme_req(req)->ctrl; so we don't have to go from the request through the gendisk to the namespace. Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
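Before and after, as a sketch:

    /*
     * before: request -> gendisk -> namespace -> controller
     *      struct nvme_ns *ns = req->rq_disk->private_data;
     *      struct nvme_ctrl *ctrl = ns->ctrl;
     */

    /* after: the nvme_request embedded in the request knows its ctrl */
    struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;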
2021-02-02  nvme-tcp: pass multipage bvec to request iov_iter  (Sagi Grimberg; 1 file, -4/+9)
iov_iter uses the right helpers, so we should be able to pass in a multipage bvec. Right now the iov_iter is initialized with more segments than it needs, which doesn't fail because the iov_iter is capped by byte count, but it is better to use a full multipage bvec iter. Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvme-tcp: get rid of unused helper function  (Sagi Grimberg; 1 file, -5/+0)
Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvme-tcp: fix wrong setting of request iov_iter  (Sagi Grimberg; 1 file, -5/+2)
We might set the iov_iter direction wrong, which is harmless for this use-case, but get it right. Also this makes the code slightly cleaner. Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvme: support command retry delay for admin command  (Minwoo Im; 1 file, -3/+2)
The controller can request a delay before retrying a failed command by setting the Command Retry Delay (CRD) field in the Completion Queue Entry. Currently this feature is only applied to commands on the I/O queues, but not to commands on the admin queue. Retrieve the nvme_ctrl from the request so that no namespace is required, and apply the feature to all commands. Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
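A sketch of the retry path (the CRD field selects one of the controller's advertised delay times, in 100 ms units; treat details as illustrative):

    static void nvme_retry_req(struct request *req)
    {
            unsigned long delay = 0;
            u16 crd;

            /* CRD is a 2-bit field in the CQE status word */
            crd = (nvme_req(req)->status & NVME_SC_CRD) >> 11;
            if (crd)
                    delay = nvme_req(req)->ctrl->crdt[crd - 1] * 100;

            nvme_req(req)->retries++;
            blk_mq_requeue_request(req, false);
            blk_mq_delay_kick_requeue_list(req->q, delay);
    }

Going through nvme_req(req)->ctrl instead of the namespace is what makes this work for admin commands, which have no namespace.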
2021-02-02  nvme: constify static attribute_group structs  (Rikard Falkeborn; 3 files, -4/+4)
The only usage of these is to put their addresses in arrays of pointers to const attribute_groups. Make them const to allow the compiler to put them in read-only memory. Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
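For example (group and attribute names indicative):

    /* const allows the compiler to place the struct in read-only memory */
    static const struct attribute_group nvme_ns_id_attr_group = {
            .attrs          = nvme_ns_id_attrs,
            .is_visible     = nvme_ns_id_attrs_are_visible,
    };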
2021-02-02  nvmet-fc: use RCU protection for assoc_list  (Leonid Ravich; 1 file, -43/+38)
Searching assoc_list is protected by rcu_read_lock(), provided the list is not changed inline, in accordance with the RCU list rules. The queue array embedded in nvmet_fc_tgt_assoc is protected by rcu_read_lock() according to the RCU dereference/assign rules. Queue and assoc objects are freed after a grace period via call_rcu(). The tgtport lock is taken for changing assoc_list. Reviewed-by: Eldad Zinger <Eldad.Zinger@dell.com> Reviewed-by: Elad Grupi <Elad.Grupi@dell.com> Reviewed-by: James Smart <james.smart@broadcom.com> Signed-off-by: Leonid Ravich <Leonid.Ravich@emc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
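The reader side then becomes a classic RCU pattern; a sketch (names assumed):

    rcu_read_lock();
    list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
            if (assoc->association_id == association_id) {
                    /* only hand back the assoc if it is still alive */
                    if (kref_get_unless_zero(&assoc->ref))
                            ret = assoc;
                    break;
            }
    }
    rcu_read_unlock();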
2021-02-02  nvmet: Fix nvmet_is_port_enabled indentation  (Israel Rukshin; 1 file, -1/+1)
Remove extra tab. Signed-off-by: Israel Rukshin <israelr@nvidia.com> Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-02-02  nvmet: Use nvmet_is_port_enabled helper for pi_enable  (Israel Rukshin; 1 file, -3/+1)
Remove code duplication. Signed-off-by: Israel Rukshin <israelr@nvidia.com> Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-01-31  drbd: Avoid comma separated statements  (Joe Perches; 1 file, -2/+4)
Use semicolons and braces. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
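The pattern being removed, with hypothetical statements:

    /*
     * before: the comma operator chains both expressions into one
     * statement under the `if`; it works, but is fragile to edit:
     *
     *      if (drop_ref)
     *              kref_debug(obj),
     *              kref_put(&obj->kref, release_obj);
     */

    /* after: explicit braces and semicolons */
    if (drop_ref) {
            kref_debug(obj);
            kref_put(&obj->kref, release_obj);
    }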
2021-01-26  rsxx: remove redundant NULL check  (Yang Li; 1 file, -2/+1)
Fix the below warning reported by coccicheck: ./drivers/block/rsxx/dma.c:948:3-8: WARNING: NULL check before some freeing functions is not needed. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <abaci-bugfix@linux.alibaba.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-01-26  zram: fix NULL check before some freeing functions is not needed  (Tian Tao; 1 file, -2/+1)
Fix the below warning: /drivers/block/zram/zram_drv.c:534:2-8: WARNING: NULL check before some freeing functions is not needed. Signed-off-by: Tian Tao <tiantao6@hisilicon.com> Acked-by: Minchan Kim <minchan@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
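The pattern behind both of these coccicheck warnings (illustrative names):

    /*
     * before:
     *      if (zram->table)
     *              vfree(zram->table);
     */

    /* after: kfree()/vfree() already accept NULL */
    vfree(zram->table);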
2021-01-26  drbd: remove unused argument from drbd_request_prepare and __drbd_make_request  (Guoqing Jiang; 3 files, -10/+6)
We can remove start_jif since it is not used by drbd_request_prepare, and then remove it from __drbd_make_request as well. Cc: Philipp Reisner <philipp.reisner@linbit.com> Cc: Lars Ellenberg <lars.ellenberg@linbit.com> Cc: drbd-dev@lists.linbit.com Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-01-26  mtip32xx: prefer pcie_capability_read_word()  (Bjorn Helgaas; 1 file, -8/+3)
Replace pci_read_config_word() with pcie_capability_read_word(). pcie_capability_read_word() takes care of a few special cases when reading the PCIe capability. See 8c0d3a02c130 ("PCI: Add accessors for PCI Express Capability"). Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-01-26  mtip32xx: use PCI #defines instead of numbers  (Bjorn Helgaas; 1 file, -2/+2)
Use PCI #defines for PCIe Device Control register values instead of hard-coding bit positions. No functional change intended. Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
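Taken together, the two mtip32xx changes replace a manual capability lookup and magic masks with the accessor and named fields; an illustrative sketch (the exact values the driver programs may differ):

    u16 pci_cmd;

    /* the accessor locates the PCIe capability offset itself */
    pcie_capability_read_word(pdev, PCI_EXP_DEVCTL, &pci_cmd);

    /* named Device Control fields instead of hard-coded bit positions */
    pci_cmd &= ~PCI_EXP_DEVCTL_READRQ;
    pci_cmd |= PCI_EXP_DEVCTL_READRQ_4096B;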
2021-01-26  loop: scale loop device by introducing per device lock  (Pavel Tatashin; 2 files, -40/+54)
Currently, loop device has only one global lock: loop_ctl_mutex. This becomes hot in scenarios where many loop devices are used. Scale it by introducing per-device lock: lo_mutex that protects modifications of all fields in struct loop_device. Keep loop_ctl_mutex to protect global data: loop_index_idr, loop_lookup, loop_add. The new lock ordering requirement is that loop_ctl_mutex must be taken before lo_mutex. Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com> Reviewed-by: Tyler Hicks <tyhicks@linux.microsoft.com> Reviewed-by: Petr Vorel <pvorel@suse.cz> Signed-off-by: Jens Axboe <axboe@kernel.dk>
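A sketch of the resulting locking scheme (control-path pseudocode under the stated lock ordering):

    struct loop_device {
            /* ... */
            struct mutex    lo_mutex;       /* protects this device's fields */
    };

    /* lock ordering: loop_ctl_mutex first, then lo_mutex */
    mutex_lock(&loop_ctl_mutex);            /* global: loop_index_idr */
    lo = idr_find(&loop_index_idr, index);
    mutex_lock(&lo->lo_mutex);              /* per-device state */
    mutex_unlock(&loop_ctl_mutex);
    /* ... modify lo ... */
    mutex_unlock(&lo->lo_mutex);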
2021-01-26  bdev: Do not return EBUSY if bdev discard races with write  (Jan Kara; 1 file, -6/+4)
blkdev_fallocate() tries to detect whether a discard raced with an overlapping write by calling invalidate_inode_pages2_range(). However this check can give both false negatives (when writing using direct IO or when writeback already writes out the written pagecache range) and false positives (when write is not actually overlapping but ends in the same page when blocksize < pagesize). This actually causes issues for qemu which is getting confused by EBUSY errors. Fix the problem by removing this conflicting write detection since it is inherently racy and thus of little use anyway. Reported-by: Maxim Levitsky <mlevitsk@redhat.com> CC: "Darrick J. Wong" <darrick.wong@oracle.com> Link: https://lore.kernel.org/qemu-devel/20201111153913.41840-1-mlevitsk@redhat.com Signed-off-by: Jan Kara <jack@suse.cz> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-01-26  block: inherit BIO_REMAPPED when cloning bios  (Christoph Hellwig; 3 files, -0/+6)
Cloned bios can be used on the same device, in which case we need to inherit the BIO_REMAPPED flag to avoid a double partition remap. When the cloned bios are used on another device, bio_set_dev will clear the flag. Fixes: 309dca309fc3 ("block: store a block_device pointer in struct bio") Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
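Roughly what the clone path needs to do (a sketch):

    /* cloning onto the same bdev: keep the remap state in sync */
    bio->bi_bdev = bio_src->bi_bdev;
    if (bio_flagged(bio_src, BIO_REMAPPED))
            bio_set_flag(bio, BIO_REMAPPED);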
2021-01-26  bcache: use bio_set_dev to assign ->bi_bdev  (Christoph Hellwig; 1 file, -1/+1)
Always use the bio_set_dev helper to assign ->bi_bdev to make sure other state related to the device is uptodate. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-01-26  nvme: use bio_set_dev to assign ->bi_bdev  (Christoph Hellwig; 3 files, -4/+4)
Always use the bio_set_dev helper to assign ->bi_bdev to make sure other state related to the device is uptodate. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
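The substitution in both drivers, schematically:

    /*
     * before:
     *      bio->bi_bdev = bdev;
     */

    /* after: the helper also refreshes device-bound state,
     * e.g. clearing BIO_REMAPPED when the device changes */
    bio_set_dev(bio, bdev);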
2021-01-25  bfq: bfq_check_waker() should be static  (Jens Axboe; 1 file, -1/+2)
It's only used in the same file, so mark it static accordingly. Fixes: 71217df39dc6 ("block, bfq: make waker-queue detection more robust") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-01-25  block, bfq: make waker-queue detection more robust  (Paolo Valente; 2 files, -110/+108)
In the presence of many parallel I/O flows, the detection of waker bfq_queues suffers from false positives. This commit addresses this issue by making the filtering of actual wakers more selective. In more detail, a candidate waker must be found to meet waker requirements three times before being promoted to an actual waker. Tested-by: Jan Kara <jack@suse.cz> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
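A sketch of the three-strikes promotion (field names assumed from the patch):

    if (bfqq->tentative_waker_bfqq != tentative_waker) {
            /* new candidate: restart the count */
            bfqq->tentative_waker_bfqq = tentative_waker;
            bfqq->num_waker_detections = 1;
    } else if (++bfqq->num_waker_detections >= 3) {
            /* seen enough times: promote to actual waker */
            bfqq->waker_bfqq = tentative_waker;
            bfqq->tentative_waker_bfqq = NULL;
    }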
2021-01-25  block, bfq: save also injection state on queue merging  (Paolo Valente; 2 files, -0/+13)
To prevent injection information from being lost on bfq_queue merging, also the amount of service that a bfq_queue receives must be saved and restored when the bfq_queue is merged and split, respectively. Tested-by: Jan Kara <jack@suse.cz> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-01-25  block, bfq: save also weight-raised service on queue merging  (Paolo Valente; 2 files, -0/+3)
To prevent weight-raising information from being lost on bfq_queue merging, also the amount of service that a bfq_queue receives must be saved and restored when the bfq_queue is merged and split, respectively. Tested-by: Jan Kara <jack@suse.cz> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-01-25  block, bfq: fix switch back from soft-rt weight-raising  (Paolo Valente; 1 file, -2/+20)
A bfq_queue may happen to be deemed soft real-time while it is still enjoying interactive weight-raising. If this happens because of a false positive, the bfq_queue is likely to lose its soft real-time status soon. Upon losing that status, the bfq_queue must get back its interactive weight-raising if its interactive period is not over yet; but this case is not handled. This commit corrects the error. Tested-by: Jan Kara <jack@suse.cz> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-01-25  block, bfq: re-evaluate convenience of I/O plugging on rq arrivals  (Paolo Valente; 1 file, -5/+19)
Upon an I/O-dispatch attempt, BFQ may detect that it was better to plug I/O dispatch and wait for a new request to arrive for the currently in-service queue. But the arrival of a new request for an empty bfq_queue, and thus the switch of the bfq_queue from idle to busy, may cause the scenario to change and make plugging no longer needed for service guarantees, or no longer convenient for throughput. In this case, keeping I/O dispatch plugged would certainly lower throughput. To address this issue, this commit makes such a check and stops plugging I/O when doing so is better. Tested-by: Jan Kara <jack@suse.cz> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-01-25  block, bfq: replace mechanism for evaluating I/O intensity  (Paolo Valente; 2 files, -27/+52)
Some BFQ mechanisms make their decisions on a bfq_queue based also on whether the bfq_queue is I/O-bound. In this respect, the current logic for evaluating whether a bfq_queue is I/O-bound is rather rough. This commit replaces that logic with a more effective one: the new logic measures the percentage of time during which a bfq_queue is active, and marks the bfq_queue as I/O-bound if this percentage is above a fixed threshold. Tested-by: Jan Kara <jack@suse.cz> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
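In outline, the new evaluation amounts to an active-time ratio check (a sketch; names hypothetical):

    /* fraction of wall-clock time the queue had requests queued */
    if (div64_u64(bfqq->active_time * 100, total_time) > io_bound_threshold)
            bfq_mark_bfqq_IO_bound(bfqq);
    else
            bfq_clear_bfqq_IO_bound(bfqq);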
2021-01-25  block: skip bio_check_eod for partition-remapped bios  (Christoph Hellwig; 1 file, -5/+6)
When an already remapped bio is resubmitted (e.g. by blk_queue_split), bio_check_eod will compare the remapped bi_sector against the size of the partition, leading to spurious I/O failures. Skip the EOD check in this case. Fixes: 309dca309fc3 ("block: store a block_device pointer in struct bio") Reported-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
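A sketch of the submission-time check:

    /* a remapped bi_sector is absolute; comparing it against the
     * partition size would spuriously fail, so skip the EOD check */
    if (!bio_flagged(bio, BIO_REMAPPED)) {
            if (unlikely(bio_check_eod(bio)))
                    goto end_io;
    }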
2021-01-25  bio: don't copy bvec for direct IO  (Pavel Begunkov; 3 files, -39/+42)
The block layer spends quite a while in blkdev_direct_IO() copying and initialising the bio's bvec. However, if we've already got a bvec in the input iterator it might be reused in some cases, i.e. when the new ITER_BVEC_FLAG_FIXED flag is set. Simple tests show a considerable performance boost, and it also reduces memory footprint. Suggested-by: Matthew Wilcox <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
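The reuse path can be as simple as pointing the bio at the iterator's existing bvec array (a sketch, close in spirit to the patch):

    static int bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)
    {
            /* borrow the caller's bvec instead of copying it */
            bio->bi_vcnt = iter->nr_segs;
            bio->bi_io_vec = (struct bio_vec *)iter->bvec;
            bio->bi_iter.bi_bvec_done = iter->iov_offset;
            bio->bi_iter.bi_size = iter->count;
            iov_iter_advance(iter, iter->count);
            return 0;
    }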
2021-01-25  bio: add a helper calculating nr segments to alloc  (Pavel Begunkov; 3 files, -8/+18)
Add a helper function calculating the number of bvec segments we need to allocate to construct a bio. It doesn't change anything functionally, but will be used to not duplicate special cases in the future. Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
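A sketch of such a helper (name and the bvec special case assumed):

    static inline unsigned int bio_iov_vecs_to_alloc(struct iov_iter *iter,
                                                     int max_segs)
    {
            /* a reusable bvec iterator needs no allocation at all */
            if (iov_iter_is_bvec(iter))
                    return 0;
            return iov_iter_npages(iter, max_segs);
    }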