author		Bart Van Assche <bart.vanassche@wdc.com>	2018-03-19 11:46:13 -0700
committer	Jens Axboe <axboe@kernel.dk>			2018-03-19 12:50:10 -0600
commit		818e0fa293ca836eba515615c64680ea916fd7cd (patch)
tree		e3f01a690b1d58a1e316b9f7588c04a43a9a972c
parent		5f2b18ec8e1643410a2369f06888951cdedea0bf (diff)
download	linux-818e0fa293ca836eba515615c64680ea916fd7cd.tar.bz2
block: Change a rcu_read_{lock,unlock}_sched() pair into rcu_read_{lock,unlock}()
scsi_device_quiesce() uses synchronize_rcu(), following the approach
explained in https://lwn.net/Articles/573497/, to guarantee that the
effect of blk_set_preempt_only() is visible to percpu_ref_tryget()
calls that occur after the queue is unfrozen. The RCU read lock and
unlock calls in blk_queue_enter() form a pair with the
synchronize_rcu() call in scsi_device_quiesce(), so both functions
must use the same RCU flavor: either regular RCU or RCU-sched. Since
neither the RCU-protected code in blk_queue_enter() nor
blk_queue_usage_counter_release() sleeps, regular RCU protection is
sufficient. Note: scsi_device_quiesce() does not have to be modified
since it already uses synchronize_rcu().
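
To make the pairing concrete, here is a condensed sketch of the two
sides; it is not the actual kernel source. The read side is modelled
on blk_queue_enter() after this patch, the write side paraphrases the
quiesce sequence described above. The helpers blk_queue_preempt_only(),
blk_mq_freeze_queue() and blk_mq_unfreeze_queue() are assumed from the
kernel of this era; error handling and the freeze/wait logic are
omitted.

/* Read side, modelled on blk_queue_enter() after this patch. */
static bool enter_queue(struct request_queue *q, bool preempt)
{
	bool success = false;

	rcu_read_lock();
	if (percpu_ref_tryget_live(&q->q_usage_counter)) {
		/*
		 * PREEMPT_ONLY is published before the writer's grace
		 * period, so a reader that took the reference either sees
		 * the flag or completes before synchronize_rcu() returns.
		 */
		if (preempt || !blk_queue_preempt_only(q))
			success = true;
		else
			percpu_ref_put(&q->q_usage_counter);
	}
	rcu_read_unlock();

	return success;
}

/* Write side, paraphrasing the quiesce sequence described above. */
static void quiesce_queue(struct request_queue *q)
{
	blk_set_preempt_only(q);
	blk_mq_freeze_queue(q);
	/*
	 * Wait for every rcu_read_lock() section that may have started
	 * before PREEMPT_ONLY became visible; this pairs with the
	 * read-side critical section above.
	 */
	synchronize_rcu();
	blk_mq_unfreeze_queue(q);
}

Because both sides now use the regular flavor, any reader that slipped
in between the flag store and the grace period is guaranteed to have
finished before the queue is unfrozen.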
Reported-by: Tejun Heo <tj@kernel.org>
Fixes: 3a0a529971ec ("block, scsi: Make SCSI quiesce and resume work reliably")
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Oleksandr Natalenko <oleksandr@natalenko.name>
Cc: Martin Steigerwald <martin@lichtvoll.de>
Cc: stable@vger.kernel.org # v4.15
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-rw-r--r--	block/blk-core.c | 4
1 file changed, 2 insertions, 2 deletions
diff --git a/block/blk-core.c b/block/blk-core.c
index 5e88c579e896..a0f675f84f86 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -917,7 +917,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 	bool success = false;
 	int ret;
 
-	rcu_read_lock_sched();
+	rcu_read_lock();
 	if (percpu_ref_tryget_live(&q->q_usage_counter)) {
 		/*
 		 * The code that sets the PREEMPT_ONLY flag is
@@ -930,7 +930,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 			percpu_ref_put(&q->q_usage_counter);
 		}
 	}
-	rcu_read_unlock_sched();
+	rcu_read_unlock();
 
 	if (success)
 		return 0;
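
As a design note, the rule this patch restores can be summarised as
follows. This is an illustrative comment, not kernel source;
synchronize_sched() is the grace-period primitive that would have been
required had the read side kept the sched flavor.

/*
 * Read-side primitives                             Matching grace period
 * rcu_read_lock()/rcu_read_unlock()                synchronize_rcu()
 * rcu_read_lock_sched()/rcu_read_unlock_sched()    synchronize_sched()
 *
 * scsi_device_quiesce() issues synchronize_rcu(), so blk_queue_enter()
 * must use the regular read-side primitives; mixing the two flavors
 * provides no ordering guarantee on kernels of this era.
 */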