author    | Bart Van Assche <bart.vanassche@sandisk.com> | 2016-08-31 15:17:49 -0700
committer | Mike Snitzer <snitzer@redhat.com>            | 2016-09-14 13:56:38 -0400
commit    | 3b785fbcf81c3533772c52b717f77293099498d3 (patch)
tree      | 365626385e69b8ce18ae85e0177e2f1ea61995e9
parent    | 8dc23658b7aaa8b6b0609c81c8ad75e98b612801 (diff)
download  | linux-3b785fbcf81c3533772c52b717f77293099498d3.tar.bz2
dm: mark request_queue dead before destroying the DM device
This prevents new requests from being queued while __dm_destroy() is in
progress.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
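Why this works: the block layer's submission paths test QUEUE_FLAG_DYING (via the blk_queue_dying() helper) and refuse new work once the flag is set, so marking the queue dying at the top of __dm_destroy() stops requests from being queued while teardown proceeds. The sketch below is only an illustration of that pattern, not code from this commit; example_submit() is a hypothetical caller.

```c
#include <linux/blkdev.h>
#include <linux/errno.h>

/*
 * Hypothetical illustration only: once QUEUE_FLAG_DYING is set on a
 * request_queue, callers that test blk_queue_dying() bail out instead
 * of queueing new work.
 */
static int example_submit(struct request_queue *q)
{
	if (blk_queue_dying(q))
		return -ENODEV;	/* device is being torn down */

	/* normal request submission would continue here */
	return 0;
}
```

Note that queue_flag_set() expects the queue lock to be held, which is why the hunk below takes and releases q->queue_lock around the call.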
-rw-r--r-- | drivers/md/dm.c | 5 |
1 file changed, 5 insertions, 0 deletions
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c0742129a9cb..0f2928b3136b 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1873,6 +1873,7 @@ EXPORT_SYMBOL_GPL(dm_device_name);
 
 static void __dm_destroy(struct mapped_device *md, bool wait)
 {
+	struct request_queue *q = dm_get_md_queue(md);
 	struct dm_table *map;
 	int srcu_idx;
 
@@ -1883,6 +1884,10 @@ static void __dm_destroy(struct mapped_device *md, bool wait)
 	set_bit(DMF_FREEING, &md->flags);
 	spin_unlock(&_minor_lock);
 
+	spin_lock_irq(q->queue_lock);
+	queue_flag_set(QUEUE_FLAG_DYING, q);
+	spin_unlock_irq(q->queue_lock);
+
 	if (dm_request_based(md) && md->kworker_task)
 		flush_kthread_worker(&md->kworker);