|
vscsi->num_queues counts only the request virtqueues; it does not
include the control and event virtqueues. It is therefore wrong to
subtract VIRTIO_SCSI_VQ_BASE from vscsi->num_queues.
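As a rough sketch of the arithmetic (simplified; set_affinity() below is
only a placeholder for the per-vq affinity call, not a real helper):

    /* vscsi->num_queues already excludes the control and event virtqueues,
     * so the loop over the request vqs must not subtract the base again. */

    /* wrong: trims the loop by the control/event vqs a second time */
    for (i = 0; i < vscsi->num_queues - VIRTIO_SCSI_VQ_BASE; i++)
            set_affinity(&vscsi->req_vqs[i], i);

    /* right: one iteration per request virtqueue */
    for (i = 0; i < vscsi->num_queues; i++)
            set_affinity(&vscsi->req_vqs[i], i);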
This patch fixes the following panic.
(qemu) device_del scsi0
BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
IP: [<ffffffff8179b29f>] __virtscsi_set_affinity+0x6f/0x120
PGD 0
Oops: 0000 [#1] SMP
Modules linked in:
CPU: 0 PID: 659 Comm: kworker/0:1 Not tainted 3.11.0-rc2+ #1172
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
Workqueue: kacpi_hotplug _handle_hotplug_event_func
task: ffff88007bee1cc0 ti: ffff88007bfe4000 task.ti: ffff88007bfe4000
RIP: 0010:[<ffffffff8179b29f>] [<ffffffff8179b29f>] __virtscsi_set_affinity+0x6f/0x120
RSP: 0018:ffff88007bfe5a38 EFLAGS: 00010202
RAX: 0000000000000010 RBX: ffff880077fd0d28 RCX: 0000000000000050
RDX: 0000000000000000 RSI: 0000000000000246 RDI: 0000000000000000
RBP: ffff88007bfe5a58 R08: ffff880077f6ff00 R09: 0000000000000001
R10: ffffffff8143e673 R11: 0000000000000001 R12: 0000000000000001
R13: ffff880077fd0800 R14: 0000000000000000 R15: ffff88007bf489b0
FS: 0000000000000000(0000) GS:ffff88007ea00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000020 CR3: 0000000079f8b000 CR4: 00000000000006f0
Stack:
ffff880077fd0d28 0000000000000000 ffff880077fd0800 0000000000000008
ffff88007bfe5a78 ffffffff8179b37d ffff88007bccc800 ffff88007bccc800
ffff88007bfe5a98 ffffffff8179b3b6 ffff88007bccc800 ffff880077fd0d28
Call Trace:
[<ffffffff8179b37d>] virtscsi_set_affinity+0x2d/0x40
[<ffffffff8179b3b6>] virtscsi_remove_vqs+0x26/0x50
[<ffffffff8179c7d2>] virtscsi_remove+0x82/0xa0
[<ffffffff814cb6b2>] virtio_dev_remove+0x22/0x70
[<ffffffff8167ca49>] __device_release_driver+0x69/0xd0
[<ffffffff8167cb9d>] device_release_driver+0x2d/0x40
[<ffffffff8167bb96>] bus_remove_device+0x116/0x150
[<ffffffff81679936>] device_del+0x126/0x1e0
[<ffffffff81679a06>] device_unregister+0x16/0x30
[<ffffffff814cb889>] unregister_virtio_device+0x19/0x30
[<ffffffff814cdad6>] virtio_pci_remove+0x36/0x80
[<ffffffff81464ae7>] pci_device_remove+0x37/0x70
[<ffffffff8167ca49>] __device_release_driver+0x69/0xd0
[<ffffffff8167cb9d>] device_release_driver+0x2d/0x40
[<ffffffff8167bb96>] bus_remove_device+0x116/0x150
[<ffffffff81679936>] device_del+0x126/0x1e0
[<ffffffff8145edfc>] pci_stop_bus_device+0x9c/0xb0
[<ffffffff8145f036>] pci_stop_and_remove_bus_device+0x16/0x30
[<ffffffff81474a9e>] acpiphp_disable_slot+0x8e/0x150
[<ffffffff81474f6a>] hotplug_event_func+0xba/0x1a0
[<ffffffff814906c8>] ? acpi_os_release_object+0xe/0x12
[<ffffffff81475911>] _handle_hotplug_event_func+0x31/0x70
[<ffffffff810b5333>] process_one_work+0x183/0x500
[<ffffffff810b66e2>] worker_thread+0x122/0x400
[<ffffffff810b65c0>] ? manage_workers+0x2d0/0x2d0
[<ffffffff810bc5de>] kthread+0xce/0xe0
[<ffffffff810bc510>] ? kthread_freezable_should_stop+0x70/0x70
[<ffffffff81ca045c>] ret_from_fork+0x7c/0xb0
[<ffffffff810bc510>] ? kthread_freezable_should_stop+0x70/0x70
Code: 01 00 00 00 74 59 45 31 e4 83 bb c8 01 00 00 02 74 46 66 2e 0f 1f 84 00 00 00 00 00 49 63 c4 48 c1 e0 04 48 8b bc 03 10 02 00 00 <48> 8b 47 20 48 8b 80 d0 01 00 00 48 8b 40 50 48 85 c0 74 07 be
RIP [<ffffffff8179b29f>] __virtscsi_set_affinity+0x6f/0x120
RSP <ffff88007bfe5a38>
CR2: 0000000000000020
---[ end trace 99679331a3775f48 ]---
CC: stable@vger.kernel.org
Signed-off-by: Asias He <asias@redhat.com>
Reviewed-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Add a hot-CPU notifier to reset the request virtqueue affinity
when doing CPU hotplug.
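A sketch of the intended shape (field and helper names are illustrative;
the API shown is the hotcpu notifier available in kernels of this era):

    static int virtscsi_cpu_callback(struct notifier_block *nfb,
                                     unsigned long action, void *hcpu)
    {
            struct virtio_scsi *vscsi =
                    container_of(nfb, struct virtio_scsi, nb);

            switch (action & ~CPU_TASKS_FROZEN) {
            case CPU_ONLINE:
            case CPU_DEAD:
                    /* spread the request vqs over the new set of CPUs */
                    __virtscsi_set_affinity(vscsi, true);
                    break;
            default:
                    break;
            }
            return NOTIFY_OK;
    }

    /* probe: */
    vscsi->nb.notifier_call = virtscsi_cpu_callback;
    register_hotcpu_notifier(&vscsi->nb);

    /* remove: */
    unregister_hotcpu_notifier(&vscsi->nb);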
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Reviewed-by: Asias He <asias@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This patch adds queue steering to virtio-scsi. When a target is sent
multiple requests, we always drive them to the same queue so that FIFO
processing order is kept. However, if a target was idle, we can choose
a queue arbitrarily. In this case the queue is chosen according to the
current VCPU, so the driver expects the number of request queues to be
equal to the number of VCPUs. This makes it easy and fast to select
the queue, and also lets the driver optimize the IRQ affinity for the
virtqueues (each virtqueue's affinity is set to the CPU that "owns"
the queue).
The speedup comes from improving cache locality and giving CPU affinity
to the virtqueues, which is why this scheme was selected. Assuming that
the thread that is sending requests to the device is I/O-bound, it is
likely to be sleeping at the time the ISR is executed, and thus executing
the ISR on the same processor that sent the requests is cheap.
However, the kernel will not execute the ISR on the "best" processor
unless you explicitly set the affinity. This is because in practice
you will have many such I/O-bound processes and thus many otherwise
idle processors. Then the kernel will execute the ISR on a random
processor, rather than the one that is sending requests to the device.
The alternative to per-CPU virtqueues is per-target virtqueues. To
achieve the same locality, we could dynamically choose the virtqueue's
affinity based on the CPU of the last task that sent a request. This
is less appealing because we do not set the affinity directly; we only
provide a hint to the irqbalance daemon running in userspace. Dynamically
changing the affinity only works if userspace applies the hint
fast enough.
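A simplified sketch of the steering rule described above (locking and
field names are illustrative, not the exact driver source):

    static struct virtio_scsi_vq *pick_vq(struct virtio_scsi *vscsi,
                                          struct virtio_scsi_target_state *tgt)
    {
            struct virtio_scsi_vq *vq;
            unsigned long flags;
            u32 queue_num;

            spin_lock_irqsave(&tgt->tgt_lock, flags);

            if (tgt->reqs++ != 0) {
                    /* target busy: keep FIFO order on its current queue */
                    vq = tgt->req_vq;
            } else {
                    /* target was idle: pick the current CPU's queue */
                    queue_num = smp_processor_id() % vscsi->num_queues;
                    vq = tgt->req_vq = &vscsi->req_vqs[queue_num];
            }

            spin_unlock_irqrestore(&tgt->tgt_lock, flags);
            return vq;      /* tgt->reqs is decremented on completion */
    }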
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Reviewed-by: Asias He <asias@redhat.com>
Tested-by: Venkatesh Srinivas <venkateshs@google.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Avoid duplicated code in all of the callers.
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Reviewed-by: Asias He <asias@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This will be needed soon in order to retrieve the per-target
struct.
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Reviewed-by: Asias He <asias@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
virtio_scsi_target_state is now empty. We will find new uses for it in
the next few patches, so this patch does not drop it completely.
As James suggested, we use the target_alloc and target_destroy entries
in the host template to allocate and destroy the virtio_scsi_target_state
of each target, attaching the struct to scsi_target->hostdata. We can
then get at it from the sdev with scsi_target(sdev)->hostdata.
There is no more messing around with fixed-size arrays and bulk memory
allocation, and no need to pass the maximum number of targets as a
parameter, because everything now happens dynamically.
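In sketch form (error paths trimmed, names as used in the text above):

    static int virtscsi_target_alloc(struct scsi_target *starget)
    {
            struct virtio_scsi_target_state *tgt =
                    kzalloc(sizeof(*tgt), GFP_KERNEL);
            if (!tgt)
                    return -ENOMEM;

            starget->hostdata = tgt;        /* per-target state lives here */
            return 0;
    }

    static void virtscsi_target_destroy(struct scsi_target *starget)
    {
            kfree(starget->hostdata);
    }

    /* in the host template:
     *         .target_alloc   = virtscsi_target_alloc,
     *         .target_destroy = virtscsi_target_destroy,
     * and retrieval from a scsi_device:
     *         struct virtio_scsi_target_state *tgt =
     *                 scsi_target(sdev)->hostdata;
     */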
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Reviewed-by: Asias He <asias@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
It's a bit clearer, and add_buf is going away.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Asias He <asias@redhat.com>
|
|
Using the new virtqueue_add_sgs function lets us simplify the queueing
path. In particular, all data protected by the tgt_lock is just gone
(multiqueue will find a new use for the lock).
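For reference, the shape of the new call (the buffer and sg names here
are illustrative):

    struct scatterlist req, resp, *sgs[4];
    unsigned int out_num = 0, in_num = 0;
    int err;

    /* readable parts of the request (driver -> device) */
    sg_init_one(&req, &cmd->req.cmd, sizeof(cmd->req.cmd));
    sgs[out_num++] = &req;
    if (out_sgl)
            sgs[out_num++] = out_sgl;          /* data-out payload, if any */

    /* writable parts (device -> driver) */
    sg_init_one(&resp, &cmd->resp.cmd, sizeof(cmd->resp.cmd));
    sgs[out_num + in_num++] = &resp;
    if (in_sgl)
            sgs[out_num + in_num++] = in_sgl;  /* data-in payload, if any */

    err = virtqueue_add_sgs(vq, sgs, out_num, in_num, cmd, GFP_ATOMIC);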
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Asias He <asias@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Convert the virtio-scsi driver to use pr_err() instead of printk().
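The conversion is mechanical; for example (message text illustrative):

    /* before */
    printk(KERN_ERR "virtio_scsi: unhandled event\n");

    /* after */
    pr_err("virtio_scsi: unhandled event\n");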
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
CONFIG_HOTPLUG is going away as an option. As a result, the __dev*
markings need to be removed.
This change removes the use of __devinit, __devexit_p, __devinitdata,
__devinitconst, and __devexit from these drivers.
Based on patches originally written by Bill Pemberton, but redone by me
in order to handle some of the coding style issues better, by hand.
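For virtio-scsi the change has roughly this shape (illustrative
declarations, not the full diff):

    /* before: init/exit section annotations */
    static int __devinit virtscsi_probe(struct virtio_device *vdev);
    static void __devexit virtscsi_remove(struct virtio_device *vdev);
    /* ... and .remove = __devexit_p(virtscsi_remove) in the driver struct */

    /* after: plain functions, and the __devexit_p() wrapper goes away too */
    static int virtscsi_probe(struct virtio_device *vdev);
    static void virtscsi_remove(struct virtio_device *vdev);
    /* ... and .remove = virtscsi_remove */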
Cc: Bill Pemberton <wfp5p@virginia.edu>
Cc: Adam Radford <linuxraid@lsi.com>
Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Merge tag 'virtio-next-for-linus' of
git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux
Pull virtio update from Rusty Russell:
"Some nice cleanups, and even a patch my wife did as a "live" demo for
Latinoware 2012.
There's a slightly non-trivial merge in virtio-net, as we cleaned up
the virtio add_buf interface while DaveM accepted the mq virtio-net
patches."
* tag 'virtio-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (27 commits)
virtio_console: Add support for remoteproc serial
virtio_console: Merge struct buffer_token into struct port_buffer
virtio: add drv_to_virtio to make code clearly
virtio: use dev_to_virtio wrapper in virtio
virtio-mmio: Fix irq parsing in command line parameter
virtio_console: Free buffers from out-queue upon close
virtio: Convert dev_printk(KERN_<LEVEL> to dev_<level>(
virtio_console: Use kmalloc instead of kzalloc
virtio_console: Free buffer if splice fails
virtio: tools: make it clear that virtqueue_add_buf() no longer returns > 0
virtio: scsi: make it clear that virtqueue_add_buf() no longer returns > 0
virtio: rpmsg: make it clear that virtqueue_add_buf() no longer returns > 0
virtio: net: make it clear that virtqueue_add_buf() no longer returns > 0
virtio: console: make it clear that virtqueue_add_buf() no longer returns > 0
virtio: make virtqueue_add_buf() returning 0 on success, not capacity.
virtio: console: don't rely on virtqueue_add_buf() returning capacity.
virtio_net: don't rely on virtqueue_add_buf() returning capacity.
virtio-net: remove unused skb_vnet_hdr->num_sg field
virtio-net: correct capacity math on ring full
virtio: move queue_index and num_free fields into core struct virtqueue.
...
|
|
We simplified virtqueue_add_buf(), so make that clear in the callers.
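Concretely, callers now treat the return value as a plain 0/negative
error code (sketch):

    err = virtqueue_add_buf(vq, sg, out_num, in_num, data, GFP_ATOMIC);
    if (err < 0)                    /* e.g. -ENOSPC when the ring is full */
            return err;
    /* 0 simply means success; no capacity bookkeeping on the side */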
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
virtscsi_queuecommand was leaking memory when the virtio queue was full.
Tested: Guest operates correctly even with very small queue sizes, validated
we're not leaking kmalloc-192 sized allocations anymore.
Signed-off-by: Eric Northup <digitaleric@google.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
Support the LUN parameter change event. Currently, the host fires this event
when the capacity of a disk is changed from the virtual machine monitor.
The resize then appears in the kernel log like this:
sd 0:0:0:0: [sda] 46137344 512-byte logical blocks: (23.6 GB/22.0 GiB)
sda: detected capacity change from 22548578304 to 23622320128
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
virtio-scsi needs to report LUNs greater than 256 using the "flat"
format. Because the Linux SCSI layer just maps the SCSI LUN to
a u32, without any parsing, these end up in the range from 16640
to 32767. Fix max_lun to account for the possibility that logical
unit numbers are encoded with the "flat" format.
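The arithmetic behind those numbers, for reference (the helper name is
made up for illustration):

    /* SAM "flat" addressing puts 01b in the top two bits of the first LUN
     * byte and the LUN itself in the remaining 14 bits.  The SCSI layer
     * packs the first two LUN bytes into a u32, so:
     */
    static u32 flat_encoded(u32 lun)
    {
            return 0x4000 | lun;            /* valid for lun <= 16383 */
    }

    /* flat_encoded(256)   == 0x4100 == 16640
     * flat_encoded(16383) == 0x7fff == 32767
     * hence max_lun must cover the 0x4000-biased range, not just 0..255
     */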
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
The sg struct is used without being initialized, which breaks
when CONFIG_DEBUG_SG is enabled.
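The distinction, in sketch form (buf/len stand for the buffer being queued):

    struct scatterlist sg;

    /* sg_set_buf() only fills in page/offset/length; the entry's debug
     * magic and end marker are left untouched, which CONFIG_DEBUG_SG
     * catches:
     *         sg_set_buf(&sg, buf, len);
     * sg_init_one() zeroes the entry, sets the magic and marks it as the
     * last element before filling it in:
     */
    sg_init_one(&sg, buf, len);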
Cc: stable@vger.kernel.org
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
When the commands below are used to write data to a virtio-scsi LUN of a
32-bit QEMU guest with 1G of physical memory (qemu -m 1024), QEMU crashes.
# sudo mkfs.ext4 /dev/sdb (/dev/sdb is the virtio-scsi LUN.)
# sudo mount /dev/sdb /mnt
# dd if=/dev/zero of=/mnt/file bs=1M count=1024
In the current implementation, sg_set_buf is called to add buffers to the
sg list that is eventually put into the virtqueue. But if table->sgl
contains HighMem pages, sg_virt cannot return a kernel virtual address for
them, so sg_virt(sg_elem) may return NULL. This makes QEMU exit when
virtqueue_map_sg is called on the host, because an invalid guest physical
address is passed through the virtqueue.
Two solutions are discussed here:
http://lkml.indiana.edu/hypermail/linux/kernel/1207.3/00675.html
Finally, the value-assignment approach was adopted because it creates a
well-formed scatterlist: the termination marker in the source sg_list has
already been set by blk_rq_map_sg(), and the last entry of the source
sg_list is simply copied to the last entry of the destination list. Note
that, for now, virtio_ring does not care about the form of the scatterlist
and simply processes the first out_num + in_num consecutive elements of
the sg[] array.
I have tested the patch on my workstation; QEMU no longer crashes.
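In sketch form, the adopted approach copies each source entry by value
instead of translating it through sg_virt() (variable names illustrative):

    struct scatterlist *sg_elem;
    int i, idx = 0;

    /* broken with HighMem -- sg_virt() has no kernel mapping to return:
     *         sg_set_buf(&sg[idx++], sg_virt(sg_elem), sg_elem->length);
     *
     * adopted -- plain value assignment keeps page/offset/length and the
     * termination marker already set up by blk_rq_map_sg():
     */
    for_each_sg(table->sgl, sg_elem, table->nents, i)
            sg[idx++] = *sg_elem;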
Cc: <stable@vger.kernel.org> # 3.4: 4fe74b1: [SCSI] virtio-scsi: SCSI driver
Signed-off-by: Wang Sen <senwang@linux.vnet.ibm.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
This patch changes virtio-scsi to use the new virtio_driver->scan() callback,
so that scsi_scan_host() is invoked only after virtio_dev_probe() has called
add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK) to signal active virtio-ring
operation, instead of from within virtscsi_probe().
This fixes a bug where SCSI LUN scanning for both virtio-scsi-raw and
virtio-scsi/tcm_vhost setups happened before VIRTIO_CONFIG_S_DRIVER_OK had
been set, causing VIRTIO_SCSI_S_BAD_TARGET responses and, with
virtio-scsi/tcm_vhost, a LUN scan that detected no LUNs.
Tested with virtio-scsi-raw + virtio-scsi/tcm_vhost w/ IBLOCK on 3.5-rc2 code.
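A sketch of the resulting hook-up, assuming the new callback takes the
virtio_device and that the Scsi_Host pointer is kept in vdev->priv:

    static void virtscsi_scan(struct virtio_device *vdev)
    {
            struct Scsi_Host *shost = (struct Scsi_Host *)vdev->priv;

            /* runs only after the core has set VIRTIO_CONFIG_S_DRIVER_OK */
            scsi_scan_host(shost);
    }

    static struct virtio_driver virtio_scsi_driver = {
            /* other fields omitted in this sketch */
            .probe  = virtscsi_probe,
            .scan   = virtscsi_scan,
            .remove = virtscsi_remove,
    };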
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
This patch implements hotplug support for virtio-scsi.
When a device is attached or detached, the virtio-scsi driver is signaled
through the event virtqueue and automatically adds or removes the SCSI
device in question.
Signed-off-by: Sen Wang <senwang@linux.vnet.ibm.com>
Signed-off-by: Cong Meng <mc@linux.vnet.ibm.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
To improve performance for I/O to different targets, add a separate
scatterlist for each of them.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
We do not need the sglist after calling virtqueue_add_buf. Hence we
can "pipeline" the locked operations and start preparing the sglist
for the next request while we kick the virtqueue.
Together with the previous two patches, this improves performance as
follows. For a simple "if=/dev/sda of=/dev/null bs=128M iflag=direct"
(the source being a 10G disk, residing entirely in the host buffer cache),
the additional locking does not cause any penalty with only one dd
process, but 2 simultaneous I/O operations improve their times by 3%:
                  number of simultaneous dd
                    1               2
   ----------------------------------------
   current       5.9958s        10.2640s
   patched       5.9531s         9.8663s
(Times are best of 10).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
Keep a separate lock for each virtqueue. While not particularly
important now, it prepares the code for when we add support for
multiple request queues. It also becomes tidier once we introduce
a separate lock for the sglist.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
Separate virtqueue_kick_prepare from virtqueue_notify, so that the
expensive vmexit is done without holding the lock.
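The resulting pattern looks roughly like this (lock and variable names
illustrative):

    bool needs_kick;
    unsigned long flags;

    spin_lock_irqsave(&vq_lock, flags);
    err = virtqueue_add_buf(vq, sg, out_num, in_num, data, GFP_ATOMIC);
    /* cheap: only decides whether the device must be poked */
    needs_kick = virtqueue_kick_prepare(vq);
    spin_unlock_irqrestore(&vq_lock, flags);

    /* the expensive vmexit happens here, outside the lock */
    if (needs_kick)
            virtqueue_notify(vq);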
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
Fix a use-after-free in the TMF path, where cmd may already have been
freed by virtscsi_complete_free by the time wait_for_completion returns
and virtscsi_tmf resumes. Technically a race, but in practice the command
will always be freed long before the completion waiter is awoken.
The fix is to make callers specifying a completion responsible for
freeing the command in all cases.
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
The virtio-scsi HBA is the basis of an alternative storage stack
for QEMU-based virtual machines (including KVM). Compared to
virtio-blk it is more scalable (because it supports many LUNs
on a single PCI slot), more powerful (it more easily supports
passthrough of host devices to the guest) and more easily
extensible (new SCSI features implemented by QEMU should not
require updating the driver in the guest).
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|