Bug 1089606 - QEMU will not reject invalid number of queues (num_queues = 0) specified for virtio-scsi
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Fam Zheng
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 1146826
 
Reported: 2014-04-21 07:08 UTC by Sibiao Luo
Modified: 2015-03-05 08:06 UTC
CC List: 11 users

Fixed In Version: qemu-kvm-1.5.3-78.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1146826
Environment:
Last Closed: 2015-03-05 08:06:36 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHSA-2015:0349 (normal, SHIPPED_LIVE): Important: qemu-kvm security, bug fix, and enhancement update. Last updated: 2015-03-05 12:27:34 UTC.

Description Sibiao Luo 2014-04-21 07:08:10 UTC
Description of problem:
num_queues = 1 means the multi-queue function of virtio-scsi is disabled, so num_queues = 0 is an invalid queue count; QEMU should quit and print a warning message for the user.

Version-Release number of selected component (if applicable):
host info:
3.10.0-121.el7.x86_64
qemu-kvm-rhev-1.5.3-60.el7ev.x86_64
seabios-1.7.2.2-12.el7.x86_64
guest info:
3.10.0-121.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot a KVM guest with an invalid queue count (num_queues=0) for virtio-scsi:
# /usr/libexec/qemu-kvm -M pc -cpu host -enable-kvm -m 4096 -smp 4,sockets=2,cores=2,threads=1 -no-kvm-pit-reinjection...-device virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x7,num_queues=0

Actual results:
The guest boots up successfully; QEMU does not report any error.

Expected results:
QEMU should quit and print a warning message for the user, just as it does for queues=0 on a virtio-net NIC:
# /usr/libexec/qemu-kvm...-netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup,queues=0 -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=00:01:02:B6:40:21,bus=pci.0,addr=0x5,vectors=9,mq=on -vnc :2 -monitor stdio
Warning: option deprecated, use lost_tick_policy property of kvm-pit instead.
QEMU 1.5.3 monitor - type 'help' for more information
(qemu) qemu-kvm: -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=00:01:02:B6:40:21,bus=pci.0,addr=0x5,vectors=9,mq=on: Property 'virtio-net-pci.netdev' can't find value 'hostnet0'

Additional info:
Passing num_queues a negative number or a letter does make QEMU quit with a warning message:
# /usr/libexec/qemu-kvm -M pc -S -cpu host -enable-kvm -m 4096 -smp 4,sockets=2,cores=2,threads=1 -no-kvm-pit-reinjection -device virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x7,num_queues=-1 -vnc :2 -monitor stdio
Warning: option deprecated, use lost_tick_policy property of kvm-pit instead.
QEMU 1.5.3 monitor - type 'help' for more information
(qemu) qemu-kvm: -device virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x7,num_queues=-1: Parameter 'num_queues' expects uint32_t

# /usr/libexec/qemu-kvm -M pc -S -cpu host -enable-kvm -m 4096 -smp 4,sockets=2,cores=2,threads=1 -no-kvm-pit-reinjection -device virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x7,num_queues=a -vnc :2 -monitor stdio
Warning: option deprecated, use lost_tick_policy property of kvm-pit instead.
QEMU 1.5.3 monitor - type 'help' for more information
(qemu) qemu-kvm: -device virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x7,num_queues=a: Invalid parameter type for 'num_queues', expected: integer
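
To make the requested check concrete, here is a minimal standalone sketch of the validation being asked for (the names and the MAX_QUEUES bound are illustrative, not QEMU's actual code; the real proposed patch appears in comment 7):

#include <stdint.h>
#include <stdio.h>

#define MAX_QUEUES 64   /* illustrative bound; QEMU uses VIRTIO_PCI_QUEUE_MAX */

/* num_queues == 1 simply disables multi-queue, so it is valid; 0 would
 * expose no command queue at all and must be rejected at device creation,
 * just like the negative and non-integer values shown above. */
static int validate_num_queues(uint32_t num_queues)
{
    if (num_queues == 0 || num_queues > MAX_QUEUES) {
        fprintf(stderr, "Invalid number of queues (= %u)\n",
                (unsigned)num_queues);
        return -1;      /* device creation fails and qemu quits */
    }
    return 0;
}

int main(void)
{
    /* 0 must be rejected; 1 and MAX_QUEUES must be accepted. */
    return (validate_num_queues(0) == -1 &&
            validate_num_queues(1) == 0 &&
            validate_num_queues(MAX_QUEUES) == 0) ? 0 : 1;
}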

Comment 1 Sibiao Luo 2014-04-21 07:20:54 UTC
Virtio-scsi multi-queue support was added to qemu-kvm in bug 911389. I also tried qemu-kvm-rhev-1.5.3-21.el7.x86_64, which hits this issue as well, so this is not a regression.

Best Regards,
sluo

Comment 2 Sibiao Luo 2014-04-21 07:33:09 UTC
(In reply to Sibiao Luo from comment #0)
> Actual results:
> it can boot up successfully.
> 
Hmm, the guest actually hits a call trace and kernel panic and fails to boot at all, so I am raising the priority to high.

# /usr/libexec/qemu-kvm -M pc -cpu host -enable-kvm -m 4096 -smp 4,sockets=2,cores=2,threads=1 -no-kvm-pit-reinjection......-device virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x7,num_queues=0 -drive file=gluster://10.66.83.171/sluo_volume/data-disk1.qcow2,if=none,id=drive-data-disk1,cache=none,format=qcow2,aio=native,werror=stop,rerror=stop -device scsi-hd,drive=drive-data-disk1,bus=scsi1.0,id=data-disk1 -drive file=gluster://10.66.83.171/sluo_volume/data-disk2.qcow2,if=none,id=drive-data-disk2,cache=none,format=qcow2,aio=native,werror=stop,rerror=stop -device scsi-hd,drive=drive-data-disk2,bus=scsi1.0,id=data-disk2 -drive file=gluster://10.66.83.171/sluo_volume/data-disk3.qcow2,if=none,id=drive-data-disk3,cache=none,format=qcow2,aio=native,werror=stop,rerror=stop -device scsi-hd,drive=drive-data-disk3,bus=scsi1.0,id=data-disk3 -drive file=gluster://10.66.83.171/sluo_volume/data-disk4.qcow2,if=none,id=drive-data-disk4,cache=none,format=qcow2,aio=native,werror=stop,rerror=stop -device scsi-hd,drive=drive-data-disk4,bus=scsi1.0,id=data-disk4

# nc -U /tmp/ttyS0 
[    0.035010] Failed to access perfctr msr (MSR c0010001 is ffffffffffffffff)
[    4.806734] BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
[    4.807022] IP: [<ffffffffa006fdb0>] __virtscsi_set_affinity+0x60/0x140 [virtio_scsi]
[    4.807022] PGD 0 
[    4.807022] Oops: 0000 [#1] SMP 
[    4.807022] Modules linked in: ttm pata_acpi(+) virtio_scsi(+) virtio_net drm ata_piix virtio_pci virtio_ring i2c_core virtio floppy libata dm_mirror dm_region_hash dm_log dm_mod
[    4.813716] CPU: 3 PID: 265 Comm: systemd-udevd Not tainted 3.10.0-121.el7.x86_64 #1
[    4.813716] Hardware name: Red Hat KVM, BIOS Bochs 01/01/2011
[    4.813716] task: ffff88013671b8e0 ti: ffff880036940000 task.ti: ffff880036940000
[    4.813716] RIP: 0010:[<ffffffffa006fdb0>]  [<ffffffffa006fdb0>] __virtscsi_set_affinity+0x60/0x140 [virtio_scsi]
[    4.831361] RSP: 0018:ffff880036941b38  EFLAGS: 00010216
[    4.831361] RAX: 0000000000000200 RBX: 0000000000000000 RCX: ffff880036941fd8
[    4.831361] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[    4.831361] RBP: ffff880036941b58 R08: 00000000000172c0 R09: ffff88013fd972c0
[    4.831361] R10: ffffea0004d81840 R11: ffffffffa007050c R12: ffff880136105740
[    4.831361] R13: 0000000000000003 R14: 0000000000000000 R15: ffff880136061100
[    4.831361] FS:  00007fdf127c2880(0000) GS:ffff88013fd80000(0000) knlGS:0000000000000000
[    4.831361] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    4.831361] CR2: 0000000000000020 CR3: 000000003695a000 CR4: 00000000000006e0
[    4.831361] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    4.831361] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    4.831361] Stack:
[    4.831361]  ffff8801367b3400 ffff880136105740 0000000000000003 ffff880136061020
[    4.831361]  ffff880036941b78 ffffffffa006febc ffff880136105740 ffff8801367b3400
[    4.831361]  ffff880036941bd8 ffffffffa0070514 ffff880136061420 00000000fffffffe
[    4.831361] Call Trace:
[    4.831361]  [<ffffffffa006febc>] virtscsi_remove_vqs+0x2c/0x50 [virtio_scsi]
[    4.831361]  [<ffffffffa0070514>] virtscsi_init+0x134/0x2a0 [virtio_scsi]
[    4.831361]  [<ffffffffa00707ef>] virtscsi_probe+0xef/0x27c [virtio_scsi]
[    4.831361]  [<ffffffffa009f7c0>] ? vp_reset+0x90/0x90 [virtio_pci]
[    4.831361]  [<ffffffffa00221d2>] virtio_dev_probe+0xe2/0x150 [virtio]
[    4.831361]  [<ffffffff813b68e7>] driver_probe_device+0x87/0x390
[    4.831361]  [<ffffffff813b6cc3>] __driver_attach+0x93/0xa0
[    4.831361]  [<ffffffff813b6c30>] ? __device_attach+0x40/0x40
[    4.831361]  [<ffffffff813b4673>] bus_for_each_dev+0x73/0xc0
[    4.831361]  [<ffffffff813b633e>] driver_attach+0x1e/0x20
[    4.831361]  [<ffffffff813b5e90>] bus_add_driver+0x200/0x2d0
[    4.831361]  [<ffffffffa0075000>] ? 0xffffffffa0074fff
[    4.831361]  [<ffffffff813b7344>] driver_register+0x64/0xf0
[    4.831361]  [<ffffffffa0075000>] ? 0xffffffffa0074fff
[    4.831361]  [<ffffffffa0022540>] register_virtio_driver+0x20/0x30 [virtio]
[    4.831361]  [<ffffffffa0075085>] init+0x85/0x1000 [virtio_scsi]
[    4.831361]  [<ffffffff810020e2>] do_one_initcall+0xe2/0x190
[    4.831361]  [<ffffffff810ca7fb>] load_module+0x129b/0x1a90
[    4.831361]  [<ffffffff812da3d0>] ? ddebug_proc_write+0xf0/0xf0
[    4.831361]  [<ffffffff810c7133>] ? copy_module_from_fd.isra.43+0x53/0x150
[    4.831361]  [<ffffffff810cb1a6>] SyS_finit_module+0xa6/0xd0
[    4.831361]  [<ffffffff815fc819>] system_call_fastpath+0x16/0x1b
[    4.831361] Code: e1 39 c3 74 7e 45 84 f6 75 61 41 8b 84 24 c8 01 00 00 31 db 85 c0 74 3b 0f 1f 00 48 63 c3 48 83 c0 20 48 c1 e0 04 49 8b 7c 04 10 <48> 8b 47 20 48 8b 80 b0 02 00 00 48 8b 40 50 48 85 c0 74 07 be 
[    4.831361] RIP  [<ffffffffa006fdb0>] __virtscsi_set_affinity+0x60/0x140 [virtio_scsi]
[    4.831361]  RSP <ffff880036941b38>
[    4.831361] CR2: 0000000000000020
[    5.109710] ---[ end trace 3af188bf0d17b896 ]---
[    5.123777] Kernel panic - not syncing: Fatal exception
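
Reading the trace: virtscsi_init fails part-way, and its cleanup path (virtscsi_remove_vqs -> __virtscsi_set_affinity) appears to dereference a request-queue pointer that was never set up, because the device advertised zero request queues. A standalone sketch of that crash pattern (illustrative only; this is not the actual virtio_scsi driver source):

#include <stdio.h>
#include <stdlib.h>

struct virtqueue { void *vdev; };                  /* stand-in kernel types */
struct virtio_scsi_vq { struct virtqueue *vq; };
struct virtio_scsi {
    unsigned int num_queues;
    struct virtio_scsi_vq *req_vqs;                /* one entry per request queue */
};

/* Shaped like the affinity teardown in the trace: it touches req_vqs[0].vq
 * without checking that any request queue was ever initialized. */
static void set_affinity_teardown(struct virtio_scsi *vscsi)
{
    struct virtqueue *vq = vscsi->req_vqs[0].vq;   /* NULL when num_queues == 0 */
    printf("%p\n", vq->vdev);                      /* NULL dereference, cf. CR2 above */
}

int main(void)
{
    struct virtio_scsi vscsi = { .num_queues = 0 };
    /* One zeroed slot so the array read itself is safe; the vq pointer in it
     * stays NULL, mirroring a queue that was never set up. */
    vscsi.req_vqs = calloc(1, sizeof(*vscsi.req_vqs));
    set_affinity_teardown(&vscsi);                 /* crashes like the guest */
    return 0;
}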

Comment 3 Sibiao Luo 2014-04-21 07:39:19 UTC
The KVM guest boots up successfully without any call trace if num_queues=1 is specified on the same qemu-kvm command line.

# /usr/libexec/qemu-kvm -M pc -cpu host -enable-kvm -m 4096 -smp 4,sockets=2,cores=2,threads=1 -no-kvm-pit-reinjection......-device virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x7,num_queues=1 -drive file=gluster://10.66.83.171/sluo_volume/data-disk1.qcow2,if=none,id=drive-data-disk1,cache=none,format=qcow2,aio=native,werror=stop,rerror=stop -device scsi-hd,drive=drive-data-disk1,bus=scsi1.0,id=data-disk1 -drive file=gluster://10.66.83.171/sluo_volume/data-disk2.qcow2,if=none,id=drive-data-disk2,cache=none,format=qcow2,aio=native,werror=stop,rerror=stop -device scsi-hd,drive=drive-data-disk2,bus=scsi1.0,id=data-disk2 -drive file=gluster://10.66.83.171/sluo_volume/data-disk3.qcow2,if=none,id=drive-data-disk3,cache=none,format=qcow2,aio=native,werror=stop,rerror=stop -device scsi-hd,drive=drive-data-disk3,bus=scsi1.0,id=data-disk3 -drive file=gluster://10.66.83.171/sluo_volume/data-disk4.qcow2,if=none,id=drive-data-disk4,cache=none,format=qcow2,aio=native,werror=stop,rerror=stop -device scsi-hd,drive=drive-data-disk4,bus=scsi1.0,id=data-disk4

# ls /dev/sd* -lh
brw-rw----. 1 root disk 8,  0 Apr 21 11:34 /dev/sda
brw-rw----. 1 root disk 8,  1 Apr 21 11:34 /dev/sda1
brw-rw----. 1 root disk 8,  2 Apr 21 11:34 /dev/sda2
brw-rw----. 1 root disk 8, 16 Apr 21 11:34 /dev/sdb <----------data-disk1
brw-rw----. 1 root disk 8, 32 Apr 21 11:34 /dev/sdc <----------data-disk2
brw-rw----. 1 root disk 8, 48 Apr 21 11:34 /dev/sdd <----------data-disk3
brw-rw----. 1 root disk 8, 64 Apr 21 11:34 /dev/sde <----------data-disk4

Best Regards,
sluo

Comment 4 Paolo Bonzini 2014-05-06 16:50:20 UTC
See also bug 1089604

Comment 5 Fam Zheng 2014-05-07 10:54:17 UTC
For the kernel panic in comment 2, it is an issue similar to
https://bugzilla.redhat.com/show_bug.cgi?id=1083860

Thanks,
Fam

Comment 6 Sibiao Luo 2014-05-09 01:58:35 UTC
(In reply to Fam Zheng from comment #5)
> For the kernel panic in comment 2, it is a similar issue with
> https://bugzilla.redhat.com/show_bug.cgi?id=1083860
> 
Not sure about that, as bug 1083860 is caused by using virtio-scsi queues on a RHEL 6.x host. We can retry after the patch for bug 1083860 comes out.

Best Regards,
sluo

Comment 7 Fam Zheng 2014-08-26 06:31:14 UTC
Proposed change on the QEMU side (posted to qemu-devel in the meantime):

commit cb33534ddd383bc965cffb86669d7da187138d14
Author: Fam Zheng <famz>
Date:   Tue Aug 26 14:23:22 2014 +0800

    virtio-scsi: Report error if num_queues is 0 or too large

    No cmd vq surprises guest (Linux panics in virtscsi_probe), too many
    queues abort qemu (in the following virtio_add_queue).

    Signed-off-by: Fam Zheng <famz>

diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 2dd9255..86aba88 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -699,6 +699,12 @@ void virtio_scsi_common_realize(DeviceState *dev, Error **errp,
     virtio_init(vdev, "virtio-scsi", VIRTIO_ID_SCSI,
                 sizeof(VirtIOSCSIConfig));

+    if (s->conf.num_queues <= 0 || s->conf.num_queues > VIRTIO_PCI_QUEUE_MAX) {
+        error_setg(errp, "Invalid number of queues (= %" PRId32 "), "
+                         "must be a positive integer less than %d.",
+                   s->conf.num_queues, VIRTIO_PCI_QUEUE_MAX);
+        return;
+    }
     s->cmd_vqs = g_malloc0(s->conf.num_queues * sizeof(VirtQueue *));
     s->sense_size = VIRTIO_SCSI_SENSE_SIZE;
     s->cdb_size = VIRTIO_SCSI_CDB_SIZE;
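
Note that the check is placed before the g_malloc0() of s->cmd_vqs: with num_queues == 0 the zero-byte allocation would still succeed and the device would realize with no command virtqueue, so the failure would only surface later, in the guest (see comment 2); with too large a value the abort would happen in the subsequent virtio_add_queue(), as the commit message notes. Failing realization up front turns both cases into a clean command-line error.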

Comment 8 Miroslav Rezanina 2014-11-10 09:29:47 UTC
Fix included in qemu-kvm-1.5.3-78.el7

Comment 10 Sibiao Luo 2014-11-11 08:33:50 UTC
Verified this issue on qemu-kvm-1.5.3-78.el7.x86_64.

host info:
# uname -r && rpm -q qemu-kvm
3.10.0-183.el7.x86_64
qemu-kvm-1.5.3-78.el7.x86_64

e.g1:...-drive file=/dev/sde,if=none,id=drive-usb-disk,cache=writeback,aio=native -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x6,num_queues=0 -device scsi-block,bus=scsi0.0,drive=drive-usb-disk,id=usb-disk
Warning: option deprecated, use lost_tick_policy property of kvm-pit instead.
QEMU 1.5.3 monitor - type 'help' for more information
(qemu) qemu-kvm: -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x6,num_queues=0: Invalid number of queues (= 0), must be a positive integer less than 62.
qemu-kvm: -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x6,num_queues=0: Device initialization failed.
qemu-kvm: -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x6,num_queues=0: Device initialization failed.
qemu-kvm: -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x6,num_queues=0: Device 'virtio-scsi-pci' could not be initialized

e.g2:...-drive file=/dev/sde,if=none,id=drive-usb-disk,cache=writeback,aio=native -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x6,num_queues=63 -device scsi-block,bus=scsi0.0,drive=drive-usb-disk,id=usb-disk
Warning: option deprecated, use lost_tick_policy property of kvm-pit instead.
QEMU 1.5.3 monitor - type 'help' for more information
(qemu) qemu-kvm: -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x6,num_queues=63: Invalid number of queues (= 63), must be a positive integer less than 62.
qemu-kvm: -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x6,num_queues=63: Device initialization failed.
qemu-kvm: -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x6,num_queues=63: Device initialization failed.
qemu-kvm: -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x6,num_queues=63: Device 'virtio-scsi-pci' could not be initialized

Based on the above, this issue has been fixed correctly. Moving to VERIFIED status; please correct me if there is any mistake, thanks.

Best Regards,
sluo

Comment 12 errata-xmlrpc 2015-03-05 08:06:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0349.html

