Bug 1083860
Summary: | kernel panic when virtscsi_init fails | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 7 | Reporter: | FuXiangChun <xfu> |
Component: | kernel | Assignee: | Fam Zheng <famz> |
Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 7.1 | CC: | areis, bsarathy, famz, hhuang, juzhang, knoel, mazhang, michen, mkenneth, pbonzini, qzhang, rbalakri, sluo, virt-bugs, virt-maint, vrozenfe, xfu |
Target Milestone: | pre-dev-freeze | ||
Target Release: | 7.1 | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | kernel-3.10.0-152.el7 | Doc Type: | Bug Fix |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2015-03-05 11:48:10 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
FuXiangChun
2014-04-03 06:09:20 UTC
> Additional info:
> QE knows RHEL6 doesn't support this function, but it shouldn't cause a guest
> kernel panic.
QE knows that virtio-scsi multi-queue is not supported in RHEL 6.x; the driver should fail gracefully instead of causing the guest to panic outright.
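The failure class here is an init path that assumes multi-queue setup always succeeds. As a rough illustration only (hypothetical names, not the actual virtio_scsi.c patch), the defensive pattern is: if the transport cannot supply all requested virtqueues, propagate the error so the probe fails cleanly rather than continuing with missing queues:

```python
# Hypothetical sketch of the virtscsi_init failure mode.  The real
# driver requests num_queues + 2 virtqueues (control, event, plus one
# per request queue); on a host that cannot supply them all, setup
# must return an error instead of continuing with unset queue
# pointers and panicking the guest.

VQ_BASE = 2  # control + event queues


def find_vqs(requested: int, host_max: int) -> bool:
    """Stand-in for the transport call that allocates virtqueues."""
    return requested <= host_max


def virtscsi_init_sketch(num_request_queues: int, host_max_vqs: int) -> int:
    wanted = VQ_BASE + num_request_queues
    if not find_vqs(wanted, host_max_vqs):
        # Buggy path: execution continued past a failed allocation and
        # the guest panicked.  Fixed path: report the error so the
        # device probe fails gracefully.
        return -1  # -ENOENT-style failure code
    return 0


print(virtscsi_init_sketch(2, 3))  # host too small: clean failure, prints -1
print(virtscsi_init_sketch(2, 4))  # enough virtqueues: success, prints 0
```

The `num_queues=2` qemu configuration in the reproducer maps onto the first call: the guest asks for more virtqueues than the RHEL 6 stack provides, and before the fix that request failed uncleanly.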
Posted a fix to upstream: https://www.mail-archive.com/kvm@vger.kernel.org/msg101086.html

I've built a kernel package with the upstream fix included: http://brewweb.devel.redhat.com/brew/taskinfo?taskID=7424793

Xiangchun, could you give it a test using the above kernel in the guest?

Thanks a lot,
Fam

(In reply to juzhang from comment #11)
> Hi Sluo,

I can reproduce it using a RHEL 7 guest on a RHEL 6 host with the same steps as comment #0.

host info:

```
# uname -r && rpm -q qemu-kvm-rhev
2.6.32-448.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.424.el6.x86_64
```

guest info:

```
# uname -r
2.6.32-448.el6.x86_64
```

My qemu-kvm command line:

```
/usr/libexec/qemu-kvm -smp 2 \
    -drive file=/home/test,if=none,id=drive-virtio-disk,format=qcow2,cache=none,aio=native,werror=stop,rerror=stop,media=disk,snapshot=off \
    -device virtio-blk-pci,scsi=off,drive=drive-virtio-disk,id=virtio-disk,bus=pci.0,addr=0x7,bootindex=1,physical_block_size=512,logical_block_size=512,multifunction=on \
    -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup \
    -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=00:01:02:B6:40:11,bus=pci.0,addr=0x5 \
    -vnc :2 -monitor stdio \
    -drive file=/home/my-data-disk1.qcow2,if=none,id=drive-scsi-disk,format=qcow2,cache=none,werror=stop,rerror=stop \
    -device virtio-scsi-pci,id=scsi0,addr=0x13,vectors=512,indirect_desc=on,event_idx=off,hotplug=on,param_change=off,multifunction=on,rombar=64,num_queues=2 \
    -device scsi-hd,drive=drive-scsi-disk,bus=scsi0.0,scsi-id=0,lun=0,id=data-disk1 \
    -serial unix:/tmp/ttyS0,server,nowait
```

> Since xiangchun is on PTO, would you please help with the following testing?
>
> 1. Test this bz using a RHEL 7.0 guest (for the kernel, please use Fam's
>    build) according to comment #0 on a RHEL 6.6 host?

A RHEL 7 guest with Fam's private build on a RHEL 6 host boots up successfully without any Call Trace.

host info:

```
# uname -r && rpm -q qemu-kvm-rhev
2.6.32-448.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.424.el6.x86_64
```

guest info: 3.10.0-123.el7.test.x86_64

> 2. Test this bz using a RHEL 7.0 guest (for the kernel, please use Fam's
>    build) according to comment #0 on a RHEL 7.0 host?

A RHEL 7 guest with Fam's private build on a RHEL 7 host also boots up successfully without any Call Trace.

host info:

```
# uname -r && rpm -q qemu-kvm
3.10.0-121.el7.x86_64
qemu-kvm-1.5.3-60.el7.x86_64
```

guest info: 3.10.0-123.el7.test.x86_64

> Plus, it would be good to also try a RHEL 6.6 guest according to comment #0
> on a RHEL 6.6 host.

Tested a RHEL 6.6 guest on a RHEL 6 host according to comment #0; it boots up successfully without any Call Trace.

host info:

```
# uname -r && rpm -q qemu-kvm-rhev
2.6.32-448.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.424.el6.x86_64
```

guest info:

```
# uname -r
2.6.32-448.el6.x86_64
```

Best Regards,
sluo

Patch(es) available on kernel-3.10.0-152.el7

Reproduced this bug on a RHEL 6 host.

Host:

```
qemu-kvm-rhev-0.12.1.2-2.448.el6.x86_64
qemu-kvm-rhev-debuginfo-0.12.1.2-2.448.el6.x86_64
gpxe-roms-qemu-0.9.7-6.12.el6.noarch
qemu-img-rhev-0.12.1.2-2.448.el6.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.448.el6.x86_64
kernel-2.6.32-497.el6.x86_64
```

Guest: kernel-3.10.0-123.el7.x86_64

Cli:

```
/usr/libexec/qemu-kvm \
    -M pc \
    -cpu SandyBridge \
    -m 2G \
    -smp 2 \
    -enable-kvm \
    -name rhel7 \
    -uuid 990ea161-6b67-47b2-b803-19fb01d30d12 \
    -smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 \
    -k en-us \
    -rtc base=localtime,clock=host,driftfix=slew \
    -nodefaults \
    -monitor stdio \
    -qmp tcp:0:5555,server,nowait \
    -boot menu=on \
    -bios /usr/share/seabios/bios.bin \
    -monitor unix:/tmp/monitor2,server,nowait \
    -vga std \
    -vnc :0 \
    -usb \
    -device usb-tablet,id=tablet0 \
    -netdev tap,id=hostnet0 \
    -device virtio-net-pci,netdev=hostnet0,id=net0,mac=54:52:00:B6:40:21 \
    -drive file=/home/rhel7-64.qcow2,if=none,id=drive-virtio-disk,format=qcow2,cache=none,aio=native,werror=stop,rerror=stop,media=disk,snapshot=off \
    -device virtio-blk-pci,scsi=off,drive=drive-virtio-disk,id=virtio-disk,bus=pci.0,addr=0x7,bootindex=1,physical_block_size=512,logical_block_size=512,multifunction=on \
    -drive file=/home/storage0.qcow2,if=none,id=drive-scsi-disk,format=qcow2,cache=none,werror=stop,rerror=stop \
    -device virtio-scsi-pci,id=scsi0,addr=0x13,vectors=512,indirect_desc=on,event_idx=off,hotplug=on,param_change=off,multifunction=on,rombar=64,num_queues=2 \
    -device scsi-hd,drive=drive-scsi-disk,bus=scsi0.0,scsi-id=0,lun=0,id=data-disk2
```

Result: guest kernel panic.

After updating the guest kernel to kernel-3.10.0-197.el7.x86_64 and re-testing, the guest works well, so this bug has been fixed.

According to comment 16, setting this issue as verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0290.html
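Since the Fixed In Version is kernel-3.10.0-152.el7, a quick way to judge whether a given guest kernel carries the fix is to compare its release string against that build. A minimal sketch (assumes the usual `X.Y.Z-REL.el7` RHEL kernel naming; `has_fix` is a hypothetical helper, not an official tool):

```python
import re

# Build that contains the fix, per "Fixed In Version" above.
FIXED = "3.10.0-152.el7"


def release_tuple(kver: str) -> tuple:
    """Turn '3.10.0-123.el7.x86_64' into a comparable tuple (3, 10, 0, 123)."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)-(\d+)", kver)
    if not m:
        raise ValueError(f"unrecognized kernel release: {kver}")
    return tuple(int(x) for x in m.groups())


def has_fix(kver: str) -> bool:
    """True if the kernel release is at or above the fixed build."""
    return release_tuple(kver) >= release_tuple(FIXED)


print(has_fix("3.10.0-123.el7.x86_64"))  # reproducing guest kernel: False
print(has_fix("3.10.0-197.el7.x86_64"))  # kernel QE verified with: True
```

This only compares the upstream-version and release fields, which is sufficient within one RHEL 7 stream; for general RPM version comparison, `rpmdev-vercmp` is the more robust choice.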