Bug 1673401
Summary: | Qemu core dump when start guest with two disks using same drive | |
---|---|---|---
Product: | Red Hat Enterprise Linux 8 | Reporter: | Markus Armbruster <armbru>
Component: | qemu-kvm | Assignee: | Markus Armbruster <armbru>
Status: | CLOSED ERRATA | QA Contact: | Xueqiang Wei <xuwei>
Severity: | high | Docs Contact: |
Priority: | medium | |
Version: | 8.0 | CC: | chayang, coli, ddepaula, jferlan, juzhang, knoel, mtessun, ngu, rbalakri, virt-maint
Target Milestone: | rc | |
Target Release: | 8.1 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | qemu-kvm-2.12.0-76.module+el8.1.0+3351+d11c20fa | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 1662508 | Environment: |
Last Closed: | 2019-11-05 20:47:34 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1662508, 1673402 | |
Bug Blocks: | | |
Comment 2
Danilo de Paula
2019-06-11 18:38:50 UTC
Tested with qemu-kvm-2.12.0-76.module+el8.1.0+3351+d11c20fa; did not hit this issue, so setting status to VERIFIED.

Versions:

```
kernel-4.18.0-100.el8.x86_64
qemu-kvm-2.12.0-76.module+el8.1.0+3351+d11c20fa
```

```
# sh bug_1673401.sh
QEMU 2.12.0 monitor - type 'help' for more information
(qemu) qemu-kvm: -device scsi-hd,id=data1,drive=drive_image1: Conflicts with use by image1 as 'root', which does not allow 'write' on drive_image1
```

The reproducer script (note the final `-device` deliberately reuses drive_image1, which is already attached to image1):

```
/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine q35 \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1 \
    -device pcie-root-port,id=pcie_root_port_0,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device pcie-root-port,id=pcie_root_port_1,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -device pcie-root-port,id=pcie_root_port_2,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20181228-045409-0zhg5aer,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20181228-045409-0zhg5aer,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idjsOqD4 \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20181228-045409-0zhg5aer,server,nowait \
    -device isa-serial,chardev=serial_id_serial0 \
    -chardev socket,id=seabioslog_id_20181228-045409-0zhg5aer,path=/var/tmp/seabios-20181228-045409-0zhg5aer,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20181228-045409-0zhg5aer,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-5,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -object iothread,id=iothread0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0-root-port-6,addr=0x0,iothread=iothread0 \
    -blockdev driver=file,node-name=driveimage1,filename=/home/kvm_autotest_root/images/rhel810-64-virtio-scsi.qcow2 \
    -blockdev node-name=drive_image1,file=driveimage1,driver=qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1 \
    -device pcie-root-port,id=pcie.0-root-port-7,slot=7,chassis=7,addr=0x7,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:0f:10:11:12:13,id=idPrzVst,vectors=4,netdev=id5uShpn,bus=pcie.0-root-port-7,addr=0x0 \
    -netdev tap,id=id5uShpn,vhost=on \
    -m 8G \
    -smp 10,maxcpus=10,cores=5,threads=1,sockets=2 \
    -cpu 'Opteron_G5',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -vnc :0 \
    -rtc base=utc,clock=host,driftfix=slew \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -blockdev driver=file,node-name=drivedata1,filename=/home/kvm_autotest_root/images/data.raw \
    -blockdev node-name=drive_data1,file=drivedata1,driver=raw \
    -device scsi-hd,id=data1,drive=drive_image1
```

Martin, can you please grant zstream+?

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:3345
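The long reproducer above boils down to attaching one block node to two guest devices. A minimal sketch of the same conflict follows; the image path `/tmp/shared.qcow2` is hypothetical, and it assumes `qemu-img` and `/usr/libexec/qemu-kvm` are installed as on the RHEL 8 hosts above:

```shell
# Create a small scratch image (hypothetical path).
qemu-img create -f qcow2 /tmp/shared.qcow2 1G

# Attach the same qcow2 node (disk0) to two scsi-hd devices.
# A fixed QEMU rejects this at startup via the block-layer permission
# system; the unfixed qemu-kvm-2.12 builds dumped core instead.
/usr/libexec/qemu-kvm \
    -nodefaults -nographic -machine q35 \
    -blockdev driver=file,node-name=file0,filename=/tmp/shared.qcow2 \
    -blockdev driver=qcow2,node-name=disk0,file=file0 \
    -device virtio-scsi-pci,id=scsi0 \
    -device scsi-hd,id=hd0,drive=disk0 \
    -device scsi-hd,id=hd1,drive=disk0
```

With the fix, the second `-device` fails with an error along the lines of "Conflicts with use by hd0 as 'root', which does not allow 'write' on disk0", matching the verified behavior in Comment 2.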