Created attachment 1635309 [details]
engine and vdsm logs

Description of problem:
Starting a VM cloned from a snapshot (of VM1, which has one OS disk) fails with the following errors:

2019-11-12 13:12:17,427+0200 ERROR (vm/05ab4798) [virt.vm] (vmId='05ab4798-fac6-41d4-afa8-0333bacef365') The vm start process failed (vm:841)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 775, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2597, in _run
    dom = self._connection.defineXML(self._domain.xml)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3752, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirt.libvirtError: internal error: Bus 0 must be PCI for integrated PIIX3 USB or IDE controllers
2019-11-12 13:12:17,427+0200 INFO (vm/05ab4798) [virt.vm] (vmId='05ab4798-fac6-41d4-afa8-0333bacef365') Changed state to Down: internal error: Bus 0 must be PCI for integrated PIIX3 USB or IDE controllers (code=1) (vm:1610)

Version-Release number of selected component (if applicable):
ovirt-engine-tools-4.4.0-0.4.master.el7.noarch
vdsm-4.40.0-127.gitc628cce.el8ev.x86_64
libvirt-client-5.0.0-12.module+el8.0.1+3755+6782b0ed.x86_64
qemu-guest-agent-2.12.0-3.el7.x86_64
qemu-kvm-3.1.0-30.module+el8.0.1+3755+6782b0ed.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create VM1 from a template with an OS (RHEL 8)
2. Start VM1 and create a snapshot
3. Clone VM2 from the snapshot of VM1
4. Start VM2

Actual results:
Starting the VM fails with the error above.

Expected results:
Starting the VM should work.

Additional info:
The libvirt log is not available on the host under /var/log/libvirt -> will open a bug for this.
A similar issue, since closed, was mentioned in bug 1460602.

VM XML from the vdsm log:

2019-11-12 13:12:17,410+0200 INFO (vm/05ab4798) [virt.vm] (vmId='05ab4798-fac6-41d4-afa8-0333bacef365')
<?xml version='1.0' encoding='utf-8'?>
<domain xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" type="kvm">
  <name>cloned_VM_from_snapshot</name>
  <uuid>05ab4798-fac6-41d4-afa8-0333bacef365</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <iothreads>1</iothreads>
  <maxMemory slots="16">4194304</maxMemory>
  <vcpu current="1">16</vcpu>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">Red Hat</entry>
      <entry name="product">RHEL</entry>
      <entry name="version">8.1-3.3.el8</entry>
      <entry name="serial">058f37ca-3c97-44b6-81a5-0f19c6622e85</entry>
      <entry name="uuid">05ab4798-fac6-41d4-afa8-0333bacef365</entry>
    </system>
  </sysinfo>
  <clock adjustment="0" offset="variable">
    <timer name="rtc" tickpolicy="catchup" />
    <timer name="pit" tickpolicy="delay" />
    <timer name="hpet" present="no" />
  </clock>
  <features>
    <acpi />
  </features>
  <cpu match="exact">
    <model>Nehalem</model>
    <topology cores="1" sockets="16" threads="1" />
    <numa>
      <cell cpus="0-15" id="0" memory="1048576" />
    </numa>
  </cpu>
  <cputune />
  <devices>
    <input bus="usb" type="tablet" />
    <channel type="unix">
      <target name="ovirt-guest-agent.0" type="virtio" />
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/05ab4798-fac6-41d4-afa8-0333bacef365.ovirt-guest-agent.0" />
    </channel>
    <channel type="unix">
      <target name="org.qemu.guest_agent.0" type="virtio" />
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/05ab4798-fac6-41d4-afa8-0333bacef365.org.qemu.guest_agent.0" />
    </channel>
    <video>
      <model heads="1" ram="65536" type="qxl" vgamem="16384" vram="8192" />
      <alias name="ua-4c7be3ef-01cc-481e-af9b-2302f0046f62" />
    </video>
    <controller index="0" model="qemu-xhci" ports="8" type="usb" />
    <controller type="ide">
      <address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci" />
    </controller>
    <controller index="0" model="virtio-scsi" type="scsi">
      <driver iothread="1" />
      <alias name="ua-7d07c0f7-6c3a-47b3-9e70-2b2570004f6e" />
    </controller>
    <graphics autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
      <channel mode="secure" name="main" />
      <channel mode="secure" name="inputs" />
      <channel mode="secure" name="cursor" />
      <channel mode="secure" name="playback" />
      <channel mode="secure" name="record" />
      <channel mode="secure" name="display" />
      <channel mode="secure" name="smartcard" />
      <channel mode="secure" name="usbredir" />
      <listen network="vdsm-ovirtmgmt" type="network" />
    </graphics>
    <controller index="0" ports="16" type="virtio-serial">
      <alias name="ua-9a5f2304-28b3-4dc7-9f93-43ce635552dd" />
      <address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci" />
    </controller>
    <graphics autoport="yes" keymap="en-us" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc">
      <listen network="vdsm-ovirtmgmt" type="network" />
    </graphics>
    <sound model="ich6">
      <alias name="ua-fa5d36e3-e731-4caa-bd52-3e4d66d99dc1" />
      <address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci" />
    </sound>
    <memballoon model="virtio">
      <stats period="5" />
      <alias name="ua-fc3d7ccf-8acc-49f5-b205-6338ae901059" />
      <address bus="0x00" domain="0x0000" function="0x0" slot="0x08" type="pci" />
    </memballoon>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
      <alias name="ua-fd9c19c3-3874-480e-b441-6636b8423852" />
    </rng>
    <channel type="spicevmc">
      <target name="com.redhat.spice.0" type="virtio" />
    </channel>
    <disk device="cdrom" snapshot="no" type="file">
      <driver error_policy="report" name="qemu" type="raw" />
      <source file="" startupPolicy="optional">
        <seclabel model="dac" relabel="no" type="none" />
      </source>
      <target bus="sata" dev="sdc" />
      <readonly />
      <alias name="ua-ee35e9de-ee8b-42a0-aa5f-8f66f32d123f" />
      <address bus="1" controller="0" target="0" type="drive" unit="0" />
    </disk>
    <disk device="disk" snapshot="no" type="file">
      <target bus="virtio" dev="vda" />
      <source file="/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Storage__NFS_storage__local__ge4__nfs__0/908e342b-9502-4749-9557-879ea04e9076/images/5f15a494-fd6b-43ab-98a7-75a262960d2a/259d958f-5f19-4c57-9764-0d9da4ed44e5">
        <seclabel model="dac" relabel="no" type="none" />
      </source>
      <driver cache="none" error_policy="stop" io="threads" iothread="1" name="qemu" type="raw" />
      <alias name="ua-5f15a494-fd6b-43ab-98a7-75a262960d2a" />
      <address bus="0x00" domain="0x0000" function="0x0" slot="0x07" type="pci" />
      <boot order="1" />
      <serial>5f15a494-fd6b-43ab-98a7-75a262960d2a</serial>
    </disk>
    <interface type="bridge">
      <model type="virtio" />
      <link state="up" />
      <source bridge="ovirtmgmt" />
      <alias name="ua-750ef36e-069a-4ab0-b9c1-7e30875ecf73" />
      <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
      <mac address="00:1a:4a:16:25:aa" />
      <mtu size="1500" />
      <filterref filter="vdsm-no-mac-spoofing" />
      <bandwidth />
    </interface>
  </devices>
  <pm>
    <suspend-to-disk enabled="no" />
    <suspend-to-mem enabled="no" />
  </pm>
  <os>
    <type arch="x86_64" machine="pc-q35-rhel8.0.0">hvm</type>
    <smbios mode="sysinfo" />
  </os>
  <metadata>
    <ns0:qos />
    <ovirt-vm:vm>
      <ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
      <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion>
      <ovirt-vm:custom />
      <ovirt-vm:device mac_address="00:1a:4a:16:25:aa">
        <ovirt-vm:custom />
      </ovirt-vm:device>
      <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:poolID>c6458a6b-4bc5-4af7-a155-a34a71f310f4</ovirt-vm:poolID>
        <ovirt-vm:volumeID>259d958f-5f19-4c57-9764-0d9da4ed44e5</ovirt-vm:volumeID>
        <ovirt-vm:imageID>5f15a494-fd6b-43ab-98a7-75a262960d2a</ovirt-vm:imageID>
        <ovirt-vm:domainID>908e342b-9502-4749-9557-879ea04e9076</ovirt-vm:domainID>
      </ovirt-vm:device>
      <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
      <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
    </ovirt-vm:vm>
  </metadata>
</domain>
(vm:2595)
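Note the combination in the XML above: `<controller type="ide">` in a domain whose machine type is `pc-q35-rhel8.0.0`. Q35 has no integrated PIIX3 IDE controller, which appears to be exactly what libvirt's "Bus 0 must be PCI for integrated PIIX3 USB or IDE controllers" check rejects at defineXML time. As a rough illustration (not vdsm or libvirt code; the function name is made up for this sketch), the mismatch can be spotted in the domain XML with the Python stdlib:

```python
# Illustrative sketch only: flag IDE controllers in a Q35 domain XML,
# the combination that makes virDomainDefineXML() fail in this bug.
import xml.etree.ElementTree as ET


def incompatible_controllers(domain_xml):
    """Return controller types that a Q35 machine type cannot host."""
    root = ET.fromstring(domain_xml)
    os_type = root.find("./os/type")
    machine = os_type.get("machine", "") if os_type is not None else ""
    if "q35" not in machine:
        return []
    # Q35 lacks the integrated PIIX3 IDE controller that i440fx has.
    return [c.get("type") for c in root.findall("./devices/controller")
            if c.get("type") == "ide"]


# Minimal excerpt of the failing domain XML above:
DOMAIN = """
<domain type="kvm">
  <os><type arch="x86_64" machine="pc-q35-rhel8.0.0">hvm</type></os>
  <devices>
    <controller index="0" model="qemu-xhci" ports="8" type="usb" />
    <controller type="ide">
      <address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci" />
    </controller>
  </devices>
</domain>
"""

print(incompatible_controllers(DOMAIN))  # ['ide']
```

On an i440fx machine type the same IDE controller would be accepted, which matches the scenario not reproducing before the cluster was switched to Q35.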
Marking as a regression, as the same scenario did not reproduce the issue in 4.3.7.
Opened bug 1771489 for the missing libvirt log issue.
Do you have the template? Is it a 4.4 template? Can you show the log/XML when you run a VM from that template first?
(In reply to Michal Skrivanek from comment #3)
> Do you have the template?
Yes.
Template name = "golden_mixed_virtio_template"
Template ID = "b086f61e-454e-4eb3-b69c-bc1c174f596d"
> Is it a 4.4 Template?
Good point. The template was created on 4.3, not 4.4; then the environment (the engine machine, to be precise) was upgraded from 4.3 to 4.4.
> Can you show log/xml when you run BM from that template first?
I added the VM XML in the comment above, is that not enough?
Sorry for the delay (PTO).
The issue is not seen in rhv-4.4.0-27.

libvirt-client-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
qemu-kvm-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64
vdsm-4.40.9-1.el8ev.x86_64
ovirt-engine 4.4.0-0.29.master.el8ev
The bug doesn't reproduce with a template made on 4.3 el8. The cd-rom device is changed as part of BZ 1770697 when creating a VM from a template in a 4.4 cluster (Q35 chipset); therefore, when starting the VM, the cd-rom bus is 0. After cloning a VM from the snapshot and starting it, the bus stays 0, and the VM starts as expected.
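For reference, the change from BZ 1770697 is visible in the domain XML dumped earlier: on Q35 the cd-rom targets the SATA bus (`<target bus="sata" dev="sdc" />`) instead of an IDE controller. A small stdlib sketch (hypothetical helper name, not product code) showing how one could verify that a cloned VM kept the expected cd-rom bus:

```python
# Illustrative sketch only: extract the cd-rom target bus from a
# domain XML dump to confirm a cloned VM kept the Q35-compatible bus.
import xml.etree.ElementTree as ET


def cdrom_target_bus(domain_xml):
    """Return the target bus of the first cdrom disk, or None."""
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk[@device='cdrom']"):
        target = disk.find("target")
        if target is not None:
            return target.get("bus")
    return None


# Cd-rom excerpt from the cloned VM's XML above: on Q35 the device
# targets the SATA bus, which libvirt accepts.
DOMAIN = """
<domain type="kvm">
  <devices>
    <disk device="cdrom" snapshot="no" type="file">
      <target bus="sata" dev="sdc" />
      <address bus="1" controller="0" target="0" type="drive" unit="0" />
    </disk>
  </devices>
</domain>
"""

print(cdrom_target_bus(DOMAIN))  # sata
```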