Bug 1856278 - Error: "Bus 0 must be PCI for integrated PIIX3 USB or IDE controllers" after migrating a VM from oVirt 4.3.10 to oVirt 4.4.1
Summary: Error: "Bus 0 must be PCI for integrated PIIX3 USB or IDE controllers" after migrating a VM from oVirt 4.3.10 to oVirt 4.4.1
Keywords:
Status: CLOSED DUPLICATE of bug 1839545
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: General
Version: 4.4.1.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@ovirt.org
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-13 09:27 UTC by Pavel Zinchuk
Modified: 2022-12-01 20:31 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-08-05 13:17:58 UTC
oVirt Team: Virt
Embargoed:


Attachments
Can't change destination storage domain (73.64 KB, image/png), 2020-08-06 05:58 UTC, Pavel Zinchuk


Links
Red Hat Issue Tracker RHV-36656 (last updated 2022-12-01 20:31:13 UTC)

Description Pavel Zinchuk 2020-07-13 09:27:58 UTC
Description of problem:
I have an oVirt instance running version 4.3.10 in DC01 and another oVirt instance running version 4.4.1 in DC02.

I need to migrate VMs from DC01 to DC02; in other words, I need to migrate VMs from oVirt 4.3.10 to oVirt 4.4.1.

I export a VM as an OVA on oVirt 4.3.10 and import the VM from the OVA on oVirt 4.4.1.
The export and import finish without issues.

But when I try to start the VM, I receive the following error:
2020-07-13 09:01:01,698Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-3) [] EVENT_ID: VM_DOWN_ERROR(119), VM prd-vm-02 is down with error. Exit message: internal error: Bus 0 must be PCI for integrated PIIX3 USB or IDE controllers.


Version-Release number of selected component (if applicable):
oVirt Host packages in the DC01:
# rpm -qa | grep ovirt
cockpit-ovirt-dashboard-0.13.10-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.32-1.el7.noarch
cockpit-machines-ovirt-195.6-1.el7.centos.noarch
ovirt-host-dependencies-4.3.5-1.el7.x86_64
ovirt-host-4.3.5-1.el7.x86_64
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
python-ovirt-engine-sdk4-4.3.4-2.el7.x86_64
ovirt-release43-4.3.10-1.el7.noarch
ovirt-vmconsole-host-1.0.7-2.el7.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
ovirt-hosted-engine-setup-2.3.13-1.el7.noarch
ovirt-imageio-common-1.5.3-0.el7.x86_64
ovirt-ansible-repositories-1.1.5-1.el7.noarch
ovirt-host-deploy-common-1.8.5-1.el7.noarch
python2-ovirt-host-deploy-1.8.5-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
ovirt-hosted-engine-ha-2.3.6-1.el7.noarch
ovirt-imageio-daemon-1.5.3-0.el7.noarch
ovirt-provider-ovn-driver-1.2.29-1.el7.noarch

oVirt Host packages in the DC02:
# rpm -qa | grep ovirt
ovirt-provider-ovn-driver-1.2.30-1.el8.noarch
cockpit-ovirt-dashboard-0.14.9-1.el8.noarch
ovirt-imageio-daemon-2.0.9-1.el8.x86_64
ovirt-python-openvswitch-2.11-0.2020061801.el8.noarch
ovirt-host-dependencies-4.4.1-4.el8.x86_64
ovirt-ansible-hosted-engine-setup-1.1.5-1.el8.noarch
ovirt-imageio-common-2.0.9-1.el8.x86_64
ovirt-openvswitch-ovn-host-2.11-0.2020061801.el8.noarch
ovirt-ansible-engine-setup-1.2.4-1.el8.noarch
ovirt-hosted-engine-setup-2.4.5-1.el8.noarch
ovirt-vmconsole-host-1.0.8-1.el8.noarch
ovirt-openvswitch-ovn-2.11-0.2020061801.el8.noarch
ovirt-openvswitch-ovn-common-2.11-0.2020061801.el8.noarch
ovirt-openvswitch-2.11-0.2020061801.el8.noarch
ovirt-hosted-engine-ha-2.4.4-1.el8.noarch
python3-ovirt-setup-lib-1.3.2-1.el8.noarch
ovirt-host-4.4.1-4.el8.x86_64
ovirt-imageio-client-2.0.9-1.el8.x86_64
ovirt-release44-4.4.1-1.el8.noarch
ovirt-vmconsole-1.0.8-1.el8.noarch
python3-ovirt-engine-sdk4-4.4.4-1.el8.x86_64

How reproducible:
Always: just export a VM from oVirt 4.3.10 and import it into oVirt 4.4.1 as an OVA.


Steps to Reproduce:
1. Export the VM from oVirt 4.3.10 (oVirt Webadmin -> Select VM -> Export as OVA)
2. Copy the .ova file to the oVirt 4.4.1 host
3. Import the VM on oVirt 4.4.1 (oVirt Webadmin -> Import -> From source: OVA)
4. Start the VM
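
For reference, the export step can also be scripted with the oVirt Python SDK (python-ovirt-engine-sdk4 / python3-ovirt-engine-sdk4 appear in the package lists above). A minimal sketch, assuming placeholder engine URL, credentials, VM name, and host name:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the source (4.3.10) engine; the URL and credentials are placeholders.
connection = sdk.Connection(
    url='https://engine-dc01.example.local/ovirt-engine/api',
    username='admin@internal',
    password='***',
    insecure=True,
)
system_service = connection.system_service()

# Look up the VM to export and a host that will write the OVA file.
vm = system_service.vms_service().list(search='name=prd-vm-02')[0]
host = system_service.hosts_service().list(search='name=ovirt-host-01')[0]

# Ask the engine to export the VM as an OVA into a directory on that host.
system_service.vms_service().vm_service(vm.id).export_to_path_on_host(
    host=types.Host(id=host.id),
    directory='/tmp',
    filename='prd-vm-02.ova',
)
connection.close()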

Actual results:
Error: internal error: Bus 0 must be PCI for integrated PIIX3 USB or IDE controllers.


Expected results:
The VM should start without errors.


Additional info:
oVirt Engine 4.4.1 logs from /var/log/ovirt-engine/engine.log:

2020-07-13 09:01:50,873Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] (default task-31) [c43767d2-9cb5-4784-8a50-03358c6405f6] Lock Acquired to object 'EngineLock:{exclusiveLocks='[540797ce-31e2-4f90-8bff-0faa00e29dc6=VM]', sharedLocks=''}'
2020-07-13 09:01:50,879Z INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-31) [c43767d2-9cb5-4784-8a50-03358c6405f6] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6'}), log id: 7d3ef792
2020-07-13 09:01:50,879Z INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-31) [c43767d2-9cb5-4784-8a50-03358c6405f6] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 7d3ef792
2020-07-13 09:01:50,920Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-25263) [c43767d2-9cb5-4784-8a50-03358c6405f6] Running command: RunVmCommand internal: false. Entities affected :  ID: 540797ce-31e2-4f90-8bff-0faa00e29dc6 Type: VMAction group RUN_VM with role type USER
2020-07-13 09:01:50,924Z INFO  [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-25263) [c43767d2-9cb5-4784-8a50-03358c6405f6] Emulated machine 'pc-q35-rhel8.1.0' which is different than that of the cluster is set for 'prd-vm-02'(540797ce-31e2-4f90-8bff-0faa00e29dc6)
2020-07-13 09:01:50,953Z INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25263) [c43767d2-9cb5-4784-8a50-03358c6405f6] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@2ebd8999'}), log id: 264b5ea
2020-07-13 09:01:50,957Z INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25263) [c43767d2-9cb5-4784-8a50-03358c6405f6] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 264b5ea
2020-07-13 09:01:50,960Z INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25263) [c43767d2-9cb5-4784-8a50-03358c6405f6] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='42a9790b-46d1-4f9d-99c2-324ac788aa0a', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', vm='VM [prd-vm-02]'}), log id: 5989a86d
2020-07-13 09:01:50,961Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25263) [c43767d2-9cb5-4784-8a50-03358c6405f6] START, CreateBrokerVDSCommand(HostName = ovirt-host-02.live.example.local, CreateVDSCommandParameters:{hostId='42a9790b-46d1-4f9d-99c2-324ac788aa0a', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', vm='VM [prd-vm-02]'}), log id: 2a1a61b1
2020-07-13 09:01:50,974Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25263) [c43767d2-9cb5-4784-8a50-03358c6405f6] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>prd-vm-02</name>
  <uuid>540797ce-31e2-4f90-8bff-0faa00e29dc6</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <iothreads>1</iothreads>
  <maxMemory slots="16">16777216</maxMemory>
  <vcpu current="2">32</vcpu>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">oVirt</entry>
      <entry name="product">OS-NAME:</entry>
      <entry name="version">OS-VERSION:</entry>
      <entry name="family">oVirt</entry>
      <entry name="serial">HOST-SERIAL:</entry>
      <entry name="uuid">540797ce-31e2-4f90-8bff-0faa00e29dc6</entry>
    </system>
  </sysinfo>
  <clock offset="variable" adjustment="0">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
    <timer name="hpet" present="no"/>
  </clock>
  <features>
    <acpi/>
  </features>
  <cpu match="exact">
    <model>Skylake-Server</model>
    <feature name="hle" policy="disable"/>
    <feature name="rtm" policy="disable"/>
    <topology cores="2" threads="1" sockets="16"/>
    <numa>
      <cell id="0" cpus="0-31" memory="2097152"/>
    </numa>
  </cpu>
  <cputune/>
  <qemu:capabilities>
    <qemu:add capability="blockdev"/>
    <qemu:add capability="incremental-backup"/>
  </qemu:capabilities>
  <devices>
    <input type="tablet" bus="usb"/>
    <channel type="unix">
      <target type="virtio" name="ovirt-guest-agent.0"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/540797ce-31e2-4f90-8bff-0faa00e29dc6.ovirt-guest-agent.0"/>
    </channel>
    <channel type="unix">
      <target type="virtio" name="org.qemu.guest_agent.0"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/540797ce-31e2-4f90-8bff-0faa00e29dc6.org.qemu.guest_agent.0"/>
    </channel>
    <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us">
      <listen type="network" network="vdsm-ovirtmgmt"/>
    </graphics>
    <graphics type="spice" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" tlsPort="-1">
      <channel name="main" mode="secure"/>
      <channel name="inputs" mode="secure"/>
      <channel name="cursor" mode="secure"/>
      <channel name="playback" mode="secure"/>
      <channel name="record" mode="secure"/>
      <channel name="display" mode="secure"/>
      <channel name="smartcard" mode="secure"/>
      <channel name="usbredir" mode="secure"/>
      <listen type="network" network="vdsm-ovirtmgmt"/>
    </graphics>
    <controller type="usb" model="qemu-xhci" index="0" ports="8">
      <alias name="ua-3378061f-712a-4b46-9096-8cdae419da1f"/>
    </controller>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
      <alias name="ua-3b51c81c-ef50-4087-a8e6-6e4def0e345b"/>
    </rng>
    <console type="unix">
      <source path="/var/run/ovirt-vmconsole-console/540797ce-31e2-4f90-8bff-0faa00e29dc6.sock" mode="bind"/>
      <target type="serial" port="0"/>
      <alias name="ua-87e6c526-d997-47f2-a655-9210a0b3c74b"/>
    </console>
    <controller type="scsi" model="virtio-scsi" index="0">
      <alias name="ua-8d1c3b9a-9944-4ea1-8830-a2bd91b33f14"/>
    </controller>
    <memballoon model="virtio">
      <stats period="5"/>
      <alias name="ua-a29c26f0-ed64-47be-bf3b-2afd9b22ed37"/>
    </memballoon>
    <controller type="ide">
      <address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci"/>
    </controller>
    <video>
      <model type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"/>
      <alias name="ua-b8fbd865-10f5-457a-b6d6-e78b891793c9"/>
    </video>
    <controller type="virtio-serial" index="0" ports="16">
      <alias name="ua-ca599026-5c7d-46a8-8004-503c235bb1ae"/>
    </controller>
    <serial type="unix">
      <source path="/var/run/ovirt-vmconsole-console/540797ce-31e2-4f90-8bff-0faa00e29dc6.sock" mode="bind"/>
      <target port="0"/>
    </serial>
    <channel type="spicevmc">
      <target type="virtio" name="com.redhat.spice.0"/>
    </channel>
    <controller type="pci" model="pcie-root"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <disk type="file" device="cdrom" snapshot="no">
      <driver name="qemu" type="raw" error_policy="report"/>
      <source file="PAYLOAD:" startupPolicy="optional">
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <target dev="sdb" bus="sata"/>
      <readonly/>
      <alias name="ua-feb7aad8-5fb6-4040-ab75-fd419c9be7bf"/>
    </disk>
    <disk type="file" device="cdrom" snapshot="no">
      <driver name="qemu" type="raw" error_policy="report"/>
      <source file="" startupPolicy="optional">
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <target dev="sdc" bus="sata"/>
      <readonly/>
      <alias name="ua-93bfcef9-5f9a-4cec-8a11-3caba9db27a0"/>
    </disk>
    <disk snapshot="no" type="block" device="disk">
      <target dev="sda" bus="scsi"/>
      <source dev="/rhev/data-center/mnt/blockSD/c33db23f-8d93-4988-b73c-aecfbab8a2ce/images/e5f75962-1518-487a-a343-918b08e707d7/bd6de030-f4f8-4a3c-a1c1-76068aff5087">
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <driver name="qemu" io="native" type="qcow2" error_policy="stop" cache="none"/>
      <alias name="ua-e5f75962-1518-487a-a343-918b08e707d7"/>
      <address bus="0" controller="0" unit="0" type="drive" target="0"/>
      <boot order="1"/>
      <serial>e5f75962-1518-487a-a343-918b08e707d7</serial>
    </disk>
  </devices>
  <pm>
    <suspend-to-disk enabled="no"/>
    <suspend-to-mem enabled="no"/>
  </pm>
  <os>
    <type arch="x86_64" machine="pc-q35-rhel8.1.0">hvm</type>
    <smbios mode="sysinfo"/>
    <bios useserial="yes"/>
  </os>
  <metadata>
    <ovirt-tune:qos/>
    <ovirt-vm:vm>
      <ovirt-vm:minGuaranteedMemoryMb type="int">2048</ovirt-vm:minGuaranteedMemoryMb>
      <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion>
      <ovirt-vm:custom/>
      <ovirt-vm:device devtype="disk" name="sda">
        <ovirt-vm:poolID>9eaab007-b755-4563-93a0-5776944327af</ovirt-vm:poolID>
        <ovirt-vm:volumeID>bd6de030-f4f8-4a3c-a1c1-76068aff5087</ovirt-vm:volumeID>
        <ovirt-vm:imageID>e5f75962-1518-487a-a343-918b08e707d7</ovirt-vm:imageID>
        <ovirt-vm:domainID>c33db23f-8d93-4988-b73c-aecfbab8a2ce</ovirt-vm:domainID>
      </ovirt-vm:device>
      <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
      <ovirt-vm:device devtype="disk" name="sdb">
        <ovirt-vm:payload>
          <ovirt-vm:volId>config-2</ovirt-vm:volId>
          <ovirt-vm:file path="openstack/latest/network_data.json">ewogICJsaW5rcyIgOiBbIHsKICAgICJuYW1lIiA6ICJldGgwIiwKICAgICJpZCIgOiAiZXRoMCIsCiAgICAidHlwZSIgOiAidmlmIgogIH0sIHsKICAgICJuYW1lIiA6ICJldGgxIiwKICAgICJpZCIgOiAiZXRoMSIsCiAgICAidHlwZSIgOiAidmlmIgogIH0gXSwKICAic2VydmljZXMiIDogWyB7CiAgICAiYWRkcmVzcyIgOiAiMTAuNjAuNjQuMiIsCiAgICAidHlwZSIgOiAiZG5zIgogIH0sIHsKICAgICJhZGRyZXNzIiA6ICIxMC42MC42NC4zIiwKICAgICJ0eXBlIiA6ICJkbnMiCiAgfSBdLAogICJuZXR3b3JrcyIgOiBbIHsKICAgICJuZXRtYXNrIiA6ICIyNTUuMjU1LjI0OC4wIiwKICAgICJkbnNfc2VhcmNoIiA6IFsgImxpdmUucm90MDEua3dlYmJsLmNsb3VkIiwgImxpdmUuYW1zMDEua3dlYmJsLmNsb3VkIiBdLAogICAgImxpbmsiIDogImV0aDAiLAogICAgImlkIiA6ICJldGgwIiwKICAgICJpcF9hZGRyZXNzIiA6ICIxMC42MC42NC4zMiIsCiAgICAidHlwZSIgOiAiaXB2NCIsCiAgICAiZ2F0ZXdheSIgOiAiMTAuNjAuNzEuMjU0IiwKICAgICJkbnNfbmFtZXNlcnZlcnMiIDogWyAiMTAuNjAuNjQuMiIsICIxMC42MC42NC4zIiBdCiAgfSwgewogICAgIm5ldG1hc2siIDogIjI1NS4yNTUuMjQ4LjAiLAogICAgImRuc19zZWFyY2giIDogWyAibGl2ZS5yb3QwMS5rd2ViYmwuY2xvdWQiLCAibGl2ZS5hbXMwMS5rd2ViYmwuY2xvdWQiIF0sCiAgICAibGluayIgOiAiZXRoMSIsCiAgICAiaWQiIDogImV0aDEiLAogICAgImlwX2FkZHJlc3MiIDogIjEwLjYwLjcyLjMyIiwKICAgICJ0eXBlIiA6ICJpcHY0IiwKICAgICJkbnNfbmFtZXNlcnZlcnMiIDogWyAiMTAuNjAuNjQuMiIsICIxMC42MC42NC4zIiBdCiAgfSBdCn0=</ovirt-vm:file>
          <ovirt-vm:file path="openstack/latest/meta_data.json">ewogICJwdWJsaWNfa2V5cyIgOiBbICJzc2gtcnNhIEFBQUFCM056YUMxeWMyRUFBQUFEQVFBQkFBQUJnUURCSk5GM2p3TkJVNUZhc0RPYVdFbjlKOW9leTJrZkNQR1hSWWxCL2pjUUNXazlZRzZKTk1PUjFPQjlQVjFXam5LU2dUNGpqcTl4ZWh4OGkvNStaZlVaK2o4OUlIQUVFY3dUQzhxOEZqTHhXMkI3SFpqcnMxRlRralJKNlFUTHNZOUpiUVRBYWVmSmRINVVLV0xzcURjeU5vMEVoMGViWit6elRBZU15VGN3QjdxdEwzbUVsL3NYSVo1OEdCUlZaVzhBV1RYTStqS1hCSEhlVHA4MTRYVlRKSTFHTnIySXJZYVY0V2Zvcnc5a0cvRkl6bVBnaUFBME1BenlrREw3RFRhU2JZOGw1NWJWcDVndDdHR3BEckFzN2c5Q052elN2VVMwdGs1NHdxRkltd2ZOTHY1WFd6RXl5d2trUG9Va3FDSnZlVjRpODcwSmhhRElWeUU1WHZhY2FmYlRJbVpVbzNvNElRbEh6Yys4cUU0VGJKeXcwY0tCOUwzZ3dqTHFQL1A5TEJwUTRjSFFQUDlxQ205OWNPaDNWdXkyVVVVOUhZd056QlNHK2hQTTF6V1lJRElpV3hCTlRnQTZqZHBkOUJzeDRJYkpiS2tEUnMvQ1g0Z3VmQWFncXhTU2JlM1BaWVdFQWVuTG9vN0JFbDFGc3ZWVkJHZEdvT3dvY0V6cnRQMD0ga3dhZG1pbkB6YWJiaXgub2JzZXJ2ZS5rd2ViYmwuY2xvdWQiIF0sCiAgImF2YWlsYWJpbGl0eV96b25lIiA6ICJub3ZhIiwKICAiaG9zdG5hbWUiIDogInByZC1pbmZyYS1jb25zdWwtMDIubGl2ZS5yb3QwMS5rd2ViYmwuY2xvdWQiLAogICJsYXVuY2hfaW5kZXgiIDogIjAiLAogICJtZXRhIiA6IHsKICAgICJyb2xlIiA6ICJzZXJ2ZXIiLAogICAgImRzbW9kZSIgOiAibG9jYWwiLAogICAgImVzc2VudGlhbCIgOiAiZmFsc2UiCiAgfSwKICAibmFtZSIgOiAicHJkLWluZnJhLWNvbnN1bC0wMi5saXZlLnJvdDAxLmt3ZWJibC5jbG91ZCIsCiAgInV1aWQiIDogIjVkNDg2MjgwLTAxODEtNDYxYS04YmI1LTg5MWJiNDJkNjVlZCIKfQ==</ovirt-vm:file>
          <ovirt-vm:file path="openstack/latest/user_data">I2Nsb3VkLWNvbmZpZwpvdXRwdXQ6CiAgYWxsOiAnPj4gL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJwpkaXNhYmxlX3Jvb3Q6IDAKcnVuY21kOgotICdzZWQgLWkgJycvXmRhdGFzb3VyY2VfbGlzdDogL2QnJyAvZXRjL2Nsb3VkL2Nsb3VkLmNmZzsgZWNobyAnJ2RhdGFzb3VyY2VfbGlzdDoKICBbIk5vQ2xvdWQiLCAiQ29uZmlnRHJpdmUiXScnID4+IC9ldGMvY2xvdWQvY2xvdWQuY2ZnJwp0aW1lem9uZTogRXVyb3BlL0J1ZGFwZXN0CnNzaF9kZWxldGVrZXlzOiAnZmFsc2UnCnNzaF9wd2F1dGg6IHRydWUKY2hwYXNzd2Q6CiAgZXhwaXJlOiBmYWxzZQpydW5jbWQ6ICAgLSB0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQgICAtIHJtIC9ldGMvTmV0d29ya01hbmFnZXIvY29uZi5kLzk5LWNsb3VkLWluaXQuY29uZiA=</ovirt-vm:file>
        </ovirt-vm:payload>
      </ovirt-vm:device>
      <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
    </ovirt-vm:vm>
  </metadata>
</domain>

2020-07-13 09:01:50,982Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25263) [c43767d2-9cb5-4784-8a50-03358c6405f6] FINISH, CreateBrokerVDSCommand, return: , log id: 2a1a61b1
2020-07-13 09:01:50,987Z INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25263) [c43767d2-9cb5-4784-8a50-03358c6405f6] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: 5989a86d
2020-07-13 09:01:50,988Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-25263) [c43767d2-9cb5-4784-8a50-03358c6405f6] Lock freed to object 'EngineLock:{exclusiveLocks='[540797ce-31e2-4f90-8bff-0faa00e29dc6=VM]', sharedLocks=''}'
2020-07-13 09:01:50,993Z INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-25263) [c43767d2-9cb5-4784-8a50-03358c6405f6] EVENT_ID: USER_STARTED_VM(153), VM prd-vm-02 was started by admin@internal (Host: ovirt-host-02.live.example.local).
2020-07-13 09:01:52,188Z INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '540797ce-31e2-4f90-8bff-0faa00e29dc6' was reported as Down on VDS '42a9790b-46d1-4f9d-99c2-324ac788aa0a'(ovirt-host-02.live.example.local)
2020-07-13 09:01:52,189Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = ovirt-host-02.live.example.local, DestroyVmVDSCommandParameters:{hostId='42a9790b-46d1-4f9d-99c2-324ac788aa0a', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 24574fcf
2020-07-13 09:01:52,892Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, return: , log id: 24574fcf
2020-07-13 09:01:52,892Z INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '540797ce-31e2-4f90-8bff-0faa00e29dc6'(prd-vm-02) moved from 'WaitForLaunch' --> 'Down'
2020-07-13 09:01:52,904Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-11) [] EVENT_ID: VM_DOWN_ERROR(119), VM prd-vm-02 is down with error. Exit message: internal error: Bus 0 must be PCI for integrated PIIX3 USB or IDE controllers.
2020-07-13 09:01:52,905Z INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] add VM '540797ce-31e2-4f90-8bff-0faa00e29dc6'(prd-vm-02) to rerun treatment
2020-07-13 09:01:52,914Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-11) [] Rerun VM '540797ce-31e2-4f90-8bff-0faa00e29dc6'. Called from VDS 'ovirt-host-02.live.example.local'
2020-07-13 09:01:52,932Z WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-25264) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM prd-vm-02 on Host ovirt-host-02.live.example.local.
2020-07-13 09:01:52,943Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[540797ce-31e2-4f90-8bff-0faa00e29dc6=VM]', sharedLocks=''}'
2020-07-13 09:01:52,949Z INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6'}), log id: 322c93f7
2020-07-13 09:01:52,949Z INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 322c93f7
2020-07-13 09:01:53,001Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] Running command: RunVmCommand internal: false. Entities affected :  ID: 540797ce-31e2-4f90-8bff-0faa00e29dc6 Type: VMAction group RUN_VM with role type USER
2020-07-13 09:01:53,006Z INFO  [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-25264) [] Emulated machine 'pc-q35-rhel8.1.0' which is different than that of the cluster is set for 'prd-vm-02'(540797ce-31e2-4f90-8bff-0faa00e29dc6)
2020-07-13 09:01:53,079Z INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@46724372'}), log id: 32073b3a
2020-07-13 09:01:53,083Z INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 32073b3a
2020-07-13 09:01:53,085Z INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='2d6a80f5-0dfd-46bb-bd9a-7487498d0d9e', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', vm='VM [prd-vm-02]'}), log id: 1825579e
2020-07-13 09:01:53,086Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] START, CreateBrokerVDSCommand(HostName = ovirt-host-03.live.example.local, CreateVDSCommandParameters:{hostId='2d6a80f5-0dfd-46bb-bd9a-7487498d0d9e', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', vm='VM [prd-vm-02]'}), log id: 300003ce
2020-07-13 09:01:53,104Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>prd-vm-02</name>
  <uuid>540797ce-31e2-4f90-8bff-0faa00e29dc6</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <iothreads>1</iothreads>
  <maxMemory slots="16">16777216</maxMemory>
  <vcpu current="2">32</vcpu>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">oVirt</entry>
      <entry name="product">OS-NAME:</entry>
      <entry name="version">OS-VERSION:</entry>
      <entry name="family">oVirt</entry>
      <entry name="serial">HOST-SERIAL:</entry>
      <entry name="uuid">540797ce-31e2-4f90-8bff-0faa00e29dc6</entry>
    </system>
  </sysinfo>
  <clock offset="variable" adjustment="0">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
    <timer name="hpet" present="no"/>
  </clock>
  <features>
    <acpi/>
  </features>
  <cpu match="exact">
    <model>Skylake-Server</model>
    <feature name="hle" policy="disable"/>
    <feature name="rtm" policy="disable"/>
    <topology cores="2" threads="1" sockets="16"/>
    <numa>
      <cell id="0" cpus="0-31" memory="2097152"/>
    </numa>
  </cpu>
  <cputune/>
  <qemu:capabilities>
    <qemu:add capability="blockdev"/>
    <qemu:add capability="incremental-backup"/>
  </qemu:capabilities>
  <devices>
    <input type="tablet" bus="usb"/>
    <channel type="unix">
      <target type="virtio" name="ovirt-guest-agent.0"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/540797ce-31e2-4f90-8bff-0faa00e29dc6.ovirt-guest-agent.0"/>
    </channel>
    <channel type="unix">
      <target type="virtio" name="org.qemu.guest_agent.0"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/540797ce-31e2-4f90-8bff-0faa00e29dc6.org.qemu.guest_agent.0"/>
    </channel>
    <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us">
      <listen type="network" network="vdsm-ovirtmgmt"/>
    </graphics>
    <graphics type="spice" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" tlsPort="-1">
      <channel name="main" mode="secure"/>
      <channel name="inputs" mode="secure"/>
      <channel name="cursor" mode="secure"/>
      <channel name="playback" mode="secure"/>
      <channel name="record" mode="secure"/>
      <channel name="display" mode="secure"/>
      <channel name="smartcard" mode="secure"/>
      <channel name="usbredir" mode="secure"/>
      <listen type="network" network="vdsm-ovirtmgmt"/>
    </graphics>
    <controller type="usb" model="qemu-xhci" index="0" ports="8">
      <alias name="ua-3378061f-712a-4b46-9096-8cdae419da1f"/>
    </controller>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
      <alias name="ua-3b51c81c-ef50-4087-a8e6-6e4def0e345b"/>
    </rng>
    <console type="unix">
      <source path="/var/run/ovirt-vmconsole-console/540797ce-31e2-4f90-8bff-0faa00e29dc6.sock" mode="bind"/>
      <target type="serial" port="0"/>
      <alias name="ua-87e6c526-d997-47f2-a655-9210a0b3c74b"/>
    </console>
    <controller type="scsi" model="virtio-scsi" index="0">
      <alias name="ua-8d1c3b9a-9944-4ea1-8830-a2bd91b33f14"/>
    </controller>
    <memballoon model="virtio">
      <stats period="5"/>
      <alias name="ua-a29c26f0-ed64-47be-bf3b-2afd9b22ed37"/>
    </memballoon>
    <controller type="ide">
      <address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci"/>
    </controller>
    <video>
      <model type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"/>
      <alias name="ua-b8fbd865-10f5-457a-b6d6-e78b891793c9"/>
    </video>
    <controller type="virtio-serial" index="0" ports="16">
      <alias name="ua-ca599026-5c7d-46a8-8004-503c235bb1ae"/>
    </controller>
    <serial type="unix">
      <source path="/var/run/ovirt-vmconsole-console/540797ce-31e2-4f90-8bff-0faa00e29dc6.sock" mode="bind"/>
      <target port="0"/>
    </serial>
    <channel type="spicevmc">
      <target type="virtio" name="com.redhat.spice.0"/>
    </channel>
    <controller type="pci" model="pcie-root"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <disk type="file" device="cdrom" snapshot="no">
      <driver name="qemu" type="raw" error_policy="report"/>
      <source file="PAYLOAD:" startupPolicy="optional">
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <target dev="sdb" bus="sata"/>
      <readonly/>
      <alias name="ua-f43f17c1-5a75-483e-a0fc-569e1e0a849d"/>
    </disk>
    <disk type="file" device="cdrom" snapshot="no">
      <driver name="qemu" type="raw" error_policy="report"/>
      <source file="" startupPolicy="optional">
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <target dev="sdc" bus="sata"/>
      <readonly/>
      <alias name="ua-93bfcef9-5f9a-4cec-8a11-3caba9db27a0"/>
    </disk>
    <disk snapshot="no" type="block" device="disk">
      <target dev="sda" bus="scsi"/>
      <source dev="/rhev/data-center/mnt/blockSD/c33db23f-8d93-4988-b73c-aecfbab8a2ce/images/e5f75962-1518-487a-a343-918b08e707d7/bd6de030-f4f8-4a3c-a1c1-76068aff5087">
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <driver name="qemu" io="native" type="qcow2" error_policy="stop" cache="none"/>
      <alias name="ua-e5f75962-1518-487a-a343-918b08e707d7"/>
      <address bus="0" controller="0" unit="0" type="drive" target="0"/>
      <boot order="1"/>
      <serial>e5f75962-1518-487a-a343-918b08e707d7</serial>
    </disk>
  </devices>
  <pm>
    <suspend-to-disk enabled="no"/>
    <suspend-to-mem enabled="no"/>
  </pm>
  <os>
    <type arch="x86_64" machine="pc-q35-rhel8.1.0">hvm</type>
    <smbios mode="sysinfo"/>
    <bios useserial="yes"/>
  </os>
  <metadata>
    <ovirt-tune:qos/>
    <ovirt-vm:vm>
      <ovirt-vm:minGuaranteedMemoryMb type="int">2048</ovirt-vm:minGuaranteedMemoryMb>
      <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion>
      <ovirt-vm:custom/>
      <ovirt-vm:device devtype="disk" name="sda">
        <ovirt-vm:poolID>9eaab007-b755-4563-93a0-5776944327af</ovirt-vm:poolID>
        <ovirt-vm:volumeID>bd6de030-f4f8-4a3c-a1c1-76068aff5087</ovirt-vm:volumeID>
        <ovirt-vm:imageID>e5f75962-1518-487a-a343-918b08e707d7</ovirt-vm:imageID>
        <ovirt-vm:domainID>c33db23f-8d93-4988-b73c-aecfbab8a2ce</ovirt-vm:domainID>
      </ovirt-vm:device>
      <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
      <ovirt-vm:device devtype="disk" name="sdb">
        <ovirt-vm:payload>
          <ovirt-vm:volId>config-2</ovirt-vm:volId>
          <ovirt-vm:file path="openstack/latest/network_data.json">ewogICJsaW5rcyIgOiBbIHsKICAgICJuYW1lIiA6ICJldGgwIiwKICAgICJpZCIgOiAiZXRoMCIsCiAgICAidHlwZSIgOiAidmlmIgogIH0sIHsKICAgICJuYW1lIiA6ICJldGgxIiwKICAgICJpZCIgOiAiZXRoMSIsCiAgICAidHlwZSIgOiAidmlmIgogIH0gXSwKICAic2VydmljZXMiIDogWyB7CiAgICAiYWRkcmVzcyIgOiAiMTAuNjAuNjQuMiIsCiAgICAidHlwZSIgOiAiZG5zIgogIH0sIHsKICAgICJhZGRyZXNzIiA6ICIxMC42MC42NC4zIiwKICAgICJ0eXBlIiA6ICJkbnMiCiAgfSBdLAogICJuZXR3b3JrcyIgOiBbIHsKICAgICJuZXRtYXNrIiA6ICIyNTUuMjU1LjI0OC4wIiwKICAgICJkbnNfc2VhcmNoIiA6IFsgImxpdmUucm90MDEua3dlYmJsLmNsb3VkIiwgImxpdmUuYW1zMDEua3dlYmJsLmNsb3VkIiBdLAogICAgImxpbmsiIDogImV0aDAiLAogICAgImlkIiA6ICJldGgwIiwKICAgICJpcF9hZGRyZXNzIiA6ICIxMC42MC42NC4zMiIsCiAgICAidHlwZSIgOiAiaXB2NCIsCiAgICAiZ2F0ZXdheSIgOiAiMTAuNjAuNzEuMjU0IiwKICAgICJkbnNfbmFtZXNlcnZlcnMiIDogWyAiMTAuNjAuNjQuMiIsICIxMC42MC42NC4zIiBdCiAgfSwgewogICAgIm5ldG1hc2siIDogIjI1NS4yNTUuMjQ4LjAiLAogICAgImRuc19zZWFyY2giIDogWyAibGl2ZS5yb3QwMS5rd2ViYmwuY2xvdWQiLCAibGl2ZS5hbXMwMS5rd2ViYmwuY2xvdWQiIF0sCiAgICAibGluayIgOiAiZXRoMSIsCiAgICAiaWQiIDogImV0aDEiLAogICAgImlwX2FkZHJlc3MiIDogIjEwLjYwLjcyLjMyIiwKICAgICJ0eXBlIiA6ICJpcHY0IiwKICAgICJkbnNfbmFtZXNlcnZlcnMiIDogWyAiMTAuNjAuNjQuMiIsICIxMC42MC42NC4zIiBdCiAgfSBdCn0=</ovirt-vm:file>
          <ovirt-vm:file path="openstack/latest/meta_data.json">ewogICJwdWJsaWNfa2V5cyIgOiBbICJzc2gtcnNhIEFBQUFCM056YUMxeWMyRUFBQUFEQVFBQkFBQUJnUURCSk5GM2p3TkJVNUZhc0RPYVdFbjlKOW9leTJrZkNQR1hSWWxCL2pjUUNXazlZRzZKTk1PUjFPQjlQVjFXam5LU2dUNGpqcTl4ZWh4OGkvNStaZlVaK2o4OUlIQUVFY3dUQzhxOEZqTHhXMkI3SFpqcnMxRlRralJKNlFUTHNZOUpiUVRBYWVmSmRINVVLV0xzcURjeU5vMEVoMGViWit6elRBZU15VGN3QjdxdEwzbUVsL3NYSVo1OEdCUlZaVzhBV1RYTStqS1hCSEhlVHA4MTRYVlRKSTFHTnIySXJZYVY0V2Zvcnc5a0cvRkl6bVBnaUFBME1BenlrREw3RFRhU2JZOGw1NWJWcDVndDdHR3BEckFzN2c5Q052elN2VVMwdGs1NHdxRkltd2ZOTHY1WFd6RXl5d2trUG9Va3FDSnZlVjRpODcwSmhhRElWeUU1WHZhY2FmYlRJbVpVbzNvNElRbEh6Yys4cUU0VGJKeXcwY0tCOUwzZ3dqTHFQL1A5TEJwUTRjSFFQUDlxQ205OWNPaDNWdXkyVVVVOUhZd056QlNHK2hQTTF6V1lJRElpV3hCTlRnQTZqZHBkOUJzeDRJYkpiS2tEUnMvQ1g0Z3VmQWFncXhTU2JlM1BaWVdFQWVuTG9vN0JFbDFGc3ZWVkJHZEdvT3dvY0V6cnRQMD0ga3dhZG1pbkB6YWJiaXgub2JzZXJ2ZS5rd2ViYmwuY2xvdWQiIF0sCiAgImF2YWlsYWJpbGl0eV96b25lIiA6ICJub3ZhIiwKICAiaG9zdG5hbWUiIDogInByZC1pbmZyYS1jb25zdWwtMDIubGl2ZS5yb3QwMS5rd2ViYmwuY2xvdWQiLAogICJsYXVuY2hfaW5kZXgiIDogIjAiLAogICJtZXRhIiA6IHsKICAgICJyb2xlIiA6ICJzZXJ2ZXIiLAogICAgImRzbW9kZSIgOiAibG9jYWwiLAogICAgImVzc2VudGlhbCIgOiAiZmFsc2UiCiAgfSwKICAibmFtZSIgOiAicHJkLWluZnJhLWNvbnN1bC0wMi5saXZlLnJvdDAxLmt3ZWJibC5jbG91ZCIsCiAgInV1aWQiIDogIjAxNWU0YzMxLTQ4YTgtNDVhNC05MzgwLWQxOTAyODFiNGM4OSIKfQ==</ovirt-vm:file>
          <ovirt-vm:file path="openstack/latest/user_data">I2Nsb3VkLWNvbmZpZwpvdXRwdXQ6CiAgYWxsOiAnPj4gL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJwpkaXNhYmxlX3Jvb3Q6IDAKcnVuY21kOgotICdzZWQgLWkgJycvXmRhdGFzb3VyY2VfbGlzdDogL2QnJyAvZXRjL2Nsb3VkL2Nsb3VkLmNmZzsgZWNobyAnJ2RhdGFzb3VyY2VfbGlzdDoKICBbIk5vQ2xvdWQiLCAiQ29uZmlnRHJpdmUiXScnID4+IC9ldGMvY2xvdWQvY2xvdWQuY2ZnJwp0aW1lem9uZTogRXVyb3BlL0J1ZGFwZXN0CnNzaF9kZWxldGVrZXlzOiAnZmFsc2UnCnNzaF9wd2F1dGg6IHRydWUKY2hwYXNzd2Q6CiAgZXhwaXJlOiBmYWxzZQpydW5jbWQ6ICAgLSB0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQgICAtIHJtIC9ldGMvTmV0d29ya01hbmFnZXIvY29uZi5kLzk5LWNsb3VkLWluaXQuY29uZiA=</ovirt-vm:file>
        </ovirt-vm:payload>
      </ovirt-vm:device>
      <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
    </ovirt-vm:vm>
  </metadata>
</domain>

2020-07-13 09:01:53,112Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] FINISH, CreateBrokerVDSCommand, return: , log id: 300003ce
2020-07-13 09:01:53,117Z INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: 1825579e
2020-07-13 09:01:53,117Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-25264) [] Lock freed to object 'EngineLock:{exclusiveLocks='[540797ce-31e2-4f90-8bff-0faa00e29dc6=VM]', sharedLocks=''}'
2020-07-13 09:01:53,121Z INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-25264) [] EVENT_ID: USER_STARTED_VM(153), VM prd-vm-02 was started by admin@internal (Host: ovirt-host-03.live.example.local).
2020-07-13 09:01:53,763Z INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-3) [] VM '540797ce-31e2-4f90-8bff-0faa00e29dc6' was reported as Down on VDS '2d6a80f5-0dfd-46bb-bd9a-7487498d0d9e'(ovirt-host-03.live.example.local)
2020-07-13 09:01:53,764Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-3) [] START, DestroyVDSCommand(HostName = ovirt-host-03.live.example.local, DestroyVmVDSCommandParameters:{hostId='2d6a80f5-0dfd-46bb-bd9a-7487498d0d9e', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 29fff4d9
2020-07-13 09:01:54,187Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-3) [] FINISH, DestroyVDSCommand, return: , log id: 29fff4d9
2020-07-13 09:01:54,187Z INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-3) [] VM '540797ce-31e2-4f90-8bff-0faa00e29dc6'(prd-vm-02) moved from 'WaitForLaunch' --> 'Down'
2020-07-13 09:01:54,235Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-3) [] EVENT_ID: VM_DOWN_ERROR(119), VM prd-vm-02 is down with error. Exit message: internal error: Bus 0 must be PCI for integrated PIIX3 USB or IDE controllers.
2020-07-13 09:01:54,236Z INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-3) [] add VM '540797ce-31e2-4f90-8bff-0faa00e29dc6'(prd-vm-02) to rerun treatment
2020-07-13 09:01:54,245Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-3) [] Rerun VM '540797ce-31e2-4f90-8bff-0faa00e29dc6'. Called from VDS 'ovirt-host-03.live.example.local'
2020-07-13 09:01:54,260Z WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-25265) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM prd-vm-02 on Host ovirt-host-03.live.example.local.
2020-07-13 09:01:54,295Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[540797ce-31e2-4f90-8bff-0faa00e29dc6=VM]', sharedLocks=''}'
2020-07-13 09:01:54,301Z INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6'}), log id: 70b2862a
2020-07-13 09:01:54,301Z INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 70b2862a
2020-07-13 09:01:54,330Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] Running command: RunVmCommand internal: false. Entities affected :  ID: 540797ce-31e2-4f90-8bff-0faa00e29dc6 Type: VMAction group RUN_VM with role type USER
2020-07-13 09:01:54,335Z INFO  [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-25265) [] Emulated machine 'pc-q35-rhel8.1.0' which is different than that of the cluster is set for 'prd-ovirt-02'(540797ce-31e2-4f90-8bff-0faa00e29dc6)
2020-07-13 09:01:54,357Z INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@ce55a214'}), log id: 6f50f561
2020-07-13 09:01:54,361Z INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 6f50f561
2020-07-13 09:01:54,363Z INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='36f4b127-8571-4016-81fb-3b359552105f', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', vm='VM [prd-vm-02]'}), log id: ac6a86d
2020-07-13 09:01:54,364Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] START, CreateBrokerVDSCommand(HostName = ovirt-host-01.live.example.local, CreateVDSCommandParameters:{hostId='36f4b127-8571-4016-81fb-3b359552105f', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', vm='VM [prd-vm-02]'}), log id: 5a0d584b
2020-07-13 09:01:54,377Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>prd-vm-02</name>
  <uuid>540797ce-31e2-4f90-8bff-0faa00e29dc6</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <iothreads>1</iothreads>
  <maxMemory slots="16">16777216</maxMemory>
  <vcpu current="2">32</vcpu>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">oVirt</entry>
      <entry name="product">OS-NAME:</entry>
      <entry name="version">OS-VERSION:</entry>
      <entry name="family">oVirt</entry>
      <entry name="serial">HOST-SERIAL:</entry>
      <entry name="uuid">540797ce-31e2-4f90-8bff-0faa00e29dc6</entry>
    </system>
  </sysinfo>
  <clock offset="variable" adjustment="0">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
    <timer name="hpet" present="no"/>
  </clock>
  <features>
    <acpi/>
  </features>
  <cpu match="exact">
    <model>Skylake-Server</model>
    <feature name="hle" policy="disable"/>
    <feature name="rtm" policy="disable"/>
    <topology cores="2" threads="1" sockets="16"/>
    <numa>
      <cell id="0" cpus="0-31" memory="2097152"/>
    </numa>
  </cpu>
  <cputune/>
  <qemu:capabilities>
    <qemu:add capability="blockdev"/>
    <qemu:add capability="incremental-backup"/>
  </qemu:capabilities>
  <devices>
    <input type="tablet" bus="usb"/>
    <channel type="unix">
      <target type="virtio" name="ovirt-guest-agent.0"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/540797ce-31e2-4f90-8bff-0faa00e29dc6.ovirt-guest-agent.0"/>
    </channel>
    <channel type="unix">
      <target type="virtio" name="org.qemu.guest_agent.0"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/540797ce-31e2-4f90-8bff-0faa00e29dc6.org.qemu.guest_agent.0"/>
    </channel>
    <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us">
      <listen type="network" network="vdsm-ovirtmgmt"/>
    </graphics>
    <graphics type="spice" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" tlsPort="-1">
      <channel name="main" mode="secure"/>
      <channel name="inputs" mode="secure"/>
      <channel name="cursor" mode="secure"/>
      <channel name="playback" mode="secure"/>
      <channel name="record" mode="secure"/>
      <channel name="display" mode="secure"/>
      <channel name="smartcard" mode="secure"/>
      <channel name="usbredir" mode="secure"/>
      <listen type="network" network="vdsm-ovirtmgmt"/>
    </graphics>
    <controller type="usb" model="qemu-xhci" index="0" ports="8">
      <alias name="ua-3378061f-712a-4b46-9096-8cdae419da1f"/>
    </controller>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
      <alias name="ua-3b51c81c-ef50-4087-a8e6-6e4def0e345b"/>
    </rng>
    <console type="unix">
      <source path="/var/run/ovirt-vmconsole-console/540797ce-31e2-4f90-8bff-0faa00e29dc6.sock" mode="bind"/>
      <target type="serial" port="0"/>
      <alias name="ua-87e6c526-d997-47f2-a655-9210a0b3c74b"/>
    </console>
    <controller type="scsi" model="virtio-scsi" index="0">
      <alias name="ua-8d1c3b9a-9944-4ea1-8830-a2bd91b33f14"/>
    </controller>
    <memballoon model="virtio">
      <stats period="5"/>
      <alias name="ua-a29c26f0-ed64-47be-bf3b-2afd9b22ed37"/>
    </memballoon>
    <controller type="ide">
      <address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci"/>
    </controller>
    <video>
      <model type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"/>
      <alias name="ua-b8fbd865-10f5-457a-b6d6-e78b891793c9"/>
    </video>
    <controller type="virtio-serial" index="0" ports="16">
      <alias name="ua-ca599026-5c7d-46a8-8004-503c235bb1ae"/>
    </controller>
    <serial type="unix">
      <source path="/var/run/ovirt-vmconsole-console/540797ce-31e2-4f90-8bff-0faa00e29dc6.sock" mode="bind"/>
      <target port="0"/>
    </serial>
    <channel type="spicevmc">
      <target type="virtio" name="com.redhat.spice.0"/>
    </channel>
    <controller type="pci" model="pcie-root"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <controller type="pci" model="pcie-root-port"/>
    <disk type="file" device="cdrom" snapshot="no">
      <driver name="qemu" type="raw" error_policy="report"/>
      <source file="PAYLOAD:" startupPolicy="optional">
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <target dev="sdb" bus="sata"/>
      <readonly/>
      <alias name="ua-8715257b-28ce-4398-84ed-f831504d4e2f"/>
    </disk>
    <disk type="file" device="cdrom" snapshot="no">
      <driver name="qemu" type="raw" error_policy="report"/>
      <source file="" startupPolicy="optional">
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <target dev="sdc" bus="sata"/>
      <readonly/>
      <alias name="ua-93bfcef9-5f9a-4cec-8a11-3caba9db27a0"/>
    </disk>
    <disk snapshot="no" type="block" device="disk">
      <target dev="sda" bus="scsi"/>
      <source dev="/rhev/data-center/mnt/blockSD/c33db23f-8d93-4988-b73c-aecfbab8a2ce/images/e5f75962-1518-487a-a343-918b08e707d7/bd6de030-f4f8-4a3c-a1c1-76068aff5087">
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <driver name="qemu" io="native" type="qcow2" error_policy="stop" cache="none"/>
      <alias name="ua-e5f75962-1518-487a-a343-918b08e707d7"/>
      <address bus="0" controller="0" unit="0" type="drive" target="0"/>
      <boot order="1"/>
      <serial>e5f75962-1518-487a-a343-918b08e707d7</serial>
    </disk>
  </devices>
  <pm>
    <suspend-to-disk enabled="no"/>
    <suspend-to-mem enabled="no"/>
  </pm>
  <os>
    <type arch="x86_64" machine="pc-q35-rhel8.1.0">hvm</type>
    <smbios mode="sysinfo"/>
    <bios useserial="yes"/>
  </os>
  <metadata>
    <ovirt-tune:qos/>
    <ovirt-vm:vm>
      <ovirt-vm:minGuaranteedMemoryMb type="int">2048</ovirt-vm:minGuaranteedMemoryMb>
      <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion>
      <ovirt-vm:custom/>
      <ovirt-vm:device devtype="disk" name="sda">
        <ovirt-vm:poolID>9eaab007-b755-4563-93a0-5776944327af</ovirt-vm:poolID>
        <ovirt-vm:volumeID>bd6de030-f4f8-4a3c-a1c1-76068aff5087</ovirt-vm:volumeID>
        <ovirt-vm:imageID>e5f75962-1518-487a-a343-918b08e707d7</ovirt-vm:imageID>
        <ovirt-vm:domainID>c33db23f-8d93-4988-b73c-aecfbab8a2ce</ovirt-vm:domainID>
      </ovirt-vm:device>
      <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
      <ovirt-vm:device devtype="disk" name="sdb">
        <ovirt-vm:payload>
          <ovirt-vm:volId>config-2</ovirt-vm:volId>
          <ovirt-vm:file path="openstack/latest/network_data.json">ewogICJsaW5rcyIgOiBbIHsKICAgICJuYW1lIiA6ICJldGgwIiwKICAgICJpZCIgOiAiZXRoMCIsCiAgICAidHlwZSIgOiAidmlmIgogIH0sIHsKICAgICJuYW1lIiA6ICJldGgxIiwKICAgICJpZCIgOiAiZXRoMSIsCiAgICAidHlwZSIgOiAidmlmIgogIH0gXSwKICAic2VydmljZXMiIDogWyB7CiAgICAiYWRkcmVzcyIgOiAiMTAuNjAuNjQuMiIsCiAgICAidHlwZSIgOiAiZG5zIgogIH0sIHsKICAgICJhZGRyZXNzIiA6ICIxMC42MC42NC4zIiwKICAgICJ0eXBlIiA6ICJkbnMiCiAgfSBdLAogICJuZXR3b3JrcyIgOiBbIHsKICAgICJuZXRtYXNrIiA6ICIyNTUuMjU1LjI0OC4wIiwKICAgICJkbnNfc2VhcmNoIiA6IFsgImxpdmUucm90MDEua3dlYmJsLmNsb3VkIiwgImxpdmUuYW1zMDEua3dlYmJsLmNsb3VkIiBdLAogICAgImxpbmsiIDogImV0aDAiLAogICAgImlkIiA6ICJldGgwIiwKICAgICJpcF9hZGRyZXNzIiA6ICIxMC42MC42NC4zMiIsCiAgICAidHlwZSIgOiAiaXB2NCIsCiAgICAiZ2F0ZXdheSIgOiAiMTAuNjAuNzEuMjU0IiwKICAgICJkbnNfbmFtZXNlcnZlcnMiIDogWyAiMTAuNjAuNjQuMiIsICIxMC42MC42NC4zIiBdCiAgfSwgewogICAgIm5ldG1hc2siIDogIjI1NS4yNTUuMjQ4LjAiLAogICAgImRuc19zZWFyY2giIDogWyAibGl2ZS5yb3QwMS5rd2ViYmwuY2xvdWQiLCAibGl2ZS5hbXMwMS5rd2ViYmwuY2xvdWQiIF0sCiAgICAibGluayIgOiAiZXRoMSIsCiAgICAiaWQiIDogImV0aDEiLAogICAgImlwX2FkZHJlc3MiIDogIjEwLjYwLjcyLjMyIiwKICAgICJ0eXBlIiA6ICJpcHY0IiwKICAgICJkbnNfbmFtZXNlcnZlcnMiIDogWyAiMTAuNjAuNjQuMiIsICIxMC42MC42NC4zIiBdCiAgfSBdCn0=</ovirt-vm:file>
          <ovirt-vm:file path="openstack/latest/meta_data.json">ewogICJwdWJsaWNfa2V5cyIgOiBbICJzc2gtcnNhIEFBQUFCM056YUMxeWMyRUFBQUFEQVFBQkFBQUJnUURCSk5GM2p3TkJVNUZhc0RPYVdFbjlKOW9leTJrZkNQR1hSWWxCL2pjUUNXazlZRzZKTk1PUjFPQjlQVjFXam5LU2dUNGpqcTl4ZWh4OGkvNStaZlVaK2o4OUlIQUVFY3dUQzhxOEZqTHhXMkI3SFpqcnMxRlRralJKNlFUTHNZOUpiUVRBYWVmSmRINVVLV0xzcURjeU5vMEVoMGViWit6elRBZU15VGN3QjdxdEwzbUVsL3NYSVo1OEdCUlZaVzhBV1RYTStqS1hCSEhlVHA4MTRYVlRKSTFHTnIySXJZYVY0V2Zvcnc5a0cvRkl6bVBnaUFBME1BenlrREw3RFRhU2JZOGw1NWJWcDVndDdHR3BEckFzN2c5Q052elN2VVMwdGs1NHdxRkltd2ZOTHY1WFd6RXl5d2trUG9Va3FDSnZlVjRpODcwSmhhRElWeUU1WHZhY2FmYlRJbVpVbzNvNElRbEh6Yys4cUU0VGJKeXcwY0tCOUwzZ3dqTHFQL1A5TEJwUTRjSFFQUDlxQ205OWNPaDNWdXkyVVVVOUhZd056QlNHK2hQTTF6V1lJRElpV3hCTlRnQTZqZHBkOUJzeDRJYkpiS2tEUnMvQ1g0Z3VmQWFncXhTU2JlM1BaWVdFQWVuTG9vN0JFbDFGc3ZWVkJHZEdvT3dvY0V6cnRQMD0ga3dhZG1pbkB6YWJiaXgub2JzZXJ2ZS5rd2ViYmwuY2xvdWQiIF0sCiAgImF2YWlsYWJpbGl0eV96b25lIiA6ICJub3ZhIiwKICAiaG9zdG5hbWUiIDogInByZC1pbmZyYS1jb25zdWwtMDIubGl2ZS5yb3QwMS5rd2ViYmwuY2xvdWQiLAogICJsYXVuY2hfaW5kZXgiIDogIjAiLAogICJtZXRhIiA6IHsKICAgICJyb2xlIiA6ICJzZXJ2ZXIiLAogICAgImRzbW9kZSIgOiAibG9jYWwiLAogICAgImVzc2VudGlhbCIgOiAiZmFsc2UiCiAgfSwKICAibmFtZSIgOiAicHJkLWluZnJhLWNvbnN1bC0wMi5saXZlLnJvdDAxLmt3ZWJibC5jbG91ZCIsCiAgInV1aWQiIDogImVmYzY3NTJjLTVhOGItNDE4NC1iMzUxLWMzNTlmNDUxMDQxMSIKfQ==</ovirt-vm:file>
          <ovirt-vm:file path="openstack/latest/user_data">I2Nsb3VkLWNvbmZpZwpvdXRwdXQ6CiAgYWxsOiAnPj4gL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJwpkaXNhYmxlX3Jvb3Q6IDAKcnVuY21kOgotICdzZWQgLWkgJycvXmRhdGFzb3VyY2VfbGlzdDogL2QnJyAvZXRjL2Nsb3VkL2Nsb3VkLmNmZzsgZWNobyAnJ2RhdGFzb3VyY2VfbGlzdDoKICBbIk5vQ2xvdWQiLCAiQ29uZmlnRHJpdmUiXScnID4+IC9ldGMvY2xvdWQvY2xvdWQuY2ZnJwp0aW1lem9uZTogRXVyb3BlL0J1ZGFwZXN0CnNzaF9kZWxldGVrZXlzOiAnZmFsc2UnCnNzaF9wd2F1dGg6IHRydWUKY2hwYXNzd2Q6CiAgZXhwaXJlOiBmYWxzZQpydW5jbWQ6ICAgLSB0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQgICAtIHJtIC9ldGMvTmV0d29ya01hbmFnZXIvY29uZi5kLzk5LWNsb3VkLWluaXQuY29uZiA=</ovirt-vm:file>
        </ovirt-vm:payload>
      </ovirt-vm:device>
      <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
    </ovirt-vm:vm>
  </metadata>
</domain>

2020-07-13 09:01:54,387Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] FINISH, CreateBrokerVDSCommand, return: , log id: 5a0d584b
2020-07-13 09:01:54,392Z INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: ac6a86d
2020-07-13 09:01:54,392Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-25265) [] Lock freed to object 'EngineLock:{exclusiveLocks='[540797ce-31e2-4f90-8bff-0faa00e29dc6=VM]', sharedLocks=''}'
2020-07-13 09:01:54,396Z INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-25265) [] EVENT_ID: USER_STARTED_VM(153), VM prd-vm-02 was started by admin@internal (Host: ovirt-host-01.live.example.local).
2020-07-13 09:01:55,632Z INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '540797ce-31e2-4f90-8bff-0faa00e29dc6' was reported as Down on VDS '36f4b127-8571-4016-81fb-3b359552105f'(ovirt-host-01.live.example.local)
2020-07-13 09:01:55,633Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = ovirt-host-01.live.example.local, DestroyVmVDSCommandParameters:{hostId='36f4b127-8571-4016-81fb-3b359552105f', vmId='540797ce-31e2-4f90-8bff-0faa00e29dc6', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 5d62122b
2020-07-13 09:01:56,343Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, return: , log id: 5d62122b
2020-07-13 09:01:56,343Z INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '540797ce-31e2-4f90-8bff-0faa00e29dc6'(prd-vm-02) moved from 'WaitForLaunch' --> 'Down'
2020-07-13 09:01:56,356Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-11) [] EVENT_ID: VM_DOWN_ERROR(119), VM prd-vm-02 is down with error. Exit message: internal error: Bus 0 must be PCI for integrated PIIX3 USB or IDE controllers.
2020-07-13 09:01:56,356Z INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] add VM '540797ce-31e2-4f90-8bff-0faa00e29dc6'(prd-vm-02) to rerun treatment
2020-07-13 09:01:56,365Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-11) [] Rerun VM '540797ce-31e2-4f90-8bff-0faa00e29dc6'. Called from VDS 'ovirt-host-01.live.example.local'
2020-07-13 09:01:56,380Z WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-25266) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM prd-vm-02 on Host ovirt-host-01.live.example.local.
2020-07-13 09:01:56,385Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-25266) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM prd-vm-02  (User: admin@internal).
2020-07-13 09:01:56,390Z INFO  [org.ovirt.engine.core.bll.ProcessDownVmCommand] (EE-ManagedThreadFactory-engine-Thread-25267) [4e2a0009] Running command: ProcessDownVmCommand internal: true.



Is there any workaround for this problem?

Comment 1 Liran Rotenberg 2020-07-13 14:55:52 UTC
Hi Pavel, thanks for the bug report.
I think you are hitting BZ 1853909, which is in progress.

The error you got comes from a mismatch between the devices and the chipset/emulated machine: the imported VM keeps an IDE controller at the integrated PIIX3 PCI address (bus 0x00, slot 0x01, function 0x1 in the domain XML above), but on the 'pc-q35-rhel8.1.0' machine type bus 0 is PCI Express rather than PCI, so QEMU rejects the configuration.
As a workaround, it should work in a new cluster created on the 4.4 engine.

Comment 2 RHEL Program Management 2020-07-13 14:56:00 UTC
The documentation text flag should only be set after 'doc text' field is provided. Please provide the documentation text and set the flag to '?' again.

Comment 3 Pavel Zinchuk 2020-07-13 15:12:33 UTC
Hi Liran!

Did I understand you correctly?
Are you suggesting creating a new VM in the oVirt 4.4.1 cluster and reattaching the disks from the migrated VM to the new one?

This scenario does not suit me, because I use Terraform and need to preserve the VM ID and the IDs of all inner VM objects.
It would be too complicated to update the IDs in the Terraform state if all the VM objects changed.

Maybe there is another solution until the release of oVirt 4.4.2?

Comment 4 Liran Rotenberg 2020-07-14 11:27:26 UTC
I suggested creating a new cluster on 4.4 and using it.

Changing the cluster BIOS type and machine type might also help run the problematic VM, but it may cause other problems for new VMs created there afterwards.
The first option is the safest; you can try it with just one host and one new storage domain holding the imported VM to see that it works, and then decide what you wish to do.
The best option would be to wait for the fix and update, which is supposed to land in 4.4.2.

Comment 5 Pavel Zinchuk 2020-07-16 07:58:32 UTC
I performed additional tests.
This problem does not seem to relate to https://bugzilla.redhat.com/show_bug.cgi?id=1853909

I've updated first the emulated_machine for cluster in the postgresql engine database to 'pc-i440fx-rhel7.6.0;pc-q35-rhel8.1.0':
engine=# UPDATE cluster
SET emulated_machine = 'pc-i440fx-rhel7.6.0;pc-q35-rhel8.1.0'
WHERE emulated_machine = 'pc-q35-rhel8.1.0';

Initially it was emulated_machine='pc-q35-rhel8.1.0'.
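
As a quick sanity check of the change, the current value can be read back from psql. This is a minimal sketch, assuming local access to the engine database as the postgres user on the engine host:

# list the clusters and their emulated machine lists
sudo -u postgres psql engine -c "SELECT name, emulated_machine FROM cluster;"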

Then I tried to import the VM again, but the "Bus 0 must be PCI for integrated PIIX3 USB or IDE controllers" issue still persisted.

----------------------------------------------------------

Then I started looking for a possible workaround: I switched CPU parameters, the BIOS type, and various other VM parameters.
As a result, I found a sequence of steps that allows the VM to start:
1. Import the VM from OVA.
2. Start the VM with the "Run Once" action. This is an important step: you need to click exactly the "Run Once" button, not just "Run". At this step the VM will fail to start.
3. Edit the VM: change System -> Advanced Parameters -> Custom Emulated Machine to pc-i440fx-rhel7.6.0.
4. Start the VM with the "Run" action. At this step the VM will fail to start too.
5. Edit the VM: change System -> Advanced Parameters -> Custom Emulated Machine back to the cluster default value pc-q35-rhel8.1.0.
6. Start the VM with the "Run" action. At this step the VM will start successfully.

To understand what changed, I saved the XML configuration of the same VM before step 2 and after step 6.
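
For reference, the libvirt XML of a running domain can be captured on the host with virsh. This is a minimal sketch using a read-only connection (oVirt hosts require SASL authentication for read-write libvirt access); the output file name is arbitrary:

# dump the domain XML of the running VM over a read-only connection
virsh -r dumpxml vm-test-02 > domain-after-step-6.xml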

XML Configuration before step 2:
<domain type='qemu'>
  <name>vm-test-02</name>
  <uuid>540797ce-31e2-4f90-8bff-0faa00e29dc6</uuid>
  <memory unit='KiB'>2048</memory>
  <currentMemory unit='KiB'>2048</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <cpu mode='custom' match='exact' check='none'>
    <model fallback='forbid'>qemu64</model>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </memballoon>
  </devices>
</domain>



XML Configuration after step 6:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>vm-test-02</name>
  <uuid>540797ce-31e2-4f90-8bff-0faa00e29dc6</uuid>
  <metadata xmlns:ns1="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ns1:qos/>
    <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
      <ovirt-vm:balloonTarget type="int">2097152</ovirt-vm:balloonTarget>
      <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion>
      <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>
      <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
      <ovirt-vm:memGuaranteedSize type="int">2048</ovirt-vm:memGuaranteedSize>
      <ovirt-vm:minGuaranteedMemoryMb type="int">2048</ovirt-vm:minGuaranteedMemoryMb>
      <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
      <ovirt-vm:startTime type="float">1594880195.2476044</ovirt-vm:startTime>
      <ovirt-vm:device mac_address="56:6f:06:a1:00:2c"/>
      <ovirt-vm:device mac_address="56:6f:06:a1:00:2d"/>
      <ovirt-vm:device devtype="disk" name="sda">
        <ovirt-vm:domainID>c33db23f-8d93-4988-b73c-aecfbab8a2ce</ovirt-vm:domainID>
        <ovirt-vm:imageID>e5f75962-1518-487a-a343-918b08e707d7</ovirt-vm:imageID>
        <ovirt-vm:poolID>9eaab007-b755-4563-93a0-5776944327af</ovirt-vm:poolID>
        <ovirt-vm:volumeID>bd6de030-f4f8-4a3c-a1c1-76068aff5087</ovirt-vm:volumeID>
      </ovirt-vm:device>
    </ovirt-vm:vm>
  </metadata>
  <maxMemory slots='16' unit='KiB'>16777216</maxMemory>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static' current='2'>32</vcpu>
  <iothreads>1</iothreads>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>RHEL</entry>
      <entry name='version'>8.2-2.2004.0.1.el8</entry>
      <entry name='serial'>00000000-0000-0000-0000-0cc47a2a6e60</entry>
      <entry name='uuid'>540797ce-31e2-4f90-8bff-0faa00e29dc6</entry>
      <entry name='family'>oVirt</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type>
    <bios useserial='yes'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact' check='partial'>
    <model fallback='allow'>Skylake-Server</model>
    <topology sockets='16' dies='1' cores='2' threads='1'/>
    <feature policy='disable' name='hle'/>
    <feature policy='disable' name='rtm'/>
    <numa>
      <cell id='0' cpus='0-31' memory='2097152' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw' error_policy='report'/>
      <source startupPolicy='optional'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <target dev='sdc' bus='sata'/>
      <readonly/>
      <alias name='ua-93bfcef9-5f9a-4cec-8a11-3caba9db27a0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/mnt/blockSD/c33db23f-8d93-4988-b73c-aecfbab8a2ce/images/e5f75962-1518-487a-a343-918b08e707d7/bd6de030-f4f8-4a3c-a1c1-76068aff5087'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <target dev='sda' bus='scsi'/>
      <serial>e5f75962-1518-487a-a343-918b08e707d7</serial>
      <boot order='1'/>
      <alias name='ua-e5f75962-1518-487a-a343-918b08e707d7'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
      <alias name='ua-008315ab-5d47-4536-8d37-40136f02a2b4'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0x17'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x18'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='10' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='10' port='0x19'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
    </controller>
    <controller type='pci' index='11' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='11' port='0x1a'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
    </controller>
    <controller type='pci' index='12' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='12' port='0x1b'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
    </controller>
    <controller type='pci' index='13' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='13' port='0x1c'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
    </controller>
    <controller type='pci' index='14' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='14' port='0x1d'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
    </controller>
    <controller type='pci' index='15' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='15' port='0x1e'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
    </controller>
    <controller type='pci' index='16' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='16' port='0x1f'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='qemu-xhci'>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='56:6f:06:a1:00:2c'/>
      <source bridge='prod_mgmt'/>
      <model type='virtio'/>
      <driver name='vhost' queues='2'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <mtu size='1500'/>
      <alias name='ua-bbf56657-fa8f-4b35-b77f-afcc78cadc93'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='56:6f:06:a1:00:2d'/>
      <source bridge='prod_mon'/>
      <model type='virtio'/>
      <driver name='vhost' queues='2'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <mtu size='1500'/>
      <alias name='ua-7697274d-05d8-4c6b-8d01-552eb246b62d'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </interface>
    <serial type='unix'>
      <source mode='bind' path='/var/run/ovirt-vmconsole-console/540797ce-31e2-4f90-8bff-0faa00e29dc6.sock'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='unix'>
      <source mode='bind' path='/var/run/ovirt-vmconsole-console/540797ce-31e2-4f90-8bff-0faa00e29dc6.sock'/>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/540797ce-31e2-4f90-8bff-0faa00e29dc6.ovirt-guest-agent.0'/>
      <target type='virtio' name='ovirt-guest-agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/540797ce-31e2-4f90-8bff-0faa00e29dc6.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' keymap='en-us' passwd='*****' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='network' network='vdsm-ovirtmgmt'/>
    </graphics>
    <graphics type='spice' autoport='yes' passwd='*****' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='network' network='vdsm-ovirtmgmt'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
    </graphics>
  </devices>
  <qemu:capabilities>
    <qemu:add capability='blockdev'/>
    <qemu:add capability='incremental-backup'/>
  </qemu:capabilities>
</domain>



As we can see, oVirt updated the XML configuration only after these manual manipulations. I can only assume that oVirt should have performed these actions automatically during the initial import of the VM.

Comment 6 Arik 2020-07-16 08:19:14 UTC

*** This bug has been marked as a duplicate of bug 1839545 ***

Comment 7 Arik 2020-08-04 12:53:50 UTC
Please provide logs of the case in which the import succeeds and the VM starts but can't boot, with the latest 4.4.1.

Comment 8 Pavel Zinchuk 2020-08-04 17:23:27 UTC
I will upload logs tomorrow

Comment 9 Pavel Zinchuk 2020-08-05 07:24:39 UTC
Today I performed new tests.

At the moment the VM always starts after the import from OVA.
During the tests there was only one attempt with an issue of a missing boot disk. All other attempts (15 tests in total) were successful, without issues.

Previously I reported in the bug report at https://bugzilla.redhat.com/show_bug.cgi?id=1839545 (2020-07-29 05:30:07 UTC):
>I've tested yesterday with:
>ovirt-engine-4.4.1.10-1.el8.noarch
>vdsm-4.40.22-1.el8.x86_64
>libvirt-daemon-6.0.0-17.el8.x86_64
>qemu-kvm-4.2.0-19.el8.x86_64
>
>
>The problem was still there. The VM imported from OVA was able to start, but not able to boot from the boot disk.
>Tested OVA import from oVirt 4.3.10 to oVirt 4.4.1.10.

Since then only one thing changed: GlusterFS was updated to version glusterfs-7.7-1.el8.x86_64. This was done to fix another issue - https://bugzilla.redhat.com/show_bug.cgi?id=1862053
It seems this bug and https://bugzilla.redhat.com/show_bug.cgi?id=1862053 are related to the same GlusterFS problem, because only after updating GlusterFS were we able to import VMs from OVA.

Interestingly, during VM import from OVA oVirt always creates the disks on the hosted_storage domain (this is Gluster storage). During import from OVA the customer can't change the destination storage.

Conclusion:
The problem was most likely caused by a bug in the GlusterFS package. Details about this issue can be found at https://bugzilla.redhat.com/show_bug.cgi?id=1862053
Updating GlusterFS to version 7.7-1 fixed the issue.
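
For anyone hitting the same problem, the update can be pulled in roughly like this. This is a minimal sketch; it assumes the CentOS Storage SIG release package centos-release-gluster7 is available and uses the centos-gluster7-test repository shown in the yum info output below:

# enable the CentOS Storage SIG Gluster 7 repositories
dnf install -y centos-release-gluster7
# update the Gluster packages from the test repository
dnf --enablerepo=centos-gluster7-test upgrade -y 'glusterfs*'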

The customer currently can't import a VM from OVA to an iSCSI destination storage. oVirt always tries to import the VM disks to the Gluster storage hosted_storage.

Arik, is there a chance that the oVirt Engine maintainers will see the report on this issue and add the updated GlusterFS packages to the oVirt repository, to eliminate this issue for other users?

At the moment our servers use the GlusterFS test repos instead of the oVirt repos:
# yum info glusterfs
Last metadata expiration check: 0:04:46 ago on Wed 05 Aug 2020 07:18:06 AM GMT.
Installed Packages
Name         : glusterfs
Version      : 7.7
Release      : 1.el8
Architecture : x86_64
Size         : 2.7 M
Source       : glusterfs-7.7-1.el8.src.rpm
Repository   : @System
From repo    : centos-gluster7-test
Summary      : Distributed File System
URL          : http://docs.gluster.org/
License      : GPLv2 or LGPLv3+

Comment 10 Arik 2020-08-05 13:17:58 UTC
(In reply to Pavel Zinchuk from comment #9)
> Conclusion:
> The problem was most likely caused by a bug in the GlusterFS package.
> Details about this issue can be found at
> https://bugzilla.redhat.com/show_bug.cgi?id=1862053
> Updating GlusterFS to version 7.7-1 fixed the issue.

Cool, so it really was a different problem :)
 
> The customer currently can't import a VM from OVA to an iSCSI destination
> storage. oVirt always tries to import the VM disks to the Gluster storage
> hosted_storage.

Did you try to change the storage domain in the import dialog?
 
> Arik, is there a chance that the oVirt Engine maintainers will see the
> report on this issue and add the updated GlusterFS packages to the
> oVirt repository, to eliminate this issue for other users?

Not really; this one is about an issue that was solved by the fix for bz 1839545 (which made the IDE controller be replaced during import).
I'll update this bug accordingly.
Please file a different bug to make sure it gets noticed.

*** This bug has been marked as a duplicate of bug 1839545 ***

Comment 11 Pavel Zinchuk 2020-08-06 05:58:20 UTC
Created attachment 1710602 [details]
Can't change destination storage domain

A screenshot demonstrating that the customer can't change the destination storage domain while importing a VM from OVA.

Comment 12 Pavel Zinchuk 2020-08-06 05:59:23 UTC
(In reply to Arik from comment #10)
> (In reply to Pavel Zinchuk from comment #9)
> > Conclusion:
> > The problem was most likely caused by a bug in the GlusterFS package.
> > Details about this issue can be found at
> > https://bugzilla.redhat.com/show_bug.cgi?id=1862053
> > Updating GlusterFS to version 7.7-1 fixed the issue.
> 
> Cool, so it really was a different problem :)
>  
> > The customer currently can't import a VM from OVA to an iSCSI destination
> > storage. oVirt always tries to import the VM disks to the Gluster storage
> > hosted_storage.
> 
> Did you try to change the storage domain in the import dialog?
>  
The customer can't change the destination storage domain during import from OVA. I've provided a screenshot in the previous comment.

Comment 13 Arik 2020-08-06 06:54:44 UTC
We can't set the destination storage domain per disk, but note that there's a "Storage Domain" field at the top of the dialog that enables setting the destination storage domain for all disks.

Comment 14 Omer Sen 2022-12-01 20:24:48 UTC
My issue was having IDE controllers, which are not supported anymore.

I just removed the

    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>


block from the XML file, along with any references to it (if any), like:

    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

then ran `virsh define xxxx.xml`, and now I can see it in virt-manager and can start it. I have Ubuntu 22.04:
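
For anyone repeating this, the whole cycle looks roughly like the following. This is a minimal sketch; the domain name myvm and the file name myvm.xml are placeholders:

# export the inactive domain definition to a file
virsh dumpxml --inactive myvm > myvm.xml
# edit myvm.xml: remove the <controller type='ide'> block and any devices with bus='ide'
# re-register the domain from the edited XML
virsh define myvm.xml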


/etc/libvirt/qemu# dpkg -l | grep libvirt
ii  gir1.2-libvirt-glib-1.0:amd64         4.0.0-2                                 amd64        GObject introspection files for the libvirt-glib library
ii  libvirt-clients                       8.0.0-1ubuntu7.3                        amd64        Programs for the libvirt library
ii  libvirt-daemon                        8.0.0-1ubuntu7.3                        amd64        Virtualization daemon
ii  libvirt-daemon-config-network         8.0.0-1ubuntu7.3                        all          Libvirt daemon configuration files (default network)
ii  libvirt-daemon-config-nwfilter        8.0.0-1ubuntu7.3                        all          Libvirt daemon configuration files (default network filters)
ii  libvirt-daemon-driver-qemu            8.0.0-1ubuntu7.3                        amd64        Virtualization daemon QEMU connection driver
ii  libvirt-daemon-system                 8.0.0-1ubuntu7.3                        amd64        Libvirt daemon configuration files
ii  libvirt-daemon-system-systemd         8.0.0-1ubuntu7.3                        all          Libvirt daemon configuration files (systemd)
ii  libvirt-glib-1.0-0:amd64              4.0.0-2                                 amd64        libvirt GLib and GObject mapping library
ii  libvirt-glib-1.0-data                 4.0.0-2                                 all          Common files for libvirt GLib library
ii  libvirt0:amd64                        8.0.0-1ubuntu7.3                        amd64        library for interfacing with different virtualization systems
ii  python3-libvirt                       8.0.0-1build1                           amd64        libvirt Python 3 bindings

