Created attachment 1752876 [details]
Logs

Description of problem:
A VM fails to start after a snapshot preview operation with the following errors:
Exit message: XML error: Invalid PCI address 0000:05:00.0. slot must be >= 1.

Engine log:
2021-01-30 13:36:00,911+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-13) [5bd57035] EVENT_ID: VM_DOWN_ERROR(119), VM vm_TestCase18894_3013274002 is down with error. Exit message: XML error: The device at PCI address 0000:00:02.0 cannot be plugged into the PCI controller with index='0'. It requires a controller that accepts a pcie-root-port.

VDSM log:
2021-01-30 13:36:04,251+0200 ERROR (vm/9ae75934) [virt.vm] (vmId='9ae75934-56cd-4681-9dc1-586c19a99634') The vm start process failed (vm:951)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 881, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2800, in _run
    dom = self._connection.defineXML(self._domain.xml)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 4063, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirt.libvirtError: XML error: The device at PCI address 0000:00:02.0 cannot be plugged into the PCI controller with index='0'. It requires a controller that accepts a pcie-root-port.

Libvirt log:
2021-01-30 13:36:04.944+0000: 441843: info : virDBusCall:1555 : DBUS_METHOD_ERROR: 'org.fedoraproject.FirewallD1.direct.passthrough' on '/org/fedoraproject/FirewallD1' at 'org.fedoraproject.FirewallD1' error org.fedoraproject.FirewallD1.Exception: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet66 -j libvirt-J-vnet66' failed: ebtables: Bad rule (does a matching rule exist in that chain?)

Version-Release number of selected component (if applicable):
vdsm-4.40.50.3-1.el8ev.x86_64
libvirt-6.6.0-13.module+el8.3.1+9548+0a8fede5.x86_64
ovirt-engine-4.4.5.3-0.14.el8ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a VM from a template (RHEL 8.3)
2. Create snapshot s0
3. Preview s0
4. Start the VM -> the VM fails to start

Actual results:
VM fails to start.

Expected results:
The operation should succeed.

Additional info:
Attaching art+engine+vdsm+libvirt logs.
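For triage, the PCI addresses that libvirt rejects can be pulled out of the domain XML that vdsm logs right before calling defineXML(). The following is a minimal standalone sketch (not part of vdsm or libvirt; the file path is a placeholder) that lists every <address type='pci'> element so the devices behind 0000:05:00.0 and 0000:00:02.0 can be located:

# Triage sketch (not part of vdsm/libvirt): list every PCI address assigned in
# a libvirt domain XML, e.g. the XML vdsm logs before defineXML().
import sys
import xml.etree.ElementTree as ET

def list_pci_addresses(domain_xml):
    root = ET.fromstring(domain_xml)
    for dev in root.find('devices'):
        addr = dev.find("address[@type='pci']")
        if addr is None:
            continue
        fields = tuple(int(addr.get(k, '0'), 16)
                       for k in ('domain', 'bus', 'slot', 'function'))
        yield ('%04x:%02x:%02x.%x' % fields, dev.tag)

if __name__ == '__main__':
    # Usage (hypothetical): python3 list_pci_addresses.py /tmp/domain.xml
    with open(sys.argv[1]) as f:
        for address, tag in list_pci_addresses(f.read()):
            print(address, tag)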
Arik, can you please have a look?
What's the cluster's bios type?
(In reply to Arik from comment #3)
> What's the cluster's bios type?

Q35 Chipset with BIOS
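For context, pcie-root-port controllers are only valid on PCIe machine types such as Q35, not on the legacy i440FX layout, so a useful check when debugging this kind of failure is which machine type the generated domain XML actually carries. A minimal illustrative sketch (not part of vdsm; it assumes the standard <os><type machine='...'> layout of libvirt domain XML):

# Illustrative sketch only: report the machine type of a domain XML.
# Q35 machine types look like "pc-q35-rhel8.3.0"; legacy ones like "pc-i440fx-...".
import xml.etree.ElementTree as ET

def machine_type(domain_xml):
    return ET.fromstring(domain_xml).find('os/type').get('machine', '')

def is_q35(domain_xml):
    return 'q35' in machine_type(domain_xml)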
Thanks Evelina, that's a side effect of the ongoing changes to the handling of the BIOS type. The patches that have already been posted will fix this.
Verified with:
ovirt-engine-4.4.5.4-0.6.el8ev.noarch

Steps:
1. Create a VM from a template (latest-rhel-guest-image-8.3-infra)
2. Run the VM (this step is required to reproduce the bug)
3. Power off the VM and create snapshot s0
4. Preview s0
5. Start the VM

Results:
The VM starts successfully.
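For completeness, the verification flow can be scripted against the engine REST API. This is only a rough sketch, assuming the ovirt-engine-sdk-python (ovirtsdk4) service and method names (including preview_snapshot); the URL, credentials, cluster and template names are placeholders, and the wait-for-status polling is left out:

# Rough automation sketch of the verification steps above (assumptions noted in
# the lead-in); not a verbatim reproduction of the manual test.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

conn = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                      username='admin@internal', password='password',
                      ca_file='ca.pem')
vms_service = conn.system_service().vms_service()

# 1. Create a VM from the template.
vm = vms_service.add(types.Vm(
    name='vm_preview_check',
    cluster=types.Cluster(name='Default'),
    template=types.Template(name='latest-rhel-guest-image-8.3-infra')))
vm_service = vms_service.vm_service(vm.id)

# 2. Run the VM (required to reproduce the original bug).
vm_service.start()
# ... wait for the VM to be UP ...

# 3. Power it off and create snapshot s0.
vm_service.stop()
# ... wait for the VM to be DOWN ...
s0 = vm_service.snapshots_service().add(types.Snapshot(description='s0'))
# ... wait for the snapshot to reach the OK state ...

# 4. Preview s0, then 5. start the VM; before the fix this start failed with the
# "Invalid PCI address" / pcie-root-port errors quoted in the description.
vm_service.preview_snapshot(snapshot=types.Snapshot(id=s0.id))
vm_service.start()

conn.close()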
This bugzilla is included in oVirt 4.4.5 release, published on March 18th 2021. Since the problem described in this bug report should be resolved in oVirt 4.4.5 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.