Description of problem:
- Changing the VM scsi_hostdev custom property while a SCSI host device is attached may cause the VM to fail to run.
- Workaround: remove the SCSI passthrough device -> change the custom property -> re-add the device -> run the VM.
- vdsm.log shows the following error:

2020-01-16 16:26:25,358+0200 ERROR (vm/5950be42) [virt.vm] (vmId='5950be42-fab0-4f3d-b47e-3eb7661385f0') The vm start process failed (vm:835)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 769, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2582, in _run
    dom = self._connection.defineXML(self._domain.xml)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3928, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirt.libvirtError: unsupported configuration: virtio disk cannot have an address of type 'drive'

- engine.log error:

2020-01-16 16:26:25,657+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-23) [2efc868a] EVENT_ID: VM_DOWN_ERROR(119), VM rhel7l is down with error. Exit message: unsupported configuration: virtio disk cannot have an address of type 'drive'.

Version-Release number of selected component (if applicable):
ovirt-engine-4.4.0-0.14.master.el7
vdsm-4.40.0-180.giteba0b75.el8ev.x86_64
libvirt-5.6.0-6.module+el8.1.0+4244+9aa4e6bb.x86_64
qemu-kvm-4.1.0-14.module+el8.1.0+4548+ed1300f4.x86_64

How reproducible:
50%

Steps to Reproduce:
1. From WebAdmin, add a SCSI host device, then edit the VM and add the custom property scsi_hostdev: scsi_generic.
2. Run the VM and verify it is running with the SCSI host device.
3. Power off the VM, edit it, and change the custom property from scsi_generic to scsi_blk_pci.
4. Run the VM.

Actual results:
VM fails to run.

Expected results:
VM should run with the scsi_blk_pci property.

Additional info:
vdsm.log and engine.log attached.
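For context, below is a minimal standalone sketch (not vdsm code) of the failure mode libvirt reports above: defining a domain whose virtio disk still carries a leftover <address type='drive'/> element, as can remain after switching the custom property from scsi_generic to scsi_blk_pci, is rejected by virDomainDefineXML. The domain name, device path, and address values are illustrative placeholders, and the sketch assumes a local libvirtd reachable at qemu:///system.

#!/usr/bin/env python3
# Sketch only: reproduce the libvirt rejection seen in vdsm.log by defining a
# domain with a virtio disk that still has a stale 'drive'-type address.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>scsi-hostdev-repro</name>
  <memory unit='MiB'>512</memory>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='block' device='disk'>
      <source dev='/dev/sdb'/>
      <target dev='vda' bus='virtio'/>
      <!-- Stale address left over from the scsi_generic configuration;
           invalid for a virtio target and rejected by libvirt. -->
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
  </devices>
</domain>
"""

def main():
    conn = libvirt.open('qemu:///system')  # assumes a local libvirtd
    try:
        conn.defineXML(DOMAIN_XML)
    except libvirt.libvirtError as e:
        # Expected: "unsupported configuration: virtio disk cannot have an
        # address of type 'drive'"
        print(e)
    finally:
        conn.close()

if __name__ == '__main__':
    main()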
Created attachment 1652786: vdsm.log
Created attachment 1652787: engine.log
Removed the blocker flag, since this isn't a workflow the vast majority of users would follow; while this bug should be fixed, it doesn't block testing of the basic functionality of the RFE.
Verification builds:
ovirt-engine-4.4.0-0.29.master.el8ev
qemu-kvm-4.2.0-17.module+el8.2.0+6131+4e715f3b.x86_64
libvirt-daemon-6.0.0-16.module+el8.2.0+6131+4e715f3b.x86_64
vdsm-4.40.9-1.el8ev.x86_64
sanlock-3.8.0-2.el8.x86_64
This bugzilla is included in the oVirt 4.4.0 release, published on May 20th 2020. Since the problem described in this bug report should be resolved in oVirt 4.4.0, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.