Created attachment 1383979 [details]
engine, vdsm, libvirt logs

Description of problem:
Upload an ISO file and attach it as a CD to a VM via the VM edit boot options window, then start the VM. The VM fails to start with the error: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads.

Engine:
2018-01-21 15:41:00,447+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-10) [] EVENT_ID: VM_DOWN_ERROR(119), VM vm_test1 is down with error. Exit message: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads.
2018-01-21 15:41:00,451+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-10) [] add VM 'b781ed9c-da80-4932-ab0f-8284901cbefa'(vm_test1) to rerun treatment
2018-01-21 15:41:00,461+02 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-10) [] Rerun VM 'b781ed9c-da80-4932-ab0f-8284901cbefa'. Called from VDS 'host_mixed_1'

VDSM log:
2018-01-21 15:40:57,200+0200 ERROR (vm/b781ed9c) [virt.vm] (vmId='b781ed9c-da80-4932-ab0f-8284901cbefa') The vm start process failed (vm:918)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 847, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2748, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads

libvirt log:
2018-01-21 13:40:59.422+0000: 8324: debug : virFileIsSharedFSType:3421 : Check if path /rhev/data-center/mnt/blockSD/d39379dd-97d3-407b-b49e-f8951d0683d0/images/f4430603-7d83-47ab-9726-d60f725b8c3c/f032e027-449c-4e91-8c43-07854afeaeec with FS magic 1481003842 is shared
2018-01-21 13:40:59.422+0000: 8324: error : qemuOpenFileAs:3241 : Failed to open file '/rhev/data-center/mnt/blockSD/d39379dd-97d3-407b-b49e-f8951d0683d0/images/f4430603-7d83-47ab-9726-d60f725b8c3c/f032e027-449c-4e91-8c43-07854afeaeec': No such file or directory

Version-Release number of selected component (if applicable):
4.2.1.2-0.1
VDSM: 4.20.14-1
libvirt: 3.9.0-7
qemu-guest-agent-2.8.0-2.el7.x86_64
qemu-kvm-rhev-2.10.0-17

How reproducible:
100% so far

Steps to Reproduce (all via webadmin):
1. Upload an ISO file - finishes successfully.
2. Create a new VM and in boot options choose:
   - CD as the only boot option
   - Enable the 'Attach CD' checkbox with the uploaded ISO chosen as the CD.
   - Press OK
3. Start the VM

Actual results:
The VM fails to start with the error: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads.

Expected results:
The VM should start.

Additional info:
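For reference, a minimal Python sketch of the constraint the error message describes (an illustration only, not libvirt code; the function name is made up):

    def native_io_allowed(cache_mode):
        """Would libvirt accept io='native' together with this cache mode?"""
        # Per the error text, native I/O needs either no disk cache
        # (cache='none') or directsync cache mode; anything else is rejected.
        return cache_mode in ("none", "directsync")

    # Regular disks on block storage get io='native' plus cache='none' -> OK.
    assert native_io_allowed("none")
    # The CDROM here gets io='native' with no cache attribute at all, so the
    # default cache mode applies, which is neither 'none' nor 'directsync' ->
    # rejected, and the VM fails to start instead of falling back to threads.
    assert not native_io_allowed(None)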
Tested the same scenario on an older build and it worked, so it looks like a regression. Build where the same scenario works:
Engine: 4.2.1-0.2 - changed!
VDSM: 4.20.13-1 - changed!
libvirt: 3.9.0-7 - same
qemu-guest-agent-2.8.0-2.el7.x86_64 - same
qemu-img-rhev-2.10.0-16 - changed!
Likely a libvirt regression; we would have to work around it. One option is to use "threads", I do not recall why we are using "native" for CDROMs. It shouldn't make much of a difference.
(In reply to Michal Skrivanek from comment #2)

Sorry, not really. It's a wrong XML snippet generated for CDROMs now, apparently introduced by bug 1530730.
(In reply to Michal Skrivanek from comment #3)
> (In reply to Michal Skrivanek from comment #2)
>
> sorry, not really. It's a wrong XML snippet generated for CDROMs now,
> apparently introduced by bug 1530730

Unlikely, Michal, this used to work at the time the patch was introduced, also for QE. Where did you see a wrong XML snippet?
See the attached log: the CDROM is created with io="native", which is invalid for CDROMs (we use that for disks on a storage domain, but there we use cache="none" too).
(In reply to Michal Skrivanek from comment #5)
> see attached log, cdrom is created with io="native" which is invalid for
> CDROMs (we use that for disks on storage domain, but there we use
> cache="none" too)

This is nothing new and the patch did not change that; I guess libvirt now disallows it or something? The patch only changes the type of the device from file to block, roughly as illustrated below.
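To illustrate (hand-written libvirt domain XML, not copied from the logs; paths elided): with the patch, a CDROM on a block storage domain is generated along the lines of

    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw' io='native'/>
      <source dev='/rhev/data-center/mnt/blockSD/.../...'/>
    </disk>

whereas previously it was a file-backed device with the same driver attributes:

    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw' io='native'/>
      <source file='...'/>
    </disk>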
The problem is indeed in the new "CDROM on SD" feature, in the change of the CDROM from "file" to "block" type on a block SD. The fix should be trivial: let's just use "threads" for any CDROM regardless of its underlying type, as sketched below.
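A minimal sketch of that fix, assuming a hypothetical VDSM-side helper that picks the io= attribute for the <driver> element (the names here are made up for illustration, not the actual VDSM code):

    def driver_io_mode(device, disk_type):
        # Always use threaded I/O for CDROMs, whether file- or block-backed;
        # native I/O buys nothing for a read-only CDROM and trips libvirt's
        # cache-mode validation described above.
        if device == 'cdrom':
            return 'threads'
        # Regular disks keep the existing behaviour: native I/O on block
        # storage (paired with cache='none'), threads otherwise.
        return 'native' if disk_type == 'block' else 'threads'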
Reducing severity and removing the Regression keyword. It can't be a regression as it's a new feature: a CDROM from an ISO domain works as it should. The only problem is running a CDROM from a data domain, on block domains only, and only with newer libvirt versions. This will be fixed for 4.2.2.
This bug report has Keywords: Regression or TestBlocker. Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.
Raising severity - This bug blocks the entire feature as ISO images are not usable
(In reply to Raz Tamir from comment #10)
> Raising severity - This bug blocks the entire feature as ISO images are not
> usable

Not 100% precise, you can still use ISOs on file domains.
(In reply to Tal Nisan from comment #11)
> (In reply to Raz Tamir from comment #10)
> > Raising severity - This bug blocks the entire feature as ISO images are not
> > usable
>
> Not 100% precise, you can still use ISOs on file domains

Thanks Tal, missed that.
Created attachment 1388247 [details]
engine, vdsm logs
The same issue still occurs on the latest downstream with the following Engine/VDSM/libvirt:
ovirt-engine-4.2.1.6-0.1.el7.noarch
vdsm-client-4.20.17-1.el7ev.noarch
libvirt-3.9.0-7.el7.x86_64
qemu-kvm-rhev-2.10.0-20.el7.x86_64
Created attachment 1394525 [details]
relevant engine, vdsm, libvirt logs

Added the relevant logs. The issue occurred at 2018-02-11 12:04:53; see the relevant log snippets below.

Engine:
2018-02-11 12:04:58,055+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-6) [] EVENT_ID: VM_DOWN_ERROR(119), VM vm1 is down with error. Exit message: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads

VDSM:
2018-02-11 12:04:53,742+0200 ERROR (vm/d22e0520) [virt.vm] (vmId='d22e0520-7d3c-49a0-8598-51c7956af505') The vm start process failed (vm:927)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads
2018-02-11 12:04:53,762+0200 INFO (vm/d22e0520) [virt.vm] (vmId='d22e0520-7d3c-49a0-8598-51c7956af505') Changed state to Down: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads (code=1) (vm:1646)

libvirt:
2018-02-11 10:04:53.657+0000: 27041: error : qemuBuildDriveStrValidate:1616 : unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads
Verified at 4.2.2-0.1
This bugzilla is included in oVirt 4.2.2 release, published on March 28th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.