Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1536826

Summary: Start VM with uploaded ISO fails with libvirtError: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads.
Product: [oVirt] ovirt-engine
Component: BLL.Virt
Version: 4.2.1.1
Reporter: Avihai <aefrat>
Assignee: Tal Nisan <tnisan>
QA Contact: Avihai <aefrat>
CC: bugs, ebenahar, herrold, lveyde, michal.skrivanek, ratamir, tnisan
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: medium
Docs Contact:
Target Milestone: ovirt-4.2.2
Target Release: ---
Flags: rule-engine: ovirt-4.2+, rule-engine: blocker+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ovirt-engine-4.2.2
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-03-29 10:56:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1122970, 1511420, 1530730, 1532630, 1534626    
Attachments:
- engine, vdsm, libvirt logs
- engine, vdsm logs
- relevant engine, vdsm, libvirt logs

Description Avihai 2018-01-21 14:11:34 UTC
Created attachment 1383979 [details]
engine , vdsm ,libvirt logs

Description of problem:
Upload an ISO file and attach it as a CD to a VM via the VM edit boot options window.
Start the VM -> the VM fails to start with the error:
unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads.

Engine:
2018-01-21 15:41:00,447+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-10) [] EVENT_ID: VM_DOWN_ERROR(119), VM vm_test1 is down with error. Exit message: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads.
2018-01-21 15:41:00,451+02 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-10) [] add VM 'b781ed9c-da80-4932-ab0f-8284901cbefa'(vm_test1) to rerun treatment
2018-01-21 15:41:00,461+02 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-10) [] Rerun VM 'b781ed9c-da80-4932-ab0f-8284901cbefa'. Called from VDS 'host_mixed_1'


VDSM log:
2018-01-21 15:40:57,200+0200 ERROR (vm/b781ed9c) [virt.vm] (vmId='b781ed9c-da80-4932-ab0f-8284901cbefa') The vm start process failed (vm:918)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 847, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2748, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads

libvirt log:
2018-01-21 13:40:59.422+0000: 8324: debug : virFileIsSharedFSType:3421 : Check if path /rhev/data-center/mnt/blockSD/d39379dd-97d3-407b-b49e-f8951d0683d0/images/f4430603-7d83-47ab-9726-d60f725b8c3c/f032e027-449c-4e91-8c43-07854afeaeec with FS magic 1481003842 is shared
2018-01-21 13:40:59.422+0000: 8324: error : qemuOpenFileAs:3241 : Failed to open file '/rhev/data-center/mnt/blockSD/d39379dd-97d3-407b-b49e-f8951d0683d0/images/f4430603-7d83-47ab-9726-d60f725b8c3c/f032e027-449c-4e91-8c43-07854afeaeec': No such file or directory

Version-Release number of selected component (if applicable):
4.2.1.2-0.1
VDSM: 4.20.14-1
libvirt: 3.9.0-7
qemu-guest-agent-2.8.0-2.el7.x86_64
qemu-kvm-rhev-2.10.0-17

How reproducible:
100% so far

Steps to Reproduce (all via webadmin):
1. Upload an ISO file - finished successfully.
2. Create a new VM and in boot options choose:
- CD as the only boot option
- Enable the 'attach CD' checkbox with the uploaded ISO chosen as the CD.
- Press OK
3. Start the VM

Actual results:
The VM fails to start with the error:
unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads.

Expected results:
VM should start.

Additional info:

Comment 1 Avihai 2018-01-21 14:32:03 UTC
Tested the same scenario on an older build and it worked, so this looks like a regression.

Build in which the same scenario works:
Engine: 4.2.1-0.2 - changed!
VDSM: 4.20.13-1 - changed!
libvirt: 3.9.0-7 - same
qemu-guest-agent-2.8.0-2.el7.x86_64 - same
qemu-img-rhev-2.10.0-16 - changed!

Comment 2 Michal Skrivanek 2018-01-22 12:55:33 UTC
likely a libvirt regression
we would have to work around it... one option is to use "threads"; I do not recall why we are using "native" for CDROM. It shouldn't make much of a difference

Comment 3 Michal Skrivanek 2018-01-22 13:26:14 UTC
(In reply to Michal Skrivanek from comment #2)

sorry, not really. 
It's a wrong xml snippet generated for CDROMs now, apparently introduced by bug 1530730

Comment 4 Tal Nisan 2018-01-22 13:51:55 UTC
(In reply to Michal Skrivanek from comment #3)
> (In reply to Michal Skrivanek from comment #2)
> 
> sorry, not really. 
> It's a wrong xml snippet generated for CDROMs now, apparently introduced by
> bug 1530730

Unlikely, Michal; this used to work at the time the patch was introduced, also for QE. Where did you see a wrong XML snippet?

Comment 5 Michal Skrivanek 2018-01-22 13:55:18 UTC
see attached log, cdrom is created with io="native" which is invalid for CDROMs (we use that for disks on storage domain, but there we use cache="none" too)
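For illustration only (this is a sketch, not copied from the attached logs), a CDROM driver element of the shape described above would look something like this; paths and target names are placeholders:

```xml
<!-- Illustrative sketch of the invalid CDROM definition: io='native' with no
     cache attribute (i.e. neither cache='none' nor cache='directsync') is the
     combination newer libvirt rejects. Placeholder paths/names, not real log data. -->
<disk type='block' device='cdrom'>
  <driver name='qemu' type='raw' io='native' error_policy='report'/>
  <source dev='/rhev/data-center/mnt/blockSD/.../image.iso'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>
```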

Comment 6 Tal Nisan 2018-01-22 15:29:00 UTC
(In reply to Michal Skrivanek from comment #5)
> see attached log, cdrom is created with io="native" which is invalid for
> CDROMs (we use that for disks on storage domain, but there we use
> cache="none" too)

This is nothing new and the patch did not change that; I guess libvirt now disallows it or something? The patch only changes the device type from file to block

Comment 7 Michal Skrivanek 2018-01-23 11:34:16 UTC
The problem is indeed in the new "CDROM on SD" feature, in the change of the CDROM from "file" to "block" type on a block SD.

The fix should be trivial: let's just use "threads" for any CDROM regardless of its underlying type
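As a sketch of what that fix would produce (illustrative placeholders, not the actual patch output), the CDROM driver element would carry io='threads', which libvirt accepts with any cache mode:

```xml
<!-- Illustrative sketch of the proposed fix: io='threads' for CDROMs on any
     underlying domain type, so no particular cache mode is required.
     Placeholder paths/names, not real log data. -->
<disk type='block' device='cdrom'>
  <driver name='qemu' type='raw' io='threads' error_policy='report'/>
  <source dev='/rhev/data-center/mnt/blockSD/.../image.iso'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>
```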

Comment 8 Tal Nisan 2018-01-23 11:46:12 UTC
Reducing severity and removing the Regression keyword; it can't be a regression since it's a new feature. A CDROM from an ISO domain works as it should. The only problem is running a CDROM from a data domain, on block domains only, and only with newer libvirt versions. This will be fixed in 4.2.2.

Comment 9 Red Hat Bugzilla Rules Engine 2018-01-23 11:46:19 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 10 Raz Tamir 2018-01-29 12:26:02 UTC
Raising severity - This bug blocks the entire feature as ISO images are not usable

Comment 11 Tal Nisan 2018-01-29 12:42:24 UTC
(In reply to Raz Tamir from comment #10)
> Raising severity - This bug blocks the entire feature as ISO images are not
> usable

Not 100% precise, you can still use ISO on file domains

Comment 12 Raz Tamir 2018-01-29 12:53:02 UTC
(In reply to Tal Nisan from comment #11)
> (In reply to Raz Tamir from comment #10)
> > Raising severity - This bug blocks the entire feature as ISO images are not
> > usable
> 
> Not 100% precise, you can still use ISO on file domains

Thanks Tal,

Missed that

Comment 13 Avihai 2018-01-30 10:07:35 UTC
Created attachment 1388247 [details]
engine , vdsm logs

Comment 14 Avihai 2018-02-11 10:03:01 UTC
The same issue still occurs on latest downstream with the following Engine/VDSM/libvirt:

ovirt-engine-4.2.1.6-0.1.el7.noarch
vdsm-client-4.20.17-1.el7ev.noarch
libvirt-3.9.0-7.el7.x86_64
qemu-kvm-rhev-2.10.0-20.el7.x86_64

Comment 15 Avihai 2018-02-11 10:11:34 UTC
Created attachment 1394525 [details]
relevant engine ,vdsm, libvirt

Added relevant logs.

The issue occurred at 2018-02-11 12:04:53; see relevant log snippets below.

Engine:
2018-02-11 12:04:58,055+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-6) [] EVENT_ID: VM_DOWN_ERROR(119), VM vm1 is down with error. Exit message: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads

VDSM:
2018-02-11 12:04:53,742+0200 ERROR (vm/d22e0520) [virt.vm] (vmId='d22e0520-7d3c-49a0-8598-51c7956af505') The vm start process failed (vm:927)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads
,762+0200 INFO  (vm/d22e0520) [virt.vm] (vmId='d22e0520-7d3c-49a0-8598-51c7956af505') Changed state to Down: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads (code=1) (vm:1646)

Libvirt:
2018-02-11 10:04:53.657+0000: 27041: error : qemuBuildDriveStrValidate:1616 : unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads

Comment 16 Avihai 2018-02-19 09:39:35 UTC
Verified at 4.2.2-0.1

Comment 17 Sandro Bonazzola 2018-03-29 10:56:19 UTC
This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.