Bug 1207992 - [RFE] Report IO errors to guests if the device is a CDROM
Summary: [RFE] Report IO errors to guests if the device is a CDROM
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.5.0
Hardware: All
OS: Linux
Priority: medium
Severity: high
Target Milestone: ovirt-4.2.2
Target Release: 4.2.0
Assignee: Milan Zamazal
QA Contact: Vitalii Yerys
URL:
Whiteboard:
Duplicates: 1497173
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2015-04-01 07:29 UTC by Jaison Raju
Modified: 2021-09-09 11:38 UTC
CC List: 24 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Virtual machines now stay operational when connectivity with CD-ROM images breaks. The error is reported to the guest operating system. Note that the configuration of the storage device may affect the time it takes to detect the error. During this time, the virtual machine is non-operational.
Clone Of:
Environment:
Last Closed: 2018-05-15 17:36:24 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-


Attachments
engine (1.25 MB, application/x-gzip) - 2018-01-17 13:15 UTC, Polina
engine & vdsm logs (1.25 MB, application/x-gzip) - 2018-01-17 13:25 UTC, Polina
engine, vdsm and rest files (270.20 KB, application/x-gzip) - 2018-01-21 08:11 UTC, Polina
engine & vdsm 4.2.1-6 build, vdsm-4.20.17 (161.04 KB, application/x-gzip) - 2018-01-28 12:51 UTC, Polina


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1497173 0 unspecified CLOSED VM marked as non responsive if it has ISO from an inaccessible ISO domain 2021-05-01 16:54:52 UTC
Red Hat Knowledge Base (Solution) 528793 0 None None None Never
Red Hat Product Errata RHEA-2018:1488 0 None None None 2018-05-15 17:38:04 UTC
oVirt gerrit 80007 0 'None' MERGED core: report io errors to guests for cd-rom devices 2020-12-13 20:17:47 UTC
oVirt gerrit 81086 0 'None' ABANDONED virt: storage: default policy 'report' for cdroms 2020-12-13 20:17:47 UTC
oVirt gerrit 86115 0 'None' MERGED vm: support error_policy='report' for CDROMs 2020-12-13 20:17:47 UTC
oVirt gerrit 88130 0 'None' MERGED periodic: Don't update read-only drive info 2020-12-13 20:17:47 UTC
oVirt gerrit 88324 0 'None' MERGED periodic: Don't update read-only drive info 2020-12-13 20:17:46 UTC

Internal Links: 1497173

Description Jaison Raju 2015-04-01 07:29:04 UTC
1. Proposed title of this feature request
      Provide alternatives to avoid guest hangs due to an unreachable ISO domain.


    3. What is the nature and description of the request?
Guests with ISO images attached go into a hung state (seen as 'Not Responding' in the portal) if the ISO domain is unreachable and the guest tries to access any
data from the CD-ROM.

In my reproducer (a RHEL 6 guest), the console remained 'hung' until the ISO domain became reachable again.
Reproduced using iptables (DROP and REJECT) on the NFS server providing the ISO domain.

We need an option to change how qemu-kvm replies to the guest, such that the guest does not hang.
      
    4. Why does the customer need this? (List the business requirements here)
The ISO images are attached to a large number of guests. If the ISO domain goes down, all of these guests are at risk of hanging.

    5. How would the customer like to achieve this? (List the functional requirements here)
Add an option in the portal (in Run Once and Change CD) through which, if the ISO domain is inaccessible, the error is immediately passed to the guest in such a way that no hang is noticed in the guest.
      
    6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.  
      
    7. Is there already an existing RFE upstream or in Red Hat Bugzilla?
      No, but a similar RFE was raised for error handling for disks: https://bugzilla.redhat.com/show_bug.cgi?id=1024428
    8. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?  
No
      
    9. Is the sales team involved in this request and do they have any additional input?
  No
    10. List any affected packages or components.
   qemu-kvm-rhev, vdsm
    11. Would the customer be able to assist in testing this functionality if implemented?
Yes



Comment 2 Michal Skrivanek 2015-04-12 12:15:19 UTC
Allon, could it be done with different ioerror handling per disk?

Comment 3 Allon Mureinik 2015-04-12 13:38:20 UTC
(In reply to Michal Skrivanek from comment #2)
> Allon, could it be done with different ioerror handling per disk?

I believe it could, but afaik, Kevin Wolf from qemu is adamantly against it (see, e.g., bug 1024428).
Kevin - can you provide some insight to this please?

Comment 5 Kevin Wolf 2015-04-14 08:31:50 UTC
(In reply to Allon Mureinik from comment #3)
> I believe it could, but afaik, Kevin Wolf from qemu is adamantly against it
> (see, e.g., bug 1024428).
> Kevin - can you provide some insight to this please?

I am not against using other error options, after all there's a reason why they
are options. We just need to check that changing it is the correct solution for
the problem at hand; and for the customer problem in bug 1024428 it was clearly
not the right solution (it hid the symptoms of a qemu bug for some cases, but it
caused new problems in other cases).

This BZ might be one of the cases that justify using rerror=report. In contrast
to the other BZ, you are really talking about host-side errors here. Read (and
even worse write) errors usually make the OS think that a disk has died and refuse
to keep using it, so in many cases it would not be useful behaviour. Here we're
talking about CD-ROM, i.e. removable media, so ejecting and reloading the image
should restore a working state after having reported an error to the guest.

So I would say reporting errors to the guest is fine in this specific case.


Having said all of that, are you sure this would be a complete fix? Not sure what
the 'Not Responding' state in RHEV really means, but qemu can't report anything
to the guest until the kernel has failed the request, and it may appear to be
completely hanging in some cases before this happened. So you need to make sure
that NFS is configured correctly to return an error after a short timeout.

Comment 6 Michal Skrivanek 2015-04-15 10:37:32 UTC
Not Responding reflects the fact that the libvirt process is hung querying qemu, which is likely in D state.
Not much can be done about that on NFS, so one alternative is to use a different storage type where errors are reported reliably and immediately.

I think the problem, as stated, indeed doesn't need any changes in error reporting or handling; it won't help on NFS.

Comment 7 Michal Skrivanek 2015-04-27 09:59:54 UTC
Scott, could you please review? I'd suggest closing

Comment 15 Yaniv Kaul 2016-11-30 06:26:33 UTC
Any downside in reporting the error to the guest if the disk is the cdrom?

Comment 16 Allon Mureinik 2016-11-30 06:57:41 UTC
(In reply to Yaniv Kaul from comment #15)
> Any downside in reporting the error to the guest if the disk is the cdrom?
Not that I can imagine.

Comment 22 Michal Skrivanek 2017-06-07 09:40:16 UTC
(In reply to Yaniv Kaul from comment #15)
> Any downside in reporting the error to the guest if the disk is the cdrom?

The downside is what I mentioned in comment #6: it doesn't solve anything on NFS unless you change the NFS retries/timeouts to something reasonable.

If NFS is not a concern, then it can be done today via the REST API propagate_errors parameter.
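
For reference, a rough sketch of what setting that parameter through the REST API could look like. This is a hypothetical example, not taken from this bug: the endpoint path, the propagate_errors element placement, the credentials and the disk ID are assumptions to verify against the API documentation of the installed version.

import requests

ENGINE = "https://engine.example.com/ovirt-engine/api"   # placeholder
AUTH = ("admin@internal", "password")                    # placeholder
DISK_ID = "00000000-0000-0000-0000-000000000000"         # placeholder disk UUID

# Assumed payload shape: ask the engine to propagate I/O errors for this disk.
body = """<disk>
    <propagate_errors>true</propagate_errors>
</disk>"""

resp = requests.put(
    f"{ENGINE}/disks/{DISK_ID}",
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify="/etc/pki/ovirt-engine/ca.pem",  # engine CA bundle; adjust as needed
)
resp.raise_for_status()
print(resp.status_code)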

Comment 29 Michal Skrivanek 2017-08-16 12:03:46 UTC
Please test:
- no change in behavior for default NFS parameters
- errors are reported in the guest for short timeouts (seconds, perhaps)

Comment 32 Tomas Jelinek 2017-10-03 07:34:26 UTC
*** Bug 1497173 has been marked as a duplicate of this bug. ***

Comment 33 Yaniv Lavi 2017-10-15 07:05:41 UTC
Should this be modified? I see your patch is merged.

Comment 34 Arik 2017-10-15 07:14:35 UTC
(In reply to Yaniv Lavi (Dary) from comment #33)
> Should this be modified? I see your patch is merged.

Not yet, it didn't work because of the recent 'engine xml' changes in VDSM.
I see that https://gerrit.ovirt.org/#/c/81086/ was abandoned - Francesco, so is it supposed to work now or do we wait for the initialization of the storage devices from the xml to get in?

Comment 35 Francesco Romani 2017-10-16 07:27:47 UTC
(In reply to Arik from comment #34)
> (In reply to Yaniv Lavi (Dary) from comment #33)
> > Should this be modified? I see your patch is merged.
> 
> Not yet, it didn't work because of the recent 'engine xml' changes in VDSM.
> I see that https://gerrit.ovirt.org/#/c/81086/ was abandoned - Francesco, so
> is it supposed to work now or do we wait for the initialization of the
> storage devices from the xml to get in?

The plan is:
1. merge the remaining patches to enable storage devices to be initialized from XML - this is needed anyway and will solve this issue for 4.2.
Those patches are mostly fixes or minor things, so we should get them in at a good pace.
2. resume 81086 if needed for non-Engine-XML flows (e.g. backport?)

Comment 40 Polina 2018-01-17 13:15:02 UTC
Created attachment 1382398 [details]
engine

Comment 41 Polina 2018-01-17 13:25:02 UTC
Created attachment 1382411 [details]
engine & vdsm logs

Comment 42 Polina 2018-01-17 13:25:55 UTC
The bug was re-assigned because of the following test results.
Test Steps:
1. Run a VM with a CD attached from the ISO domain. In <vdsm-client VM getStats> I can see something like:
"cdrom": "/rhev/data-center/mnt/vserver-production.qa.lab.tlv.redhat.com:_iso__domain/d31037e5-0d45-4306-975b-73eea845ae86/images/11111111-1111-1111-1111-111111111111/CentOS-7-x86_64-NetInstall-1611.iso"

2. Then, on the hosts, run the iptables command to block the ISO domain: iptables -I INPUT -s vserver-production.qa.lab.tlv.redhat.com -j DROP

Actual Results:
1. Engine reports that these VMs are not responding (and they appear with a "?" mark in the UI):
2018-01-16 23:02:38,135+02 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread- 15) [] EVENT_ID: VM_NOT_RESPONDING(126), VM golden_env_mixed_virtio_2_1 is not responding.

2. In <vdsm-client VM getStats> on the host I see:
    "vmName": "golden_env_mixed_virtio_2_1", 
    "status": "Up",
and in the vdsm log:
INFO  (libvirt/events) [virt.vm] (vmId='93d929d5-c60f-42ff-a248-e42206592d36') CPU stopped: onIOError (vm:5942)
2018-01-17 14:01:02,596+0200 WARN  (check/loop) [storage.check] Checker u'/rhev/data-center/mnt/vserver-production.qa.lab.tlv.redhat.com:_iso__domain/d31037e5-0d45-4306-975b-73eea845ae86/dom_md/metadata' is blocked for 270.00 seconds (check:278)

3. The VM is in the "Not Responding" state. In this state it is impossible either to open the console or to ssh to the VM IP to investigate the logs and see the I/O error; the IP also disappears from the UI in this state.

Engine & vdsm logs are attached as logs.tar.gz.

Comment 43 Francesco Romani 2018-01-17 14:38:42 UTC
(In reply to Polina from comment #42)
> The bug was re-assigned because of the following tests results.
> Test Steps:
> 1. Run VM with CD attached from ISO domain. in <vdsm-client VM getStats> I
> can see this like:
> "cdrom":
> "/rhev/data-center/mnt/vserver-production.qa.lab.tlv.redhat.com:_iso__domain/
> d31037e5-0d45-4306-975b-73eea845ae86/images/11111111-1111-1111-1111-
> 111111111111/CentOS-7-x86_64-NetInstall-1611.iso"
> 
> 2. Then on hosts run the iptables command to block ISO Domain: iptables -I
> INPUT -s vserver-production.qa.lab.tlv.redhat.com -j DROP

OK, but only access to the ISO mount should be blocked.

Where are the disk images served from? Are, by any chance, both the data and ISO domains residing on the same host, or does vserver-production.qa.lab.tlv.redhat.com serve only the ISO domain?

Maybe try blocking only traffic from port 2049 (NFS)?

Comment 47 Polina 2018-01-21 08:11:49 UTC
Created attachment 1383874 [details]
engine, vdsm and rest files

Comment 48 Polina 2018-01-21 08:26:41 UTC
Hi, I attach the engine and vdsm logs and also the REST responses illustrating the environment.
The boot disk sits on iSCSI (xtremio-iscsi1.scl.lab.tlv.redhat.com)
and the ISO domain is on vserver-production.qa.lab.tlv.redhat.com.
Please let me know if you need any additional info.

Comment 49 Polina 2018-01-23 13:52:58 UTC
Please see comment #48 and provide the info.

Comment 50 Michal Skrivanek 2018-01-25 13:42:08 UTC
The UpdateVolumes() periodic check seems to get stuck.
So perhaps the comment there is relevant: "# TODO: If this blocks (is it actually possible?)". Nir, thoughts?


Polina, in the attached log I do not see any indication of the VM getting EIO (being paused), which may still have a couple of reasons, and the logs don't give enough information to conclude why.
The log doesn't cover VM creation, so I can't really verify the CDROM was started with error_policy=report. I see in the dumpxmls that there's no error_policy set for the CDROM and there's "stop" for the disk.

Comment 52 Nir Soffer 2018-01-25 16:03:58 UTC
(In reply to Michal Skrivanek from comment #50)
>  UpdateVolumes() periodic check seems to get stuck.
> So perhaps the comment there is relevant "# TODO: If this blocks (is it
> actually possible?)". Nir, thoughts?

Sure, this can block: it calls self.cif.irs.getVolumeSize(), which can block up
to 60 seconds on a non-responsive NFS server. We do the blocking calls in ioprocess
and the call will time out after 60 seconds, even if ioprocess is still blocked
on storage.

The current code may block the periodic thread pool because storage is not
responsive. We should replace it with checks running in the storage domain thread,
submitting events about disks in this storage domain, so we never block virt
threads - the same way we do with storage domain monitoring.

I think the consumer of these checks is the storage code in engine updating the disk actual
size, using virt monitoring to get the value. In the virt layer we don't use or
need the volume size.
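
A minimal sketch (illustrative only, not vdsm code) of the idea: the potentially blocking size query runs in a per-domain worker thread, and the virt/monitoring side only reads the last published value instead of calling into storage, so it never blocks. The suggestion above is to submit events; this simplified variant just caches the latest result, and all names are made up for the example.

import threading
import time

class DomainSizeMonitor:
    def __init__(self, query_volume_size, interval=60):
        self._query = query_volume_size   # blocking call, e.g. a getVolumeSize wrapper
        self._interval = interval
        self._sizes = {}                  # drive id -> last known size
        self._lock = threading.Lock()
        self._drive_ids = []
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self, drive_ids):
        self._drive_ids = list(drive_ids)
        self._thread.start()

    def _run(self):
        while True:
            for drive_id in self._drive_ids:
                try:
                    size = self._query(drive_id)   # may block on bad storage
                except Exception:
                    continue                       # keep the last good value
                with self._lock:
                    self._sizes[drive_id] = size
            time.sleep(self._interval)

    def last_known_size(self, drive_id):
        # Non-blocking read used by the monitoring/virt side.
        with self._lock:
            return self._sizes.get(drive_id)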

Comment 53 Francesco Romani 2018-01-26 09:38:21 UTC
(In reply to Polina from comment #48)
> Hi, I attache engine , vdsm logs and also rest responses illustrating the
> environment. 
> the boot disk sits on iscsi - xtremio-iscsi1.scl.lab.tlv.redhat.com
> and the iso domain is on vserver-production.qa.lab.tlv.redhat.com.
> Please let me know if you need some additional info

The problem is that Vdsm is still too old. The fix (I850d279f24c3f2ad401f7f93dd31bc29abdaa873) landed in Vdsm 4.20.14, but in the logs I see:

2018-01-21 09:23:55,499+0200 INFO  (MainThread) [vds] (PID: 29848) I am the actual vdsm 4.20.13-1.el7ev cougar03.scl.lab.tlv.redhat.com (3.10.0-693.15.1.el7.x86_64) (vdsmd:148)

So the error_policy values for both drives are incorrect (meaning, they don't match the test expectations):

<disk type=\'file\' device=\'cdrom\'>
  <driver name=\'qemu\' type=\'raw\'/>
  <source file=SNIP/CentOS-7-x86_64-NetInstall-1611.iso\'/>
  <backingStore/>
  <target dev=\'hdc\' bus=\'ide\'/>
  <readonly/>
  <alias name=\'ide0-1-0\'/>
  <address type=\'drive\' controller=\'0\' bus=\'1\' target=\'0\' unit=\'0\'/>
</disk>
<disk type=\'block\' device=\'disk\' snapshot=\'no\'>
  <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'native\'/>
  <source dev=\'SNIP\'/>
  <backingStore type=\'block\' index=\'1\'>
    <format type=\'qcow2\'/>
    <source dev=\'SNIP'/>
    <backingStore/>
  </backingStore>
  <target dev=\'vda\' bus=\'virtio\'/>
  <serial>c801010c-9530-4368-9860-42e440789978</serial>
  <boot order=\'1\'/>
  <alias name=\'virtio-disk0\'/>
  <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x07\' function=\'0x0\'/>
</disk>

To make sure the test conditions are met:
1. make sure the XML the Engine sent contains "error_policy=report" for the cdrom drive (this was the initial Engine fix)
2. make sure the XML Vdsm sends to the Engine keeps the same value (here it was the Vdsm bug I fixed)

Once both conditions above are met our job is basically done, but we can go ahead testing by disconnecting the storage just like you did.
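
As an aside, a quick ad-hoc check of condition 2 on the host could look like the sketch below. It assumes the libvirt Python bindings are installed; the helper and the VM name are hypothetical and not part of vdsm.

import xml.etree.ElementTree as ET
import libvirt

def cdrom_error_policies(vm_name):
    # Read the live domain XML from libvirt and report the error_policy
    # of every cdrom drive (expected value: 'report').
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(vm_name)
        root = ET.fromstring(dom.XMLDesc(0))
        policies = {}
        for disk in root.findall("./devices/disk[@device='cdrom']"):
            target = disk.find("target")
            driver = disk.find("driver")
            dev = target.get("dev") if target is not None else "?"
            policies[dev] = driver.get("error_policy") if driver is not None else None
        return policies
    finally:
        conn.close()

print(cdrom_error_policies("my_test"))   # expected output like: {'hdc': 'report'}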

Comment 54 Polina 2018-01-28 12:51:43 UTC
Created attachment 1387181 [details]
engine &vdsm 4.2.1-6 build. vdsm-4.20.17

Comment 55 Polina 2018-01-28 12:59:14 UTC
Hi, I've tested on the latest build: http://bob.eng.lab.tlv.redhat.com/builds/4.2/rhv-4.2.1-6/rhv-release-4.2.1-6-001.noarch.rpm

On the host:
vdsm-common-4.20.17-1.el7ev.noarch
vdsm-hook-openstacknet-4.20.17-1.el7ev.noarch
vdsm-http-4.20.17-1.el7ev.noarch
vdsm-hook-fcoe-4.20.17-1.el7ev.noarch
vdsm-python-4.20.17-1.el7ev.noarch
vdsm-hook-vmfex-dev-4.20.17-1.el7ev.noarch
vdsm-client-4.20.17-1.el7ev.noarch
vdsm-jsonrpc-4.20.17-1.el7ev.noarch
vdsm-hook-ethtool-options-4.20.17-1.el7ev.noarch
vdsm-yajsonrpc-4.20.17-1.el7ev.noarch
vdsm-hook-vhostmd-4.20.17-1.el7ev.noarch
vdsm-api-4.20.17-1.el7ev.noarch
vdsm-4.20.17-1.el7ev.x86_64
vdsm-hook-vfio-mdev-4.20.17-1.el7ev.noarch
vdsm-network-4.20.17-1.el7ev.x86_64

Engine sends:

<disk type="file" device="cdrom" snapshot="no">
<driver name="qemu" type="raw" error_policy="report"/>
<source file="" startupPolicy="optional"/>
<target dev="hdc" bus="ide"/>
<readonly/>
</disk>
<disk snapshot="no" type="block" device="disk">
   <target dev="vda" bus="virtio"/>
   <source dev="/rhev/data-center/mnt/blockSD/585d4e5d-2c4d-4c3a-9683-50db5d87a4cd/images/c8bf90c9-4638-484e-9488-16e8cb666f4c/3d8ab1aa-8e47-4153-ac2a-0f17af05f2f7"/>
   <driver name="qemu" io="native" type="qcow2" error_policy="stop" cache="none"/>
   <boot order="1"/>
   <serial>c8bf90c9-4638-484e-9488-16e8cb666f4c</serial>
</disk>

I see the same in vdsm.log (line 2524).
New logs are attached.
It looks like the fix is not included in the latest build.

Comment 56 Nir Soffer 2018-01-28 13:11:39 UTC
(In reply to Polina from comment #55)
> engine sends :
> 
> <disk type="file" device="cdrom" snapshot="no">
>     <target dev="hdc" bus="ide"/>
>     <driver name="qemu" type="raw" error_policy="report"/>
> </disk>

This is a cdrom "hdc"...

> <disk snapshot="no" type="block" device="disk">
>    <target dev="vda" bus="virtio"/>
>    <driver name="qemu" io="native" type="qcow2" error_policy="stop"
> cache="none"/>
> </disk>

This is a disk "vda"...

Please check the actual xml of the cdrom "hdc".

Comment 57 Polina 2018-01-29 07:59:51 UTC
> (In reply to Nir Soffer from comment #56)
> Please check the actual xml of the cdrom "hdc".

from engine.log:

 <disk type="file" device="cdrom" snapshot="no">
      <driver name="qemu" type="raw" error_policy="report"/>
      <source file="" startupPolicy="optional"/>
      <target dev="hdc" bus="ide"/>
      <readonly/>
      <address bus="1" controller="0" unit="0" type="drive" target="0"/>
    </disk>

Comment 58 Francesco Romani 2018-01-29 08:39:08 UTC
(In reply to Polina from comment #55)
> Hi, I've tested on the last build
> http://bob.eng.lab.tlv.redhat.com/builds/4.2/rhv-4.2.1-6/rhv-release-4.2.1-6-
> 001.noarch.rpm.
> 
> in host :
> vdsm-common-4.20.17-1.el7ev.noarch
> vdsm-hook-openstacknet-4.20.17-1.el7ev.noarch
> vdsm-http-4.20.17-1.el7ev.noarch
> vdsm-hook-fcoe-4.20.17-1.el7ev.noarch
> vdsm-python-4.20.17-1.el7ev.noarch
> vdsm-hook-vmfex-dev-4.20.17-1.el7ev.noarch
> vdsm-client-4.20.17-1.el7ev.noarch
> vdsm-jsonrpc-4.20.17-1.el7ev.noarch
> vdsm-hook-ethtool-options-4.20.17-1.el7ev.noarch
> vdsm-yajsonrpc-4.20.17-1.el7ev.noarch
> vdsm-hook-vhostmd-4.20.17-1.el7ev.noarch
> vdsm-api-4.20.17-1.el7ev.noarch
> vdsm-4.20.17-1.el7ev.x86_64
> vdsm-hook-vfio-mdev-4.20.17-1.el7ev.noarch
> vdsm-network-4.20.17-1.el7ev.x86_64
> 
> engine sends :
> 
> <disk type="file" device="cdrom" snapshot="no">
> <driver name="qemu" type="raw" error_policy="report"/>
> <source file="" startupPolicy="optional"/>
> <target dev="hdc" bus="ide"/>
> <readonly/>
> </disk>
> <disk snapshot="no" type="block" device="disk">
>    <target dev="vda" bus="virtio"/>
>    <source
> dev="/rhev/data-center/mnt/blockSD/585d4e5d-2c4d-4c3a-9683-50db5d87a4cd/
> images/c8bf90c9-4638-484e-9488-16e8cb666f4c/3d8ab1aa-8e47-4153-ac2a-
> 0f17af05f2f7"/>
>    <driver name="qemu" io="native" type="qcow2" error_policy="stop"
> cache="none"/>
>    <boot order="1"/>
>    <serial>c8bf90c9-4638-484e-9488-16e8cb666f4c</serial>
> </disk>
> 
> the same I see in vdsm.log - line 2524
> attached new logs.
> it looks like the fix is not included in the last build.

Wait, why? I'm looking at vdsm.log in logs_build_4.2.1 and it looks good, meaning that the test conditions seem fine now:

We create the VM with the correct error policy:

2018-01-28 14:32:24,159+0200 INFO  (jsonrpc/7) [api.virt] START create(vmParams={u'xml': u'<?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0"><name>my_test</name><uuid>81a9b3c1-a038-4b9c-96f2-00a5226b0b18

[...]

Engine sends:
<disk type="file" device="cdrom" snapshot="no"><driver name="qemu" type="raw" error_policy="report"></driver><source file="" startupPolicy="optional"></source><target dev="hdc" bus="ide"></target><readonly></readonly></disk>

Vdsm correctly relays the configurations to libvirt:

2018-01-28 14:32:27,526+0200 INFO  (vm/81a9b3c1) [virt.vm] (vmId='81a9b3c1-a038-4b9c-96f2-00a5226b0b18') <?xml version='1.0' encoding='utf-8'?>

[...]
        <disk device="cdrom" snapshot="no" type="file">
            <source file="" startupPolicy="optional" />
            <target bus="ide" dev="hdc" />
            <readonly />
            <driver error_policy="report" io="threads" name="qemu" type="raw" />
        </disk>

And the configuration is further confirmed correct by the output of the 'dumpxmls' calls:

2018-01-28 14:32:32,029+0200 INFO  (jsonrpc/3) [api.host] FINISH dumpxmls return={'status': {'message': 'Done', 'code': 0}, 'domxmls': {u'81a9b3c1-a038-4b9c-96f2-00a5226b0b18': '<domain type=\'kvm\'

[...]

<disk type=\'file\' device=\'cdrom\'>\n      <driver name=\'qemu\' type=\'raw\' error_policy=\'report\' io=\'threads\'/>\n      <source startupPolicy=\'optional\'/>\n      <backingStore/>\n      <target dev=\'hdc\' bus=\'ide\'/>\n      <readonly/>\n      <alias name=\'ide0-1-0\'/>\n      <address type=\'drive\' controller=\'0\' bus=\'1\' target=\'0\' unit=\'0\'/>\n    </disk>

Please note that "error_policy=report" must be applied *only* to cdroms, not to disk devices.

So now the test conditions are met.
Again, it is good to test this scenario end to end, but now we are sending the correct configuration to libvirt, so if we see any bugs from now on, it is either another incorrect test scenario or a libvirt/qemu issue.

If you go ahead and disconnect storage to verify the behaviour, make sure that *only* the connection to the cdrom storage (ISO domain) is lost; disks should still be accessible.

This is to minimize the noise in the test environment.

Comment 59 Polina 2018-01-29 09:47:11 UTC
(In reply to Francesco Romani from comment #58)

> If you go ahead and disconnect storage to verify the behaviour, make sure
> that *only* the connection to cdrom storage (iso domain) is lost, disks
> should still be accessible.
 
I disconnected only the iso_domain connection to break the cdrom.
Please see below the XML response to the https://{{host}}/ovirt-engine/api/storageconnections request.
A minute after this block the VM is not responding (VM '81a9b3c1-a038-4b9c-96f2-00a5226b0b18'(my_test) moved from 'Up' --> 'NotResponding').
If the right error_policy settings mean the bug is fixed, and we still see the buggy behavior, maybe the bug must be moved to another team. But as it stands, it cannot be verified.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<storage_connections>
    <storage_connection href="/ovirt-engine/api/storageconnections/426678fa-5bed-414c-a691-c11f0a37d566" id="426678fa-5bed-414c-a691-c11f0a37d566">
        <address>compute-ge-3.scl.lab.tlv.redhat.com</address>
        <path>/var/lib/exports/iso</path>
        <type>nfs</type>
    </storage_connection>
    <storage_connection href="/ovirt-engine/api/storageconnections/8587d106-5c0c-413c-aa3d-2f00d1bbeba7" id="8587d106-5c0c-413c-aa3d-2f00d1bbeba7">
        <address>xtremio-iscsi1.scl.lab.tlv.redhat.com</address>
        <port>3260</port>
        <target>iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00</target>
        <type>iscsi</type>
    </storage_connection>
    <storage_connection href="/ovirt-engine/api/storageconnections/967f1fd1-3488-4724-8ad7-ee6941d873f2" id="967f1fd1-3488-4724-8ad7-ee6941d873f2">
        <address>yellow-vdsb.qa.lab.tlv.redhat.com</address>
        <path>/Compute_NFS/GE/compute-ge-3/nfs_2</path>
        <type>nfs</type>
    </storage_connection>
    <storage_connection href="/ovirt-engine/api/storageconnections/2d8e1646-3d5e-44c8-9935-e43bc26a6a17" id="2d8e1646-3d5e-44c8-9935-e43bc26a6a17">
        <address>yellow-vdsb.qa.lab.tlv.redhat.com</address>
        <path>/Compute_NFS/GE/compute-ge-3/nfs_1</path>
        <type>nfs</type>
    </storage_connection>
    <storage_connection href="/ovirt-engine/api/storageconnections/46407c1a-505f-4eab-9c83-caf6ce947b3a" id="46407c1a-505f-4eab-9c83-caf6ce947b3a">
        <address>yellow-vdsb.qa.lab.tlv.redhat.com</address>
        <path>/Compute_NFS/GE/compute-ge-3/nfs_0</path>
        <type>nfs</type>
    </storage_connection>
    <storage_connection href="/ovirt-engine/api/storageconnections/d3034bc0-4327-4307-8756-deca33a2e9ae" id="d3034bc0-4327-4307-8756-deca33a2e9ae">
        <address>yellow-vdsb.qa.lab.tlv.redhat.com</address>
        <path>/Compute_NFS/GE/compute-ge-3/export_domain</path>
        <type>nfs</type>
    </storage_connection>
    <storage_connection href="/ovirt-engine/api/storageconnections/1fc2b2ce-58b9-4f26-ac2a-1da8f2c712d5" id="1fc2b2ce-58b9-4f26-ac2a-1da8f2c712d5">
        <address>vserver-production.qa.lab.tlv.redhat.com</address>
        <path>/iso_domain</path>
        <type>nfs</type>
    </storage_connection>
    <storage_connection href="/ovirt-engine/api/storageconnections/6a4b56b2-519c-46b2-aab4-2e0dd31a756a" id="6a4b56b2-519c-46b2-aab4-2e0dd31a756a">
        <address>gluster01.scl.lab.tlv.redhat.com</address>
        <path>/compute-ge-3-volume3</path>
        <type>glusterfs</type>
        <vfs_type>glusterfs</vfs_type>
    </storage_connection>
    <storage_connection href="/ovirt-engine/api/storageconnections/e4231df6-7626-44f7-9b85-a032d30d3ece" id="e4231df6-7626-44f7-9b85-a032d30d3ece">
        <address>gluster01.scl.lab.tlv.redhat.com</address>
        <path>/compute-ge-3-volume1</path>
        <type>glusterfs</type>
        <vfs_type>glusterfs</vfs_type>
    </storage_connection>
    <storage_connection href="/ovirt-engine/api/storageconnections/f0dda39f-0456-40e0-ade4-e8bf5b4616a6" id="f0dda39f-0456-40e0-ade4-e8bf5b4616a6">
        <address>gluster01.scl.lab.tlv.redhat.com</address>
        <path>/compute-ge-3-volume2</path>
        <type>glusterfs</type>
        <vfs_type>glusterfs</vfs_type>
    </storage_connection>
</storage_connections>

Comment 60 Francesco Romani 2018-01-30 16:36:10 UTC
OK, we need a minimal reproducer running a VM with a cdrom on NFS and another disk somewhere else. Let's run this reproducer using libvirt alone (without oVirt, or even vdsm) and move this down to the libvirt developers.

Comment 61 Nir Soffer 2018-01-30 16:46:34 UTC
I think what happens is this:

1. Access to the ISO domain is blocked
2. The Vdsm periodic check tries to get all stats, including block stats
3. Libvirt tries to get the size of all drives, including the cdrom
4. Qemu tries to call stat() on the cdrom file and gets stuck
5. Libvirt is stuck waiting for qemu, and the VM becomes non-responsive

Having the "report" error policy will make qemu report an error to the guest when
guest I/O fails trying to read from the cdrom. Since the storage is non-responsive,
this will be reported only after many minutes.
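
A toy illustration of step 4 (an assumption-laden sketch, not a vdsm test; the path is a placeholder): stat() on a file under an unreachable, hard-mounted NFS export does not fail quickly, it simply blocks, which is what wedges qemu and, in turn, libvirt.

import os
import threading

ISO_PATH = "/rhev/data-center/mnt/<iso-domain>/.../some.iso"  # placeholder path

def try_stat(path, timeout=10):
    result = {}

    def worker():
        try:
            result["stat"] = os.stat(path)
        except OSError as exc:
            result["error"] = exc

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout)
    if t.is_alive():
        # On an unreachable hard NFS mount we typically end up here.
        return "stat() still blocked after %s seconds" % timeout
    return result

print(try_stat(ISO_PATH))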

Comment 62 Francesco Romani 2018-01-30 17:29:38 UTC
(In reply to Nir Soffer from comment #61)
> I think what happens is this:
> 
> 1. Blocking access to iso domain
> 2. Vdsm periodic check try to get all stats, including block stats
> 3. Libvirt try to get the size of all drives, including the cdrom
> 4. Qemu try to call stat() on the cdrom file and get stuck
> 5. Libvirt stuck waiting for qemu, VM becomes non-responsive
> 
> Having "report" error policy will make qemu report an error to the guest when
> guest io fail trying to read from the cdrom. Since the storage is non
> responsive
> this will be reported after many minutes.

This could indeed explain the behaviour - let's hold on a bit longer before we engage the libvirt developers.

It could actually be even worse than that; we need to check whether, in the bulk stats flow, libvirt has additional logic to guard against rogue storage - we had a few bugs related to that for NFS storage, and a few fixes, back in time.

Anyway, I still think that, while this follow-up investigation is worth doing, once we send the correct error_policy for cdroms this BZ is fixed, and we need to continue on a new one.

Comment 63 Nir Soffer 2018-01-30 17:44:50 UTC
(In reply to Francesco Romani from comment #62)
I agree, handling issues with a cdrom on NFS storage is not in the scope of this
bug.

We can probably see the effect of "report" by testing a cdrom on block storage
(by uploading an ISO to a disk on an iSCSI or FC storage domain). With block storage,
our multipath configuration ensures that we fail fast (in 20-30 seconds)
after access to storage is blocked.

Comment 64 Michal Skrivanek 2018-02-01 13:22:25 UTC
Up until bug 1530730 all CDROMs were on NFS, so without fixing comment #52 we can't really say reporting works. I do not expect people to move away from the ISO domain that fast.

Do we need UpdateVolumes() for CDROMs?

Comment 65 Nir Soffer 2018-02-07 09:39:09 UTC
(In reply to Michal Skrivanek from comment #64)
> Up until bug 1530730 all CDROMs were on NFS, so without fixing comment #52
> we can't really say reporting works.

Reporting works, but it cannot solve the issue of the guest hanging on a non-responsive
ISO storage domain.

> I do not expect people moving away from
> the iso domain that fast

I don't think it is related: when the storage becomes non-responsive, the VM should
not stop, but the error should be reported back to the guest.

> Do we need UpdateVolumes() for CDROMs?

I see no reason to query cdrom sizes; they are readonly. Currently we are updating
all drives:

    def _execute(self):
        for drive in self._vm.getDiskDevices():
            # TODO: If this blocks (is it actually possible?)
            # we must make sure we don't overwrite good data
            # with stale old data.
            self._vm.updateDriveVolume(drive)

We can skip readonly drives - hopefully engine will be ok with this.

Tal, is engine ok with not having size info for readonly disks like cdroms?
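
Roughly the change being suggested, as a sketch (not necessarily the exact patch that was merged, and assuming the drive object exposes a boolean readonly attribute): skip read-only drives, such as CD-ROMs, when refreshing volume sizes in the periodic operation.

    def _execute(self):
        for drive in self._vm.getDiskDevices():
            if getattr(drive, 'readonly', False):
                # Read-only drives (e.g. CD-ROMs) have a static size;
                # skipping them avoids blocking on an unreachable ISO domain.
                continue
            # TODO: If this blocks (is it actually possible?)
            # we must make sure we don't overwrite good data
            # with stale old data.
            self._vm.updateDriveVolume(drive)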

Comment 66 Michal Skrivanek 2018-02-07 10:23:01 UTC
(In reply to Nir Soffer from comment #65)

> > I do not expect people moving away from
> > the iso domain that fast
> 
> I don't think it is related, when the storage becomes responsive, the guest
> should
> not stop but report the error back to the guest.

To clarify my previous point: based on the logs it seems the monitoring _is_ the reason for the hang. The policy seems to be set correctly to "report", and the only trouble seems to be with vdsm periodic polling.

Comment 67 Michal Skrivanek 2018-02-16 11:45:12 UTC
Tal?

BTW, changing that for readonly drives would be enough for this BZ, but in general error_policy=report can be set on any drive, so eventually we need to handle a stuck updateDriveVolume().

Comment 68 Tal Nisan 2018-02-21 11:10:18 UTC
error_policy should stay on 'guest' for CDROM drives; I don't mind particularly if updateDriveVolume does not run on readonly disks, as their size is static anyway.

Comment 74 errata-xmlrpc 2018-05-15 17:36:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1488

Comment 75 Franta Kust 2019-05-16 13:08:01 UTC
BZ<2>Jira Resync

