Bug 1230788 - (rhv_turn_off_autoresume_of_paused_VMs) [RFE] Have a policy for autoresume of VMs paused due to IO errors (stay paused, turn off, restart with defined time out time)
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.4.5
Hardware: All
OS: Linux
Priority: urgent
Severity: high
Target Milestone: ovirt-4.2.0
Target Release: ---
Assigned To: Michal Skrivanek
QA Contact: Polina
Keywords: FutureFeature
Duplicates: 1206317 1386444
Depends On: oVirt_turn_off_autoresume_of_paused_VMs 1481022
Blocks: 1417161 1541529 1386444 1460513 1545980
Reported: 2015-06-11 10:26 EDT by Julio Entrena Perez
Modified: 2018-05-15 13:38 EDT
CC List: 31 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Previously, if a VM was paused due to an I/O error, there was no way to configure what should happen once the storage is fixed; the only behavior was "auto resume", which resumed the VM. This feature adds two more options, configurable per VM: "Kill" and "Leave Paused".
Reason: "Auto resume" combined with an HA VM that uses a VM lease could lead to a split brain, and it can also interfere with custom HA solutions.
Result: The user can now configure one of three resume policies per VM: auto resume (previously the only behavior), leave paused, or kill. (A hedged REST sketch for setting the policy appears below, after the external tracker list.)
Story Points: ---
Clone Of:
Cloned To: oVirt_turn_off_autoresume_of_paused_VMs
Environment:
Last Closed: 2018-05-15 13:36:24 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
mavital: needinfo+
mavital: testing_plan_complete+



External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2128511 None None None 2016-01-18 10:32 EST
Red Hat Knowledge Base (Solution) 2749481 None None None 2017-12-04 09:25 EST
Red Hat Product Errata RHEA-2018:1488 None None None 2018-05-15 13:38 EDT

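The resume policy described in the Doc Text above is set per VM. Below is a minimal, hedged sketch of the equivalent REST call, assuming the storage_error_resume_behaviour element of the RHV 4.2 API; the engine address, credentials, and VM id are placeholders:

    # Hedged sketch: update one VM's resume policy through the REST API.
    # engine.example.com, the credentials, and VM_ID are placeholders.
    # Accepted values (per the Doc Text): auto_resume, leave_paused, kill.
    curl -k -u 'admin@internal:PASSWORD' \
         -H 'Content-Type: application/xml' \
         -X PUT \
         -d '<vm><storage_error_resume_behaviour>leave_paused</storage_error_resume_behaviour></vm>' \
         'https://engine.example.com/ovirt-engine/api/vms/VM_ID'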
Description Julio Entrena Perez 2015-06-11 10:26:21 EDT
> 1. Proposed title of this feature request  

Make automatic resume of VMs paused due to I/O error configurable.
      
> 3. What is the nature and description of the request?

Customer needs to be able to instruct RHEV that VMs paused as a result of an I/O error should not be resumed automatically once the storage domain recovers.

According to bug 1036358 VMs paused as a result of a problem in the storage domain should be resumed automatically once the problem is resolved.
      
> 4. Why does the customer need this? (List the business requirements here)  

If VMs are resumed automatically (in an uncontrolled way) when the error condition in the storage domain is resolved, this causes unexpected and/or undesired effects in their application.
For example, resumed VMs do not have their clock in sync, which would cause significant issues for the customer's application.

Customer needs to be able to configure RHEV not to automatically resume VMs that paused as a result of problems with the storage.
Comment 3 Doron Fediuck 2015-06-14 10:00:02 EDT
We should consider hosted engine for this RFE, since the hosted engine VM
needs to be resumed regardless of the configuration; alternatively, the
configuration could be made at the SD level, which means the HE SD would not use it.
Comment 4 Michal Skrivanek 2015-06-15 06:02:31 EDT
Is time sync the problem here? If so, we can add a guest agent verb to explicitly sync time after resume.

If there are more/other issues, then we can extend the existing error_policy/propagateErrors parameter.
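For context, the existing per-disk setting mentioned above surfaces as the error_policy attribute on each disk's <driver> element in the libvirt domain XML. A hedged, read-only way to inspect it on a host (the VM name is a placeholder):

    # Read-only query against libvirt; "my_vm" is a placeholder domain name.
    # error_policy="stop" means the guest is paused when a disk I/O error occurs.
    virsh -r dumpxml my_vm | grep error_policy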
Comment 19 Yaniv Lavi 2017-02-13 17:59:21 EST
*** Bug 1206317 has been marked as a duplicate of this bug. ***
Comment 22 Michal Skrivanek 2017-08-02 02:13:44 EDT
note the special case of HA VMs discussed in https://bugzilla.redhat.com/show_bug.cgi?id=1467893#c33
Comment 26 Michal Skrivanek 2017-09-19 09:56:38 EDT
see upstream bug 1317450 for more details
Comment 27 Polina 2017-10-03 06:43:50 EDT
could you please add feature page?

thank you
Comment 28 Michal Skrivanek 2017-10-16 02:41:40 EDT
design as per https://bugzilla.redhat.com/show_bug.cgi?id=1317450#c25
Comment 31 Michal Skrivanek 2017-11-24 05:17:47 EST
The bot doesn't seem to work; this is already being tested.
Comment 35 Doron Fediuck 2017-12-04 09:25:51 EST
*** Bug 1386444 has been marked as a duplicate of this bug. ***
Comment 38 RHV Bugzilla Automation and Verification Bot 2017-12-06 11:17:48 EST
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[No relevant external trackers attached]

For more info please contact: rhv-devops@redhat.com
Comment 39 RHV Bugzilla Automation and Verification Bot 2017-12-12 16:16:17 EST
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[No relevant external trackers attached]

For more info please contact: rhv-devops@redhat.com
Comment 40 Polina 2018-01-16 08:41:44 EST
Added "depends on" 1481022. The RFE could not be verified for all kinds of storage because of a precondition problem: there is no I/O error VM pause when blocking NFS/Gluster storage. Tested only for iSCSI.
Comment 42 Emma Heftman 2018-02-20 07:31:21 EST
Hi Michal
Is this ready to be documented? I can see that only iscsi was tested.
Also is there a feature page?
Thanks!
Comment 43 Michal Skrivanek 2018-02-21 04:02:36 EST
(In reply to Emma Heftman from comment #42)
> Hi Michal
> Is this ready to be documented? I can see that only iscsi was tested.
> Also is there a feature page?
> Thanks!

Well, this RFE is complete, but it may make sense to take bug 1540548 into account too, along with the comprehensive description of HA VMs currently under review in https://github.com/oVirt/ovirt-site/pull/1530.
Comment 44 Marina 2018-02-21 09:26:36 EST
(In reply to Polina from comment #40)
> added depends on 1481022. The RFE could not be verified for all kinds of
> storages because of precondition problem - no I/O Error VM Pause when
> blocking NFS/gluster storage. Tested only for iscsi

Polina, how about FC?
The main customer behind this RFE is using FC storage and we would like to make sure the solution works right for them.
Comment 45 Polina 2018-02-22 02:43:00 EST
(In reply to Marina from comment #44)
> (In reply to Polina from comment #40)
> > added depends on 1481022. The RFE could not be verified for all kinds of
> > storages because of precondition problem - no I/O Error VM Pause when
> > blocking NFS/gluster storage. Tested only for iscsi
> 
> Polina, how about FC?
> The main customer behind this RFE is using FC storage and we would like to
> make sure the solution works right for them.

Hi Marina, this feature was not tested with FC. I'll try to get an environment with FC storage today and test it. I will update you ASAP.
Comment 46 Polina 2018-02-26 09:03:56 EST
Hi Marina,

The feature was tested successfully on a Fibre Channel storage domain, on the latest build:
rhv-release-4.2.1-3-001.noarch and RHEL 7.5.
Comment 47 Polina 2018-02-26 09:50:09 EST
Just to summarize:
The feature was successfully tested on two kinds of storage: iSCSI and Fibre Channel.
On NFS and Gluster SDs there is a problem with the test setup (precondition):
the VM is not paused due to an I/O error while the NFS/Gluster storage is blocked. The problem is described in detail in BZ https://bugzilla.redhat.com/show_bug.cgi?id=1481022.
Comment 48 Polina 2018-04-15 12:24:24 EDT
for rhvm-4.2.3-0.1.el7.noarch, libvirt-3.9.0-14.el7_5.2.x86_64:

The feature is verified for Gluster storage.

NFS - please see https://bugzilla.redhat.com/show_bug.cgi?id=1481022#c58
Comment 51 Polina 2018-04-30 03:44:36 EDT
Summary of verification on rhv-release-4.2.3-4-001.noarch:

The bug is verified on Gluster, FC, iSCSI, and NFS storage.

1. On iSCSI and Gluster, the I/O error pause was created by dropping traffic with an iptables rule (sketched below).
2. On FC, by making a LUN path faulty (e.g. echo "offline" > /sys/block/sdd/device/state).
3. On NFS, the I/O error pause was created by changing the /etc/exports file on the NFS server while the VM was writing.
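A hedged sketch of the blocking step used for iSCSI/Gluster in point 1, run on the host carrying the VM; STORAGE_IP is a placeholder for the storage server address:

    # Block outgoing traffic to the storage server so qemu hits I/O errors
    # and the VM is paused; remove the rule afterwards to restore connectivity
    # and observe the configured resume policy.
    STORAGE_IP=192.0.2.10                            # placeholder address
    iptables -I OUTPUT -d "$STORAGE_IP" -j DROP      # block storage traffic
    # ...wait for the VM to enter the Paused (EIO) state, then:
    iptables -D OUTPUT -d "$STORAGE_IP" -j DROP      # restore connectivity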
Comment 52 Michal Skrivanek 2018-04-30 07:53:41 EDT
Given the limitations we have with NFS, this looks good enough. It would still be great if you could reduce the timeout parameters for NFS mounts so we can check IOError reporting before the host gets fenced, but I think that's tracked in another related bug.
Comment 53 Polina 2018-05-01 05:17:19 EDT
For NFS, I succeeded in getting an I/O error pause by changing the Retransmissions and Timeout parameters for the SD.
Here are the steps (the equivalent NFS mount options are sketched below):
   1. Put the SD into maintenance (via Data Center).
   2. Open Storage Domains / Manage Domain / Custom Connection Parameters.
   3. Change the following parameters:
	Retransmissions (#) = 2
	Timeout (deciseconds) = 1
   4. Activate the SD.
   5. Run the VM associated with this SD.

The behavior of NFS VMs has been tested in this setup.
So I can verify. Please confirm.
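A hedged illustration of what those two dialog fields correspond to at the NFS mount level, i.e. the standard retrans= and timeo= options (timeo is in deciseconds); the server, export path, and mount point are placeholders, and RHV normally performs this mount itself when the SD is activated:

    # Placeholder server/export/mount point; retrans and timeo mirror the
    # "Retransmissions (#)" and "Timeout (deciseconds)" dialog fields.
    mount -t nfs -o soft,retrans=2,timeo=1 nfs-server.example.com:/export/data /mnt/nfs-test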
Comment 54 Michal Skrivanek 2018-05-02 07:18:37 EDT
That's good enough, but it needs to be noted in the documentation.
Comment 55 Polina 2018-05-02 08:05:51 EDT
Verified on rhv-release-4.2.3-4-001.noarch (see comments 51-54).
Comment 58 errata-xmlrpc 2018-05-15 13:36:24 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1488
