Bug 970520 - [RFE] prevent VMS from pausing when using allocated disks
Summary: [RFE] prevent VMS from pausing when using allocated disks
Keywords:
Status: CLOSED DUPLICATE of bug 1024428
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: RFEs
Version: 3.1.2
Hardware: All
OS: All
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Tal Nisan
QA Contact: Haim
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2013-06-04 08:47 UTC by Karim Boumedhel
Modified: 2018-12-03 19:00 UTC (History)
10 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-12-15 15:05:45 UTC
oVirt Team: ---
Target Upstream Version:
Embargoed:
scohen: needinfo+


Attachments (Terms of Use)
hook to set errorpolicy to report for all disks or selected ones (988 bytes, text/x-python)
2013-06-04 08:47 UTC, Karim Boumedhel

Description Karim Boumedhel 2013-06-04 08:47:11 UTC
Created attachment 756660 [details]
hook to set errorpolicy to report for all disks or selected ones

Description of problem:
When the storage domain backing one of a VM's disks experiences a failure, the VM is paused.
While this behaviour is helpful with thin-provisioned disks, it should be configurable for preallocated disks, so that the error policy can be changed to report the failure to the guest OS and let the VM handle the loss of storage itself (for instance with mdadm or HA-LVM, using disks from other storage domains).
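At the libvirt level this maps to the disk driver's error_policy attribute: the default policy pauses the guest on write errors, while error_policy='report' surfaces the error to the guest OS. A minimal illustration of the relevant domain XML (source path and device names are hypothetical):

```xml
<disk type='block' device='disk'>
  <!-- 'report' surfaces I/O errors to the guest instead of pausing
       the VM; other libvirt values are 'stop', 'ignore' and 'enospace' -->
  <driver name='qemu' type='raw' error_policy='report'/>
  <source dev='/dev/mapper/vg0-vm_disk'/>  <!-- hypothetical path -->
  <target dev='vda' bus='virtio'/>
</disk>
```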




Version-Release number of selected component (if applicable):
3.1.X

How reproducible:
have a storage domain fail


Steps to Reproduce:
1. Have a storage domain fail.

Actual results:
All VMs with disks on this storage domain will be paused.


Expected results:
Provide the ability to report the I/O error to the guest OS instead of pausing the VM.

Additional info:
Attached is a proposed before_vm_start hook that changes the error policy to 'report' when the VM is launched.



Having this makes it possible to:
- set up Red Hat clusters using virtual machines
- build a disaster-recovery solution for VMs using software RAID (such as mdadm), as long as the VMs are presented with disks from two different storage domains (one from each site)
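The hook approach described above can be sketched as follows. The attachment itself is not reproduced here, so the function and variable names below are assumptions; the sketch simply rewrites the libvirt domain XML so that every disk driver (or only selected target devices) gets error_policy='report':

```python
# Sketch of a before_vm_start VDSM hook that flips the disk error
# policy to 'report' (names are illustrative, not the attachment's).
from xml.dom import minidom

def set_error_policy(domxml, policy='report', only_devs=None):
    """Set error_policy on each <driver> under a <disk> element.

    If only_devs is given (e.g. {'vda', 'vdb'}), disks whose
    <target dev='...'> is not listed are left untouched."""
    for disk in domxml.getElementsByTagName('disk'):
        if only_devs is not None:
            targets = disk.getElementsByTagName('target')
            if not targets or targets[0].getAttribute('dev') not in only_devs:
                continue
        for driver in disk.getElementsByTagName('driver'):
            driver.setAttribute('error_policy', policy)
    return domxml

# Inside VDSM, the domain XML would come from the hooking module,
# roughly:
#
#   import hooking
#   domxml = hooking.read_domxml()
#   set_error_policy(domxml)
#   hooking.write_domxml(domxml)
```

In a real hook, the set of disks to modify could be taken from a custom property exposed in the environment, which is presumably how the attached "all disks or selected ones" behaviour is driven.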

Comment 1 Itamar Heim 2013-06-06 12:37:52 UTC
What's the implication - will the disk move to an error state, or will blocks that fail to be written be marked as 'bad blocks' and stay that way when the storage comes back?

Comment 2 Karim Boumedhel 2013-06-06 13:56:45 UTC
Hello Itamar,
When we let the guest OS handle the I/O error, whether via mdadm or LVM, the disk from the failed storage domain is marked as failed and the OS handles the situation properly, with no impact at the LVM level.
When the disk comes back, standard OS procedures are applied to resync the data.

Comment 3 Itamar Heim 2013-06-09 09:18:04 UTC
Ayal - do you see a disk level field for this, a supported custom level property at disk level (now that we have device level custom properties), or just the external hook approach?

Comment 5 Ayal Baron 2013-09-04 21:28:31 UTC
(In reply to Itamar Heim from comment #3)
> Ayal - do you see a disk level field for this, a supported custom level
> property at disk level (now that we have device level custom properties), or
> just the external hook approach?

Should be advanced per VM disk level field, similar to cache mode

Comment 9 Ayal Baron 2013-12-15 15:05:45 UTC

*** This bug has been marked as a duplicate of bug 1024428 ***

