Bug 1020543

Summary: Moving one storage domain to maintenance makes the entire data center unusable
Product: Red Hat Enterprise Virtualization Manager
Component: vdsm
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: urgent
Priority: urgent
Reporter: Federico Simoncelli <fsimonce>
Assignee: Federico Simoncelli <fsimonce>
QA Contact: Leonid Natapov <lnatapov>
Docs Contact:
CC: abaron, acanan, bazulay, dfediuck, eedri, fsimonce, iheim, knesenko, lpeer, scohen, yeylon
Target Milestone: ---
Target Release: 3.3.0
Keywords: Regression
Flags: abaron: Triaged+
Whiteboard: storage
Fixed In Version: is20
Doc Type: Bug Fix
Doc Text:
Previously, moving one storage domain to maintenance mode triggered SPM contention between hosts and made the entire data center unusable. This was due to a regression that caused a KeyError on 'isoprefix' for inactive domains. This update fixes the KeyError, so when a storage domain is moved to maintenance the rest of the storage pool remains usable.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-01-21 16:18:45 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1026487

Description Federico Simoncelli 2013-10-17 21:16:47 UTC
Description of problem:
Moving one storage domain to maintenance makes the entire data center unusable.

The SPM contention moves from one host to another and never succeeds; the following traceback appears in vdsm.log:

Thread-708::ERROR::2013-10-17 17:13:23,196::task::850::TaskManager.Task::(_setError) Task=`04c8d8de-c8fe-4ecf-9651-c837278de8c3`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2562, in getStoragePoolInfo
    if domInfo[sdUUID]['isoprefix']:
KeyError: 'isoprefix'
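The actual patch is not shown in this report; the sketch below only illustrates the failure mode and a defensive fix, assuming `domInfo` maps domain UUIDs to per-domain stat dicts where inactive domains can lack the `isoprefix` key (the helper name and sample data are hypothetical):

```python
def pool_iso_prefix(domInfo, sdUUID):
    # Direct indexing, domInfo[sdUUID]['isoprefix'], raises KeyError when an
    # inactive domain's stats omit the key. dict.get() tolerates the missing
    # key and falls back to an empty prefix instead.
    return domInfo[sdUUID].get('isoprefix', '')


# Hypothetical data: an active ISO domain and an inactive data domain.
domInfo = {
    'iso-domain': {'status': 'Active', 'isoprefix': '/rhev/iso'},
    'data-domain': {'status': 'Inactive'},  # no 'isoprefix' key
}

print(pool_iso_prefix(domInfo, 'iso-domain'))   # /rhev/iso
print(pool_iso_prefix(domInfo, 'data-domain'))  # empty string, no KeyError
```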

Version-Release number of selected component (if applicable):
is19

How reproducible:
100%

Steps to Reproduce:
1. Put one of the storage domains into maintenance.

Actual results:
The entire storage pool remains in the contending state (unusable) and the SPM role moves from host to host. No further action is allowed.

Expected results:
The selected storage domain should be moved to maintenance and the rest of the pool should still be usable.

Comment 4 Leonid Natapov 2013-11-24 13:55:16 UTC
Verified fixed in si24.1.

Comment 5 Charlie 2013-11-28 00:29:23 UTC
This bug is currently attached to errata RHBA-2013:15291. If this change is not to be documented in the text for this errata, please either remove it from the errata, set the requires_doc_text flag to minus (-), or leave a "Doc Text" value of "--no tech note required" if you do not have permission to alter the flag.

Otherwise to aid in the development of relevant and accurate release documentation, please fill out the "Doc Text" field above with these four (4) pieces of information:

* Cause: What actions or circumstances cause this bug to present.
* Consequence: What happens when the bug presents.
* Fix: What was done to fix the bug.
* Result: What now happens when the actions or circumstances above occur. (NB: this is not the same as 'the bug doesn't present anymore')

Once filled out, please set the "Doc Type" field to the appropriate value for the type of change made and submit your edits to the bug.

For further details on the Cause, Consequence, Fix, Result format please refer to:

https://bugzilla.redhat.com/page.cgi?id=fields.html#cf_release_notes 

Thanks in advance.

Comment 6 errata-xmlrpc 2014-01-21 16:18:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0040.html