Red Hat Bugzilla – Bug 829275
ovirt-engine-backend [Auto-Recovery]: Inconsistent and confusing storage domain statuses
Last modified: 2016-02-10 11:46:10 EST
This bug blocks Auto-Recovery tests, since we auto-recover domains that the user moved into Maintenance state.
++ This bug was initially created as a clone of Bug #788936 +++
Created attachment 560500
* If a non-master domain (including ISO and export) is deactivated by user, the domain goes to Maintenance status for a few seconds, and then goes to Inactive status.
* If a master data domain is deactivated by user, the domain goes into Maintenance status and doesn't go into Inactive status.
* These two statuses, Maintenance and Inactive, can co-exist in the same DC at the same time, even though they presumably mean the same thing.
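The transitions described above can be modeled as a small state machine. The sketch below is purely illustrative (it is not the actual ovirt-engine backend code); the status names come from this report, while the function and flag names are hypothetical:

```python
from enum import Enum

class SDStatus(Enum):
    """Storage domain statuses mentioned in this report."""
    ACTIVE = "Active"
    MAINTENANCE = "Maintenance"
    INACTIVE = "Inactive"

def deactivate(is_master: bool, buggy: bool = True) -> list:
    """Return the sequence of statuses a storage domain passes through
    after the user deactivates it, per the behavior described above.

    'buggy' models the reported misbehavior; with buggy=False the
    domain stays in Maintenance, which is the expected outcome.
    """
    if is_master or not buggy:
        # Master domains (and the fixed behavior) stay in Maintenance.
        return [SDStatus.MAINTENANCE]
    # Reported bug: non-master domains show Maintenance for a few
    # seconds, then the backend moves them to Inactive.
    return [SDStatus.MAINTENANCE, SDStatus.INACTIVE]
```

Under this model, `deactivate(is_master=False)` yields `[MAINTENANCE, INACTIVE]` (the confusing behavior), while the expected result for any user-initiated deactivation is `[MAINTENANCE]` alone.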
--- Additional comment from firstname.lastname@example.org on 2012-02-09 08:16:59 EST ---
According to ofrenkel, the fact that the SD is moved by backend to Inactive status is a bug.
--- Additional comment from email@example.com on 2012-06-06 07:06:33 EDT ---
This bug is a test blocker for the entire Auto-Recovery feature.
Moving this to urgent+blocker.
Taking this; it is blocking some of my bugs.
This bug is not actually related to auto-recovery; it is a storage/infra bug.
The Auto-Recovery feature merely exposed it.
*** Bug 817538 has been marked as a duplicate of this bug. ***
Why is upstream BZ (https://bugzilla.redhat.com/show_bug.cgi?id=788936) in NEW state?
*** Bug 788936 has been marked as a duplicate of this bug. ***
It was not assigned to me, so I closed it.
(In reply to comment #5)
> *** Bug 788936 has been marked as a duplicate of this bug. ***
Please do not dup a public upstream BZ over a non-public downstream BZ.
Verified on si6: domains remain in Maintenance.