Bug 1205739 - [RFE][DR] - Allow recovering an imported domain without an UP DC
[RFE][DR] - Allow recovering an imported domain without an UP DC
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.5.0
Hardware/OS: All Linux
Priority: high Severity: medium
Target Milestone: ovirt-4.2.0
Target Release: 4.2.0
Assigned To: Maor
QA Contact: Elad
Keywords: FutureFeature, Improvement
Duplicates: 1348097
Depends On:
Blocks: RHV_DR 1534978
 
Reported: 2015-03-25 10:48 EDT by Allan Voss
Modified: 2018-05-15 13:51 EDT (History)
CC List: 13 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
A previously imported storage domain that was destroyed or detached can now be imported into an uninitialized Data Center. In the past, this operation failed because the storage domain retained its old metadata.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-05-15 13:49:33 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
Cloudforms Team: ---
sherold: Triaged+
gklein: testing_plan_complete-




External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 80662 master POST core: Clean pool id from metadata. 2017-08-15 08:51 EDT
oVirt gerrit 80665 master MERGED hsm: Use default host id when pool doesn't exists 2017-08-24 08:38 EDT
Red Hat Product Errata RHEA-2018:1489 None None None 2018-05-15 13:51 EDT

Description Allan Voss 2015-03-25 10:48:54 EDT
Description of problem:
When attempting to attach an imported Fibre Channel storage domain to a Data Center, the operation fails with no indication of the cause in the RHEV-M UI.

Version-Release number of selected component (if applicable):
rhevm-3.5.0-0.32.el6ev.noarch 
vdsm-4.16.8.1-8.el6ev.x86_64

How reproducible:
Unknown

Steps to Reproduce:
1. Import an existing Fibre Channel storage domain after rebuilding RHEV-M
2. Attempt to attach imported domain to a Data Center

Actual results:
Failure

Expected results:
Successful attachment
Comment 3 Allon Mureinik 2015-03-26 04:09:13 EDT
Seems as though the storage domain contains leftover metadata from an old pool.

Maor - isn't this the same issue as bug 1179899 ?
If so, please provide GSS with manual steps to handle the current situation so they can unblock the customer, and let's backport this patch to zstream.
Comment 4 Maor 2015-03-26 05:07:02 EDT
(In reply to Allon Mureinik from comment #3)
> Seems as though the storage domain contains leftover metadata from an old
> pool.
> 
> Maor - isn't this the same issue as bug 1179899 ?
> If so, please provide GSS with manual steps to handle the current situation
> so they can unblock the customer, and let's backport this patch to zstream.

No, it is the same as https://bugzilla.redhat.com/show_bug.cgi?id=1178646, which was fixed in 3.5.1.
Comment 5 Maor 2015-03-26 05:18:42 EDT
Since VDSM takes a lock on the storage pool when performing an attach operation (the flow also uses detach internally), we can't import and attach a Storage Domain to an uninitialized Data Center (see [1]).

Currently we have added a validation when trying to attach an imported Storage Domain to an uninitialized Data Center
(see https://bugzilla.redhat.com/show_bug.cgi?id=1178646).

This obstacle should be removed in a later version, once the storage pool is removed completely from VDSM.


[1]
http://www.ovirt.org/Features/ImportStorageDomain#Implementation_gaps (section [6])
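The constraint in comment 5 can be illustrated with a minimal sketch. All class and function names here are hypothetical, not the actual ovirt-engine/vdsm code: the attach flow runs attach/detach under the storage pool lock and therefore needs an SPM, so a pre-check has to reject imported domains when the target Data Center has no active pool yet.

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    initialized: bool  # True once a master domain is attached and an SPM is up

@dataclass
class StorageDomain:
    imported: bool  # True for a domain imported from pre-existing storage

def can_attach(dc: DataCenter, sd: StorageDomain):
    """Illustrative CDA-style validation: the attach flow needs the
    storage pool lock, which only exists in an initialized DC."""
    if sd.imported and not dc.initialized:
        return (False, "Cannot attach an imported Storage Domain "
                       "to an uninitialized Data Center")
    return (True, None)

ok, reason = can_attach(DataCenter(initialized=False), StorageDomain(imported=True))
print(ok, reason)
```

This mirrors the behavior users hit in 3.5.x, where the failure surfaced with no clear error until the validation from bug 1178646 was added.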
Comment 6 Maor 2015-03-26 05:21:28 EDT
Allan, please try to initialize the Data Center first by attaching a new Storage Domain to it, and then try to import the FC Storage Domain again.
Comment 7 Allon Mureinik 2015-03-26 09:30:11 EDT
So to clarify and set an action plan:

1. You cannot attach an imported domain to an uninitialized DC, since the import procedure requires an active SPM.

2. The workaround for the current customer ticket is to create a storage domain and activate it in the DC. This domain will become the master domain, and an SPM will be started. Once this is done, you can attach the domain you wanted. Then, if you wish, you could disable and remove the previous master domain.

3. In 3.5.1, this behavior is made clearer by a CDA error message (see bug 1178646).

4. In 3.6.0, once we remove the SPM (see bug 1185830), we can work on the ability to attach a domain to a newly created DC. We'll use this BZ to track it. 

Based on these comments, I've reduced the priority.
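The workaround in step 2 above can be sketched as a small state machine (hypothetical names; this is not the actual engine code), which shows why creating a new domain first unblocks the attach of the imported one:

```python
class DataCenter:
    def __init__(self):
        self.domains = []
        self.master = None
        self.spm_started = False

    def attach_new_domain(self, name):
        # A freshly created domain can always be attached; the first one
        # becomes the master domain and brings up the SPM.
        self.domains.append(name)
        if self.master is None:
            self.master = name
            self.spm_started = True

    def attach_imported_domain(self, name):
        # The import flow needs an active SPM (see comment 5).
        if not self.spm_started:
            raise RuntimeError("no active SPM: initialize the DC first")
        self.domains.append(name)

dc = DataCenter()
dc.attach_new_domain("bootstrap_sd")   # create and activate a new domain
dc.attach_imported_domain("fc_sd")     # the imported FC domain now attaches
print(dc.domains)
```

Once both domains are attached, the bootstrap domain can be deactivated and removed, as the action plan notes.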
Comment 16 Vinzenz Feenstra [evilissimo] 2016-11-21 08:50:22 EST
*** Bug 1348097 has been marked as a duplicate of this bug. ***
Comment 17 Allon Mureinik 2017-06-29 12:00:18 EDT
Maor, Nir, what are your thoughts about adding a "force clear pool MD" verb to HSM?
(Sort of a "this is a horrible idea, use it at your own peril" kind of thing for DR situations where the user knows the original pool/SPM is dead.)
Comment 22 Allon Mureinik 2017-07-05 04:45:44 EDT
For the DR scenario, maybe we should consider having this as part of the tool, and not a "first class citizen" in the import flow.
Comment 24 Maor 2017-09-05 14:13:10 EDT
I believe this fix was merged into the build as well.
Comment 25 Elad 2017-09-11 11:39:44 EDT
Storage pool initialization from an imported domain that was forcibly detached or destroyed is now possible. The DC gets initialized successfully.



2017-09-11 18:32:15,501+03 INFO  [org.ovirt.engine.core.bll.storage.pool.AddStoragePoolWithStoragesCommand] (org.ovirt.thread.EE-ManagedThreadFactory-engine-Thread-17) [de2f8944-0186-4686-adad-d4756b42984f] Domain 'f7824010-0d88-4bcc-a167-5ab389dd8825' is already attached to a different storage pool, clean the storage domain metadata.
2017-09-11 18:32:15,508+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CleanStorageDomainMetaDataVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-engine-Thread-17) [de2f8944-0186-4686-adad-d4756b42984f] START, CleanStorageDomainMetaDataVDSCommand(HostName = host_mixed_3, StorageDomainVdsCommandParameters:{hostId='85f7a6ac-1ea9-47cf-a4f0-4f1655e63ddb', storageDomainId='f7824010-0d88-4bcc-a167-5ab389dd8825'}), log id: 64b2ad3b
2017-09-11 18:32:41,289+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CleanStorageDomainMetaDataVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-engine-Thread-17) [de2f8944-0186-4686-adad-d4756b42984f] FINISH, CleanStorageDomainMetaDataVDSCommand, log id: 64b2ad3b
2017-09-11 18:32:41,289+03 INFO  [org.ovirt.engine.core.bll.storage.pool.AddStoragePoolWithStoragesCommand] (org.ovirt.thread.EE-ManagedThreadFactory-engine-Thread-17) [de2f8944-0186-4686-adad-d4756b42984f] Successfully cleaned metadata for storage domain 'f7824010-0d88-4bcc-a167-5ab389dd8825'.



Used:
ovirt-engine-4.2.0-0.0.master.20170907100709.git14accac.el7.centos.noarch
vdsm-4.20.3-20.git1fda949.el7.centos.x86_64
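A minimal sketch of what the "clean the storage domain metadata" step in the logs above amounts to, per the gerrit patch titles ("core: Clean pool id from metadata"). The dict keys and function name are illustrative, not vdsm's real on-disk metadata format:

```python
def clean_pool_id(md: dict) -> dict:
    """Drop the stale pool association so the domain no longer reads as
    'already attached to a different storage pool'."""
    cleaned = dict(md)          # leave the caller's copy untouched
    cleaned["POOL_UUID"] = ""   # illustrative key name
    return cleaned

md = {"SDUUID": "f7824010-0d88-4bcc-a167-5ab389dd8825",
      "POOL_UUID": "deadbeef-old-pool"}
print(clean_pool_id(md)["POOL_UUID"])  # prints an empty string
```

With the stale pool id cleared, the engine can proceed to initialize a new (previously uninitialized) Data Center from the imported domain, which is exactly what the AddStoragePoolWithStoragesCommand log lines show.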
Comment 26 RHV Bugzilla Automation and Verification Bot 2017-12-06 11:19:26 EST
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Project 'ovirt-engine'/Component 'vdsm' mismatch]

For more info please contact: rhv-devops@redhat.com
Comment 27 RHV Bugzilla Automation and Verification Bot 2017-12-12 16:17:34 EST
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Project 'ovirt-engine'/Component 'vdsm' mismatch]

For more info please contact: rhv-devops@redhat.com
Comment 28 Allon Mureinik 2017-12-13 11:53:25 EST
Maor, can you please provide some doctext for this bz?
Comment 31 Elad 2018-04-24 10:29:00 EDT
Moving back to VERIFIED (comment 25)
Comment 34 errata-xmlrpc 2018-05-15 13:49:33 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1489
