Bug 1505798

Summary: It's not clear how to restore storage when restoring self-hosted engine environment
Product: Red Hat Enterprise Virtualization Manager
Reporter: Filip Brychta <fbrychta>
Component: Documentation
Assignee: rhev-docs <rhev-docs>
QA Contact: rhev-docs <rhev-docs>
Status: CLOSED DUPLICATE
Severity: medium
Priority: unspecified
Version: unspecified
CC: lbopf, lsurette, rbalakri, srevivo, ykaul
Hardware: Unspecified
OS: Unspecified
Type: Bug
oVirt Team: Docs
Last Closed: 2017-11-28 02:01:30 UTC

Description Filip Brychta 2017-10-24 10:14:54 UTC
Description of problem:
The following documentation describes how to restore a self-hosted engine environment:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/sect-restoring_she_bkup#Creating_a_New_Self-Hosted_Engine_Environment_to_be_Used_as_the_Restored_Environment

but there is nothing about how to restore the storage.

Following Step 4 (Configuring Storage) and reusing the storage originally used by the self-hosted engine, the restore procedure fails with "The selected device already contains a storage domain."

In the case of iSCSI storage, it is necessary either to create a new LUN or to clean up the old one with dd.

There should be a note about this in the documentation, or, even better, the restore process should offer automatic cleanup of the old storage.
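For reference, the manual dd cleanup mentioned above could look roughly like the sketch below. This is a hedged illustration, not an official Red Hat procedure; the device path is a hypothetical placeholder, and the 200 MiB figure is an assumption about how much of the LUN's leading region holds the old storage domain metadata.

```shell
#!/bin/sh
# Sketch only: wipe old storage domain metadata from an iSCSI LUN so the
# restore no longer fails with "The selected device already contains a
# storage domain."
#
# DEVICE is a hypothetical placeholder -- replace it with the real
# multipath device for the LUN (verify with `multipath -ll` first).
# WARNING: this destroys all data on the device.
DEVICE="${DEVICE:-/dev/mapper/example-lun}"

if [ -b "$DEVICE" ] || [ -f "$DEVICE" ]; then
    # The storage domain / LVM metadata sits at the start of the LUN;
    # zeroing the first 200 MiB (an assumed size) clears it.
    dd if=/dev/zero of="$DEVICE" bs=1M count=200 conv=fsync
else
    echo "Device $DEVICE not found; set DEVICE to the correct LUN path" >&2
fi
```

After the wipe, the device should be accepted as blank storage when re-running the restore's storage configuration step.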

Comment 1 Lucy Bopf 2017-11-28 02:01:30 UTC
Thanks for raising this bug, Filip.

The engineering team is currently working on the entire backup and restore flow to determine the best-practice approach in bug 1420604. We will proceed with all documentation updates based on the outcome of that work. I will add this bug as a dependency there, and close this one as a duplicate.

*** This bug has been marked as a duplicate of bug 1420604 ***