Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1516698

Summary: [BLOCKED] Adding an SD while the master SD is down results in an error and the new SD being only partly added
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Version: 4.1.7
Status: CLOSED DUPLICATE
Severity: high
Priority: low
Reporter: Sergio Lopez <slopezpa>
Assignee: Nobody <nobody>
QA Contact: Elad <ebenahar>
Docs Contact:
CC: amureini, ebenahar, lsurette, rbalakri, Rhev-m-bugs, srevivo
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-11-23 12:42:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 954361
Bug Blocks:

Description Sergio Lopez 2017-11-23 09:54:11 UTC
Description of problem:

Adding a new SD while the master SD is down (i.e., trying to recover a DC that has lost its only SD) fails with the following error:

<snip>
Error while executing action: Failed to attach Storage due to an error on the Data Center master Storage Domain.
-Please activate the master Storage Domain first.
</snip>

The new SD is actually added to storage_server_connections, storage_domain_dynamic and storage_domain_static, but it is not present in storage_pool_iso_map.

As a result, the new SD is _not_ visible in the GUI, but:

 - Trying to add it again will fail.

 - When trying to reinitialize the DC, the SD will be presented as an option, and the operation will succeed. This is actually a good thing, and I think this behavior should be preserved.
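The "partly added" state described above amounts to a consistency gap between the engine's database tables: the domain is registered in storage_server_connections, storage_domain_dynamic and storage_domain_static, but has no row in storage_pool_iso_map. A minimal sketch of that check, using plain Python sets as hypothetical stand-ins for the real table queries (the domain IDs below are invented for illustration):

```python
# Sketch of the consistency gap: a domain ID present in storage_domain_static
# but absent from storage_pool_iso_map is "half added" -- the engine knows
# about it, yet it is never mapped to a pool and so stays invisible in the GUI.
# The sets stand in for SELECTs against the engine DB; contents are hypothetical.

def half_added_domains(static_ids, iso_map_ids):
    """Return domain IDs registered in storage_domain_static but never
    mapped to a storage pool in storage_pool_iso_map."""
    return static_ids - iso_map_ids

# Example: 'sd-new' was created while the master SD was down, so the
# attach step that would have written its storage_pool_iso_map row failed.
static_ids = {"sd-master", "sd-new"}
iso_map_ids = {"sd-master"}
print(sorted(half_added_domains(static_ids, iso_map_ids)))  # ['sd-new']
```

A domain flagged by such a check matches the symptoms reported here: re-adding it fails, while reinitializing the DC still offers it as an option.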


Version-Release number of selected component (if applicable):

Re-tested with 4.1.7.


How reproducible:

Always.


Steps to Reproduce:
1. Create a DC with just one NFS-based SD.
2. On the NFS server, remove access to the NFS volume backing the SD. The SD will turn to "Inactive" in RHV.
3. Try to add a new SD backed by a different NFS volume.

Comment 5 Elad 2018-08-02 08:16:09 UTC
Duplicate of bug 954361, which is CLOSED WONTFIX; setting qe_test_coverage-.