Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1574926

Summary: Reusing the existing gluster configuration needs validation
Product: [oVirt] cockpit-ovirt
Reporter: SATHEESARAN <sasundar>
Component: Gdeploy
Assignee: Parth Dhanjal <dparth>
Status: CLOSED CURRENTRELEASE
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Docs Contact:
Priority: medium
Version: 0.11.20
CC: bugs, dparth, godas, rhs-bugs, sabose, sankarshan
Target Milestone: ovirt-4.2.7
Flags: rule-engine: ovirt-4.2+
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: cockpit-ovirt-0.11.34-1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1574924
Environment:
Last Closed: 2018-11-02 14:30:23 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1574924, 1629675

Description SATHEESARAN 2018-05-04 11:19:01 UTC
+++ This bug was initially created as a clone of Bug #1574924 +++

Description of problem:
-----------------------
RHHI installation was invoked from the cockpit UI. While configuring the gluster volumes, the wizard was closed after the gdeploy configuration file was generated but before the gluster volume configuration was completed.

On the next attempt to start a 'Hyperconverged' setup from the cockpit UI, an information pop-up says 'Gluster configuration found', even though no volumes were created. Selecting 'use existing configuration' proceeds to the node zero deployment, although no volumes actually exist.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
cockpit-ovirt-0.11.23

How reproducible:
-----------------
Always

Steps to Reproduce:
------------------
1. Select hyperconverged deployment, proceed with wizard and reach the final tab, where the gdeploy configuration file is generated
2. Do **not** complete the installation; close the wizard instead
3. Re-attempt the installation: after clicking 'hyperconverged', select 'use existing configuration'

Actual results:
---------------
Though there are no volumes available on the host, 'Use Existing Configuration' proceeds with the node zero (HE) deployment without validating that the volumes are available.

Expected results:
-----------------
Since no volumes were created and in the started state, 'use existing configuration' should validate that the volume in question actually exists and is in the started state before proceeding.
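The expected validation could be sketched roughly as below. This is a hypothetical helper, not the actual cockpit-ovirt code; it only assumes the text layout of `gluster volume info` output (`Volume Name:` and `Status:` lines per volume).

```python
def parse_volume_statuses(info_output):
    """Map volume name -> status from `gluster volume info` text output."""
    statuses = {}
    current = None
    for line in info_output.splitlines():
        line = line.strip()
        if line.startswith("Volume Name:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("Status:") and current is not None:
            statuses[current] = line.split(":", 1)[1].strip()
    return statuses

def engine_volume_ready(info_output, volume="engine"):
    """True only when the named volume exists and is in the Started state."""
    return parse_volume_statuses(info_output).get(volume) == "Started"

if __name__ == "__main__":
    # In the real flow the text would come from the host, e.g.
    # subprocess.run(["gluster", "volume", "info"], ...).stdout
    sample = "Volume Name: engine\nType: Replicate\nStatus: Started\n"
    print(engine_volume_ready(sample))  # prints True
```

With such a check in place, 'use existing configuration' would only be offered when the engine volume both exists and is started, instead of trusting the leftover gdeploy configuration file.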

Comment 1 Sahina Bose 2018-06-28 05:23:06 UTC
There should be validation that gdeploy completed successfully before showing this option

Comment 2 Parth Dhanjal 2018-09-07 12:43:16 UTC
(In reply to Sahina Bose from comment #1)
> There should be validation that gdeploy completed successfully before
> showing this option

Hey Sahina! I've made the changes. You can have a look at the changed UI here https://imgur.com/a/d0hK7rS

Comment 3 SATHEESARAN 2018-10-15 01:44:59 UTC
Hi Parth,

This fix addresses part of the problem - the case where gdeploy fails.
So when a gdeploy deployment fails and the user starts a Hyperconverged deployment again, the 'Reuse existing configuration' option is no longer shown.

That is a good fix, but the original intent of the bug, as per comment 0, is to validate that the engine volume is present and its status is up. This is particularly required when ovirt-hosted-engine-setup fails, or when the user chooses to start the deployment over and all volumes are cleaned up. In that case, when we start the hyperconverged deployment, it will still say 'use existing configuration' because the previous gdeploy deployment was successful, even though no engine volume is available.

For the reason mentioned above, moving this bug to ASSIGNED.
Let's discuss this.

Comment 4 Parth Dhanjal 2018-10-15 12:09:52 UTC
(In reply to SATHEESARAN from comment #3)
> Hi Parth,
> 
> This fix addresses part of the problem - the case where gdeploy fails.
> So when a gdeploy deployment fails and the user starts a Hyperconverged
> deployment again, the 'Reuse existing configuration' option is no longer
> shown.
> 
> That is a good fix, but the original intent of the bug, as per comment 0,
> is to validate that the engine volume is present and its status is up.
> This is particularly required when ovirt-hosted-engine-setup fails, or
> when the user chooses to start the deployment over and all volumes are
> cleaned up. In that case, when we start the hyperconverged deployment, it
> will still say 'use existing configuration' because the previous gdeploy
> deployment was successful, even though no engine volume is available.
> 
> For the reason mentioned above, moving this bug to ASSIGNED.
> Let's discuss this.

Hey Sas!

With the oVirt 4.3 release, we will add the deletion of this file to the ansible cleanup itself, and we will also note in the docs that on manual cleanup this file needs to be removed. Other than that, we can add more checks and validations for the button (checking the volume status).
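The gating and cleanup described above could look roughly like the sketch below. The function names and the status-file path are hypothetical, assumed only to illustrate the idea that a marker file written by a successful gdeploy run must be removed on cleanup, and that the reuse option should require both the marker and a started engine volume.

```python
import os

def cleanup_gdeploy_state(state_file):
    """Remove the gdeploy status file so a stale 'success' is not reused."""
    try:
        os.remove(state_file)
    except FileNotFoundError:
        pass  # already gone; cleanup is idempotent

def offer_existing_configuration(state_file, engine_volume_started):
    """Show 'use existing configuration' only when the previous gdeploy run
    succeeded (status file present) AND the engine volume is up."""
    return os.path.exists(state_file) and engine_volume_started
```

Tying the option to both conditions covers the scenario in comment 3: after ovirt-hosted-engine-setup fails and the volumes are cleaned up, the stale status file alone would no longer be enough to offer the reuse path.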

Comment 5 SATHEESARAN 2018-10-16 00:49:13 UTC
(In reply to Parth Dhanjal from comment #4)
> Hey Sas!
> 
> With the oVirt 4.3 release, we will add the deletion of this file to the
> ansible cleanup itself, and we will also note in the docs that on manual
> cleanup this file needs to be removed. Other than that, we can add more
> checks and validations for the button (checking the volume status).

Hi Parth,

It makes sense for the gluster ansible cleanup playbook to remove the gdeploy configuration status file. That change should be part of the gluster-ansible playbook.

In that case, your fix can be verified. I will open another bug in RHHI for the cleanup playbook.

Moving this bug to ON_QA.

Comment 6 SATHEESARAN 2018-10-19 01:54:50 UTC
Verified with cockpit-ovirt-dashboard-0.11.35

Unless the gdeploy deployment completes successfully, the 'reuse existing gluster configuration' option is not available from the cockpit interface.

Comment 7 Sandro Bonazzola 2018-11-02 14:30:23 UTC
This bugzilla is included in the oVirt 4.2.7 release, published on November 2nd, 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.7 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.