Bug 1654124 - [gluster-ansible] Include ansibleStatus file removal with the gluster configuration cleanup
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gluster-ansible
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.4.z Async Update
Assignee: Gobinda Das
QA Contact: bipin
URL:
Whiteboard:
Depends On:
Blocks: 1640868
 
Reported: 2018-11-28 05:41 UTC by Gobinda Das
Modified: 2019-10-03 07:58 UTC
CC List: 7 users

Fixed In Version: gluster-ansible-roles-1.0.5-1.el7rhgs.noarch
Doc Type: Bug Fix
Doc Text:
During cleanup of a failed deployment, not all files were removed. This meant that when users tried to redeploy, they saw an option to use an existing deployment configuration even though any existing configuration should have been removed. All files are now correctly removed during cleanup, and the 'Use existing deployment' option is no longer visible after cleanup.
Clone Of: 1640868
Environment:
Last Closed: 2019-10-03 07:58:12 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:2557 0 None None None 2019-10-03 07:58:34 UTC

Description Gobinda Das 2018-11-28 05:41:33 UTC
+++ This bug was initially created as a clone of Bug #1640868 +++

Description of problem:
-----------------------
With the latest fix in cockpit-ovirt, a gdeployStatus file is generated on successful gluster configuration.

When the user opts to clean up a successfully deployed gluster configuration, this file needs to be removed as well.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
cleanup playbook shipped with gluster-ansible-roles-1.0.3


How reproducible:
-----------------
Always


Steps to Reproduce:
---------------------
1. Successfully complete gluster configuration
2. Cleanup the gluster configuration ( removal of volumes, VGs, PVs, etc )
3. Start installation again

Actual results:
---------------
'use existing configuration' option is available

Expected results:
-----------------
As the existing gluster configuration has been cleaned up, the 'use existing gluster configuration' option should not be available


Additional info:
----------------
Remove the file (which holds the gdeploy status configuration) when the cleanup playbook is executed
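The requested fix amounts to one extra task in the cleanup playbook. A minimal sketch of such a task is below; the file path is an assumption for illustration, since the location where cockpit-ovirt writes the status file is not stated in this report:

```yaml
# Hypothetical task for the cleanup playbook: remove the deployment
# status file so the wizard no longer offers the stale configuration.
# The path below is assumed; use wherever cockpit-ovirt writes the file.
- name: Remove the deployment status file left by a previous deployment
  file:
    path: /root/ansibleStatus.conf   # assumed location
    state: absent
```

With `state: absent`, the task is idempotent: it succeeds whether or not the file exists, so the cleanup playbook also works on hosts where deployment failed before the status file was written.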

--- Additional comment from Sahina Bose on 2018-11-19 00:05:35 EST ---

Gobinda, can you check this? Create a bz clone in cockpit-ovirt if needed

Comment 1 Sandro Bonazzola 2019-01-28 09:43:51 UTC
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 2 Gobinda Das 2019-03-04 10:12:40 UTC
Posted patch: https://github.com/gluster/gluster-ansible/pull/65

Comment 4 SATHEESARAN 2019-03-28 11:22:20 UTC
This bug fix is required for the forthcoming RHHI-V release

Comment 6 bipin 2019-05-17 06:43:53 UTC
Moving the bug to VERIFIED since the fix works.

Components:
===========
glusterfs-6.0-2.el7rhgs.x86_64
gluster-ansible-roles-1.0.5-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
gluster-ansible-features-1.0.5-1.el7rhgs.noarch

Steps:
=====
1. Complete gluster deployment
2. Run the gluster_cleanup.yml
3. Verified that the file 'ansibleStatus.conf' is removed; also logged into the cockpit UI and verified that the 'use existing gluster configuration' option was no longer shown
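The file-removal part of this verification can be expressed as Ansible tasks, which is convenient when re-running the check across all hosts. This is a sketch, not part of the shipped playbooks, and the path is an assumed location for 'ansibleStatus.conf':

```yaml
# Hypothetical post-cleanup check: fail if the status file survived.
- name: Check whether the status file still exists
  stat:
    path: /root/ansibleStatus.conf   # assumed location
  register: status_file

- name: Fail if cleanup left the status file behind
  assert:
    that: not status_file.stat.exists
    fail_msg: "ansibleStatus.conf still present after cleanup"
```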

Comment 11 errata-xmlrpc 2019-10-03 07:58:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2557

