Bug 1640868
| Field | Value |
| --- | --- |
| Summary | [gluster-ansible] Include ansibleStatus file removal with the gluster configuration cleanup |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Reporter | SATHEESARAN <sasundar> |
| Component | rhhi |
| Assignee | Gobinda Das <godas> |
| Status | CLOSED ERRATA |
| QA Contact | SATHEESARAN <sasundar> |
| Severity | medium |
| Priority | medium |
| Version | rhhi-1.1 |
| CC | bshetty, godas, pasik, rhs-bugs, sabose |
| Keywords | ZStream |
| Target Release | RHHI-V 1.6.z Async Update |
| Hardware | x86_64 |
| OS | Linux |
| Fixed In Version | gluster-ansible-roles-1.0.5-1.el7rhgs.noarch |
| Doc Type | Bug Fix |
| Clones | 1654124 (view as bug list) |
| Last Closed | 2019-10-03 12:23:57 UTC |
| Type | Bug |
| Bug Depends On | 1654124 |
| Bug Blocks | 1683647 |

Doc Text:

> During cleanup of a failed deployment, not all files were removed. This meant that when users tried to redeploy, they saw an option to use an existing deployment configuration even though any existing configuration should have been removed. All files are now correctly removed during cleanup, and the 'Use existing deployment' option is no longer visible after cleanup.
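Per the Doc Text and the Fixed In Version above, the fix extends the gluster cleanup so that the deployment status file is removed along with the rest of the configuration. As a rough illustration only, a task of the following kind would do that; the actual gluster-ansible-roles change is not shown in this report, and the path of ansibleStatus.conf is an assumption:

```yaml
---
# Illustrative sketch only, NOT the actual gluster-ansible-roles code:
# a cleanup task that removes the deployment status file so cockpit
# stops offering 'Use existing configuration' on the next deployment.
# The file path below is a hypothetical placeholder.
- hosts: all
  become: true
  tasks:
    - name: Remove ansibleStatus.conf as part of gluster cleanup
      file:
        path: /root/ansibleStatus.conf   # hypothetical location
        state: absent
```

Because `state: absent` is idempotent, running the cleanup repeatedly is harmless even when the file is already gone.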
Description SATHEESARAN 2018-10-19 02:23:31 UTC
Gobinda, can you check this? Create a bz clone in cockpit-ovirt if needed.

Calling this issue out as a known issue for RHHI-V 1.6: after cleaning up the setup and starting a fresh deployment, the 'Use existing configuration' option still shows up. Once the gluster cleanup is done, do not use the 'Use existing configuration' option.

Update on this bug, based on the base bug:
------------------------------------------
The bug is fixed in the latest bits.

Components:
===========
glusterfs-6.0-2.el7rhgs.x86_64
gluster-ansible-roles-1.0.5-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
gluster-ansible-features-1.0.5-1.el7rhgs.noarch

Steps:
======
1. Complete the gluster deployment.
2. Run gluster_cleanup.yml.
3. Verify that the file 'ansibleStatus.conf' is removed, then log in to the cockpit UI and verify that 'use existing gluster configuration' is no longer offered (a sketch of automating this check appears at the end of this report).

The dependent bug is already in VERIFIED state, moving this bug to ON_QA.

Moving the bug to VERIFIED since the fix works, using the same components and verification steps as above.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2963
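For reference, a minimal sketch of automating verification step 3 above. This is not part of gluster-ansible; the location of ansibleStatus.conf is not stated in the bug report, so the path here is an assumption:

```yaml
---
# Hedged sketch: after running gluster_cleanup.yml, check that the
# deployment status file is really gone on every host. The file path
# is a hypothetical placeholder, not taken from the bug report.
- hosts: all
  become: true
  tasks:
    - name: Check whether the deployment status file still exists
      stat:
        path: /root/ansibleStatus.conf   # hypothetical location
      register: status_file

    - name: Fail if cleanup left the status file behind
      fail:
        msg: "ansibleStatus.conf was not removed by gluster_cleanup.yml"
      when: status_file.stat.exists
```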