Bug 1571280 - Unmanage doesn't start when more clusters are available
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: web-admin-tendrl-node-agent
Version: 3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: gowtham
QA Contact: Filip Balák
Depends On:
Blocks: 1503137 1526338
 
Reported: 2018-04-24 08:40 EDT by Filip Balák
Modified: 2018-09-04 03:05 EDT
CC: 5 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-04 03:04:50 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers:
Red Hat Product Errata RHSA-2018:2616 (last updated 2018-09-04 03:05 EDT)

Description Filip Balák 2018-04-24 08:40:29 EDT
Description of problem:
I have a setup with 3 clusters, none of which has any volume: one cluster with 4 nodes and two clusters with 1 node each.
When I import any of them into Tendrl, I cannot unmanage it afterwards. The unmanage job remains in the `new` state.
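
One way to see where the job is stuck is to look at the job queue that Tendrl keeps in etcd. The sketch below is only illustrative and rests on assumptions not taken from this report: that etcd listens on the Tendrl server on port 2379 with the v2 keys API and no TLS/auth, that jobs live under the /queue directory with a per-job "status" key, and that the hostname is a placeholder.

import requests

ETCD = "http://tendrl-server.example.com:2379"  # hypothetical etcd endpoint

# List the Tendrl job queue recursively via the etcd v2 keys API.
resp = requests.get(ETCD + "/v2/keys/queue", params={"recursive": "true"})
resp.raise_for_status()

for job in resp.json().get("node", {}).get("nodes", []):
    # Each job is a directory; look for its "status" child key.
    status = next(
        (child["value"] for child in job.get("nodes", [])
         if child["key"].endswith("/status")),
        "unknown",
    )
    if status == "new":
        print("stuck job:", job["key"], "status:", status)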

Version-Release number of selected component (if applicable):
glusterfs-3.12.2-8.el7rhgs.x86_64
tendrl-ansible-1.6.3-2.el7rhgs.noarch
tendrl-api-1.6.3-1.el7rhgs.noarch
tendrl-api-httpd-1.6.3-1.el7rhgs.noarch
tendrl-commons-1.6.3-2.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-1.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-1.el7rhgs.noarch
tendrl-node-agent-1.6.3-2.el7rhgs.noarch
tendrl-notifier-1.6.3-2.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-1.el7rhgs.noarch

How reproducible:
It happened every time I tried it from a snapshot with this configuration (6 test runs).

Steps to Reproduce:
1. Create 3 Gluster clusters and install Tendrl on top of them.
2. Import one or more of the created clusters.
3. Unmanage one of the imported clusters (a REST-based sketch of this step follows the list).
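
A rough equivalent of step 3 done outside the UI, going through the tendrl-api REST interface, might look like the sketch below. The endpoint paths, the authentication header, and the host name are assumptions rather than anything confirmed in this report; check them against the tendrl-api version actually installed.

import time
import requests

API = "http://tendrl-server.example.com/api/1.0"  # placeholder host
HEADERS = {"Authorization": "Bearer AUTH_TOKEN"}  # token from a prior login, placeholder
CLUSTER_ID = "<cluster-integration-id>"           # placeholder

# Ask tendrl-api to unmanage one imported cluster (endpoint path is an assumption).
resp = requests.post(API + "/clusters/" + CLUSTER_ID + "/unmanage", headers=HEADERS)
resp.raise_for_status()
job_id = resp.json().get("job_id")

# Poll the resulting job; in this bug it never leaves the "new" state.
for _ in range(30):
    job = requests.get(API + "/jobs/" + str(job_id), headers=HEADERS).json()
    print("job status:", job.get("status"))
    if job.get("status") not in ("new", "processing"):
        break
    time.sleep(10)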

Actual results:
The unmanage-cluster job doesn't start; it remains in the 'new' state.

Expected results:
The unmanage-cluster job should start correctly.

Additional info:
Comment 1 gowtham 2018-05-01 10:33:52 EDT
Can you please update the Ansible version on the server and try it again? I faced this issue, and after I updated the server's Ansible version it worked fine. Just for confirmation, please update Ansible on the server and try once more.
Comment 2 Filip Balák 2018-05-02 04:02:10 EDT
I have updated Ansible from ansible-2.5.1-1.el7ae.noarch to ansible-2.5.2-1.el7ae.noarch, but I still see the issue. Which version should I use?
Comment 3 gowtham 2018-05-02 04:10:43 EDT
Rohan, is the fix for the latest Ansible merged in this build?
Comment 5 gowtham 2018-05-02 07:37:19 EDT
Similar issue: https://bugzilla.redhat.com/show_bug.cgi?id=1572118
Comment 8 Filip Balák 2018-05-10 04:08:55 EDT
Seems ok --> VERIFIED

Tested with:
tendrl-ansible-1.6.3-3.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-4.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-2.el7rhgs.noarch
tendrl-node-agent-1.6.3-4.el7rhgs.noarch
tendrl-notifier-1.6.3-2.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-1.el7rhgs.noarch
ansible-2.5.2-1.el7ae.noarch
Comment 10 errata-xmlrpc 2018-09-04 03:04:50 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616
