Description of problem:
If a cluster is imported and un-managed in the UI multiple times in a row, sometimes the import job completes but the cluster remains in the un-managed state in the UI.

Version-Release number of selected component (if applicable):

How reproducible:
Seen when import and un-manage are run back to back about 5-6 times.

Steps to Reproduce:
1. Create a Gluster cluster.
2. Import the cluster in the Tendrl UI.
3. Execute un-manage once the cluster is imported successfully.
4. Repeat steps 2 and 3 another 5-6 times.

Actual results:
The import job is marked as completed, but the cluster remains un-managed in the UI.

Expected results:
Import and un-manage should always work successfully, no matter how many times they are repeated.

Additional info:
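The repeated import/un-manage cycle from the steps above can be sketched as a small bash helper. This is only an illustration of the reproduction loop: `import_cluster` and `unmanage_cluster` are hypothetical placeholders standing in for the real Tendrl UI/API actions (including waiting for each job to finish), not actual Tendrl commands.

```shell
#!/usr/bin/env bash
# Sketch of the repeated import/un-manage reproduction loop.
# import_cluster and unmanage_cluster are placeholder functions
# (assumptions) standing in for the real Tendrl actions.

import_cluster()   { echo "import $1"; }    # placeholder: trigger import, wait for the job
unmanage_cluster() { echo "unmanage $1"; }  # placeholder: trigger un-manage, wait for the job

# Run the import/un-manage pair back to back the requested number of times.
repeat_cycle() {
  local cluster="$1" cycles="${2:-6}" i
  for ((i = 1; i <= cycles; i++)); do
    import_cluster "$cluster"
    unmanage_cluster "$cluster"
  done
}
```

For example, `repeat_cycle mycluster 6` would run six back-to-back cycles; after each cycle the cluster state shown in the UI should be compared against the reported job status to catch the mismatch described above.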
This will be tested during feature testing for cluster unmanage.
Tested and verified with our test_cluster_unmanage_valid[1] test, using the following simple bash script for repeated execution of the test:

    # sleep 3000; date | tee -a logs/stdout.log; while ( set -o pipefail; python3 -m pytest usmqe_tests/api/gluster/test_gluster_cluster.py -k test_cluster_unmanage_valid 2>&1 | tee -a logs/stdout.log); do sleep 3000; date | tee -a logs/stdout.log; done

Because of bug 1616005, discovered during testing of this scenario, I tried it multiple times (usually 10-20 times on one cluster) and the issue described in the Description did not occur.

Version-Release number of selected component (if applicable):

RHGS WA Server:
Red Hat Enterprise Linux Server release 7.5 (Maipo)
tendrl-ansible-1.6.3-6.el7rhgs.noarch
tendrl-api-1.6.3-5.el7rhgs.noarch
tendrl-api-httpd-1.6.3-5.el7rhgs.noarch
tendrl-commons-1.6.3-11.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-8.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-8.el7rhgs.noarch
tendrl-node-agent-1.6.3-9.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-9.el7rhgs.noarch

Gluster Storage Server:
Red Hat Enterprise Linux Server release 7.5 (Maipo)
Red Hat Gluster Storage Server 3.4.0
tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
tendrl-commons-1.6.3-11.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-9.el7rhgs.noarch
tendrl-node-agent-1.6.3-9.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch

[1] https://github.com/usmqe/usmqe-tests/blob/master/usmqe_tests/api/gluster/test_gluster_cluster.py#L177

>> VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616