Bug 1600113
Summary: | Invalid volume record when expand cluster is available
---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage
Reporter: | Filip Balák <fbalak>
Component: | web-admin-tendrl-node-agent
Assignee: | gowtham <gshanmug>
Status: | CLOSED ERRATA
QA Contact: | Filip Balák <fbalak>
Severity: | medium
Docs Contact: |
Priority: | unspecified
Version: | rhgs-3.4
CC: | amukherj, apaladug, fbalak, gshanmug, mbukatov, nthomas, rhs-bugs, sankarshan
Target Milestone: | ---
Target Release: | RHGS 3.4.0
Hardware: | Unspecified
OS: | Unspecified
Whiteboard: |
Fixed In Version: | tendrl-node-agent-1.6.3-9.el7rhgs, tendrl-commons-1.6.3-9.el7rhgs, tendrl-api-1.6.3-4.el7rhgs.noarch, tendrl-monitoring-integration-1.6.3-7.el7rhgs.noarch, tendrl-ui-1.6.3-7.el7rhgs.noarch, tendrl-notifier-1.6.3-4.el7rhgs.noarch, tendrl-gluster-integration
Doc Type: | If docs needed, set a value
Doc Text: |
Story Points: | ---
Clone Of: |
Environment: |
Last Closed: | 2018-09-04 07:08:56 UTC
Type: | Bug
Regression: | ---
Mount Type: | ---
Documentation: | ---
CRM: |
Verified Versions: |
Category: | ---
oVirt Team: | ---
RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | ---
Target Upstream Version: |
Embargoed: |
Bug Depends On: |
Bug Blocks: | 1503137
Attachments: | Volume list (attachment 1458071)
This is also fixed as part of https://bugzilla.redhat.com/show_bug.cgi?id=1599634. Please verify this with the latest build:

tendrl-node-agent-1.6.3-9.el7rhgs.noarch
tendrl-commons-1.6.3-9.el7rhgs.noarch
tendrl-api-1.6.3-4.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-7.el7rhgs.noarch
tendrl-ui-1.6.3-7.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-7.el7rhgs.noarch

I tried with the latest build and cannot reproduce this. Filip, please try this with the new build.

Tested multiple times and it looks OK. It was probably fixed with BZ 1599634. Tested with:

tendrl-ansible-1.6.3-5.el7rhgs.noarch
tendrl-api-1.6.3-4.el7rhgs.noarch
tendrl-api-httpd-1.6.3-4.el7rhgs.noarch
tendrl-commons-1.6.3-9.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-7.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-7.el7rhgs.noarch
tendrl-node-agent-1.6.3-9.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-8.el7rhgs.noarch

This bug is fixed in the latest build, so I am moving it to ON_QA.

Thank you for the resolution. Based on Comment 4, I am moving this to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616
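The "Tested with:" lists quoted in the comments above are the installed RPM versions on the tested systems. As a minimal sketch (not how the reporter collected them), such a list can be gathered with the standard `rpm` CLI, for example:

```python
#!/usr/bin/env python3
"""Sketch: collect installed tendrl package versions, similar to the
"Tested with:" lists quoted in the comments above."""
import subprocess

# `rpm -qa` lists every installed package; keep only the tendrl components
# tracked in this bug report.
out = subprocess.run(["rpm", "-qa"], capture_output=True, text=True, check=True)
packages = sorted(line for line in out.stdout.splitlines()
                  if line.startswith("tendrl-"))
print("\n".join(packages))
```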
Created attachment 1458071 [details]
Volume list

Description of problem:
When new nodes are added to an already managed cluster, a new volume is created, and all machines are restarted, the record of the managed volume appears to be broken. After an Enable Profiling job is started, the volume record seems to break even further, and in the UI the volume is shown without any volume data, as in the attachment (Volume list).

Version-Release number of selected component (if applicable):
tendrl-ansible-1.6.3-5.el7rhgs.noarch
tendrl-api-1.6.3-4.el7rhgs.noarch
tendrl-api-httpd-1.6.3-4.el7rhgs.noarch
tendrl-commons-1.6.3-8.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-6.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-6.el7rhgs.noarch
tendrl-node-agent-1.6.3-8.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-6.el7rhgs.noarch

How reproducible:
60%

Steps to Reproduce:
1. Import a cluster with 4 nodes into WA.
2. Add 2 more nodes to the cluster and create a distributed replicated volume (see the sketch at the end of this report).
3. Restart all nodes.
4. Go to the cluster list.
5. Enable profiling for the cluster (the Enable Profiling job usually does not start).
6. Go to the volume list for the cluster.

Actual results:
There is a broken record in the volume list.

Expected results:
The managed volume should be displayed correctly and all related jobs should be possible for the volume.

Additional info:
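The exact commands used in step 2 are not recorded in this report. The following is a minimal sketch of how the cluster expansion and the distributed replicated volume creation could be driven from one of the existing peers; the host names (gl1..gl6), volume name, and brick path are hypothetical and only illustrate the kind of 2x3 distributed-replicate volume involved.

```python
#!/usr/bin/env python3
"""Sketch of step 2: expand the trusted storage pool and create a
distributed replicated volume. Host names, volume name, and brick
paths are made up; run on an existing peer with the gluster CLI."""
import subprocess

NEW_NODES = ["gl5.example.com", "gl6.example.com"]   # hypothetical hosts
VOLUME = "distrep_vol"                               # hypothetical volume name
BRICK_DIR = "/bricks/brick1"                         # hypothetical brick path

def run(cmd):
    # Echo and execute a gluster CLI command, failing loudly on error.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Add the two new nodes to the trusted storage pool.
for node in NEW_NODES:
    run(["gluster", "peer", "probe", node])

# Create a distributed replicated volume: replica 3 with six bricks
# gives two replica sets (2x3), then start it.
bricks = [f"gl{i}.example.com:{BRICK_DIR}/{VOLUME}" for i in range(1, 7)]
run(["gluster", "volume", "create", VOLUME, "replica", "3"] + bricks)
run(["gluster", "volume", "start", VOLUME])
```

The same commands can of course be run directly in a shell (`gluster peer probe ...`, `gluster volume create ... replica 3 ...`, `gluster volume start ...`); steps 3 to 6 are then performed via reboots and the WA UI as described above.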