Bug 1519218
| Summary: | After performing volume stop, Tendrl web GUI shows mismatched status for a few bricks in the "brick status" layout | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Manisha Saini <msaini> |
| Component: | web-admin-tendrl-gluster-integration | Assignee: | Nishanth Thomas <nthomas> |
| Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.3 | CC: | fbalak, msaini, nchilaka, rhs-bugs |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.4.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | tendrl-gluster-integration-1.6.1-1.el7rhgs, tendrl-api-1.6.1-1.el7rhgs.noarch.rpm, tendrl-commons-1.6.1-1.el7rhgs.noarch.rpm, tendrl-monitoring-integration-1.6.1-1.el7rhgs.noarch.rpm, tendrl-node-agent-1.6.1-1.el7, tendrl-ui-1.6.1-1.el7rhgs.noarch.rpm | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-09-04 07:00:31 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1503134 | | |
| Attachments: | | | |
Description
Manisha Saini
2017-11-30 12:22:13 UTC
Created attachment 1360896 [details]
Screenshot 2, taken 30 minutes after performing the volume stop
@Manisha, the screenshot attached to this bug (https://bugzilla.redhat.com/attachment.cgi?id=1360896) looks perfectly all right to me. The volume and bricks are marked as down. Can you please confirm?

Created attachment 1389022 [details]
volume dashboard
Checked with:
tendrl-commons-1.5.4-9.el7rhgs.noarch
tendrl-api-1.5.4-4.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-14.el7rhgs.noarch
tendrl-ansible-1.5.4-7.el7rhgs.noarch
tendrl-node-agent-1.5.4-16.el7rhgs.noarch
tendrl-ui-1.5.4-6.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-14.el7rhgs.noarch
tendrl-notifier-1.5.4-6.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.5.4-4.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-14.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
I removed, added, killed, and replaced some bricks, then stopped the volume. No bricks on the volume dashboard had a status other than 10. See attachment 1389022 [details]
With the given reproducer: half of the bricks changed status in Grafana from `0` to `8` after one minute, and the other half changed from `0` to `8` after another minute. There was no status `-`. All bricks are correctly marked as down, and all bricks are correctly marked as up again when the volume starts. --> VERIFIED

Tested with:
tendrl-ansible-1.6.3-3.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-4.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-2.el7rhgs.noarch
tendrl-node-agent-1.6.3-4.el7rhgs.noarch
tendrl-notifier-1.6.3-2.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-1.el7rhgs.noarch
glusterfs-3.12.2-9.el7rhgs.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616
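The "all bricks down after volume stop" state that QA verified in the Tendrl UI can also be spot-checked directly from the Gluster CLI. Below is a minimal sketch that parses the `Online` (Y/N) column of `gluster volume status` output; the sample text, volume name, and brick paths are hypothetical placeholders, not taken from this bug report.

```python
# Hypothetical sketch: parse the Online column of `gluster volume status`
# output to confirm no brick reports as up after `gluster volume stop`.
# SAMPLE_STATUS is illustrative example text, not output from this bug.

SAMPLE_STATUS = """\
Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/bricks/b1                    N/A       N/A        N       N/A
Brick server2:/bricks/b2                    N/A       N/A        N       N/A
"""

def brick_online_flags(status_text):
    """Return {brick: True/False} parsed from the Online (Y/N) column."""
    flags = {}
    for line in status_text.splitlines():
        if line.startswith("Brick "):
            fields = line.split()
            # fields: ['Brick', '<host>:/<path>', tcp, rdma, online, pid]
            flags[fields[1]] = fields[4] == "Y"
    return flags

flags = brick_online_flags(SAMPLE_STATUS)
print(flags)
# For a stopped volume, no brick should be online.
assert not any(flags.values())
```

In practice the input would come from running `gluster volume status <volname>` on a storage node; comparing its per-brick online flags against what the Tendrl/Grafana dashboard shows is exactly the mismatch this bug describes.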