Bug 1519178
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | Brick Kill followed by Replace brick shows incorrect brick status on RHGS WA | | |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Manisha Saini <msaini> |
| Component: | web-admin-tendrl-gluster-integration | Assignee: | Nishanth Thomas <nthomas> |
| Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.3 | CC: | amukherj, fbalak, msaini, nchilaka, nthomas, rhs-bugs |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.4.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | tendrl-gluster-integration-1.6.1-1.el7rhgs, tendrl-api-1.6.1-1.el7rhgs.noarch.rpm, tendrl-commons-1.6.1-1.el7rhgs.noarch.rpm, tendrl-monitoring-integration-1.6.1-1.el7rhgs.noarch.rpm, tendrl-node-agent-1.6.1-1.el7, tendrl-ui-1.6.1-1.el7rhgs.noarch.rpm | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-09-04 06:59:21 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1503134 | | |
| Attachments: | | | |
Description

Manisha Saini 2017-11-30 10:47:36 UTC

Created attachment 1360856 [details]
Status after the killed brick has been replaced by a new brick

This status persists 20-25 minutes after the replace brick operation.
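The report does not record the exact commands used, but the scenario in the summary (kill a brick process, then replace the brick) can be sketched with the standard Gluster CLI. The volume name and brick paths below are hypothetical placeholders; `commit force` is the documented way to replace a brick whose source is offline.

```shell
# Hypothetical volume and brick paths -- adjust to the actual cluster layout.
VOL=testvol
OLD_BRICK=server1:/bricks/brick1
NEW_BRICK=server1:/bricks/brick2

# Simulate a brick failure: find the brick PID in the status output
# and kill the process (PID value is a placeholder).
gluster volume status $VOL
kill -9 <brick-pid>

# Replace the killed brick with a new one.
gluster volume replace-brick $VOL $OLD_BRICK $NEW_BRICK commit force

# Verify brick status from Gluster's side, then compare it with what the
# RHGS Web Administration (Tendrl) dashboards report.
gluster volume status $VOL
```

The bug is that after this sequence the Web Administration dashboards continued to show the old (killed) brick's status instead of the new brick.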
I have been unable to reproduce this. It seems fixed. I used the commands you provided. For a while the health of the volume was `Unknown`, but after a few seconds the status changed to `Up`, the new brick was correctly shown in the brick list on the `Volumes` dashboard, and it is listed in the navigation for the `Bricks` dashboard. msaini, do you still see the issue?

Tested with:
- tendrl-commons-1.5.4-9.el7rhgs.noarch
- tendrl-api-1.5.4-4.el7rhgs.noarch
- tendrl-monitoring-integration-1.5.4-14.el7rhgs.noarch
- tendrl-ansible-1.5.4-7.el7rhgs.noarch
- tendrl-node-agent-1.5.4-16.el7rhgs.noarch
- tendrl-ui-1.5.4-6.el7rhgs.noarch
- tendrl-grafana-plugins-1.5.4-14.el7rhgs.noarch
- tendrl-notifier-1.5.4-6.el7rhgs.noarch
- tendrl-selinux-1.5.4-2.el7rhgs.noarch
- tendrl-api-httpd-1.5.4-4.el7rhgs.noarch
- tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
- tendrl-gluster-integration-1.5.4-14.el7rhgs.noarch

Since this bug is no longer seen, moving this to ON_QA.

Looks ok. `Brick Status` in the `Host` and `Volume` dashboards shows the correct bricks after brick replacement. Navigation in the `Brick` dashboard looks ok too. --> VERIFIED

Tested with:
- tendrl-ansible-1.6.3-3.el7rhgs.noarch
- tendrl-api-1.6.3-3.el7rhgs.noarch
- tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
- tendrl-commons-1.6.3-4.el7rhgs.noarch
- tendrl-grafana-plugins-1.6.3-2.el7rhgs.noarch
- tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
- tendrl-gluster-integration-1.6.3-2.el7rhgs.noarch
- tendrl-monitoring-integration-1.6.3-2.el7rhgs.noarch
- tendrl-node-agent-1.6.3-4.el7rhgs.noarch
- tendrl-notifier-1.6.3-2.el7rhgs.noarch
- tendrl-selinux-1.5.4-2.el7rhgs.noarch
- tendrl-ui-1.6.3-1.el7rhgs.noarch
- glusterfs-3.12.2-9.el7rhgs.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616