Bug 1518678
Summary: | bricks are marked as down in UI | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Martin Kudlej <mkudlej> |
Component: | web-admin-tendrl-gluster-integration | Assignee: | Nishanth Thomas <nthomas> |
Status: | CLOSED ERRATA | QA Contact: | Filip Balák <fbalak> |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | ||
Version: | rhgs-3.3 | CC: | amukherj, asriram, fbalak, mkudlej, nthomas, rghatvis, rhs-bugs, sanandpa, sankarshan, srmukher, ssaha |
Target Milestone: | --- | ||
Target Release: | RHGS 3.4.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | tendrl-gluster-integration-1.6.1-1.el7rhgs, tendrl-api-1.6.1-1.el7rhgs.noarch.rpm, tendrl-commons-1.6.1-1.el7rhgs.noarch.rpm, tendrl-monitoring-integration-1.6.1-1.el7rhgs.noarch.rpm, tendrl-node-agent-1.6.1-1.el7, tendrl-ui-1.6.1-1.el7rhgs.noarch.rpm | Doc Type: | Known Issue
Doc Text: |
An unexpected reboot of a storage node leads to service misconfiguration. As a result, bricks are marked 'Down' in the user interface.
Workaround:
To get the correct brick status, restart the node-agent and gluster-integration services on the affected node.
|
Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2018-09-04 06:59:21 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1503134 |
Description
Martin Kudlej
2017-11-29 12:58:54 UTC
I see this issue after rebooting machines, so reproducibility is now 2/3.

Having discussed it with dev, it has been agreed to document this bug as a known issue for this release, with detailed steps stating that the node-agent and gluster-integration services need to be restarted explicitly (just to be on the safer side) when there is an unplanned reboot of a storage node.

Tried the steps as given in the description but was not able to reproduce this; bricks were shown in the started state after reboot. Since this bug is not seen, moving this to ON_QA.

Seems ok. --> VERIFIED

Tested with:
tendrl-ansible-1.6.3-3.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-4.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-2.el7rhgs.noarch
tendrl-node-agent-1.6.3-4.el7rhgs.noarch
tendrl-notifier-1.6.3-2.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-1.el7rhgs.noarch

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616
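As a rough illustration of the documented workaround, the sketch below restarts the two services on the rebooted storage node and checks that they came back up. It is a minimal sketch, not part of the product: the systemd unit names `tendrl-node-agent` and `tendrl-gluster-integration` are assumptions based on the package names listed above, and the script must run with root privileges on the affected node.

```python
#!/usr/bin/env python3
"""Sketch of the documented workaround: after an unplanned reboot of a storage
node, restart the node-agent and gluster-integration services so the brick
status shown in the web UI is refreshed.

Assumptions (not confirmed by this bug report): the systemd unit names are
'tendrl-node-agent' and 'tendrl-gluster-integration', and the script runs as
root on the affected storage node."""

import subprocess
import sys

# Assumed systemd unit names for the two services named in the workaround.
SERVICES = ["tendrl-node-agent", "tendrl-gluster-integration"]


def restart(unit: str) -> bool:
    """Restart a systemd unit and report whether it is active afterwards."""
    subprocess.run(["systemctl", "restart", unit], check=True)
    state = subprocess.run(
        ["systemctl", "is-active", unit], capture_output=True, text=True
    )
    return state.stdout.strip() == "active"


if __name__ == "__main__":
    failed = [unit for unit in SERVICES if not restart(unit)]
    if failed:
        print(f"services still not active: {', '.join(failed)}", file=sys.stderr)
        sys.exit(1)
    print("workaround applied; recheck brick status in the web UI")
```

Once both services report active, the brick status in the web UI should return to the correct state, per the workaround in the Doc Text above.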