Description of problem:
Tendrl UI indicates a brick is stopped when it is actually up and running.

Scenario: I stopped one of the nodes that has a brick on it. When the node was brought back up, Tendrl was still reporting in the Tendrl UI and the Grafana Bricks Dashboard that the brick was stopped. The Tendrl API was reporting it as stopped as well.

Version-Release number of selected component (if applicable):
tendrl-ui-1.5.4-1.el7rhgs.noarch

How reproducible:

Steps to Reproduce:
1. Stop a Gluster node (all of its bricks will then show as "stopped").
2. Run gstatus or another CLI command to confirm the brick and node are down (example commands below).
3. Bring the node and the brick back up.
4. Go to the Tendrl UI and the Grafana dashboard for the brick; the brick still indicates "stopped".

Actual results:
The brick is reported as stopped.

Expected results:
The brick should be reported as up and running.

Additional info:
See https://github.com/Tendrl/ui/issues/748 for additional information and screenshots.
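For reference, one way to confirm the brick and node state from the CLI in steps 2 and 3 (the volume name "vol0" is only illustrative here; use the affected volume):

  # gluster volume status vol0        <- brick processes should show Online "N" while the node is down
  # systemctl status glusterd         <- run on the rebooted node to confirm glusterd came back up
  # gstatus                           <- overall cluster/brick health summary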
You are testing with an older set of builds (tendrl-ui-1.5.4-1.el7rhgs.noarch). You might be seeing this issue due to https://bugzilla.redhat.com/show_bug.cgi?id=1509314, where gluster-integration is not running after a reboot. Please re-test with the latest builds and update.
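To rule that out while re-testing, it may help to check on the rebooted node whether the integration service came back up (assuming the service name is tendrl-gluster-integration, as shipped with the Tendrl packages):

  # systemctl status tendrl-gluster-integration
  # systemctl enable --now tendrl-gluster-integration    <- only if the service is not running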
Fixing the link to the upstream GitHub issue.
Based on comment 3, moving to MODIFIED. Nishant, could you update the FiV field in this BZ?
Looks ok. --> VERIFIED

Tested with:
tendrl-ansible-1.6.3-4.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-5.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-3.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-3.el7rhgs.noarch
tendrl-node-agent-1.6.3-5.el7rhgs.noarch
tendrl-notifier-1.6.3-3.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-2.el7rhgs.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2616