Created attachment 1354748 [details]
The host is shown as down, but it is up and running

Description of problem:
Created a storage cluster and imported it successfully. After rebooting one of the storage nodes, the host came back up and is running, but the Clusters tab shows it as down (see the attachment), while the Hosts tab shows it as green.

Version-Release number of selected component (if applicable):
tendrl-ui-1.5.4-2

How reproducible:
1:1

Steps to Reproduce:
1. Create a 3-node cluster and import it into the web admin.
2. Confirm the import is successful.
3. Reboot one of the storage nodes and wait for it to come back up.
4. Check the Clusters tab for that host.

Actual results:
The host is shown as down in the Clusters tab.

Expected results:
The host should be shown as "Up".

Additional info:
Questions:
1) After the reboot of the storage node, was the tendrl-node-agent service running?
2) Assuming you didn't change the tendrl-node-agent "sync_interval" config, did you wait 180 seconds for the new data to show up in the UI? By design, at each restart the node-agent sets the status to UP, and this is used by the monitoring stack as well.
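The two questions above can be checked directly on the rebooted node. This is a minimal sketch; the unit name and config path are assumptions based on typical tendrl packaging and may differ on your installation.

```shell
# 1) Is the node-agent service running after the reboot?
#    (unit name assumed to be "tendrl-node-agent")
systemctl is-active tendrl-node-agent 2>/dev/null \
  || echo "tendrl-node-agent not active (or systemctl unavailable)"

# 2) What sync_interval is configured? The status shown in the UI is only
#    refreshed once per interval (180 seconds by default), so a freshly
#    rebooted node can appear down until the next sync.
#    (config path is an assumption; adjust for your install layout)
grep -i sync_interval /etc/tendrl/node-agent/node-agent.conf.yaml 2>/dev/null \
  || echo "config file not found; check your installation layout"
```

If the service is active and the interval has elapsed but the Clusters tab still shows the host as down, that points to the UI/state bug tracked below rather than a stale sync.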
Fixed: https://github.com/Tendrl/node-agent/issues/680
Tested with:

etcd-3.2.7-1.el7.x86_64
glusterfs-3.8.4-52.el7_4.x86_64
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-52.el7_4.x86_64
glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
glusterfs-events-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-52.el7_4.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64
glusterfs-libs-3.8.4-52.el7_4.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
python-etcd-0.4.5-1.el7rhgs.noarch
python-gluster-3.8.4-52.el7rhgs.noarch
rubygem-etcd-0.3.0-1.el7rhgs.noarch
tendrl-ansible-1.5.4-2.el7rhgs.noarch
tendrl-api-1.5.4-3.el7rhgs.noarch
tendrl-api-httpd-1.5.4-3.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-1.el7rhgs.noarch
tendrl-commons-1.5.4-5.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-6.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-8.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-1.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-8.el7rhgs.noarch
tendrl-node-agent-1.5.4-8.el7rhgs.noarch
tendrl-notifier-1.5.4-5.el7rhgs.noarch
tendrl-selinux-1.5.4-1.el7rhgs.noarch
tendrl-ui-1.5.4-4.el7rhgs.noarch
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch

and it works. --> VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3478