Description of problem:
I imported a Gluster cluster with two volumes and left that setup alone for one day. The next day, all volume details had vanished from tendrl-ui. The cause is the sys.exit() call in vol_utilization.py: if any problem occurs while collecting volume utilization details, sys.exit() stops the sync thread. No further volume details are collected, and the existing volume details in etcd are deleted when their TTL expires.
https://github.com/Tendrl/gluster-integration/blob/master/tendrl/gluster_integration/sds_sync/vol_utilization.py#L27

Version-Release number of selected component (if applicable):
tendrl-gluster-integration-1.6.3-6.el7rhgs.noarch

How reproducible:
In my setup it happens frequently; I believe it is 100% reproducible.

Steps to Reproduce:
1. Import a cluster with at least one volume.
2. Leave the setup as it is for one day.
3. Check the Volumes page in tendrl-ui: all volume details have vanished.

Actual results:
Volume details vanish from tendrl-ui after some time.

Expected results:
Details of non-deleted volumes should always be present in tendrl-ui.

Additional info:
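To illustrate the failure mode: sys.exit() raises SystemExit, which propagates out of the sync thread's work and terminates it, so the periodic refresh that keeps etcd keys alive never runs again. A minimal sketch of the difference (the function and helper names here are hypothetical, not the actual vol_utilization.py code):

```python
import logging
import sys

log = logging.getLogger("gluster_sync")


def collect_utilization_buggy(volume, query):
    """Hypothetical stand-in for the original behavior: any
    collection error calls sys.exit(), raising SystemExit and
    killing the sync thread that called us."""
    try:
        return query(volume)
    except Exception:
        sys.exit(1)  # SystemExit escapes the thread's loop


def collect_utilization_fixed(volume, query):
    """Sketch of the fix: log the failure and return None so the
    sync loop can skip this cycle and retry on the next pass,
    keeping the etcd TTL refresh alive."""
    try:
        return query(volume)
    except Exception:
        log.exception("utilization collection failed for volume %s", volume)
        return None
```

Because SystemExit is not a subclass of Exception, a generic `except Exception` handler higher up the sync loop would not stop it either; the thread simply dies, and the TTL-based cleanup then removes the stale volume keys.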
I saw this issue earlier on my clusters, around the time this bug was reported. I have tested and verified it on:

RHGS WA Server:
Red Hat Enterprise Linux Server release 7.5 (Maipo)
grafana-4.3.2-3.el7rhgs.x86_64
tendrl-ansible-1.6.3-5.el7rhgs.noarch
tendrl-api-1.6.3-4.el7rhgs.noarch
tendrl-api-httpd-1.6.3-4.el7rhgs.noarch
tendrl-commons-1.6.3-9.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-7.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-7.el7rhgs.noarch
tendrl-node-agent-1.6.3-9.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-8.el7rhgs.noarch

Gluster Storage Server:
Red Hat Enterprise Linux Server release 7.5 (Maipo)
Red Hat Gluster Storage Server 3.4.0
glusterfs-3.12.2-14.el7rhgs.x86_64
glusterfs-api-3.12.2-14.el7rhgs.x86_64
glusterfs-cli-3.12.2-14.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-14.el7rhgs.x86_64
glusterfs-events-3.12.2-14.el7rhgs.x86_64
glusterfs-fuse-3.12.2-14.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-14.el7rhgs.x86_64
glusterfs-libs-3.12.2-14.el7rhgs.x86_64
glusterfs-rdma-3.12.2-14.el7rhgs.x86_64
glusterfs-server-3.12.2-14.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.6.x86_64
python2-gluster-3.12.2-14.el7rhgs.x86_64
tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
tendrl-commons-1.6.3-9.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-7.el7rhgs.noarch
tendrl-node-agent-1.6.3-9.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
vdsm-gluster-4.19.43-2.3.el7rhgs.noarch

After nearly three days, the volume details are still visible in the RHGS WA UI, and no unexpected/unknown errors were found in the logs.

>> VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616