Description of problem:
When the `gluster get-state` output does not provide volume-level information about whether profiling is enabled, RHGS-WA always shows the value as 'Disabled' for the volumes as well as the cluster. Even if the user enables/disables profiling at the cluster level from the UI, the action is actually applied to the underlying volumes, but the UI still shows the values as 'Disabled'.

Version-Release number of selected component (if applicable):
tendrl-gluster-integration-1.6.3-2.el7rhgs

How reproducible:
Always

Steps to Reproduce:
1. Set up an upstream RHGS-WA installation with glusterfs-3.12.9 on the storage nodes
2. Create a few volumes in the cluster
3. Start profiling on one of the volumes
4. Import the cluster in RHGS-WA
5. Enable profiling for all the volumes at cluster level from the UI

Actual results:
The profiling-enabled flag shows `Disabled` at both the cluster level and the volume level.

Expected results:
After step 5, the volumes should show `Unknown`, and at the cluster level the value should be set to `Enabled`.

Additional info:
The volume-level values should be set to `Unknown` because the get-state output of this specific glusterfs version does not provide profiling information for individual volumes.
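The expected `Unknown` handling can be sketched as follows. This is a hypothetical illustration, not the actual tendrl-gluster-integration code; the `profile_enabled` key name and the `profiling_status` helper are assumptions for the sketch.

```python
# Hypothetical sketch: deriving the profiling display value from a parsed
# per-volume section of `gluster get-state` output. The key name
# "profile_enabled" and this function are assumptions, not the tendrl code.
def profiling_status(volume_section):
    """Return 'Enabled'/'Disabled' if get-state reports the flag, else 'Unknown'."""
    value = volume_section.get("profile_enabled")
    if value is None:
        # Older glusterfs versions omit the key entirely, so the real state
        # cannot be determined -- report 'Unknown' rather than 'Disabled'.
        return "Unknown"
    return "Enabled" if value.strip().lower() in ("1", "yes", "on") else "Disabled"
```

For example, `profiling_status({})` returns `'Unknown'`, which is the behavior the expected results above call for when the glusterfs version does not report per-volume profiling.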
Reproduced on:

RHGS WA Server:
Red Hat Enterprise Linux Server release 7.5 (Maipo)
collectd-5.7.2-3.1.el7rhgs.x86_64
collectd-ping-5.7.2-3.1.el7rhgs.x86_64
etcd-3.2.7-1.el7.x86_64
libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
python-etcd-0.4.5-2.el7rhgs.noarch
rubygem-etcd-0.3.0-2.el7rhgs.noarch
tendrl-ansible-1.6.3-3.el7rhgs.noarch
tendrl-api-1.6.3-2.el7rhgs.noarch
tendrl-api-httpd-1.6.3-2.el7rhgs.noarch
tendrl-commons-1.6.3-3.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-1.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-1.el7rhgs.noarch
tendrl-node-agent-1.6.3-3.el7rhgs.noarch
tendrl-notifier-1.6.3-2.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-1.el7rhgs.noarch

Gluster Storage Server:
Red Hat Enterprise Linux Server release 7.5 (Maipo)
Red Hat Gluster Storage Server 3.3.1
collectd-5.7.2-3.1.el7rhgs.x86_64
collectd-ping-5.7.2-3.1.el7rhgs.x86_64
glusterfs-3.8.4-54.10.el7rhgs.x86_64
glusterfs-api-3.8.4-54.10.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.10.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.10.el7rhgs.x86_64
glusterfs-events-3.8.4-54.10.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.10.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.10.el7rhgs.x86_64
glusterfs-libs-3.8.4-54.10.el7rhgs.x86_64
glusterfs-rdma-3.8.4-54.10.el7rhgs.x86_64
glusterfs-server-3.8.4-54.10.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.6.x86_64
python-etcd-0.4.5-2.el7rhgs.noarch
python-gluster-3.8.4-54.10.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
tendrl-commons-1.6.3-3.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-2.el7rhgs.noarch
tendrl-node-agent-1.6.3-3.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch

Both the cluster-level Volume Profiling value and the Volume Profiling value for a particular volume show "Disabled", regardless of the actual profiling state.
Tested and verified using the steps from the description. Volume Profiling for a particular volume is always marked as Unknown. Cluster-wide Volume Profiling reflects the last action taken from the RHGS WA UI.

RHGS WA Server:
Red Hat Enterprise Linux Server release 7.5 (Maipo)
collectd-5.7.2-3.1.el7rhgs.x86_64
collectd-ping-5.7.2-3.1.el7rhgs.x86_64
etcd-3.2.7-1.el7.x86_64
libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
python-etcd-0.4.5-2.el7rhgs.noarch
rubygem-etcd-0.3.0-2.el7rhgs.noarch
tendrl-ansible-1.6.3-5.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-7.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-5.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-5.el7rhgs.noarch
tendrl-node-agent-1.6.3-7.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-4.el7rhgs.noarch

Gluster Storage Server:
Red Hat Enterprise Linux Server release 7.5 (Maipo)
Red Hat Gluster Storage Server 3.3.1
collectd-5.7.2-3.1.el7rhgs.x86_64
collectd-ping-5.7.2-3.1.el7rhgs.x86_64
glusterfs-3.8.4-54.10.el7rhgs.x86_64
glusterfs-api-3.8.4-54.10.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.10.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.10.el7rhgs.x86_64
glusterfs-events-3.8.4-54.10.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.10.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.10.el7rhgs.x86_64
glusterfs-libs-3.8.4-54.10.el7rhgs.x86_64
glusterfs-rdma-3.8.4-54.10.el7rhgs.x86_64
glusterfs-server-3.8.4-54.10.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.6.x86_64
python-etcd-0.4.5-2.el7rhgs.noarch
python-gluster-3.8.4-54.10.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
tendrl-commons-1.6.3-7.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-5.el7rhgs.noarch
tendrl-node-agent-1.6.3-7.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch

>> VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616