Created attachment 1358611 [details]
Screenshots

Description of problem:

I'm not sure whether the behavior described below is a bug or an intended feature, and I didn't find any clarification about this, so I'm raising this bug mainly to get the question answered.

I created a Gluster cluster with one volume that had Volume Profiling enabled, and imported the cluster into Tendrl with the "Enable Volume Profiling" checkbox unchecked during the import process (see attachments). But once the cluster was imported, volume profiling was still enabled on the Gluster Volume.

Version-Release number of selected component (if applicable):

RHGS WA Server:
tendrl-ansible-1.5.4-1.el7rhgs.noarch
tendrl-api-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.5.4-2.el7rhgs.noarch
tendrl-commons-1.5.4-4.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-5.el7rhgs.noarch
tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-5.el7rhgs.noarch
tendrl-node-agent-1.5.4-5.el7rhgs.noarch
tendrl-notifier-1.5.4-3.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch
tendrl-ui-1.5.4-4.el7rhgs.noarch

Gluster Storage servers:
tendrl-collectd-selinux-1.5.3-2.el7rhgs.noarch
tendrl-commons-1.5.4-4.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-4.el7rhgs.noarch
tendrl-node-agent-1.5.4-5.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch

How reproducible:
100%

Steps to Reproduce:
1. Prepare a Gluster storage cluster with one Volume:

   # gluster volume list
   volume_beta_arbiter_2_plus_1x2

2. Enable volume profiling on the volume:

   # gluster volume profile volume_beta_arbiter_2_plus_1x2 start

3. Prepare the RHGS WA Server.
4. Configure RHGS WA node agents on the Gluster storage nodes.
5. Import the Gluster cluster into RHGS WA with the "Enable Volume Profiling" checkbox unchecked.
6. Check the volume profiling status:

   # gluster volume profile volume_beta_arbiter_2_plus_1x2 info

Actual results:
Volume Profiling is correctly reported as Disabled at the Cluster level (see screenshot Clusters_List.png), but profiling is still enabled on the Volume.

Expected results:
That is the question: when "Enable Volume Profiling" is unchecked on the Import Cluster page, should volume profiling be disabled on all Volumes, or should it be left untouched?

Additional info:
From my point of view, the current behaviour might make sense: the checkbox on the Import Cluster page says "Enable Volume Profiling", and when it is unchecked, it might mean "do not enable volume profiling", which has a slightly different meaning than "disable volume profiling". So could you please clarify what the expected behaviour is?

There is a related discussion on GitHub [1], but this specific scenario is not covered there (unless I missed something).

[1] https://github.com/Tendrl/gluster-integration/issues/405
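For completeness, the profiling state can also be cross-checked through the diagnostics volume options, which "gluster volume profile <VOLNAME> start" switches on. A minimal sketch using the volume from the steps above (my assumption: both options report "on" while profiling is enabled, and "off" once it has actually been disabled):

   # gluster volume get volume_beta_arbiter_2_plus_1x2 diagnostics.latency-measurement
   # gluster volume get volume_beta_arbiter_2_plus_1x2 diagnostics.count-fop-hits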
Just noting that there is a related upstream GitHub issue:
https://github.com/Tendrl/gluster-integration/issues/551
I've tested and verified the functionality related to Volume Profiling during the cluster import process.

The Import Cluster page now provides three options for how to configure volume profiling on the imported cluster, based on the design [1]. I've tested all three variants with a Gluster cluster with three volumes, where volume profiling was initially enabled on one or two of them. After the cluster was imported, volume profiling was in the expected state on all three volumes, based on the selection made during the import process (enabled on all volumes, disabled on all volumes, or kept in the state it was in before the import).

The missing info tip for volume profiling (as proposed in the design [1]) is covered by Bug 1576682. There also seems to be a small/cosmetic issue with the enable/disable volume profiling functionality on an imported cluster, which will be covered in a new Bug.

[1] https://redhat.invisionapp.com/share/8QCOEVEY9#/screens/247445416_Import_Clusters

Version-Release number of selected component:

RHGS WA Server (aka Tendrl Server):

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)

# rpm -qa | grep -e tendrl -e etcd -e collectd -e glusterfs | sort
collectd-5.7.2-3.1.el7rhgs.x86_64
collectd-ping-5.7.2-3.1.el7rhgs.x86_64
etcd-3.2.7-1.el7.x86_64
libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
python-etcd-0.4.5-2.el7rhgs.noarch
rubygem-etcd-0.3.0-2.el7rhgs.noarch
tendrl-ansible-1.6.3-3.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-4.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-2.el7rhgs.noarch
tendrl-node-agent-1.6.3-4.el7rhgs.noarch
tendrl-notifier-1.6.3-2.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-1.el7rhgs.noarch

Gluster Storage Server:

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)

# cat /etc/redhat-storage-release
Red Hat Gluster Storage Server 3.4.0

# rpm -qa | grep -e tendrl -e etcd -e collectd -e glusterfs | sort
collectd-5.7.2-3.1.el7rhgs.x86_64
collectd-ping-5.7.2-3.1.el7rhgs.x86_64
glusterfs-3.12.2-9.el7rhgs.x86_64
glusterfs-api-3.12.2-9.el7rhgs.x86_64
glusterfs-cli-3.12.2-9.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-9.el7rhgs.x86_64
glusterfs-events-3.12.2-9.el7rhgs.x86_64
glusterfs-fuse-3.12.2-9.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-9.el7rhgs.x86_64
glusterfs-libs-3.12.2-9.el7rhgs.x86_64
glusterfs-rdma-3.12.2-9.el7rhgs.x86_64
glusterfs-server-3.12.2-9.el7rhgs.x86_64
libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
python-etcd-0.4.5-2.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
tendrl-commons-1.6.3-4.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-2.el7rhgs.noarch
tendrl-node-agent-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch

>> VERIFIED
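For reference, a minimal sketch of the kind of per-volume check used to confirm the profiling state after each import variant (the loop is illustrative, not the exact commands from the test run):

   # for vol in $(gluster volume list); do echo "== ${vol} =="; gluster volume get ${vol} diagnostics.count-fop-hits; done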
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616