Bug 1517215 - 'Disable' Volume Profiling during cluster import behavior
Summary: 'Disable' Volume Profiling during cluster import behavior
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: web-admin-tendrl-gluster-integration
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Shubhendu Tripathi
QA Contact: Daniel Horák
URL:
Whiteboard:
Depends On: 1537357
Blocks: 1503134
 
Reported: 2017-11-24 10:35 UTC by Daniel Horák
Modified: 2018-09-04 06:59 UTC
CC: 7 users

Fixed In Version: tendrl-gluster-integration-1.6.1-1.el7rhgs, tendrl-api-1.6.1-1.el7rhgs.noarch.rpm, tendrl-commons-1.6.1-1.el7rhgs.noarch.rpm, tendrl-monitoring-integration-1.6.1-1.el7rhgs.noarch.rpm, tendrl-node-agent-1.6.1-1.el7, tendrl-ui-1.6.1-1.el7rhgs.noarch.rpm
Doc Type: Bug Fix
Doc Text:
Cause: Previously, a single cluster-level flag for volume profiling decided whether profiling should be enabled or disabled for all volumes, and the only possible values were `Enable` and `Disable`. So if profiling was enabled on only a few volumes in the underlying cluster, the cluster-level flag chosen during import decided enable/disable for all of them; there was no way to retain the cluster's existing volume profiling state. Consequence: If a user had intentionally enabled profiling on only a few volumes of a cluster and, while importing the cluster into RHGS-WA, selected not to enable volume profiling, RHGS-WA disabled profiling on the existing volumes as well, which is not acceptable. Fix: Profiling can now be enabled/disabled at the individual volume level as well as at the cluster level after the cluster is imported. The import workflow also offers an additional option to retain volume profiling as-is on the underlying volumes of the cluster. If some volumes have profiling enabled and the user selects to retain the state as-is during import, the UI displays the cluster-level profiling state as `mixed`. Result: Enabling/disabling volume profiling at the cluster level and at the individual volume level now behaves cleanly, and the user sees the exact state present on the underlying RHGS cluster.
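As an illustration of the `mixed` state described above, here is a minimal shell sketch (not the actual Tendrl implementation) that derives a cluster-level profiling state from the per-volume settings. It assumes the usual Gluster behaviour that `gluster volume profile <vol> start` sets the volume option diagnostics.count-fop-hits to "on":

  #!/bin/bash
  # Derive a cluster-level profiling state ("enabled", "disabled" or "mixed")
  # from the per-volume option values, mirroring the rule described above.
  cluster_state=""
  for vol in $(gluster volume list); do
      if gluster volume info "$vol" | grep -q 'diagnostics.count-fop-hits: on'; then
          vol_state="enabled"
      else
          vol_state="disabled"
      fi
      if [ -z "$cluster_state" ]; then
          cluster_state="$vol_state"                  # first volume seen
      elif [ "$cluster_state" != "$vol_state" ]; then
          cluster_state="mixed"                       # volumes disagree
      fi
  done
  echo "Cluster-level volume profiling state: ${cluster_state:-unknown}"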
Clone Of:
Environment:
Last Closed: 2018-09-04 06:58:45 UTC
Embargoed:


Attachments
Screenshots (37.42 KB, application/zip)
2017-11-24 10:35 UTC, Daniel Horák


Links
  GitHub: Tendrl/gluster-integration issue 405 (last updated 2017-11-24 14:09:57 UTC)
  GitHub: Tendrl/gluster-integration issue 553 (last updated 2018-02-15 02:34:39 UTC)
  Red Hat Product Errata: RHSA-2018:2616 (last updated 2018-09-04 06:59:57 UTC)

Description Daniel Horák 2017-11-24 10:35:07 UTC
Created attachment 1358611 [details]
Screenshots

Description of problem:
  I'm not sure whether the behavior described below is a bug or an intended feature, and I didn't find any clarification about it, so I'm raising this bug mainly to get this question clarified.

  I've created a Gluster cluster with one volume that had Volume Profiling enabled, then imported the cluster into Tendrl and unchecked the "Enable Volume Profiling" checkbox during the import process (see attachments).
  But once the cluster was imported, volume profiling was still enabled on the Gluster volume.

Version-Release number of selected component (if applicable):
  RHGS WA Server
  tendrl-ansible-1.5.4-1.el7rhgs.noarch
  tendrl-api-1.5.4-2.el7rhgs.noarch
  tendrl-api-httpd-1.5.4-2.el7rhgs.noarch
  tendrl-commons-1.5.4-4.el7rhgs.noarch
  tendrl-grafana-plugins-1.5.4-5.el7rhgs.noarch
  tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
  tendrl-monitoring-integration-1.5.4-5.el7rhgs.noarch
  tendrl-node-agent-1.5.4-5.el7rhgs.noarch
  tendrl-notifier-1.5.4-3.el7rhgs.noarch
  tendrl-selinux-1.5.3-2.el7rhgs.noarch
  tendrl-ui-1.5.4-4.el7rhgs.noarch

  Gluster Storage servers
  tendrl-collectd-selinux-1.5.3-2.el7rhgs.noarch
  tendrl-commons-1.5.4-4.el7rhgs.noarch
  tendrl-gluster-integration-1.5.4-4.el7rhgs.noarch
  tendrl-node-agent-1.5.4-5.el7rhgs.noarch
  tendrl-selinux-1.5.3-2.el7rhgs.noarch

How reproducible:
  100%


Steps to Reproduce:
1. Prepare Gluster storage cluster with one Volume.
  # gluster volume list
  volume_beta_arbiter_2_plus_1x2
2. Enable volume profiling on the volume:
  # gluster volume profile volume_beta_arbiter_2_plus_1x2 start
3. Prepare RHGS WA Server.
4. Configure RHGS WA node agents on Gluster storage nodes.
5. Import the Gluster cluster into RHGS WA with the "Enable Volume Profiling" checkbox unchecked.
6. Check the volume profiling status:
  # gluster volume profile volume_beta_arbiter_2_plus_1x2 info
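  (A hedged side note, not part of the original steps: starting profiling with
  "gluster volume profile <vol> start" also sets the volume options
  diagnostics.count-fop-hits and diagnostics.latency-measurement to "on", so
  the state can be cross-checked from the volume options as well:
  # gluster volume info volume_beta_arbiter_2_plus_1x2 | grep diagnostics
  With profiling enabled, both options should be listed with the value "on".)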

Actual results:
  Volume Profiling is correctly shown as Disabled at the cluster level (see screenshot Clusters_List.png).

  But profiling is still enabled on the volume.

Expected results:
  That is the question: when "Enable Volume Profiling" is not checked on the Import Cluster page, should volume profiling be disabled on all volumes, or should it be left untouched?

Additional info:
  From my point of view, the current behaviour might make sense: the checkbox on the Import Cluster page says "Enable Volume Profiling", and when it is unchecked, it might mean "do not enable volume profiling" (which has a slightly different meaning than "Disable Volume Profiling").
  So could you please clarify what the expected behaviour is?

  There is a related discussion on GitHub[1], but this specific scenario is not covered there (unless I missed something).

[1] https://github.com/Tendrl/gluster-integration/issues/405

Comment 4 Ju Lim 2018-01-25 18:59:45 UTC
Just noting there is a related upstream GitHub issue at the following:
https://github.com/Tendrl/gluster-integration/issues/551

Comment 7 Daniel Horák 2018-05-11 12:15:34 UTC
I've tested and verified the functionality related to Volume Profiling during
cluster Import process.
The Import Cluster page now provides three options how to configure volume
profiling on the imported cluster based on design[1].

I've tested all three variants on a Gluster cluster with three volumes, where
volume profiling was initially enabled on one or two of them.
After the cluster was imported, volume profiling was in the expected state on
all three volumes, based on the selection made during the import process
(enabled on all volumes, disabled on all volumes, or kept in the state it was
in before the import).
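
(Illustrative only, not commands from the verification run: the expected-state
rule for the three import options can be sketched as a small shell function,
where the labels enable/disable/as-is are hypothetical stand-ins for the three
UI choices.)

  expected_state() {
      # $1 = import option chosen, $2 = per-volume profiling state before import
      case "$1" in
          enable)  echo "enabled"  ;;   # profiling turned on for every volume
          disable) echo "disabled" ;;   # profiling turned off for every volume
          as-is)   echo "$2"       ;;   # pre-import state is left untouched
      esac
  }
  # e.g.: expected_state as-is enabled    -> enabled
  #       expected_state disable enabled  -> disabled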

The missing info tip for volume profiling (as proposed in the design[1]) is
covered by Bug 1576682.

There also seems to be a small cosmetic issue with the enable/disable volume
profiling functionality on an imported cluster, which will be covered in a
new bug.

[1] https://redhat.invisionapp.com/share/8QCOEVEY9#/screens/247445416_Import_Clusters

Version-Release number of selected component:
RHGS WA Server (aka Tendrl Server):
# cat /etc/redhat-release 
  Red Hat Enterprise Linux Server release 7.5 (Maipo)
# rpm -qa | grep -e tendrl -e etcd -e collectd -e glusterfs | sort
  collectd-5.7.2-3.1.el7rhgs.x86_64
  collectd-ping-5.7.2-3.1.el7rhgs.x86_64
  etcd-3.2.7-1.el7.x86_64
  libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
  python-etcd-0.4.5-2.el7rhgs.noarch
  rubygem-etcd-0.3.0-2.el7rhgs.noarch
  tendrl-ansible-1.6.3-3.el7rhgs.noarch
  tendrl-api-1.6.3-3.el7rhgs.noarch
  tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
  tendrl-commons-1.6.3-4.el7rhgs.noarch
  tendrl-grafana-plugins-1.6.3-2.el7rhgs.noarch
  tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
  tendrl-monitoring-integration-1.6.3-2.el7rhgs.noarch
  tendrl-node-agent-1.6.3-4.el7rhgs.noarch
  tendrl-notifier-1.6.3-2.el7rhgs.noarch
  tendrl-selinux-1.5.4-2.el7rhgs.noarch
  tendrl-ui-1.6.3-1.el7rhgs.noarch

Gluster Storage Server:
# cat /etc/redhat-release 
  Red Hat Enterprise Linux Server release 7.5 (Maipo)
# cat /etc/redhat-storage-release 
  Red Hat Gluster Storage Server 3.4.0
# rpm -qa | grep -e tendrl -e etcd -e collectd -e glusterfs | sort
  collectd-5.7.2-3.1.el7rhgs.x86_64
  collectd-ping-5.7.2-3.1.el7rhgs.x86_64
  glusterfs-3.12.2-9.el7rhgs.x86_64
  glusterfs-api-3.12.2-9.el7rhgs.x86_64
  glusterfs-cli-3.12.2-9.el7rhgs.x86_64
  glusterfs-client-xlators-3.12.2-9.el7rhgs.x86_64
  glusterfs-events-3.12.2-9.el7rhgs.x86_64
  glusterfs-fuse-3.12.2-9.el7rhgs.x86_64
  glusterfs-geo-replication-3.12.2-9.el7rhgs.x86_64
  glusterfs-libs-3.12.2-9.el7rhgs.x86_64
  glusterfs-rdma-3.12.2-9.el7rhgs.x86_64
  glusterfs-server-3.12.2-9.el7rhgs.x86_64
  libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
  python-etcd-0.4.5-2.el7rhgs.noarch
  tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
  tendrl-commons-1.6.3-4.el7rhgs.noarch
  tendrl-gluster-integration-1.6.3-2.el7rhgs.noarch
  tendrl-node-agent-1.6.3-4.el7rhgs.noarch
  tendrl-selinux-1.5.4-2.el7rhgs.noarch

>> VERIFIED

Comment 10 errata-xmlrpc 2018-09-04 06:58:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616

