Bug 1517215
Summary: | 'Disable' Volume Profiling during cluster import behavior
---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage
Reporter: | Daniel Horák <dahorak>
Component: | web-admin-tendrl-gluster-integration
Assignee: | Shubhendu Tripathi <shtripat>
Status: | CLOSED ERRATA
QA Contact: | Daniel Horák <dahorak>
Severity: | unspecified
Docs Contact: |
Priority: | unspecified
Version: | rhgs-3.3
CC: | dahorak, julim, mbukatov, nthomas, rghatvis, rhs-bugs, sankarshan
Target Milestone: | ---
Target Release: | RHGS 3.4.0
Hardware: | Unspecified
OS: | Unspecified
Whiteboard: |
Fixed In Version: | tendrl-gluster-integration-1.6.1-1.el7rhgs, tendrl-api-1.6.1-1.el7rhgs.noarch.rpm, tendrl-commons-1.6.1-1.el7rhgs.noarch.rpm, tendrl-monitoring-integration-1.6.1-1.el7rhgs.noarch.rpm, tendrl-node-agent-1.6.1-1.el7, tendrl-ui-1.6.1-1.el7rhgs.noarch.rpm
Doc Type: | Bug Fix
Doc Text: |
Cause: Previously, a single cluster-level flag decided whether volume profiling should be enabled or disabled for all volumes; the only possible values were `Enable` and `Disable`. If profiling was enabled on only a few volumes in the underlying cluster, the cluster-level flag chosen at import time still decided enable/disable for all of them, and there was no way to retain the existing per-volume profiling state as is.
Consequence: If a user had intentionally enabled profiling on a few volumes and then, while importing the cluster into RHGS-WA, selected the option not to enable volume profiling, RHGS-WA disabled profiling on those existing volumes as well. This was not acceptable.
Fix: Profiling can now be enabled or disabled at the individual volume level as well as at the cluster level after the cluster has been imported (a Gluster CLI sketch of the per-volume toggle follows the metadata fields below). The import cluster workflow also offers an additional option to retain the volume profiling state of the underlying volumes as is. If only some volumes have profiling enabled and the user chooses to retain the state during import, the UI displays the cluster-level profiling state as `mixed`.
Result: Enabling and disabling volume profiling at the cluster level and at the individual volume level now behave consistently, and the user sees exactly the state that exists on the underlying RHGS cluster.
Story Points: | ---
Clone Of: |
Environment: |
Last Closed: | 2018-09-04 06:58:45 UTC
Type: | Bug
Regression: | ---
Mount Type: | ---
Documentation: | ---
CRM: |
Verified Versions: |
Category: | ---
oVirt Team: | ---
RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | ---
Target Upstream Version: |
Embargoed: |
Bug Depends On: | 1537357
Bug Blocks: | 1503134
Attachments: |
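For context on the Doc Text above: per-volume profiling in Gluster is driven by the standard `gluster volume profile` commands, which is presumably the mechanism the per-volume and cluster-level switches in RHGS-WA operate underneath. A minimal sketch, run on any storage node of the cluster (the volume name `vol1` is illustrative):

```shell
# Enable profiling for a single volume
gluster volume profile vol1 start

# Show the collected per-brick statistics (also confirms profiling is active)
gluster volume profile vol1 info

# Disable profiling for that volume again
gluster volume profile vol1 stop
```

With this per-volume model, the cluster-level state shown in the UI is derived from the individual volumes: all enabled, all disabled, or `mixed` when they differ.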
Description
Daniel Horák
2017-11-24 10:35:07 UTC
Just noting there is a related upstream GitHub issue at the following: https://github.com/Tendrl/gluster-integration/issues/551

I've tested and verified the functionality related to Volume Profiling during the cluster import process. The Import Cluster page now provides three options for how to configure volume profiling on the imported cluster, based on the design[1].

I've tested all three variants on a Gluster cluster with three volumes, with volume profiling initially enabled on one or two of them. After the cluster was imported, volume profiling was in the expected state on all three volumes, based on the selection made during the import process (enabled on all volumes, disabled on all volumes, or kept in the state it was in before the import); a rough CLI cross-check is sketched at the end of this report.

The missing info tip for volume profiling (as proposed in the design[1]) is covered by Bug 1576682. There also seems to be a small cosmetic issue with the enable/disable volume profiling functionality on the imported cluster, which will be covered in a new bug.

[1] https://redhat.invisionapp.com/share/8QCOEVEY9#/screens/247445416_Import_Clusters

Version-Release number of selected component:

RHGS WA Server (aka Tendrl Server):

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)

# rpm -qa | grep -e tendrl -e etcd -e collectd -e glusterfs | sort
collectd-5.7.2-3.1.el7rhgs.x86_64
collectd-ping-5.7.2-3.1.el7rhgs.x86_64
etcd-3.2.7-1.el7.x86_64
libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
python-etcd-0.4.5-2.el7rhgs.noarch
rubygem-etcd-0.3.0-2.el7rhgs.noarch
tendrl-ansible-1.6.3-3.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-4.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-2.el7rhgs.noarch
tendrl-node-agent-1.6.3-4.el7rhgs.noarch
tendrl-notifier-1.6.3-2.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-1.el7rhgs.noarch

Gluster Storage Server:

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)

# cat /etc/redhat-storage-release
Red Hat Gluster Storage Server 3.4.0

# rpm -qa | grep -e tendrl -e etcd -e collectd -e glusterfs | sort
collectd-5.7.2-3.1.el7rhgs.x86_64
collectd-ping-5.7.2-3.1.el7rhgs.x86_64
glusterfs-3.12.2-9.el7rhgs.x86_64
glusterfs-api-3.12.2-9.el7rhgs.x86_64
glusterfs-cli-3.12.2-9.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-9.el7rhgs.x86_64
glusterfs-events-3.12.2-9.el7rhgs.x86_64
glusterfs-fuse-3.12.2-9.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-9.el7rhgs.x86_64
glusterfs-libs-3.12.2-9.el7rhgs.x86_64
glusterfs-rdma-3.12.2-9.el7rhgs.x86_64
glusterfs-server-3.12.2-9.el7rhgs.x86_64
libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
python-etcd-0.4.5-2.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
tendrl-commons-1.6.3-4.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-2.el7rhgs.noarch
tendrl-node-agent-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch

>> VERIFIED

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616
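As a rough illustration of the verification above: after the cluster is imported, the per-volume profiling state can be cross-checked directly on a storage node with the Gluster CLI. A minimal sketch, assuming that volume profiling corresponds to the `diagnostics.count-fop-hits` and `diagnostics.latency-measurement` volume options (that mapping, like the loop itself, is an assumption to confirm on the actual cluster):

```shell
# List all volumes in the imported cluster
gluster volume list

# For each volume, print the two options that profiling is expected to turn on;
# both should read "on" for a profiled volume and "off" (or unset) otherwise.
for vol in $(gluster volume list); do
  echo "== ${vol} =="
  gluster volume get "${vol}" diagnostics.count-fop-hits
  gluster volume get "${vol}" diagnostics.latency-measurement
done
```

The expected outcome depends on the option chosen on the Import Cluster page: enabled on all volumes, disabled on all volumes, or a mix matching the pre-import state.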