Bug 1234357 - Unable to set cluster.enable-shared-storage volume option from Console
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.1
Assigned To: anmol babu
QA Contact: Triveni Rao
Keywords: ZStream
Depends On: 1234708
Blocks: 1216951 1251815 1274367
Reported: 2015-06-22 08:24 EDT by Arthy Loganathan
Modified: 2016-05-16 00:39 EDT
CC List: 12 users

See Also:
Fixed In Version: rhsc-3.1.1-63
Doc Type: Bug Fix
Doc Text:
Previously, because Red Hat Gluster Storage Console did not support cluster-level option operations (set and reset), users had no way to set the cluster.enable-shared-storage volume option from the console. With this fix, the console sets this volume option automatically when a new node is added to a volume that participates as the master of a geo-replication session.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-05 05:21:30 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Error while setting gluster shared storage option (176.24 KB, image/png), 2015-06-26 01:13 EDT, RamaKasturi
cluster.enable-shared-storage: enabled (68.33 KB, image/png), 2015-08-19 05:08 EDT, Triveni Rao
metavolume from UI (61.87 KB, image/png), 2015-08-19 05:11 EDT, Triveni Rao
events logs (131.43 KB, image/png), 2015-08-19 05:11 EDT, Triveni Rao


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 43039 master MERGED engine : Enable "cluster.enable-shared-storage" Never
oVirt gerrit 44224 ovirt-engine-3.5-gluster MERGED engine : Enable "cluster.enable-shared-storage" Never
Red Hat Product Errata RHBA-2015:1848 normal SHIPPED_LIVE Red Hat Gluster Storage Console 3.1 update 1 bug fixes 2015-10-05 09:19:50 EDT

Description Arthy Loganathan 2015-06-22 08:24:12 EDT
Description of problem:
Unable to set cluster.enable-shared-storage volume option from Console

Version-Release number of selected component (if applicable):
rhsc-3.1.0-0.60.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Add hosts and Create a volume
2. Try to add cluster.enable-shared-storage volume option, by clicking Volume Options -> Add

Actual results:
cluster.enable-shared-storage option is not listed in 'Option Key' drop down box.

Expected results:
User should be able to set the cluster.enable-shared-storage option from the UI

Additional info:
Comment 2 Darshan 2015-06-23 04:24:29 EDT
The gluster command "volume set help-xml" does not return the new option "cluster.enable-shared-storage", so this option is not available in the engine. BZ#1234708 is the bug to fix this issue in gluster.
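The engine builds its "Option Key" drop-down from the XML that `gluster volume set help-xml` prints, so any option missing from that output is invisible to the UI. A minimal sketch of that check (the XML snippet and element names below are illustrative samples, not the exact gluster output format):

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for `gluster volume set help-xml` output; the real
# command emits one element per settable volume option. cluster.enable-shared-storage
# was absent from this list, which is why the console could not offer it.
HELP_XML = """
<options>
  <option><name>performance.readdir-ahead</name><defaultValue>on</defaultValue></option>
  <option><name>cluster.enable-shared-storage</name><defaultValue>disable</defaultValue></option>
</options>
"""

def list_option_names(xml_text):
    """Return the volume option names advertised by the help-xml output."""
    root = ET.fromstring(xml_text)
    return [opt.findtext("name") for opt in root.iter("option")]

print("cluster.enable-shared-storage" in list_option_names(HELP_XML))
```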
Comment 3 Sahina Bose 2015-06-25 10:42:07 EDT
You should be able to directly type in "cluster.enable-shared-storage" to the editable dropdown option name in the Edit options screen.

Can you please try this? If so, we can close this bug.
Comment 4 RamaKasturi 2015-06-26 01:11:43 EDT
Hi Sahina,

   We are not able to set this option from the UI. It displays an "Operation Canceled" dialog with the message: Error while executing action Set Gluster Volume Option: Volume set failed. error: Not a valid option for single volume.

Attached the screenshot.

Thanks
kasturi
Comment 5 RamaKasturi 2015-06-26 01:13:25 EDT
Created attachment 1043382 [details]
Error while setting gluster shared storage option
Comment 6 Sahina Bose 2015-06-29 05:32:17 EDT
We do not have support for setting cluster options in Engine. Deferring this from 3.1
Comment 7 anmol babu 2015-07-15 01:46:51 EDT
Would it suffice to set this option implicitly from the engine whenever the user sets use_meta_volume to true? The reason I ask is that once the option is set, it appears in the "Volume Options" sub-tab, from where its value can be reset or altered. AFAIK, this option is relevant only to the snapshot and geo-replication cases, and since it is (to my knowledge) the only cluster-level option, the above proposal of setting it implicitly might suffice.
Comment 8 Sahina Bose 2015-07-21 02:50:39 EDT
Yes, we will set this implicitly
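The rule agreed here can be sketched as follows. This is illustrative pseudologic, not actual oVirt engine code; the function and parameter names are hypothetical:

```python
# Sketch of the implicit-set rule: when a geo-replication session is
# configured with use_meta_volume=true, the engine also enables the
# cluster-level cluster.enable-shared-storage option, since the meta
# volume lives on the shared storage volume.

def extra_cluster_options(georep_config, cluster_options):
    """Return any additional cluster options the engine should set."""
    extra = {}
    if (georep_config.get("use_meta_volume") == "true"
            and cluster_options.get("cluster.enable-shared-storage") != "enable"):
        extra["cluster.enable-shared-storage"] = "enable"
    return extra

print(extra_cluster_options({"use_meta_volume": "true"}, {}))
# {'cluster.enable-shared-storage': 'enable'}
```

With the option already enabled, or with use_meta_volume unset, the function returns an empty dict, so the engine issues no redundant cluster-level set.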
Comment 9 monti lawrence 2015-07-23 10:18:18 EDT
Doc text is edited. Please sign off to be included in Known Issues.
Comment 10 anmol babu 2015-07-24 01:07:35 EDT
Looks good to me
Comment 15 Triveni Rao 2015-08-19 05:08:58 EDT
Created attachment 1064689 [details]
cluster.enable-shared-storage: enabled
Comment 16 Triveni Rao 2015-08-19 05:09:57 EDT
This bug is verified; no issues found.

Steps followed:

Referring to Comment 7 above: the cluster.enable-shared-storage option is set implicitly from the engine whenever the user sets use_meta_volume to true.

1. Added hosts and created a geo-rep setup to verify the bug.
2. Created master and slave volumes, and created a geo-rep session between them.
3. Set the use_meta_volume option to true, which implicitly enabled cluster.enable-shared-storage.
4. Verified in the volume options that cluster.enable-shared-storage is set properly, from both the UI and the back end.
5. Verified that the meta volume is created on the back end, visible in the UI, and mounted.
6. Attached the screenshots.


Output:

From Backend:

Volume Name: geo-rep-test
Type: Replicate
Volume ID: 3543e21d-9dc4-4671-91e6-ebeb58a0bdef
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.35.82:/rhgs/brick5/brick5
Brick2: 10.70.35.77:/rhgs/brick5/brick5
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
auth.allow: *
user.cifs: enable
nfs.disable: off
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
 
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 94117eaa-6bdc-4fab-95f9-e2082d433d77
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.35.77:/var/lib/glusterd/ss_brick
Brick2: dhcp35-82.lab.eng.blr.redhat.com:/var/lib/glusterd/ss_brick
Options Reconfigured:
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_casinovm3-lv_root
                       18G  2.8G   14G  17% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/vda1             477M   36M  416M   8% /boot
/dev/mapper/vg--brick1-brick1
                       50G   33M   50G   1% /rhgs/brick1
/dev/mapper/vg--brick2-brick2
                       50G   33M   50G   1% /rhgs/brick2
/dev/mapper/vg--brick3-brick3
                       50G   33M   50G   1% /rhgs/brick3
/dev/mapper/vg--brick4-brick4
                       50G   33M   50G   1% /rhgs/brick4
/dev/mapper/vg--brick5-brick5
                       50G   33M   50G   1% /rhgs/brick5
dhcp35-82.lab.eng.blr.redhat.com:/gluster_shared_storage
                       18G  2.8G   14G  17% /var/run/gluster/shared_storage
[root@casino-vm3 ~]#
Comment 17 Triveni Rao 2015-08-19 05:11:02 EDT
Created attachment 1064694 [details]
metavolume from UI
Comment 18 Triveni Rao 2015-08-19 05:11:42 EDT
Created attachment 1064695 [details]
events logs
Comment 19 Bhavana 2015-09-22 03:00:24 EDT
Hi Anmol,

The doc text is updated. Please review it and share your technical review comments. If it looks ok, then sign-off on the same.
Comment 20 anmol babu 2015-09-22 04:04:50 EDT
Looks good to me
Comment 22 errata-xmlrpc 2015-10-05 05:21:30 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1848.html
