Bug 1228127 - Volume needs restart after editing the auth.ssl-allow volume option, which should instead take effect automatically
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: 3.1
Hardware: x86_64 Linux
Priority: medium  Severity: medium
Target Release: RHGS 3.1.0
Assigned To: Kaushal
QA Contact: krishnaram Karthick
Depends On: 1238072
Blocks: 1202842, 1211643
Reported: 2015-06-04 05:29 EDT by Triveni Rao
Modified: 2016-09-17 10:40 EDT
Fixed In Version: glusterfs-3.7.1-7.el6
Doc Type: Bug Fix
Last Closed: 2015-07-29 00:56:18 EDT
Type: Bug


External Trackers:
Red Hat Product Errata RHSA-2015:1495 (normal, SHIPPED_LIVE): Important: Red Hat Gluster Storage 3.1 update; last updated 2015-07-29 04:26:26 EDT

Description Triveni Rao 2015-06-04 05:29:14 EDT
Description of problem:

Volume needs restart after editing the auth.ssl-allow volume option, which should instead take effect automatically.

Version-Release number of selected component (if applicable):

[root@rhsqa14-vm4 glusterd]# rpm -qa | grep gluster
glusterfs-rdma-3.7.0-3.el6rhs.x86_64
glusterfs-3.7.0-3.el6rhs.x86_64
glusterfs-cli-3.7.0-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.0-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.0-3.el6rhs.x86_64
glusterfs-libs-3.7.0-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-3.el6rhs.x86_64
glusterfs-fuse-3.7.0-3.el6rhs.x86_64
glusterfs-server-3.7.0-3.el6rhs.x86_64
glusterfs-api-3.7.0-3.el6rhs.x86_64
You have mail in /var/spool/mail/root
[root@rhsqa14-vm4 glusterd]# 

How reproducible:
Easily

Steps to Reproduce:
1. Create a volume but do not start it, and configure SSL on the volume (see the sketch after this list).
2. Start the volume and set the auth.ssl-allow option to a list of hostnames.
3. Check vol info, stop the volume, start it, and try mounting; the mount succeeds.
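For context, "configure SSL on the volume" in step 1 amounts to placing a key, a certificate, and a CA bundle on every node and enabling the TLS volume options. A minimal sketch follows; the openssl invocations and the CN value are illustrative assumptions, while the /etc/ssl/glusterfs.* paths and the client.ssl/server.ssl options are the standard GlusterFS ones and match the vol info output below:

# On every server and client node: private key and self-signed certificate
openssl genrsa -out /etc/ssl/glusterfs.key 2048
openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=rhaqa14-vm4" -out /etc/ssl/glusterfs.pem
# The CA file is the concatenation of the certificates of all nodes
cat /etc/ssl/glusterfs.pem > /etc/ssl/glusterfs.ca
# Enable TLS on the I/O path for the still-stopped volume
gluster volume set testing client.ssl on
gluster volume set testing server.ssl on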



Output:

[root@rhsqa14-vm4 glusterd]# gluster vol info
 
Volume Name: testing
Type: Distributed-Replicate
Volume ID: 6e2276db-c312-4a32-9f68-47e455b7655c
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.233:/rhs/brick1/T0
Brick2: 10.70.46.236:/rhs/brick1/T0
Brick3: 10.70.46.233:/rhs/brick2/T0
Brick4: 10.70.46.236:/rhs/brick2/T0
Options Reconfigured:
performance.readdir-ahead: on
client.ssl: on
server.ssl: on
auth.ssl-allow: 10.70.46.236,10.70.46.240,10.70.46.243
[root@rhsqa14-vm4 glusterd]# gluster volume set testing auth.ssl-allow rhaqa14-vm1,rhaqa14-vm2,rhaqa14-vm3,rhaqa14-vm4
volume set: success
You have mail in /var/spool/mail/root
[root@rhsqa14-vm4 glusterd]# mount -t glusterfs 10.70.46.233:/testing /mnt
Mount failed. Please check the log file for more details.
[root@rhsqa14-vm4 glusterd]# 

[root@rhsqa14-vm4 glusterd]# gluster vol info
 
Volume Name: testing
Type: Distributed-Replicate
Volume ID: 6e2276db-c312-4a32-9f68-47e455b7655c
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.233:/rhs/brick1/T0
Brick2: 10.70.46.236:/rhs/brick1/T0
Brick3: 10.70.46.233:/rhs/brick2/T0
Brick4: 10.70.46.236:/rhs/brick2/T0
Options Reconfigured:
performance.readdir-ahead: on
client.ssl: on
server.ssl: on
auth.ssl-allow: rhaqa14-vm1,rhaqa14-vm2,rhaqa14-vm3,rhaqa14-vm4
[root@rhsqa14-vm4 glusterd]# gluster volume stop testing
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: testing: success
[root@rhsqa14-vm4 glusterd]# gluster volume start testing
volume start: testing: success
[root@rhsqa14-vm4 glusterd]# mount -t glusterfs 10.70.46.233:/testing /mnt
[root@rhsqa14-vm4 glusterd]#
Comment 2 Kaushal 2015-06-22 07:14:51 EDT
It turns out auth.ssl-allow is handled in a slightly different manner from auth.allow, which is why the brick isn't able to pick up changes to it. I had wrongly assumed that auth.ssl-allow is handled the same way as auth.allow.
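To illustrate the difference described above: on the pre-fix builds an auth.allow change is picked up by the running bricks immediately, while an auth.ssl-allow change only takes effect after a restart. A minimal sketch, reusing the option names and hosts from this report:

# auth.allow changes take effect on a running volume without a restart:
gluster volume set testing auth.allow 10.70.46.240
# pre-fix, an auth.ssl-allow change is not seen by the running bricks...
gluster volume set testing auth.ssl-allow rhaqa14-vm1,rhaqa14-vm2
# ...until the volume goes through a stop/start cycle:
gluster volume stop testing
gluster volume start testing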
Comment 5 Deepak C Shetty 2015-06-30 04:39:30 EDT
FWIW, a feature request for this was opened back in September 2014 (see the end of the feature page for the last-modified date):

http://www.gluster.org/community/documentation/index.php/Features/auto-refresh-volume-set-ssl
Comment 7 Kaushal 2015-07-01 08:55:34 EDT
Change posted for review at https://review.gluster.org/11395
Comment 8 Atin Mukherjee 2015-07-02 00:12:32 EDT
Downstream patch https://code.engineering.redhat.com/gerrit/52103/ is now merged. Moving the status to Modified.
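On a build with the fix (glusterfs-3.7.1-7.el6 per the Fixed In Version field), verification should amount to repeating the originally failing step without the restart. A sketch based on the reproduction steps above, not an actual verification log:

# with the fix, the running bricks reload the allow list on volume set...
gluster volume set testing auth.ssl-allow rhaqa14-vm1,rhaqa14-vm2,rhaqa14-vm3,rhaqa14-vm4
# ...so a mount from an allowed host should succeed with no stop/start:
mount -t glusterfs 10.70.46.233:/testing /mnt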
Comment 10 Deepak C Shetty 2015-07-13 02:01:17 EDT
(In reply to Kaushal from comment #7)
> Change posted for review at https://review.gluster.org/11395

The link looks incorrect; it fixes a peer hostname issue, not the auth.ssl-allow one.
Is that the right link?
Comment 11 Deepak C Shetty 2015-07-13 02:05:18 EDT
The right upstream patch link is:
http://review.gluster.org/#/c/11487/
Comment 13 errata-xmlrpc 2015-07-29 00:56:18 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
