Description of problem:
Editing the auth.ssl-allow list through "gluster volume set" requires a volume restart before the change takes effect; the bricks should pick up the new list automatically.

Version-Release number of selected component (if applicable):

[root@rhsqa14-vm4 glusterd]# rpm -qa | grep gluster
glusterfs-rdma-3.7.0-3.el6rhs.x86_64
glusterfs-3.7.0-3.el6rhs.x86_64
glusterfs-cli-3.7.0-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.0-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.0-3.el6rhs.x86_64
glusterfs-libs-3.7.0-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-3.el6rhs.x86_64
glusterfs-fuse-3.7.0-3.el6rhs.x86_64
glusterfs-server-3.7.0-3.el6rhs.x86_64
glusterfs-api-3.7.0-3.el6rhs.x86_64
[root@rhsqa14-vm4 glusterd]#

How reproducible:
Easily

Steps to Reproduce:
1. Create a volume without starting it, and configure SSL on the volume.
2. Start the volume, then set the auth.ssl-allow volume option to a list of hostnames.
3. Check vol info; mounting fails until the volume is stopped and started again, after which the mount succeeds.

Output:

[root@rhsqa14-vm4 glusterd]# gluster vol info

Volume Name: testing
Type: Distributed-Replicate
Volume ID: 6e2276db-c312-4a32-9f68-47e455b7655c
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.233:/rhs/brick1/T0
Brick2: 10.70.46.236:/rhs/brick1/T0
Brick3: 10.70.46.233:/rhs/brick2/T0
Brick4: 10.70.46.236:/rhs/brick2/T0
Options Reconfigured:
performance.readdir-ahead: on
client.ssl: on
server.ssl: on
auth.ssl-allow: 10.70.46.236,10.70.46.240,10.70.46.243

[root@rhsqa14-vm4 glusterd]# gluster volume set testing auth.ssl-allow rhaqa14-vm1,rhaqa14-vm2,rhaqa14-vm3,rhaqa14-vm4
volume set: success

[root@rhsqa14-vm4 glusterd]# mount -t glusterfs 10.70.46.233:/testing /mnt
Mount failed. Please check the log file for more details.

[root@rhsqa14-vm4 glusterd]# gluster vol info

Volume Name: testing
Type: Distributed-Replicate
Volume ID: 6e2276db-c312-4a32-9f68-47e455b7655c
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.233:/rhs/brick1/T0
Brick2: 10.70.46.236:/rhs/brick1/T0
Brick3: 10.70.46.233:/rhs/brick2/T0
Brick4: 10.70.46.236:/rhs/brick2/T0
Options Reconfigured:
performance.readdir-ahead: on
client.ssl: on
server.ssl: on
auth.ssl-allow: rhaqa14-vm1,rhaqa14-vm2,rhaqa14-vm3,rhaqa14-vm4

[root@rhsqa14-vm4 glusterd]# gluster volume stop testing
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: testing: success
[root@rhsqa14-vm4 glusterd]# gluster volume start testing
volume start: testing: success
[root@rhsqa14-vm4 glusterd]# mount -t glusterfs 10.70.46.233:/testing /mnt
[root@rhsqa14-vm4 glusterd]#
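For reference, a condensed sketch of the reproduction commands (volume name, brick paths and allow-list values are taken from the transcript above; the TLS certificate, key and CA files are assumed to already be in place on every server and client, and the exact create syntax is illustrative):

# Create the volume but do not start it yet
gluster volume create testing replica 2 \
    10.70.46.233:/rhs/brick1/T0 10.70.46.236:/rhs/brick1/T0 \
    10.70.46.233:/rhs/brick2/T0 10.70.46.236:/rhs/brick2/T0

# Configure SSL on the volume, then start it
gluster volume set testing client.ssl on
gluster volume set testing server.ssl on
gluster volume start testing

# Set the allow-list, then change it while the volume is running
gluster volume set testing auth.ssl-allow 10.70.46.236,10.70.46.240,10.70.46.243
gluster volume set testing auth.ssl-allow rhaqa14-vm1,rhaqa14-vm2,rhaqa14-vm3,rhaqa14-vm4

# This mount fails until the volume is stopped and started again
mount -t glusterfs 10.70.46.233:/testing /mnt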
It turns out auth.ssl-allow is handled in a slightly different manner from auth.allow, which is why the bricks are not able to pick up changes to it. I had wrongly assumed that auth.ssl-allow is handled in the same way as auth.allow.
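To illustrate the difference described here (a sketch based on the behaviour reported above, using the volume from this report; it assumes auth.allow changes are applied to running bricks, as the comparison in this comment implies):

# Picked up by the running bricks without a restart
gluster volume set testing auth.allow 10.70.46.243

# Not picked up until the volume is restarted (current behaviour / workaround)
gluster volume set testing auth.ssl-allow rhaqa14-vm1,rhaqa14-vm2,rhaqa14-vm3,rhaqa14-vm4
gluster volume stop testing
gluster volume start testing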
FWIW, a feature request for this was opened back in Sept. 2014 (see the end of the feature page for the last-modified date) at http://www.gluster.org/community/documentation/index.php/Features/auto-refresh-volume-set-ssl
Change posted for review at https://review.gluster.org/11395
Downstream patch https://code.engineering.redhat.com/gerrit/52103/ is now merged. Moving the status to Modified.
(In reply to Kaushal from comment #7)
> Change posted for review at https://review.gluster.org/11395

The link looks incorrect; it fixes a peer hostname issue, not the auth.ssl-allow one. Is that the right link?
The right upstream patch link happens to be: http://review.gluster.org/#/c/11487/
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html