Description of problem:
=======================
Generally, volume set operations work even when a volume is in the stopped state. In the case of USS, however, setting uss to "on" fails while the volume is stopped, whereas setting uss to "off" succeeds. The uss volume set operation should be independent of the volume state: setting it to "on" should be allowed while the volume is stopped, and once the volume is started, the snapd process should be started.

[root@inception ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@inception ~]# gluster v set vol1 uss on
volume set: failed: Commit failed on localhost. Please check the log file for more details.
[root@inception ~]# gluster v set vol0 uss on
volume set: failed: Commit failed on localhost. Please check the log file for more details.
[root@inception ~]# gluster v set vol0 uss off
volume set: success
[root@inception ~]# gluster v set vol1 uss off
volume set: success
[root@inception ~]#
[root@inception ~]# gluster v info vol0 | grep "Status:"
Status: Stopped
[root@inception ~]# gluster v info vol1 | grep "Status:"
Status: Stopped
[root@inception ~]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.6.0.34-1.el6rhs.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create a cluster.
2. Create a volume (do not start it).
3. Set the volume option uss to on.

Actual results:
===============
It fails with "volume set: failed: Commit failed on localhost. Please check the log file for more details."

Expected results:
=================
It should succeed, and once the volume is started, the snapd process should be started.

Additional info:
================
When the volume is in the stopped state, other volume options can still be set, for example:
self-heal-daemon on/off
uss off
nfs.disable 0/1
etc.
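For reference, the expected post-fix flow would look roughly like this (a sketch, not a transcript captured from a fixed build; the assumption that snapd appears as a "Snapshot Daemon" entry in the volume status output may vary by release):

[root@inception ~]# gluster v set vol1 uss on
volume set: success
[root@inception ~]# gluster v start vol1
volume start: vol1: success
[root@inception ~]# gluster v status vol1 | grep -i "snapshot daemon"
[root@inception ~]# ps aux | grep snapd

When the set operation fails as above, the commit error is logged by glusterd; on this release the glusterd log is at /var/log/glusterfs/etc-glusterfs-glusterd.vol.log.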
This has been fixed in master as part of http://review.gluster.org/#/c/9206.
Marking this as MODIFIED, as this will be part of the next build
Until the rebase happens, the bug should be kept in the POST state.
Upstream mainline: http://review.gluster.org/9206
Upstream 3.8: available as part of branching from mainline.
The fix is available in rhgs-3.2.0 as part of the rebase to GlusterFS 3.8.4.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html