Description of problem:
When cluster.enable-shared-storage is enabled but the gluster_shared_storage volume is stopped, the snapshot create command fails.

Version-Release number of selected component (if applicable):
[root@rhs-client46 ~]# rpm -qa | grep glusterfs
glusterfs-debuginfo-3.8.4-6.el7rhgs.x86_64
glusterfs-fuse-3.8.4-7.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-7.el7rhgs.x86_64
glusterfs-libs-3.8.4-7.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-7.el7rhgs.x86_64
glusterfs-api-3.8.4-7.el7rhgs.x86_64
glusterfs-server-3.8.4-7.el7rhgs.x86_64
glusterfs-3.8.4-7.el7rhgs.x86_64
glusterfs-cli-3.8.4-7.el7rhgs.x86_64
samba-vfs-glusterfs-4.4.6-2.el7rhgs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 2*2 distribute volume
2. Enable cluster.enable-shared-storage
3. Stop gluster_shared_storage
4. Create a snapshot

Actual results:
The snapshot create command fails.

Expected results:
The snapshot create command should not fail.

Additional info:
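The reproduction steps above can be sketched as a shell session. This is a minimal sketch, not a verbatim reproduction: the host names (server1, server2), brick paths, and the volume name test-volume are placeholder assumptions, and exact command output varies by build.

```shell
# Hypothetical reproduction sketch; host names and brick paths are placeholders.

# 1. Create a 2*2 volume: distribute across two replica-2 pairs
gluster volume create test-volume replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2
gluster volume start test-volume

# 2. Enable the shared-storage option; glusterd auto-creates and mounts
#    the gluster_shared_storage volume on the cluster nodes
gluster volume set all cluster.enable-shared-storage enable

# 3. Stop the shared-storage volume
gluster volume stop gluster_shared_storage

# 4. Attempt a snapshot; on affected builds this step fails
gluster snapshot create snap1 test-volume no-timestamp
```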
Master Url : http://review.gluster.org/16094
Release 3.9 Url : http://review.gluster.org/#/c/16112/1
RHGS 3.2.0 Url : https://code.engineering.redhat.com/gerrit/#/c/92781/
[root@rhs-client46 ~]# gluster v stop gluster_shared_storage
Stopping the shared storage volume(gluster_shared_storage), will affect features like snapshot scheduler, geo-replication and NFS-Ganesha. Do you still want to continue? (y/n) y
volume stop: gluster_shared_storage: success
[root@rhs-client46 ~]# gluster snapshot create snap1 test-volume no-timestamp
snapshot create: success: Snap snap1 created successfully
[root@rhs-client46 ~]# gluster snapshot activate snap1
Snapshot activate: snap1: Snap activated successfully
[root@rhs-client46 ~]# gluster snapshot clone clone2 snap1
snapshot clone: success: Clone clone2 created successfully
[root@rhs-client46 ~]# gluster v start clone2
volume start: clone2: success

Able to create snapshots when gluster_shared_storage is down.
Bug verified on build glusterfs-3.8.4-9.el7rhgs.x86_64.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0486.html