Description of problem:
=======================
"gluster volume reset" is documented to "reset all the reconfigured options", and USS (features.uss) is one of them. However, the snapd process continues to run after the reset:

[root@inception ~]# gluster v i vol_test | grep uss
features.uss: on
[root@inception ~]# ps -eaf | grep snapd
root      4182     1  0 15:10 ?        00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id snapd/vol_test -p /var/lib/glusterd/vols/vol_test/run/vol_test-snapd.pid -l /var/log/glusterfs/snaps/vol_test/snapd.log --brick-name snapd-vol_test -S /var/run/acfb4e617ee26303307f24cfb63c442e.socket --brick-port 49158 --xlator-option vol_test-server.listen-port=49158
root      4228  4111  0 15:10 pts/0    00:00:00 grep snapd
[root@inception ~]# gluster v reset vol_test
volume reset: success: reset volume successful
[root@inception ~]# gluster v i vol_test | grep uss
[root@inception ~]# ps -eaf | grep snapd
root      4182     1  0 15:10 ?        00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id snapd/vol_test -p /var/lib/glusterd/vols/vol_test/run/vol_test-snapd.pid -l /var/log/glusterfs/snaps/vol_test/snapd.log --brick-name snapd-vol_test -S /var/run/acfb4e617ee26303307f24cfb63c442e.socket --brick-port 49158 --xlator-option vol_test-server.listen-port=49158
root      4274  4111  0 15:11 pts/0    00:00:00 grep snapd
[root@inception ~]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.6.0.40-1.el6rhs.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create a 4-node cluster.
2. Create and start a volume.
3. Enable USS and confirm that the snapd process has started on all nodes.
4. Check "gluster v i <vol-name>"; uss should be enabled.
5. "gluster v status" should show the Snapshot Daemon as online.
6. Reset the volume using "gluster volume reset <vol-name>".
7. Check "gluster v i <vol-name>"; uss should no longer be listed.
8. "gluster v status" should no longer show the Snapshot Daemon.
9. Check for the snapd process using "ps -eaf | grep snapd" on all nodes.
(A consolidated command sketch of these steps follows this comment.)

Actual results:
===============
The snapd process is still running after the volume reset.

Expected results:
=================
The snapd process should not be running after the volume reset.
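For convenience, here is a minimal command-line sketch of the reproduction steps above. The node names, brick paths, and volume layout are placeholders; it assumes the four nodes are already peer-probed into one trusted storage pool.

# Assumes node1..node4 are already in the same trusted storage pool.
gluster volume create vol_test node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1
gluster volume start vol_test

gluster volume set vol_test features.uss enable         # snapd should start on every node
gluster volume info vol_test | grep uss                  # expect: features.uss: on
gluster volume status vol_test | grep "Snapshot Daemon"  # expect: daemon online on all nodes

gluster volume reset vol_test                             # resets all reconfigured options, USS included
gluster volume info vol_test | grep uss                   # expect: no output
ps -eaf | grep '[s]napd'                                  # expected: no snapd; actual (this bug): still running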
Upstream patch: http://review.gluster.org/#/c/10138/
Fixed with https://code.engineering.redhat.com/gerrit/#/c/55146/
Verified with the glusterfs-3.7.1-12 build and it is working as expected. The snapd process gets killed when we do a "gluster volume reset <vol-name>". Snippets below:

[root@dhcp35-181 ~]# gluster volume info | grep uss
[root@dhcp35-181 ~]# gluster volume set testvolume features.uss enable
volume set: success
[root@dhcp35-181 ~]# gluster volume info | grep uss
features.uss: enable
[root@dhcp35-181 ~]# ps -aef|grep snapd
root     23522     1  0 15:24 ?        00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id snapd/testvolume -p /var/lib/glusterd/vols/testvolume/run/testvolume-snapd.pid -l /var/log/glusterfs/snaps/testvolume/snapd.log --brick-name snapd-testvolume -S /var/run/gluster/9ffa63ccf61f4d11fb9fee6cc646d9eb.socket --brick-port 49157 --xlator-option testvolume-server.listen-port=49157 --no-mem-accounting
root     23566 19662  0 15:25 pts/0    00:00:00 grep --color=auto snapd
[root@dhcp35-181 ~]# gluster volume status
Status of volume: testvolume
Gluster process                                        TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.18:/rhs/brick1/b1                       49152     0          Y       8249
Brick 10.70.35.164:/rhs/brick1/b1                      49152     0          Y       22036
Brick 10.70.35.138:/rhs/brick1/b1                      49152     0          Y       21901
Brick 10.70.35.181:/rhs/brick1/b1                      49152     0          Y       21197
Brick 10.70.35.18:/rhs/brick2/b2                       49153     0          Y       8267
Brick 10.70.35.164:/rhs/brick2/b2                      49153     0          Y       22054
Brick 10.70.35.138:/rhs/brick2/b2                      49153     0          Y       21919
Brick 10.70.35.181:/rhs/brick2/b2                      49153     0          Y       21215
Brick 10.70.35.18:/rhs/brick3/b3                       49154     0          Y       8285
Brick 10.70.35.164:/rhs/brick3/b3                      49154     0          Y       22072
Brick 10.70.35.138:/rhs/brick3/b3                      49154     0          Y       21937
Brick 10.70.35.181:/rhs/brick3/b3                      49154     0          Y       21233
Snapshot Daemon on localhost                           49159     0          Y       23841
NFS Server on localhost                                2049      0          Y       23857
Snapshot Daemon on 10.70.35.138                        49159     0          Y       24095
NFS Server on 10.70.35.138                             2049      0          Y       24103
Snapshot Daemon on 10.70.35.164                        49159     0          Y       24227
NFS Server on 10.70.35.164                             2049      0          Y       24235
Snapshot Daemon on dhcp35-18.lab.eng.blr.redhat.com    49159     0          Y       10749
NFS Server on dhcp35-18.lab.eng.blr.redhat.com         2049      0          Y       10764

Task Status of Volume testvolume
------------------------------------------------------------------------------
There are no active volume tasks

So after we reset the volume, USS is removed and snapd is stopped:

[root@dhcp35-181 ~]# gluster volume reset testvolume
volume reset: success: reset volume successful
[root@dhcp35-181 ~]# gluster volume info | grep uss
[root@dhcp35-181 ~]# gluster volume status
Status of volume: testvolume
Gluster process                                        TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.18:/rhs/brick1/b1                       49152     0          Y       8249
Brick 10.70.35.164:/rhs/brick1/b1                      49152     0          Y       22036
Brick 10.70.35.138:/rhs/brick1/b1                      49152     0          Y       21901
Brick 10.70.35.181:/rhs/brick1/b1                      49152     0          Y       21197
Brick 10.70.35.18:/rhs/brick2/b2                       49153     0          Y       8267
Brick 10.70.35.164:/rhs/brick2/b2                      49153     0          Y       22054
Brick 10.70.35.138:/rhs/brick2/b2                      49153     0          Y       21919
Brick 10.70.35.181:/rhs/brick2/b2                      49153     0          Y       21215
Brick 10.70.35.18:/rhs/brick3/b3                       49154     0          Y       8285
Brick 10.70.35.164:/rhs/brick3/b3                      49154     0          Y       22072
Brick 10.70.35.138:/rhs/brick3/b3                      49154     0          Y       21937
Brick 10.70.35.181:/rhs/brick3/b3                      49154     0          Y       21233
NFS Server on localhost                                2049      0          Y       23932
NFS Server on 10.70.35.164                             N/A       N/A        N       N/A
NFS Server on 10.70.35.138                             N/A       N/A        N       N/A
NFS Server on dhcp35-18.lab.eng.blr.redhat.com         N/A       N/A        N       N/A

Task Status of Volume testvolume
------------------------------------------------------------------------------
There are no active volume tasks

[root@dhcp35-181 ~]# ps -aef|grep snapd
root     23614 19662  0 15:26 pts/0    00:00:00 grep --color=auto snapd

Since the earlier issue is no longer reproducible, marking this bug as Verified.
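As a side note, a quick way to confirm on every node at once that no snapd process is left behind after the reset; the loop below is only a sketch, using the node addresses from the status output above and assuming passwordless SSH between the nodes:

for h in 10.70.35.18 10.70.35.164 10.70.35.138 10.70.35.181; do
    echo "== $h =="
    # [s]napd keeps the grep process itself out of the match
    ssh "$h" "ps -eaf | grep '[s]napd' || echo 'no snapd process running'"
done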
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html