Bug 1178100 - [USS]: gluster volume reset <vol-name> resets the uss configured option but snapd process continues to run
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.1
Assignee: Mohammed Rafi KC
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard: SNAPSHOT
Depends On:
Blocks: 1209123 1245926 1251815
 
Reported: 2015-01-02 10:02 UTC by Rahul Hinduja
Modified: 2016-09-17 13:05 UTC

Fixed In Version: glusterfs-3.7.1-12
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1209123 (view as bug list)
Environment:
Last Closed: 2015-10-05 07:08:03 UTC




Links
System: Red Hat Product Errata
ID: RHSA-2015:1845
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-10-05 11:06:22 UTC

Description Rahul Hinduja 2015-01-02 10:02:34 UTC
Description of problem:
=======================

gluster volume reset is used to "reset all the reconfigured options", and USS (features.uss) is one of them. The reset does clear the option, but the snapd process continues to be online:

[root@inception ~]# gluster v i vol_test | grep uss
features.uss: on
[root@inception ~]# ps -eaf | grep snapd
root      4182     1  0 15:10 ?        00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id snapd/vol_test -p /var/lib/glusterd/vols/vol_test/run/vol_test-snapd.pid -l /var/log/glusterfs/snaps/vol_test/snapd.log --brick-name snapd-vol_test -S /var/run/acfb4e617ee26303307f24cfb63c442e.socket --brick-port 49158 --xlator-option vol_test-server.listen-port=49158
root      4228  4111  0 15:10 pts/0    00:00:00 grep snapd
[root@inception ~]# gluster v reset vol_test
volume reset: success: reset volume successful
[root@inception ~]# gluster v i vol_test | grep uss
[root@inception ~]# ps -eaf | grep snapd
root      4182     1  0 15:10 ?        00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id snapd/vol_test -p /var/lib/glusterd/vols/vol_test/run/vol_test-snapd.pid -l /var/log/glusterfs/snaps/vol_test/snapd.log --brick-name snapd-vol_test -S /var/run/acfb4e617ee26303307f24cfb63c442e.socket --brick-port 49158 --xlator-option vol_test-server.listen-port=49158
root      4274  4111  0 15:11 pts/0    00:00:00 grep snapd
[root@inception ~]# 
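
A possible workaround on affected builds (a sketch only, under the assumption that the explicit "features.uss disable" path stops snapd correctly, since only the reset path is reported broken here) is to disable uss before resetting, or to stop a stale snapd through the pid file visible in the process command line above:

# hedged workaround sketch; volume name and pid-file path are taken from the transcript above
VOL=vol_test
gluster volume set "$VOL" features.uss disable    # explicit disable should stop snapd
gluster volume reset "$VOL"                       # then reset the remaining options
# if a stale snapd survives, stop it via its pid file
PIDFILE=/var/lib/glusterd/vols/$VOL/run/${VOL}-snapd.pid
[ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")"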



Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.6.0.40-1.el6rhs.x86_64


How reproducible:
================

always


Steps to Reproduce:
===================
1. Create a 4-node cluster
2. Create and start a volume
3. Enable USS and confirm that the snapd process is started on all nodes
4. Check "gluster v i <vol-name>"; uss should be enabled
5. "gluster v status" should show the snapshot daemon online
6. Reset the volume using "gluster volume reset <vol-name>"
7. Check "gluster v i <vol-name>"; uss should no longer be listed
8. "gluster v status" should not show the snapshot daemon
9. Check for the snapd process using "ps -eaf | grep snapd" on all nodes (a consolidated sketch of these steps follows below)
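
A consolidated sketch of steps 3-9, assuming a started volume named vol_test and password-less ssh to the peer nodes (the node list below is illustrative, not from this setup):

VOL=vol_test
NODES="node1 node2 node3 node4"                  # hypothetical peer hostnames
gluster volume set "$VOL" features.uss enable
gluster volume info "$VOL" | grep uss            # expect: features.uss: enable
gluster volume status "$VOL" | grep -i snapshot  # snapshot daemon should be listed online
gluster volume reset "$VOL"
gluster volume info "$VOL" | grep uss            # expect: no output
gluster volume status "$VOL" | grep -i snapshot  # expect: no snapshot daemon listed
for n in $NODES; do ssh "$n" "ps -eaf | grep '[s]napd'"; done   # on buggy builds snapd still shows up here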

Actual results:
===============
snapd process is online


Expected results:
=================
snapd process should not be running

Comment 3 Mohammed Rafi KC 2015-07-23 08:33:51 UTC
upstream patch http://review.gluster.org/#/c/10138/

Comment 5 Avra Sengupta 2015-08-18 09:55:26 UTC
Fixed with https://code.engineering.redhat.com/gerrit/#/c/55146/

Comment 6 Shashank Raj 2015-08-25 12:24:56 UTC
Verified with the glusterfs-3.7.1-12 build and it's working as expected.

The snapd process gets killed when we do a "gluster volume reset <vol-name>".

Snippets below:

[root@dhcp35-181 ~]# gluster volume info | grep uss

[root@dhcp35-181 ~]# gluster volume set testvolume features.uss enable
volume set: success

[root@dhcp35-181 ~]# gluster volume info | grep uss
features.uss: enable

[root@dhcp35-181 ~]# ps -aef|grep snapd
root     23522     1  0 15:24 ?        00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id snapd/testvolume -p /var/lib/glusterd/vols/testvolume/run/testvolume-snapd.pid -l /var/log/glusterfs/snaps/testvolume/snapd.log --brick-name snapd-testvolume -S /var/run/gluster
/9ffa63ccf61f4d11fb9fee6cc646d9eb.socket --brick-port 49157 --xlator-option testvolume-server.listen-port=49157 --no-mem-accounting
root     23566 19662  0 15:25 pts/0    00:00:00 grep --color=auto snapd

[root@dhcp35-181 ~]# gluster volume status
Status of volume: testvolume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.18:/rhs/brick1/b1            49152     0          Y       8249 
Brick 10.70.35.164:/rhs/brick1/b1           49152     0          Y       22036
Brick 10.70.35.138:/rhs/brick1/b1           49152     0          Y       21901
Brick 10.70.35.181:/rhs/brick1/b1           49152     0          Y       21197
Brick 10.70.35.18:/rhs/brick2/b2            49153     0          Y       8267 
Brick 10.70.35.164:/rhs/brick2/b2           49153     0          Y       22054
Brick 10.70.35.138:/rhs/brick2/b2           49153     0          Y       21919
Brick 10.70.35.181:/rhs/brick2/b2           49153     0          Y       21215
Brick 10.70.35.18:/rhs/brick3/b3            49154     0          Y       8285 
Brick 10.70.35.164:/rhs/brick3/b3           49154     0          Y       22072
Brick 10.70.35.138:/rhs/brick3/b3           49154     0          Y       21937
Brick 10.70.35.181:/rhs/brick3/b3           49154     0          Y       21233
Snapshot Daemon on localhost                49159     0          Y       23841
NFS Server on localhost                     2049      0          Y       23857
Snapshot Daemon on 10.70.35.138             49159     0          Y       24095
NFS Server on 10.70.35.138                  2049      0          Y       24103
Snapshot Daemon on 10.70.35.164             49159     0          Y       24227
NFS Server on 10.70.35.164                  2049      0          Y       24235
Snapshot Daemon on dhcp35-18.lab.eng.blr.re
dhat.com                                    49159     0          Y       10749
NFS Server on dhcp35-18.lab.eng.blr.redhat.
com                                         2049      0          Y       10764
 
Task Status of Volume testvolume
------------------------------------------------------------------------------
There are no active volume tasks
 
Now, after we reset the volume, uss is cleared and snapd gets stopped:

[root@dhcp35-181 ~]# gluster volume reset testvolume
volume reset: success: reset volume successful

[root@dhcp35-181 ~]# gluster volume info | grep uss

[root@dhcp35-181 ~]# gluster volume status
Status of volume: testvolume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.18:/rhs/brick1/b1            49152     0          Y       8249
Brick 10.70.35.164:/rhs/brick1/b1           49152     0          Y       22036
Brick 10.70.35.138:/rhs/brick1/b1           49152     0          Y       21901
Brick 10.70.35.181:/rhs/brick1/b1           49152     0          Y       21197
Brick 10.70.35.18:/rhs/brick2/b2            49153     0          Y       8267
Brick 10.70.35.164:/rhs/brick2/b2           49153     0          Y       22054
Brick 10.70.35.138:/rhs/brick2/b2           49153     0          Y       21919
Brick 10.70.35.181:/rhs/brick2/b2           49153     0          Y       21215
Brick 10.70.35.18:/rhs/brick3/b3            49154     0          Y       8285
Brick 10.70.35.164:/rhs/brick3/b3           49154     0          Y       22072
Brick 10.70.35.138:/rhs/brick3/b3           49154     0          Y       21937
Brick 10.70.35.181:/rhs/brick3/b3           49154     0          Y       21233
NFS Server on localhost                     2049      0          Y       23932
NFS Server on 10.70.35.164                  N/A       N/A        N       N/A
NFS Server on 10.70.35.138                  N/A       N/A        N       N/A
NFS Server on dhcp35-18.lab.eng.blr.redhat.
com                                         N/A       N/A        N       N/A

Task Status of Volume testvolume
------------------------------------------------------------------------------
There are no active volume tasks

[root@dhcp35-181 ~]# ps -aef|grep snapd
root     23614 19662  0 15:26 pts/0    00:00:00 grep --color=auto snapd

Since the earlier issue is no longer reproducible, marking this bug as Verified.
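
The snippet above checks only one node; a quick cluster-wide check that no snapd is left running after the reset could look like the following (the addresses are simply the ones from the status output above, used for illustration):

for n in 10.70.35.181 10.70.35.18 10.70.35.164 10.70.35.138; do
    echo "== $n =="
    ssh "$n" "ps -eaf | grep '[s]napd'"    # expect no output on a fixed build
done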

Comment 8 errata-xmlrpc 2015-10-05 07:08:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html

