Bug 1178100
Summary: | [USS]: gluster volume reset <vol-name>, resets the uss configured option but snapd process continues to run | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Rahul Hinduja <rhinduja>
Component: | snapshot | Assignee: | Mohammed Rafi KC <rkavunga>
Status: | CLOSED ERRATA | QA Contact: | storage-qa-internal <storage-qa-internal>
Severity: | medium | Docs Contact: |
Priority: | unspecified | |
Version: | rhgs-3.0 | CC: | asengupt, ashah, asrivast, nsathyan, rhs-bugs, rkavunga, sraj, storage-qa-internal
Target Milestone: | --- | Keywords: | Triaged, ZStream
Target Release: | RHGS 3.1.1 | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | SNAPSHOT | |
Fixed In Version: | glusterfs-3.7.1-12 | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | |
: | 1209123 (view as bug list) | Environment: |
Last Closed: | 2015-10-05 07:08:03 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1209123, 1245926, 1251815 | |
Description
Rahul Hinduja 2015-01-02 10:02:34 UTC
upstream patch http://review.gluster.org/#/c/10138/

Verified with the glusterfs-3.7.1-12 build and it is working as expected: the snapd process gets killed when we do a `gluster volume reset <vol-name>`. Snippets below:

```
[root@dhcp35-181 ~]# gluster volume info | grep uss
[root@dhcp35-181 ~]# gluster volume set testvolume features.uss enable
volume set: success
[root@dhcp35-181 ~]# gluster volume info | grep uss
features.uss: enable
[root@dhcp35-181 ~]# ps -aef|grep snapd
root     23522     1  0 15:24 ?        00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id snapd/testvolume -p /var/lib/glusterd/vols/testvolume/run/testvolume-snapd.pid -l /var/log/glusterfs/snaps/testvolume/snapd.log --brick-name snapd-testvolume -S /var/run/gluster/9ffa63ccf61f4d11fb9fee6cc646d9eb.socket --brick-port 49157 --xlator-option testvolume-server.listen-port=49157 --no-mem-accounting
root     23566 19662  0 15:25 pts/0    00:00:00 grep --color=auto snapd
[root@dhcp35-181 ~]# gluster volume status
Status of volume: testvolume
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.18:/rhs/brick1/b1                49152     0          Y       8249
Brick 10.70.35.164:/rhs/brick1/b1               49152     0          Y       22036
Brick 10.70.35.138:/rhs/brick1/b1               49152     0          Y       21901
Brick 10.70.35.181:/rhs/brick1/b1               49152     0          Y       21197
Brick 10.70.35.18:/rhs/brick2/b2                49153     0          Y       8267
Brick 10.70.35.164:/rhs/brick2/b2               49153     0          Y       22054
Brick 10.70.35.138:/rhs/brick2/b2               49153     0          Y       21919
Brick 10.70.35.181:/rhs/brick2/b2               49153     0          Y       21215
Brick 10.70.35.18:/rhs/brick3/b3                49154     0          Y       8285
Brick 10.70.35.164:/rhs/brick3/b3               49154     0          Y       22072
Brick 10.70.35.138:/rhs/brick3/b3               49154     0          Y       21937
Brick 10.70.35.181:/rhs/brick3/b3               49154     0          Y       21233
Snapshot Daemon on localhost                    49159     0          Y       23841
NFS Server on localhost                         2049      0          Y       23857
Snapshot Daemon on 10.70.35.138                 49159     0          Y       24095
NFS Server on 10.70.35.138                      2049      0          Y       24103
Snapshot Daemon on 10.70.35.164                 49159     0          Y       24227
NFS Server on 10.70.35.164                      2049      0          Y       24235
Snapshot Daemon on dhcp35-18.lab.eng.blr.redhat.com  49159  0        Y       10749
NFS Server on dhcp35-18.lab.eng.blr.redhat.com       2049   0        Y       10764

Task Status of Volume testvolume
------------------------------------------------------------------------------
There are no active volume tasks
```

After resetting the volume, the uss option is cleared and snapd is stopped:

```
[root@dhcp35-181 ~]# gluster volume reset testvolume
volume reset: success: reset volume successful
[root@dhcp35-181 ~]# gluster volume info | grep uss
[root@dhcp35-181 ~]# gluster volume status
Status of volume: testvolume
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.18:/rhs/brick1/b1                49152     0          Y       8249
Brick 10.70.35.164:/rhs/brick1/b1               49152     0          Y       22036
Brick 10.70.35.138:/rhs/brick1/b1               49152     0          Y       21901
Brick 10.70.35.181:/rhs/brick1/b1               49152     0          Y       21197
Brick 10.70.35.18:/rhs/brick2/b2                49153     0          Y       8267
Brick 10.70.35.164:/rhs/brick2/b2               49153     0          Y       22054
Brick 10.70.35.138:/rhs/brick2/b2               49153     0          Y       21919
Brick 10.70.35.181:/rhs/brick2/b2               49153     0          Y       21215
Brick 10.70.35.18:/rhs/brick3/b3                49154     0          Y       8285
Brick 10.70.35.164:/rhs/brick3/b3               49154     0          Y       22072
Brick 10.70.35.138:/rhs/brick3/b3               49154     0          Y       21937
Brick 10.70.35.181:/rhs/brick3/b3               49154     0          Y       21233
NFS Server on localhost                         2049      0          Y       23932
NFS Server on 10.70.35.164                      N/A       N/A        N       N/A
NFS Server on 10.70.35.138                      N/A       N/A        N       N/A
NFS Server on dhcp35-18.lab.eng.blr.redhat.com  N/A       N/A        N       N/A

Task Status of Volume testvolume
------------------------------------------------------------------------------
There are no active volume tasks

[root@dhcp35-181 ~]# ps -aef|grep snapd
root     23614 19662  0 15:26 pts/0    00:00:00 grep --color=auto snapd
```

Since the earlier issue is no longer reproducible, this bug is being marked as Verified.
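For regression runs, the same verification can be scripted. Below is a minimal bash sketch of the steps shown above; the volume name `testvolume` comes from the snippets, while the sleep intervals and the `pgrep` pattern are illustrative assumptions rather than commands from the original report.

```bash
#!/bin/bash
# Minimal sketch of the verification steps above, assuming a volume named
# "testvolume" already exists and the script runs as root on a cluster node.
# The sleep intervals are arbitrary; they only give glusterd time to spawn
# or tear down the snapd daemon.
set -e
VOL=testvolume

# Enable USS; glusterd should start the snapd daemon for the volume.
gluster volume set "$VOL" features.uss enable
sleep 5
if pgrep -f "volfile-id snapd/$VOL" > /dev/null; then
    echo "snapd is running after enabling features.uss (expected)"
else
    echo "snapd did not start after enabling features.uss" >&2
fi

# Reset all volume options; with the fix, snapd should be stopped as well.
gluster volume reset "$VOL"
sleep 5
if pgrep -f "volfile-id snapd/$VOL" > /dev/null; then
    echo "BUG: snapd still running after volume reset" >&2
else
    echo "snapd stopped after volume reset (expected)"
fi
```

The `pgrep` pattern keys on the snapd volfile-id rather than a process name, since snapd runs as a glusterfsd instance (see the ps output above).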
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html