Bug 1233333
| Summary: | glusterfs-resource-agents - volume - doesn't stop all processes | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | JohnJerome <jeromep3000> |
| Component: | scripts | Assignee: | Niels de Vos <ndevos> |
| Status: | CLOSED EOL | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.7.1 | CC: | bugs, jeromep3000, ndevos |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-03-08 11:00:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
JohnJerome
2015-06-18 16:19:02 UTC
Could you confirm that there are no started volumes left in the Gluster Trusted Storage Pool? The processes that are still running belong to the Gluster/NFS service and the self-heal daemon; they keep running until the last volume in the Trusted Storage Pool is stopped.

Here is the test, with 'gluster volume status' results:

1) The Pacemaker services FS, Volume and Daemon are started:

```
[root@centos71-2 ~]# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos71-2:/export/sdb1/brick         49152     0          Y       5633
Brick centos71-3:/export/sdb1/brick         49152     0          Y       49180
NFS Server on localhost                     2049      0          Y       5618
Self-heal Daemon on localhost               N/A       N/A        Y       5626
NFS Server on centos71-3                    2049      0          Y       49169
Self-heal Daemon on centos71-3              N/A       N/A        Y       49179

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
```

2) We stop the FS service:

```
[root@centos71-2 ~]# pcs resource disable gluster_fs
[root@centos71-2 ~]# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos71-2:/export/sdb1/brick         49152     0          Y       5633
Brick centos71-3:/export/sdb1/brick         49152     0          Y       49180
NFS Server on localhost                     2049      0          Y       5618
Self-heal Daemon on localhost               N/A       N/A        Y       5626
NFS Server on centos71-3                    2049      0          Y       49169
Self-heal Daemon on centos71-3              N/A       N/A        Y       49179

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
```

3) We stop the volume service:

```
[root@centos71-2 ~]# pcs resource disable gluster_volume
[root@centos71-2 ~]# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos71-2:/export/sdb1/brick         N/A       N/A        N       N/A
Brick centos71-3:/export/sdb1/brick         N/A       N/A        N       N/A
NFS Server on localhost                     2049      0          Y       5618
Self-heal Daemon on localhost               N/A       N/A        Y       5626
NFS Server on centos71-3                    2049      0          Y       49169
Self-heal Daemon on centos71-3              N/A       N/A        Y       49179

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
```

4) We stop the daemon service:

```
[root@centos71-2 ~]# pcs resource disable gluster_d
[root@centos71-2 ~]# gluster volume status
Connection failed. Please check if gluster daemon is operational.
```

5) We check the gluster processes left:

```
[root@centos71-2 ~]# ps -edf|grep -i glusterfs
root      5618     1  0 14:47 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/gluster/912c297362be7dc78c27d4b7703d516e.socket
root      5626     1  0 14:47 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/7c2c0bb36dfa97e1b4561df9663f5591.socket --xlator-option *replicate*.node-uuid=5bfcd8ce-2935-4ea5-b73f-8a217b869fa2
root      6212  1458  0 14:50 pts/0    00:00:00 grep --color=auto -i glusterfs
```

I think the 'disable' action of the RA 'ocf:glusterfs:volume' should behave the same way as the command 'gluster volume stop gv0', not just kill the main process.
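For illustration, here is a minimal sketch of the behaviour being requested: a `volume_stop` action that asks glusterd to stop the volume instead of killing the brick process. This is not the code of the shipped `ocf:glusterfs:volume` agent; the parameter name `volname` (seen by the agent as `OCF_RESKEY_volname`) and the surrounding structure are assumptions made for the example.

```sh
#!/bin/sh
# Hypothetical sketch only -- not the shipped ocf:glusterfs:volume agent.
# Assumes the volume name is passed as the OCF parameter "volname"
# (visible to the agent as OCF_RESKEY_volname).

: "${OCF_ROOT:=/usr/lib/ocf}"
. "${OCF_ROOT}/lib/heartbeat/ocf-shellfuncs"    # ocf_log and the OCF_* return codes

volume_stop() {
    volname="${OCF_RESKEY_volname}"

    # Ask glusterd to stop the volume cleanly rather than killing the
    # glusterfsd brick process directly; once the last started volume in
    # the pool is stopped, glusterd also shuts down the Gluster/NFS
    # server and the self-heal daemon.
    if gluster --mode=script volume stop "${volname}" force; then
        ocf_log info "volume ${volname} stopped"
        return "$OCF_SUCCESS"
    fi

    # Treat an already-stopped volume as success so Pacemaker does not
    # mark the resource as failed on repeated stop operations.
    if ! gluster volume status "${volname}" >/dev/null 2>&1; then
        return "$OCF_SUCCESS"
    fi

    ocf_log err "gluster volume stop ${volname} failed"
    return "$OCF_ERR_GENERIC"
}
```

With a stop action along these lines, `pcs resource disable gluster_volume` on the last started volume in the pool should also take down the NFS server and self-heal daemon listed above, which matches the behaviour the reporter expects.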
This bug is being closed because GlusterFS-3.7 has reached its end of life. Note: this bug is being closed using a script.
No verification has been performed to check whether it still exists in newer releases of GlusterFS. If this bug still exists in a newer GlusterFS release, please reopen it against that release.