Bug 1233333 - glusterfs-resource-agents - volume - doesn't stop all processes
Summary: glusterfs-resource-agents - volume - doesn't stop all processes
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: scripts
Version: 3.7.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Niels de Vos
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-06-18 16:19 UTC by JohnJerome
Modified: 2017-03-08 11:00 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-03-08 11:00:33 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1233344 0 unspecified CLOSED glusterfs-resource-agents - volume - voldir is not properly set 2021-02-22 00:41:40 UTC

Internal Links: 1233344

Description JohnJerome 2015-06-18 16:19:02 UTC
Description of problem:
With Pacemaker/Corosync/pcs/glusterfs
When enabling a resource with the RA 'ocf:glusterfs:volume', three processes are created.
But when we disable the resource, only one of them is killed.

Version-Release number of selected component (if applicable):
glusterfs-resource-agents-3.7.1-1.el7.noarch.rpm

How reproducible:
Every time

Steps to Reproduce:
1. Create the resource (all prerequisites are OK, i.e. the cluster is operational, the FS has been tested without Pacemaker, and the glusterd resource is created)
pcs resource create gluster_volume ocf:glusterfs:volume volname='gv0' op monitor interval=60s

2. Enable
pcs resource enable gluster_volume

3. Verify processes
# ps -edf|grep -i glusterfs
root     24939     1  0 17:55 ?        00:00:00 /usr/sbin/glusterfsd -s centos71-2 --volfile-id gv0.centos71-2.export-sdb1-brick -p /var/lib/glusterd/vols/gv0/run/centos71-2-export-sdb1-brick.pid -S /var/run/gluster/33545c44468ba9f9288b2ebb4c6a1bba.socket --brick-name /export/sdb1/brick -l /var/log/glusterfs/bricks/export-sdb1-brick.log --xlator-option *-posix.glusterd-uuid=5bfcd8ce-2935-4ea5-b73f-8a217b869fa2 --brick-port 49152 --xlator-option gv0-server.listen-port=49152
root     24958     1  0 17:55 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/gluster/912c297362be7dc78c27d4b7703d516e.socket
root     24965     1  0 17:55 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/7c2c0bb36dfa97e1b4561df9663f5591.socket --xlator-option *replicate*.node-uuid=5bfcd8ce-2935-4ea5-b73f-8a217b869fa2

4. Disable
pcs resource disable gluster_volume

5. Verify processes
# ps -edf|grep -i glusterfs

Actual results:
root     24958     1  0 17:55 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/gluster/912c297362be7dc78c27d4b7703d516e.socket
root     24965     1  0 17:55 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/7c2c0bb36dfa97e1b4561df9663f5591.socket --xlator-option *replicate*.node-uuid=5bfcd8ce-2935-4ea5-b73f-8a217b869fa2

Expected results:
No glusterfs processes

Additional info:
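For context, the prerequisite resources referred to in step 1 (the glusterd daemon resource and the filesystem mount, named gluster_d and gluster_fs later in this report) could have been created roughly as follows; the RA names and parameters below are illustrative assumptions, not the exact commands used on the test system:

pcs resource create gluster_d ocf:glusterfs:glusterd op monitor interval=30s
pcs resource create gluster_fs ocf:heartbeat:Filesystem device='localhost:/gv0' directory='/mnt/gv0' fstype='glusterfs' op monitor interval=20s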

Comment 1 Niels de Vos 2015-06-23 12:29:21 UTC
Could you confirm that there are no started volumes in the Gluster Trusted Storage Pool anymore? The processes that are still running are for the Gluster/NFS service and self-heal-daemon. These processes keep running until the last volume in the Trusted Storage Pool is stopped.
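For reference, one way to check this (assuming the standard gluster CLI output format) is:

# gluster volume info | grep -E 'Volume Name|Status'

Any volume still reported as 'Status: Started' will keep the Gluster/NFS and self-heal daemon processes alive.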

Comment 2 JohnJerome 2015-06-30 13:10:49 UTC
Here is the test again, with 'gluster volume status' output at each step:


1) The Pacemaker resources FS, Volume and Daemon are started:

[root@centos71-2 ~]# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos71-2:/export/sdb1/brick         49152     0          Y       5633
Brick centos71-3:/export/sdb1/brick         49152     0          Y       49180
NFS Server on localhost                     2049      0          Y       5618
Self-heal Daemon on localhost               N/A       N/A        Y       5626
NFS Server on centos71-3                    2049      0          Y       49169
Self-heal Daemon on centos71-3              N/A       N/A        Y       49179

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks



2) We stop the FS service:

[root@centos71-2 ~]# pcs resource disable gluster_fs

[root@centos71-2 ~]# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos71-2:/export/sdb1/brick         49152     0          Y       5633
Brick centos71-3:/export/sdb1/brick         49152     0          Y       49180
NFS Server on localhost                     2049      0          Y       5618
Self-heal Daemon on localhost               N/A       N/A        Y       5626
NFS Server on centos71-3                    2049      0          Y       49169
Self-heal Daemon on centos71-3              N/A       N/A        Y       49179

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks



3) We stop the volume service:

[root@centos71-2 ~]# pcs resource disable gluster_volume

[root@centos71-2 ~]# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick centos71-2:/export/sdb1/brick         N/A       N/A        N       N/A
Brick centos71-3:/export/sdb1/brick         N/A       N/A        N       N/A
NFS Server on localhost                     2049      0          Y       5618
Self-heal Daemon on localhost               N/A       N/A        Y       5626
NFS Server on centos71-3                    2049      0          Y       49169
Self-heal Daemon on centos71-3              N/A       N/A        Y       49179

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks



4) We stop the daemon service:

[root@centos71-2 ~]# pcs resource disable gluster_d

[root@centos71-2 ~]# gluster volume status
Connection failed. Please check if gluster daemon is operational.



5) We check the remaining gluster processes:

[root@centos71-2 ~]# ps -edf|grep -i glusterfs
root      5618     1  0 14:47 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/gluster/912c297362be7dc78c27d4b7703d516e.socket
root      5626     1  0 14:47 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/7c2c0bb36dfa97e1b4561df9663f5591.socket --xlator-option *replicate*.node-uuid=5bfcd8ce-2935-4ea5-b73f-8a217b869fa2
root      6212  1458  0 14:50 pts/0    00:00:00 grep --color=auto -i glusterfs
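
As a manual workaround (not a fix of the RA), the leftover daemons can be stopped by hand using the pid files visible in the command lines above:

# kill $(cat /var/lib/glusterd/nfs/run/nfs.pid)
# kill $(cat /var/lib/glusterd/glustershd/run/glustershd.pid)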





I think the 'disable' action of the RA 'ocf:glusterfs:volume' should behave the same way as the command 'gluster volume stop gv0', and not just kill the main process.
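
A minimal sketch of what such a stop action could look like, assuming the RA sources the usual OCF shell functions and receives the volume name as OCF_RESKEY_volname (the function name and error handling are illustrative, not the actual RA code):

volume_stop() {
    # Stop the whole volume through the gluster CLI instead of killing only
    # the brick process. --mode=script suppresses the interactive
    # "Do you want to continue? (y/n)" confirmation.
    gluster --mode=script volume stop "${OCF_RESKEY_volname}" force || return $OCF_ERR_GENERIC
    # A real agent would also have to keep 'stop' idempotent, i.e. return
    # success when the volume is already stopped.
    return $OCF_SUCCESS
}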

Comment 3 Kaushal 2017-03-08 11:00:33 UTC
This bug is being closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

