Bug 763692 (GLUSTER-1960)

Summary: gluster volume stop does not stop the NFS daemon on all servers
Product: [Community] GlusterFS
Reporter: Vikas Gorur <vikas>
Component: cli
Assignee: Amar Tumballi <amarts>
Status: CLOSED NOTABUG
Severity: medium
Priority: high
Version: 3.1.0
CC: gluster-bugs, vraman
Hardware: x86_64
OS: Linux

Description Vikas Gorur 2010-10-14 19:25:30 UTC
There was another volume running during the tests, so the NFS daemon was still needed on that node. Invalid bug.

Comment 1 Vikas Gorur 2010-10-14 22:08:40 UTC
vikas_test01: 10.1.10.141
vikas_test02: 10.1.10.142

"g" = "gluster"

[root@vikas_test01 ~]# g peer status
No peers present
[root@vikas_test01 ~]# g peer probe 10.1.10.142
Probe successful
[root@vikas_test01 ~]# g volume create qa 10.1.10.141:/e/1 10.1.10.142:/e/1
Creation of volume qa has been successful
[root@vikas_test01 ~]# g volume info

Volume Name: qa
Type: Distribute
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.1.10.141:/e/1
Brick2: 10.1.10.142:/e/1
[root@vikas_test01 ~]# g volume start qa
Starting volume qa has been successful

At this point I can mount the volume over NFS from both 141 and 142.
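
For reference, such a mount would look something like this on a client (the /mnt/qa mount point and the options are illustrative; Gluster's built-in NFS server speaks NFSv3 only):

mount -t nfs -o vers=3,nolock 10.1.10.141:/qa /mnt/qa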

[root@vikas_test01 ~]# g volume stop qa
Stopping volume will make its data inaccessible. Do you want to Continue? (y/n) y
Stopping volume qa has been successful
[root@vikas_test01 ~]# showmount -e localhost
mount clntudp_create: RPC: Program not registered
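
As a hypothetical cross-check (this step was not part of the original session), rpcinfo should likewise show no NFS program registered on 141:

[root@vikas_test01 ~]# rpcinfo -p localhost | grep nfs

The grep should match nothing here, consistent with the showmount failure above.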

However, if we check on 10.1.10.142:

[root@vikas_test02 ~]# showmount -e localhost
Export list for localhost:
/qa *

Mounting over NFS from 10.1.10.142 still works, and all the data is still accessible.
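
One illustrative way to confirm that the gluster NFS daemon survived on this peer (the daemon's exact command line varies by version, so treat this as a sketch):

[root@vikas_test02 ~]# ps ax | grep '[g]luster'

A glusterfs process with an NFS volfile among its arguments would match the export listing above.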

[root@vikas_test02 ~]# g volume info

Volume Name: qa
Type: Distribute
Status: Stopped
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.1.10.141:/e/1
Brick2: 10.1.10.142:/e/1

Shouldn't "volume stop" kill the NFS server on all the peers?
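
Until that happens, a stop-gap is to kill the stray NFS daemon by hand on each remaining peer. A rough sketch, assuming the NFS server runs as a glusterfs process whose command line mentions "nfs" (verify with ps before killing anything):

[root@vikas_test02 ~]# pkill -f 'glusterfs.*nfs'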