There was another volume still running during the tests, which kept the NFS server registered. Invalid bug.
vikas_test01: 10.1.10.141
vikas_test02: 10.1.10.142
"g" = "gluster"

[root@vikas_test01 ~]# g peer status
No peers present
[root@vikas_test01 ~]# g peer probe 10.1.10.142
Probe successful
[root@vikas_test01 ~]# g volume create qa 10.1.10.141:/e/1 10.1.10.142:/e/1
Creation of volume qa has been successful
[root@vikas_test01 ~]# g volume info

Volume Name: qa
Type: Distribute
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.1.10.141:/e/1
Brick2: 10.1.10.142:/e/1

[root@vikas_test01 ~]# g volume start qa
Starting volume qa has been successful

At this point I can mount the volume from both 141 and 142.

[root@vikas_test01 ~]# g volume stop qa
Stopping volume will make its data inaccessible. Do you want to Continue? (y/n) y
Stopping volume qa has been successful
[root@vikas_test01 ~]# showmount -e localhost
mount clntudp_create: RPC: Program not registered

However, if we check on 10.1.10.142:

[root@vikas_test02 ~]# showmount -e localhost
Export list for localhost:
/qa *

Mounting from 10.1.10.142 still works and all the data is still accessible.

[root@vikas_test02 ~]# g volume info

Volume Name: qa
Type: Distribute
Status: Stopped
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.1.10.141:/e/1
Brick2: 10.1.10.142:/e/1

Shouldn't "volume stop" kill the NFS server on all the peers?
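Given the resolution above, here is a sketch (not from the original report) of how the diagnosis could be confirmed: Gluster's built-in NFS server stays registered as long as any started volume remains, so stopping one volume does not unregister it on peers that still serve another volume. The volume name "qa2" below is a hypothetical stand-in for whatever other volume was running during the tests.

[root@vikas_test02 ~]# g volume info             # list every volume; look for any other one with Status: Started
[root@vikas_test02 ~]# g volume stop qa2         # stop the remaining started volume ("qa2" is a placeholder name)
[root@vikas_test02 ~]# showmount -e localhost    # with no volume started, the NFS server should now be gone:
                                                 #   mount clntudp_create: RPC: Program not registered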