Bug 985886 - NFS: nfs mount hangs when glusterd and glusterfsd are killed on the host serving the mount.
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Amar Tumballi
QA Contact: Sudhir D
Depends On:
Blocks:
Reported: 2013-07-18 08:39 EDT by spandura
Modified: 2013-12-18 19:09 EST
CC List: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-18 08:46:16 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description spandura 2013-07-18 08:39:01 EDT
Description of problem:
======================
NFS mount hangs when glusterd and all the brick processes are killed on the host from which the NFS mount is served.

The NFS server process on that host is still running.
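
For reference, this state can be confirmed with something like the following (a sketch: the volume name dis_rep_vol1 is taken from the volume info below, and the gluster CLI command must be run from a node where glusterd is still up, since glusterd was killed on the mount host):

# From a node where glusterd is running: per-node status of the NFS server
gluster volume status dis_rep_vol1 nfs

# On the mount host itself: the NFS server runs as a glusterfs process
# with the gluster/nfs volfile id
ps -ef | grep '[g]luster/nfs'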

Version-Release number of selected component (if applicable):
==============================================================
root@king [Jul-18-2013-18:02:14] >rpm -qa | grep glusterfs-server
glusterfs-server-3.4.0.12rhs.beta4-1.el6rhs.x86_64

root@king [Jul-18-2013-18:02:20] >gluster --version
glusterfs 3.4.0.12rhs.beta4 built on Jul 11 2013 23:37:17

How reproducible:


Steps to Reproduce:
===================
1. Create a 6 x 2 distributed-replicate volume across 4 storage nodes (node1, node2, node3, and node4), with 3 bricks on each storage node. A condensed command sketch of steps 1-9 follows this list.

2. Start the volume. 

3. Set the following volume options to the values : 

cluster.self-heal-daemon: off
cluster.metadata-self-heal: off
cluster.data-self-heal: off
cluster.entry-self-heal: off

4. Create a FUSE mount and an NFS mount. { the NFS mount is served from node1 }

5. Create a directory from both mount points. 

6. "killall glusterd ; killall glusterfsd ; killall glusterfs" on node2 and node4. 

7. Create a file under the directory created in step 5. 

8. On node2 and node4 execute: "service glusterd start"

9. Execute: "killall glusterd ; killall glusterfsd" on node1 and "killall glusterfsd ; killall glusterfs ; killall glusterd" on node3. 
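
For convenience, here is the sequence above condensed into shell commands. This is only a sketch: the volume name and brick paths follow the volume info below, node1-node4 stand in for the actual hosts (king, hicks, luigi, lizzie), and the mount points /mnt/fuse and /mnt/nfs are hypothetical.

# Steps 1-2: create and start a 6 x 2 distributed-replicate volume
# (3 bricks per node; consecutive bricks form the replica pairs)
gluster volume create dis_rep_vol1 replica 2 \
    node1:/rhs/brick1/b0  node2:/rhs/brick1/b1 \
    node1:/rhs/brick1/b2  node2:/rhs/brick1/b3 \
    node1:/rhs/brick1/b4  node2:/rhs/brick1/b5 \
    node3:/rhs/brick1/b6  node4:/rhs/brick1/b7 \
    node3:/rhs/brick1/b8  node4:/rhs/brick1/b9 \
    node3:/rhs/brick1/b10 node4:/rhs/brick1/b11
gluster volume start dis_rep_vol1

# Step 3: disable self-heal
gluster volume set dis_rep_vol1 cluster.self-heal-daemon off
gluster volume set dis_rep_vol1 cluster.metadata-self-heal off
gluster volume set dis_rep_vol1 cluster.data-self-heal off
gluster volume set dis_rep_vol1 cluster.entry-self-heal off

# Step 4: one FUSE mount, one NFS mount (NFS served from node1)
mount -t glusterfs node1:/dis_rep_vol1 /mnt/fuse
mount -t nfs -o vers=3,nolock node1:/dis_rep_vol1 /mnt/nfs

# Steps 5-7: directory from each mount, kill node2/node4, create a file
mkdir /mnt/fuse/dir1 /mnt/nfs/dir2
ssh node2 'killall glusterd ; killall glusterfsd ; killall glusterfs'
ssh node4 'killall glusterd ; killall glusterfsd ; killall glusterfs'
touch /mnt/nfs/dir2/file1

# Step 8: restart glusterd on node2 and node4
ssh node2 'service glusterd start'
ssh node4 'service glusterd start'

# Step 9: kill the remaining gluster processes on node1 and node3
ssh node1 'killall glusterd ; killall glusterfsd'
ssh node3 'killall glusterfsd ; killall glusterfs ; killall glusterd'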

Actual results:
==============
The NFS mount hangs.

The NFS server process on node1 is still running.
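
One way to make the hang observable from the client (again assuming the hypothetical /mnt/nfs mount point from the sketch above):

# I/O on the mount blocks indefinitely; timeout turns the hang
# into a testable failure
timeout 30 ls /mnt/nfs || echo "NFS mount appears hung"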

Expected results:
==================
The NFS mount should not hang.

Additional info:
==================

root@king [Jul-18-2013-17:48:54] >gluster v info
 
Volume Name: dis_rep_vol1
Type: Distributed-Replicate
Volume ID: 6996ec08-c056-417e-b490-e4ab0e3549e8
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: king:/rhs/brick1/b0
Brick2: hicks:/rhs/brick1/b1
Brick3: king:/rhs/brick1/b2
Brick4: hicks:/rhs/brick1/b3
Brick5: king:/rhs/brick1/b4
Brick6: hicks:/rhs/brick1/b5
Brick7: luigi:/rhs/brick1/b6
Brick8: lizzie:/rhs/brick1/b7
Brick9: luigi:/rhs/brick1/b8
Brick10: lizzie:/rhs/brick1/b9
Brick11: luigi:/rhs/brick1/b10
Brick12: lizzie:/rhs/brick1/b11
Options Reconfigured:
cluster.self-heal-daemon: off
cluster.metadata-self-heal: off
cluster.data-self-heal: off
cluster.entry-self-heal: off
Comment 1 spandura 2013-07-18 08:46:16 EDT
This is because the NFS process crashed due to bug 982181: https://bugzilla.redhat.com/show_bug.cgi?id=982181.

Closing this as NOTABUG.
