Bug 993692 - client machine reboots when unmounting fuse mount point
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Hardware: Unspecified   OS: Unspecified
Version: unspecified   Severity: high
Assigned To: Bug Updates Notification Mailing List
Depends On:
Reported: 2013-08-06 07:41 EDT by spandura
Modified: 2015-12-03 12:11 EST
CC: 2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2015-12-03 12:11:33 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
SOS Reports (5.88 MB, application/x-gzip)
2013-08-06 08:08 EDT, spandura

Description spandura 2013-08-06 07:41:02 EDT
Description of problem:
The client machine reboots when a fuse mount is unmounted after dbench has been run on that mount. Refer to bug 990479 for the failure in running dbench.

Version-Release number of selected component (if applicable):
glusterfs built on Aug  4 2013 22:34:17

How reproducible:

Steps to Reproduce:
1. Create a replicate volume (1 x 2).

2. Set the open-behind volume option to "on".

3. Create 2 fuse mounts and 1 NFS mount on a client machine.

4. Run "dbench -s -F -S -x --one-byte-write-fix --stat-check 10" on all the mount points simultaneously.

5. Once dbench fails on one mount point, stop dbench on the other mount points.

6. Execute "rm -rf *" from the mount point.

7. Execute "dbench -s -F -S -x --one-byte-write-fix --stat-check 10" only on the mount point on which dbench previously failed.

dbench again fails to execute.

8. Unmount one of the mount points (mount1 was unmounted).
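The reproduction steps above can be sketched as a shell session. The hostnames (server1, server2), brick paths, and mount points below are illustrative placeholders, not taken from this report; the actual setup used hosts king and hicks (see the volume info in Additional info).

```shell
# Sketch of the reproduction, assuming two storage servers reachable as
# server1/server2 with bricks under /rhs/bricks, plus a separate client.

# On a server node: create and start a 1 x 2 replicate volume,
# then enable open-behind as in step 2.
gluster volume create vol_rep replica 2 \
    server1:/rhs/bricks/b0 server2:/rhs/bricks/b1
gluster volume set vol_rep performance.open-behind on
gluster volume start vol_rep

# On the client: two fuse mounts and one NFS mount (step 3).
mkdir -p /mnt/fuse1 /mnt/fuse2 /mnt/nfs1
mount -t glusterfs server1:/vol_rep /mnt/fuse1
mount -t glusterfs server1:/vol_rep /mnt/fuse2
mount -t nfs -o vers=3 server1:/vol_rep /mnt/nfs1

# Step 4: run dbench on all three mount points simultaneously.
for m in /mnt/fuse1 /mnt/fuse2 /mnt/nfs1; do
    (cd "$m" && dbench -s -F -S -x --one-byte-write-fix --stat-check 10) &
done
wait

# Steps 6-8, assuming dbench failed on /mnt/fuse1: clean up, re-run
# dbench there, then unmount -- the unmount is where the client rebooted.
cd /mnt/fuse1 && rm -rf ./*
dbench -s -F -S -x --one-byte-write-fix --stat-check 10
cd / && umount /mnt/fuse1
```

These commands require a working GlusterFS cluster and the dbench package, so they are a sketch of the scenario rather than a standalone script.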

Actual results:
Client node reboots. 

Expected results:
The client node should not reboot.

Additional info:
root@darrel [Aug-06-2013-17:06:24] >uname -a
Linux darrel.lab.eng.blr.redhat.com 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

root@darrel [Aug-06-2013-16:10:53] >cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.4 (Santiago)

root@king [Aug-06-2013-17:07:39] >gluster v info
Volume Name: vol_rep
Type: Replicate
Volume ID: 880a0464-66a7-45f5-a59c-4bba68d39d6d
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: king:/rhs/bricks/b0
Brick2: hicks:/rhs/bricks/b1
Options Reconfigured:
performance.open-behind: on

Note: No cores generated.
Comment 2 spandura 2013-08-06 08:08:07 EDT
Created attachment 783296 [details]
SOS Reports
Comment 3 spandura 2013-08-14 07:06:50 EDT
Able to recreate the same issue on build:
glusterfs built on Aug 14 2013 00:11:42
Comment 4 Vivek Agarwal 2015-12-03 12:11:33 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
