Bug 993692 - client machine reboots when unmounting fuse mount point
Summary: client machine reboots when unmounting fuse mount point
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: spandura
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-06 11:41 UTC by spandura
Modified: 2015-12-03 17:11 UTC
CC: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:11:33 UTC
Embargoed:


Attachments
SOS Reports (5.88 MB, application/x-gzip)
2013-08-06 12:08 UTC, spandura

Description spandura 2013-08-06 11:41:02 UTC
Description of problem:
=======================
The client machine reboots when we unmount a fuse mount point after running dbench on it. Refer to bug 990479 for the dbench failure.

Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.4.0.15rhs built on Aug  4 2013 22:34:17


How reproducible:
================
Often

Steps to Reproduce:
======================
1. Create a replicate volume (1 x 2).

2. Set the open-behind volume option to "on".

3. Create 2 fuse mounts and 1 NFS mount on a client machine.

4. Run "dbench -s -F -S -x --one-byte-write-fix --stat-check 10" on all the mount points simultaneously.

5. Once dbench fails on one mount point, stop dbench on the other mount points.

6. Execute "rm -rf *" from the mount point.

7. Execute "dbench -s -F -S -x --one-byte-write-fix --stat-check 10" only on the mount point on which dbench failed.

dbench again fails to execute.

8. Unmount one of the mount points (mount1 was unmounted). A shell sketch of these steps follows.
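
A minimal shell sketch of the above sequence, using the brick layout from the "gluster v info" output below; the client-side mount directories (/mnt/fuse1, /mnt/fuse2, /mnt/nfs1) are placeholder paths, and the dbench invocation is copied from the steps above:

# Steps 1-2: create the 1 x 2 replicate volume and enable open-behind (on a server node)
gluster volume create vol_rep replica 2 king:/rhs/bricks/b0 hicks:/rhs/bricks/b1
gluster volume set vol_rep performance.open-behind on
gluster volume start vol_rep

# Step 3: on the client, two fuse mounts and one NFS mount (placeholder directories)
mkdir -p /mnt/fuse1 /mnt/fuse2 /mnt/nfs1
mount -t glusterfs king:/vol_rep /mnt/fuse1
mount -t glusterfs king:/vol_rep /mnt/fuse2
mount -t nfs -o vers=3 king:/vol_rep /mnt/nfs1

# Step 4: run dbench on all mount points simultaneously
for m in /mnt/fuse1 /mnt/fuse2 /mnt/nfs1; do
    (cd "$m" && dbench -s -F -S -x --one-byte-write-fix --stat-check 10) &
done
wait

# Steps 5-7: after dbench fails on one mount point, stop the others, then on the
# failed mount point run: rm -rf * ; dbench -s -F -S -x --one-byte-write-fix --stat-check 10

# Step 8: unmounting one of the fuse mounts then triggers the client reboot
umount /mnt/fuse1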

Actual results:
===============
Client node reboots. 

Expected results:
=================
client node shouldn't reboot

Additional info:
=================
root@darrel [Aug-06-2013-17:06:24] >uname -a
Linux darrel.lab.eng.blr.redhat.com 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

root@darrel [Aug-06-2013-16:10:53] >cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.4 (Santiago)

root@king [Aug-06-2013-17:07:39] >gluster v info
 
Volume Name: vol_rep
Type: Replicate
Volume ID: 880a0464-66a7-45f5-a59c-4bba68d39d6d
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: king:/rhs/bricks/b0
Brick2: hicks:/rhs/bricks/b1
Options Reconfigured:
performance.open-behind: on


Note: No cores generated.
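
Since no cores were generated, a quick way to confirm on the client (after it comes back up) whether the reboot was a kernel panic is to check the reboot history, the previous boot's syslog, and any kdump crash directory. A sketch, assuming the stock RHEL 6 log path (/var/log/messages) and kdump crash directory (/var/crash):

# Reboot/shutdown history and any panic or oops messages from the previous boot
last -x -F reboot shutdown | head
grep -iE 'panic|oops|BUG:' /var/log/messages*
# If kdump is configured, a vmcore for the crash would land under /var/crash/
service kdump status
ls -l /var/crash/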

Comment 2 spandura 2013-08-06 12:08:07 UTC
Created attachment 783296 [details]
SOS Reports

Comment 3 spandura 2013-08-14 11:06:50 UTC
Able to recreate the same issue on build:
=========================================
glusterfs 3.4.0.19rhs built on Aug 14 2013 00:11:42

Comment 4 Vivek Agarwal 2015-12-03 17:11:33 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release for which you requested this review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

