Description of problem:
=======================
The client machine reboots when we unmount a fuse mount after running dbench on the fuse mount. Refer to bug 990479 for the failure in running dbench.

Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.4.0.15rhs built on Aug 4 2013 22:34:17

How reproducible:
=================
Often

Steps to Reproduce:
===================
1. Create a replicate volume (1 x 2).
2. Set the open-behind volume option to "on".
3. Create 2 fuse mounts and 1 nfs mount on a client machine.
4. Run "dbench -s -F -S -x --one-byte-write-fix --stat-check 10" on all the mount points simultaneously.
5. Once dbench fails, stop dbench on the other mounts.
6. Execute "rm -rf *" from the mount point.
7. Execute "dbench -s -F -S -x --one-byte-write-fix --stat-check 10" only on the mount point on which dbench failed. dbench fails to execute.
8. Unmount one of the mount points (unmounted mount1).
(A command sketch of these steps is given at the end of this comment.)

Actual results:
===============
The client node reboots.

Expected results:
=================
The client node should not reboot.

Additional info:
================
root@darrel [Aug-06-2013-17:06:24] >uname -a
Linux darrel.lab.eng.blr.redhat.com 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

root@darrel [Aug-06-2013-16:10:53] >cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)

root@king [Aug-06-2013-17:07:39] >gluster v info

Volume Name: vol_rep
Type: Replicate
Volume ID: 880a0464-66a7-45f5-a59c-4bba68d39d6d
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: king:/rhs/bricks/b0
Brick2: hicks:/rhs/bricks/b1
Options Reconfigured:
performance.open-behind: on

Note: No cores generated.
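For reference, a minimal command sketch of the reproduction steps above. The volume name, brick hosts/paths, and the performance.open-behind setting are taken from the "gluster v info" output; the mount-point directories (/mnt/fuse1, /mnt/fuse2, /mnt/nfs1) and the NFS mount options are assumptions for illustration, not the exact paths used in this run.

# On the server (king): create, configure, and start the 1x2 replicate volume
gluster volume create vol_rep replica 2 king:/rhs/bricks/b0 hicks:/rhs/bricks/b1
gluster volume set vol_rep performance.open-behind on
gluster volume start vol_rep

# On the client (darrel): two fuse mounts and one nfs mount
# (mount paths are placeholders)
mkdir -p /mnt/fuse1 /mnt/fuse2 /mnt/nfs1
mount -t glusterfs king:/vol_rep /mnt/fuse1
mount -t glusterfs king:/vol_rep /mnt/fuse2
mount -t nfs -o vers=3 king:/vol_rep /mnt/nfs1

# Steps 4-7: run dbench on each mount point in parallel, e.g.
cd /mnt/fuse1 && dbench -s -F -S -x --one-byte-write-fix --stat-check 10

# Step 8: unmounting one of the fuse mounts is when the client reboots
umount /mnt/fuse1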
Created attachment 783296: SOS Reports
Able to recreate the same issue on build:
=========================================
glusterfs 3.4.0.19rhs built on Aug 14 2013 00:11:42
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.