Bug 991053 - fuse mount continuously reports: "fsync(file_gfid) failed on subvolume <sub_volume_id>"
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
Depends On:
Blocks:
Reported: 2013-08-01 09:25 EDT by spandura
Modified: 2015-12-03 12:22 EST
CC: 2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:22:56 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
SOS Reports (5.63 MB, application/x-gzip)
2013-08-01 09:30 EDT, spandura

Description spandura 2013-08-01 09:25:15 EDT
Description of problem:
========================
In a replicate volume (1 x 2), a brick is replaced by bringing the brick process offline, unmounting, formatting, and remounting the brick directory, and then bringing the brick back online. "heal full" is triggered on the volume to self-heal the files/dirs. The heal completes successfully.

When we write to the file from the mount point, the fuse mount log continuously reports the following warning messages:

[2013-08-01 13:18:41.318791] W [client-rpc-fops.c:4125:client3_3_fsync] 0-vol_rep-client-1:  (0b43ec03-40ff-46de-be0f-496869352c5d) remote_fd is -1. EBADFD
[2013-08-01 13:18:41.318867] W [afr-transaction.c:1466:afr_changelog_fsync_cbk] 0-vol_rep-replicate-0: fsync(0b43ec03-40ff-46de-be0f-496869352c5d) failed on subvolume vol_rep-client-1. Transaction was WRITE
[2013-08-01 13:18:41.318972] W [client-rpc-fops.c:4065:client3_3_flush] 0-vol_rep-client-1:  (0b43ec03-40ff-46de-be0f-496869352c5d) remote_fd is -1. EBADFD
[2013-08-01 13:18:41.322925] W [client-rpc-fops.c:4065:client3_3_flush] 0-vol_rep-client-1:  (0b43ec03-40ff-46de-be0f-496869352c5d) remote_fd is -1. EBADFD
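
To confirm a client is hitting this, the warnings can be counted in the fuse mount log. A minimal sketch, assuming the default client log location /var/log/glusterfs/<mount-point>.log and a mount at /mnt/vol_rep (both assumptions, not taken from this report):

# Count the fsync()/flush() warnings against the healed subvolume.
# The log path is an assumption; substitute the actual fuse mount log.
grep -cE 'remote_fd is -1|failed on subvolume' /var/log/glusterfs/mnt-vol_rep.log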


Version-Release number of selected component (if applicable):
================================================================
root@king [Aug-01-2013-18:53:11] >rpm -qa | grep glusterfs-server
glusterfs-server-3.4.0.14rhs-1.el6rhs.x86_64

root@king [Aug-01-2013-18:53:20] >glusterfs --version
glusterfs 3.4.0.14rhs built on Jul 30 2013 09:09:34

How reproducible:
=================
Often

Steps to Reproduce:
======================
1. Create a replicate volume (1 x 2)

2. Start the volume

3. Create a fuse mount

4. From the fuse mount execute: "exec 5>>test_file" (to close the fd later, use: exec 5>&-)

5. Kill all gluster process on storage_node1 (killall glusterfs glusterfsd glusterd)

6. Get the extended attributes of the brick1 directory on storage_node1 (getfattr -d -e hex -m . <path_to_brick1>) and note the value of "trusted.glusterfs.volume-id"

7. Remove the brick1 directory on storage_node1 (rm -rf <path_to_brick1>)

8. Create the brick1 directory on storage_node1 (mkdir <path_to_brick1>)

9. Set the extended attribute "trusted.glusterfs.volume-id" on brick1 on storage_node1 to the value captured at step 6 (the setfattr invocation is sketched after these steps).

10. Start glusterd on storage_node1. (service glusterd start)

11. Execute: "gluster volume heal <volume_name> full" from any of the storage nodes. This will self-heal the file "test_file" from brick0 to brick1.

12. From the mount point execute: for i in `seq 1 100` ; do echo "Hello World" >&5 ; done
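
For convenience, here are the steps above condensed into a rough shell sketch. The volume name (vol_rep), server names (storage_node0/storage_node1), mount point (/mnt/vol_rep), and brick path (/bricks/brick1) are assumptions filled in where the steps use placeholders; substitute the actual values:

## Client (steps 3-4): mount the volume and hold an fd open on a file.
mount -t glusterfs storage_node0:/vol_rep /mnt/vol_rep
cd /mnt/vol_rep
exec 5>>test_file                          # close later with: exec 5>&-

## storage_node1 (steps 5-10): wipe and re-create the brick1 directory.
killall glusterfs glusterfsd glusterd
getfattr -d -e hex -m . /bricks/brick1     # note trusted.glusterfs.volume-id
rm -rf /bricks/brick1
mkdir /bricks/brick1
setfattr -n trusted.glusterfs.volume-id -v <volume_id_from_step_6> /bricks/brick1
service glusterd start

## Any storage node (step 11): trigger a full self-heal.
gluster volume heal vol_rep full

## Client again (step 12): write through the still-open fd and watch the
## fuse mount log for the fsync()/flush() warnings shown above.
for i in `seq 1 100` ; do echo "Hello World" >&5 ; done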

Expected Results:
===================
Writes from the mount point should succeed after the heal completes, and the fuse mount log should not keep reporting fsync()/flush() failures (remote_fd is -1 / EBADFD) for the replaced brick's subvolume.
Comment 1 spandura 2013-08-01 09:30:05 EDT
Created attachment 781599 [details]
SOS Reports
Comment 3 Vivek Agarwal 2015-12-03 12:22:56 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
