Red Hat Bugzilla – Bug 993569
Incorrect information in the "error" message of self-heal-completion status
Last modified: 2016-09-17 08:11:18 EDT
Description of problem:
In a 1 x 2 replicate volume, even when a file doesn't exist on one of the bricks, the self-heal-completion status reports metadata self-heal as successful on that file.
Version-Release number of selected component (if applicable):
glusterfs 188.8.131.52rhs built on Aug 4 2013 22:34:17
Steps to Reproduce:
1. Create replica volume 1 x 2
2. Start the volume
3. Create a fuse mount
4. From the fuse mount, execute: "exec 5>>test_file" (to close the fd later, use: exec 5>&-)
5. Kill all gluster processes on storage_node1 (killall glusterfs glusterfsd glusterd)
6. Get the extended attributes of the brick1 directory on storage_node1 (getfattr -d -e hex -m . <path_to_brick1>)
7. Remove the brick1 directory on storage_node1(rm -rf <path_to_brick1>)
8. Create the brick1 directory on storage_node1(mkdir <path_to_brick1>)
9. Set the extended attribute "trusted.glusterfs.volume-id" on brick1 of storage_node1 to the value captured at step 6.
10. Start glusterd on storage_node1. (service glusterd start)
11. echo "Hello World" >&5 from mount point.
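The fd trick in steps 4 and 11 can be sketched in isolation: the shell opens a write fd to the file and keeps it open while the bricks are manipulated, then writes through that same fd later. A minimal sketch, using a local temp file instead of the fuse mount (the filename is an assumption for illustration):

```shell
tmpfile=$(mktemp)         # stand-in for test_file on the fuse mount

exec 5>>"$tmpfile"        # step 4: open fd 5 for appending and hold it open
echo "Hello World" >&5    # step 11: write through the still-open fd
exec 5>&-                 # close fd 5 when done (note the syntax: 5>&-)

result=$(cat "$tmpfile")
echo "$result"
rm -f "$tmpfile"
```

Because the fd stays open across steps 5-10, the client still holds an open handle to the file when self-heal is triggered by the write, which is what exposes the misleading completion status.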
Actual results: fuse mount log messages
[2013-08-06 07:52:55.816212] E [afr-self-heal-data.c:1453:afr_sh_data_open_cbk] 0-vol_rep-replicate-0: open of /test_file failed on child vol_rep-client-1 (No such file or directory)
[2013-08-06 07:52:55.816273] E [afr-self-heal-common.c:2744:afr_log_self_heal_completion_status] 0-vol_rep-replicate-0: metadata self heal is successfully completed, backgroung data self heal failed, on /test_file
When "/test_file" does not exist on client-1, how can metadata self-heal be reported as successful on that replicate sub-volume?
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/
If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.