Bug 993569

Summary: Incorrect information in the "error" message of self-heal-completion status
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: spandura
Component: replicate
Assignee: Pranith Kumar K <pkarampu>
Status: CLOSED EOL
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: unspecified
Docs Contact:
Priority: medium
Version: 2.1
CC: nsathyan, rhs-bugs, storage-qa-internal, vagarwal, vbellur
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 17:12:09 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description spandura 2013-08-06 08:13:58 UTC
Description of problem:
======================
In a 1 x 2 replicate volume, even when a file does not exist on one of the bricks, the self-heal-completion status reports metadata self-heal as successful on that file.

Version-Release number of selected component (if applicable):
===============================================================
glusterfs 3.4.0.15rhs built on Aug  4 2013 22:34:17

How reproducible:
=================
Often

Steps to Reproduce:
=====================
1. Create a 1 x 2 replicate volume

2. Start the volume

3. Create a fuse mount

4. From the fuse mount, execute: "exec 5>>test_file" (this opens fd 5 for appending; to close the fd later, use: exec 5>&-)

5. Kill all gluster processes on storage_node1 (killall glusterfs glusterfsd glusterd)

6. Get the extended attributes of the brick1 directory on storage_node1 (getfattr -d -e hex -m . <path_to_brick1>) and note the value of "trusted.glusterfs.volume-id"

7. Remove the brick1 directory on storage_node1 (rm -rf <path_to_brick1>)

8. Create the brick1 directory on storage_node1 (mkdir <path_to_brick1>)

9. Set the extended attribute "trusted.glusterfs.volume-id" on brick1 on storage_node1 to the value captured at step 6.

10. Start glusterd on storage_node1. (service glusterd start)

11. From the mount point, write through the still-open fd: echo "Hello World" >&5. (A consolidated shell sketch of these steps follows below.)
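
The following is a consolidated, untested shell sketch of steps 1-11. The second server name (storage_node2), the brick paths (/bricks/brick1, /bricks/brick2) and the mount point (/mnt/vol_rep) are placeholders chosen for illustration, not values from the original report; substitute your own, and replace <volume-id-hex> with the hex value reported by getfattr in step 6.

# On a storage node: create and start the 1 x 2 replicate volume (steps 1-2)
gluster volume create vol_rep replica 2 storage_node1:/bricks/brick1 storage_node2:/bricks/brick2
gluster volume start vol_rep

# On the client: fuse-mount the volume and open fd 5 on a file (steps 3-4)
mount -t glusterfs storage_node1:/vol_rep /mnt/vol_rep
cd /mnt/vol_rep
exec 5>>test_file

# On storage_node1: kill gluster processes, then wipe and recreate the brick (steps 5-9)
killall glusterfs glusterfsd glusterd
getfattr -d -e hex -m . /bricks/brick1        # note trusted.glusterfs.volume-id
rm -rf /bricks/brick1
mkdir /bricks/brick1
setfattr -n trusted.glusterfs.volume-id -v <volume-id-hex> /bricks/brick1

# On storage_node1: bring glusterd back up (step 10)
service glusterd start

# On the client: write through the still-open fd, then close it (step 11)
echo "Hello World" >&5
exec 5>&-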

Actual results: fuse mount log messages
===========================================
[2013-08-06 07:52:55.816212] E [afr-self-heal-data.c:1453:afr_sh_data_open_cbk] 0-vol_rep-replicate-0: open of /test_file failed on child vol_rep-client-1 (No such file or directory)

[2013-08-06 07:52:55.816273] E [afr-self-heal-common.c:2744:afr_log_self_heal_completion_status] 0-vol_rep-replicate-0:  metadata self heal  is successfully completed, backgroung data self heal  failed, on /test_file

When "/test_file" does not exist on client-1, how can metadata self-heal be reported as successful on that replicate sub-volume?

Comment 2 Vivek Agarwal 2015-12-03 17:12:09 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.