Bug 993569 - Incorrect information in the "error" message of self-heal-completion status
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Pranith Kumar K
QA Contact: storage-qa-internal@redhat.com
Depends On:
Blocks:
 
Reported: 2013-08-06 04:13 EDT by spandura
Modified: 2016-09-17 08:11 EDT
CC: 5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:12:09 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description spandura 2013-08-06 04:13:58 EDT
Description of problem:
======================
In a 1 x 2 replicate volume, even when a file does not exist on one of the bricks, the self-heal-completion status reports metadata self-heal as successful for that file.
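
For context, the pending-heal state of a replicate volume can be inspected with the heal-info command; a minimal sketch, assuming the volume name vol_rep taken from the log messages below:

  # List entries that need (or recently underwent) self-heal, per brick.
  gluster volume heal vol_rep info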

Version-Release number of selected component (if applicable):
===============================================================
glusterfs 3.4.0.15rhs built on Aug  4 2013 22:34:17

How reproducible:
=================
Often

Steps to Reproduce:
=====================
1. Create a 1 x 2 replicate volume

2. Start the volume

3. Create a fuse mount

4. From the fuse mount execute: "exec 5>>test_file" (to close the fd later, use: exec 5>&-)

5. Kill all gluster processes on storage_node1 (killall glusterfs glusterfsd glusterd)

6. Get the extended attributes of the brick1 directory on storage_node1 (getfattr -d -e hex -m . <path_to_brick1>)

7. Remove the brick1 directory on storage_node1 (rm -rf <path_to_brick1>)

8. Create the brick1 directory on storage_node1 (mkdir <path_to_brick1>)

9. Set the extended attribute "trusted.glusterfs.volume-id" on brick1 on storage_node1 to the value captured at step 6.

10. Start glusterd on storage_node1. (service glusterd start)

11. From the mount point: echo "Hello World" >&5 (a consolidated shell sketch of these steps follows below).
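
The steps above, condensed into a shell sketch for convenience; host names, brick paths, the mount point, and the volume name vol_rep are placeholders, and each block must be run on the node named in its comment:

  # Storage node: create and start the 1 x 2 replicate volume (steps 1-2).
  gluster volume create vol_rep replica 2 storage_node1:/bricks/brick1 storage_node2:/bricks/brick2
  gluster volume start vol_rep

  # Client: fuse-mount the volume and hold fd 5 open on a test file (steps 3-4).
  mount -t glusterfs storage_node1:/vol_rep /mnt/vol_rep
  cd /mnt/vol_rep
  exec 5>>test_file

  # storage_node1: kill gluster processes, note the volume-id xattr, recreate the brick (steps 5-10).
  killall glusterfs glusterfsd glusterd
  getfattr -d -e hex -m . /bricks/brick1      # note trusted.glusterfs.volume-id
  rm -rf /bricks/brick1
  mkdir /bricks/brick1
  setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-getfattr> /bricks/brick1
  service glusterd start

  # Client: write through the still-open fd (step 11), then close it.
  echo "Hello World" >&5
  exec 5>&-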

Actual results: fuse mount log messages
===========================================
[2013-08-06 07:52:55.816212] E [afr-self-heal-data.c:1453:afr_sh_data_open_cbk] 0-vol_rep-replicate-0: open of /test_file failed on child vol_rep-client-1 (No such file or directory)

[2013-08-06 07:52:55.816273] E [afr-self-heal-common.c:2744:afr_log_self_heal_completion_status] 0-vol_rep-replicate-0:  metadata self heal  is successfully completed, backgroung data self heal  failed, on /test_file

When "/test_file" does not exist on client-1, how can metadata self-heal be reported as successfully completed on that replicate sub-volume?
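
One way to cross-check the log message against the on-disk state (brick paths are placeholders; brick2 stands for the surviving replica):

  # storage_node1: the recreated brick should not contain the file at all.
  ls -l /bricks/brick1/test_file        # expected: No such file or directory
  # storage_node2: the surviving copy; AFR changelog xattrs mark any pending heal.
  ls -l /bricks/brick2/test_file
  getfattr -d -e hex -m trusted.afr /bricks/brick2/test_file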
Comment 2 Vivek Agarwal 2015-12-03 12:12:09 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
