Bug 993569 - Incorrect information in the "error" message of self-heal-completion status
Summary: Incorrect information in the "error" message of self-heal-completion status
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Pranith Kumar K
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-06 08:13 UTC by spandura
Modified: 2016-09-17 12:11 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:12:09 UTC
Embargoed:



Description spandura 2013-08-06 08:13:58 UTC
Description of problem:
======================
In a 1 x 2 replicate volume, even when a file does not exist on one of the bricks, the self-heal-completion status reports metadata self-heal as successful on that file.

Version-Release number of selected component (if applicable):
===============================================================
glusterfs 3.4.0.15rhs built on Aug  4 2013 22:34:17

How reproducible:
=================
Often

Steps to Reproduce:
=====================
1. Create a 1 x 2 replicate volume

2. Start the volume

3. Create a fuse mount

4. From the fuse mount, execute: "exec 5>>test_file" (to close the fd later, use: exec 5>&-)

5. Kill all gluster processes on storage_node1 (killall glusterfs glusterfsd glusterd)

6. Get the extended attributes of the brick1 directory on storage_node1 (getfattr -d -e hex -m . <path_to_brick1>)

7. Remove the brick1 directory on storage_node1 (rm -rf <path_to_brick1>)

8. Create the brick1 directory on storage_node1 (mkdir <path_to_brick1>)

9. Set the extended attribute "trusted.glusterfs.volume-id" on brick1 on storage_node1 to the value captured at step 6.

10. Start glusterd on storage_node1 (service glusterd start)

11. From the mount point, execute: echo "Hello World" >&5 (the full sequence is sketched as a script after this list)
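
For convenience, the steps above condense into the following shell sketch. The brick directories (/bricks/brick1), server names and the mount point (/mnt/vol_rep) are placeholders not taken from this report; only the volume name vol_rep appears in the logs below. Substitute the actual values for your setup.

# On a storage node: create and start the 1 x 2 replicate volume
gluster volume create vol_rep replica 2 storage_node1:/bricks/brick1 storage_node2:/bricks/brick1
gluster volume start vol_rep

# On the client: mount the volume and open fd 5 on a test file
mount -t glusterfs storage_node1:/vol_rep /mnt/vol_rep
cd /mnt/vol_rep
exec 5>>test_file                      # close later with: exec 5>&-

# On storage_node1: kill gluster, record the volume-id xattr, recreate the brick
killall glusterfs glusterfsd glusterd
getfattr -d -e hex -m . /bricks/brick1          # note trusted.glusterfs.volume-id
rm -rf /bricks/brick1
mkdir /bricks/brick1
setfattr -n trusted.glusterfs.volume-id -v <volume-id-from-step-6> /bricks/brick1
service glusterd start

# On the client: write through the still-open fd and watch the fuse mount log
echo "Hello World" >&5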

Actual results: fuse mount log messages
===========================================
[2013-08-06 07:52:55.816212] E [afr-self-heal-data.c:1453:afr_sh_data_open_cbk] 0-vol_rep-replicate-0: open of /test_file failed on child vol_rep-client-1 (No such file or directory)

[2013-08-06 07:52:55.816273] E [afr-self-heal-common.c:2744:afr_log_self_heal_completion_status] 0-vol_rep-replicate-0:  metadata self heal  is successfully completed, backgroung data self heal  failed, on /test_file

When "/test_file" does not exist on client-1, how can metadata self-heal be reported as successful on that replicate sub-volume?
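
For triage, one hedged way to confirm the inconsistency after step 11 is sketched below; the brick path /bricks/brick1 is again a placeholder, not taken from the report.

# On storage_node1 (the recreated brick) -- the file is expected to be absent:
ls -l /bricks/brick1/test_file

# On storage_node2 (the surviving brick) -- dump the AFR changelog xattrs
# (trusted.afr.*) that replicate uses to track pending heals:
getfattr -d -e hex -m trusted.afr /bricks/brick1/test_file

# From any storage node -- list entries the volume still considers unhealed:
gluster volume heal vol_rep info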

Comment 2 Vivek Agarwal 2015-12-03 17:12:09 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

