Bug 882797 - Files are not self-healed by self-heal daemon proactively when distribute volume is changed to distribute-replicate volume
Summary: Files are not self-healed by self-heal daemon proactively when distribute volume is changed to distribute-replicate volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Divya
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-12-03 06:45 UTC by spandura
Modified: 2013-06-21 10:26 UTC
CC: 6 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
Cause: No self-heal information is maintained anywhere because the volume is plain distribute. Consequence: When the distribute volume is changed to a distribute-replicate volume, files are not self-healed proactively by the self-heal daemon, because the index xlator does not have the necessary information in its index. Workaround (if any): The user must explicitly perform one of the following: 1. Run "gluster volume heal <volume_name> full" on one of the storage nodes. 2. Run "find | xargs stat" from the mount point. Result: Until one of the steps above is performed, the bricks may not replicate all the files; files are healed only when they are accessed from the mount point.
Clone Of:
Environment:
Last Closed: 2013-06-13 07:10:11 UTC
Embargoed:



Description spandura 2012-12-03 06:45:13 UTC
Description of problem:
==========================
Files are not self-healed by the self-heal daemon when a distribute volume is changed to a distribute-replicate volume.

When add-brick is performed on a distribute volume to change its type to distribute-replicate, self-heal must be initiated explicitly (see the sketch after this list), either by executing:

1. "gluster volume heal <volume_name> full" command on one of the storage nodes.  

Or

2. "find | xargs stat" from mount point 


Version-Release number of selected component (if applicable):
===============================================================
[12/03/12 - 11:29:24 root@flea ~]# rpm -qa | grep gluster
glusterfs-3.3.0.5rhs-38.el6rhs.x86_64

[12/03/12 - 11:29:20 root@flea ~]# glusterfs --version
glusterfs 3.3.0.5rhs built on Nov 15 2012 01:30:13


How reproducible:
======================
Often

Steps to Reproduce:
=====================
1. Create a distribute volume with 2 bricks. Start the volume.

2. Create a FUSE mount and create dirs/files from the mount point.

3. Add bricks to the volume to change the volume type to distribute-replicate with replica count 2, as sketched below.
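A minimal shell sketch of these steps, assuming two storage nodes node1 and node2 with brick paths /bricks/b1 through /bricks/b4 (all names hypothetical):

  # 1. Create and start a 2-brick distribute volume.
  gluster volume create testvol node1:/bricks/b1 node2:/bricks/b2
  gluster volume start testvol

  # 2. Mount over FUSE and create dirs/files from the mount point.
  mount -t glusterfs node1:/testvol /mnt/testvol
  mkdir /mnt/testvol/dir1
  for i in $(seq 1 100); do echo data > /mnt/testvol/dir1/file$i; done

  # 3. Add two bricks with "replica 2", converting the volume to
  #    distribute-replicate (2 x 2); each original brick gains a replica.
  gluster volume add-brick testvol replica 2 node1:/bricks/b3 node2:/bricks/b4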

  
Actual results:
=================
The self-heal daemon does not trigger self-heal of the files to the newly added bricks.
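One way to observe this, assuming the hypothetical names from the sketch above and that "gluster volume heal <volname> info" is available in this release:

  # Without an explicit heal, the newly added bricks stay empty until
  # files are accessed from the mount point.
  gluster volume heal testvol info
  ls -lR /bricks/b3 /bricks/b4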

Comment 2 Amar Tumballi 2012-12-03 06:52:32 UTC
Will be looking into this to understand the problem; meanwhile, thinking whether this is almost the same situation as having/creating a replicate volume where one brick has existing data and the other has none, but the self-heal xattrs are missing entirely.

Comment 3 Amar Tumballi 2012-12-21 07:40:20 UTC
Pranith, assigning it to you to have a look (see comment #2); once your analysis is done, don't hesitate to reassign it back to me.

Comment 4 Pranith Kumar K 2013-02-22 10:07:46 UTC
Divya,
    I have provided the necessary doc text. Let me know if you need any more information.

Pranith

Comment 5 Divya 2013-06-13 07:10:11 UTC
Documented as a Known Issue; it is available at: http://documentation-devel.engineering.redhat.com/docs/en-US/Red_Hat_Storage/2.0/html-single/2.0_Update_4_Release_Notes/index.html. Hence, closing the bug.

