Description of problem:
==========================
Files are not self-healed by the self-heal daemon when a distribute volume is converted to a distribute-replicate volume. When add-brick is performed on a distribute volume to change the volume type to distribute-replicate, one has to explicitly initiate self-heal by executing either:

1. "gluster volume heal <volume_name> full" on one of the storage nodes, or
2. "find | xargs stat" from the mount point.

Version-Release number of selected component (if applicable):
===============================================================
[12/03/12 - 11:29:24 root@flea ~]# rpm -qa | grep gluster
glusterfs-3.3.0.5rhs-38.el6rhs.x86_64

[12/03/12 - 11:29:20 root@flea ~]# glusterfs --version
glusterfs 3.3.0.5rhs built on Nov 15 2012 01:30:13

How reproducible:
======================
Often

Steps to Reproduce:
=====================
1. Create a distribute volume with 2 bricks. Start the volume.
2. Create a FUSE mount and create dirs/files from the mount point.
3. Add bricks to the volume to change the volume type to distribute-replicate with replica count 2.

Actual results:
=================
The self-heal daemon does not trigger self-heal to copy files onto the newly added bricks.
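The manual workarounds above can be sketched as a shell session. This is illustrative only; the volume name "testvol" and mount path "/mnt/testvol" are assumptions, not values from this report.

```
# Workaround 1: from any storage node, force a full self-heal crawl
# after the add-brick (volume name "testvol" is assumed).
gluster volume heal testvol full

# Optionally inspect which entries still need healing:
gluster volume heal testvol info

# Workaround 2: from the FUSE mount point (assumed /mnt/testvol),
# stat every file so the lookups trigger self-heal onto the new bricks.
find /mnt/testvol | xargs stat > /dev/null
```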
Will be looking into this to understand the problem; meanwhile, I am wondering if this is _almost_ the same situation as having/creating a replicate volume where one brick has existing data, the other has no data, and the self-heal xattrs are missing entirely.
Pranith, assigning it to you to have a look (see comment #2). Once your analysis is done, don't hesitate to reassign it back to me.
Divya, I have provided the necessary doc text. Let me know if you need any more information. Pranith
Documented as a Known Issue, available at: http://documentation-devel.engineering.redhat.com/docs/en-US/Red_Hat_Storage/2.0/html-single/2.0_Update_4_Release_Notes/index.html. Hence, closing the bug.