Bug 860548

Summary: "gluster volume heal <vol_name> info healed" command output should list the files that are really healed and shouldn't list the files that were attempted to self-heal
Product: Red Hat Gluster Storage
Component: glusterfs
Version: 2.0
Status: CLOSED DUPLICATE
Severity: medium
Priority: medium
Reporter: spandura
Assignee: Vijay Bellur <vbellur>
QA Contact: spandura
CC: rhs-bugs, shaines, vbellur, vsomyaju
Keywords: FutureFeature
Doc Type: Enhancement
Type: Bug
Last Closed: 2012-11-23 09:36:49 EST

Description spandura 2012-09-26 02:34:38 EDT
Description of problem:
------------------------
"gluster volume heal <vol_name> info healed" command currently lists all the files that were attempted to self-heal + files that were really self-healed by the self-heal daemon process. 

However, not all the files in the ".glusterfs/indices/xattrop" directory require self-heal: entries are written to the indices directory during writes from the mount point, so by the time the self-heal daemon picks up an entry and tries to heal it, there may be nothing left to heal at all. Yet these files are also listed in the "info healed" output, which is confusing. Hence it would be appropriate to list only the files that actually needed self-heal and were synced to the bricks.
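For context, a hedged sketch of how one can check whether an index entry really has pending operations. The brick path and file path are taken from this report, but the xattr values shown are only what a clean (already consistent) file would look like, and the exact client numbers depend on the volume layout:

# On the brick: entries the self-heal daemon crawls are kept here (named by GFID)
ls /home/export/.glusterfs/indices/xattrop

# Inspect the AFR changelog xattrs for a file on the brick; all-zero
# counters mean no pending data/metadata/entry operations, i.e. the
# index entry no longer needs any self-heal.
getfattr -d -m trusted.afr -e hex /home/export/test_gfid_self_heal/l1_dir.2/l2_dir.5/l3_dir.2/file.6
# file: home/export/test_gfid_self_heal/l1_dir.2/l2_dir.5/l3_dir.2/file.6
trusted.afr.vol-abcd-client-2=0x000000000000000000000000
trusted.afr.vol-abcd-client-3=0x000000000000000000000000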

Version-Release number of selected component (if applicable):
-----------------------------------------------------------
glusterfs 3.3.0rhs built on Sep 10 2012 00:49:11
(glusterfs-server-3.3.0rhs-28.el6rhs.x86_64)

Steps to Reproduce:
------------------
1. Create a distribute-replicate volume (2x2: 4 servers with 1 brick on each server).
2. Start the volume.
3. Create a FUSE mount and create files and directories from the mount point.
4. Bring down brick2 of replicate-0.
5. Create new files and directories from the mount point.
6. Bring back brick2 of replicate-0.
7. Wait for the self-heal daemon to start self-heal (it crawls within 10 minutes), or execute "gluster volume heal <vol_name>" to start the self-heal immediately.
8. Execute "gluster volume heal <vol_name> info healed".

(A sample command sequence for these steps is sketched below.)
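A sketch of the reproduction as a command sequence, assuming the volume name, server addresses and brick paths seen later in this report; here the brick is taken down by killing its glusterfsd process (PID from "gluster volume status"), which is one common way to simulate a brick failure:

gluster volume create vol-abcd replica 2 \
    10.70.34.115:/home/export 10.70.34.119:/home/export \
    10.70.34.118:/home/export 10.70.34.102:/home/export
gluster volume start vol-abcd
mount -t glusterfs 10.70.34.115:/vol-abcd /mnt/vol-abcd
# ... create files and directories under /mnt/vol-abcd ...
gluster volume status vol-abcd           # note the PID of brick2 (10.70.34.119)
kill -9 <pid-of-brick2-glusterfsd>       # bring down brick2 of replicate-0
# ... create new files and directories under /mnt/vol-abcd ...
gluster volume start vol-abcd force      # restart the killed brick process
gluster volume heal vol-abcd             # trigger self-heal immediately
gluster volume heal vol-abcd info healed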
  
Actual results:
-----------------
The output lists the files that were healed to brick2 of replicate-0. It also lists a file as healed to brick4 of replicate-1, although that file was not actually healed (see below).


[root@darrel arequal]# gluster volume heal vol-abcd info healed
Heal operation on volume vol-abcd has been successful

Brick 10.70.34.115:/home/export
Number of entries: 1
at                    path on brick
-----------------------------------
2012-09-25 05:08:13 /test_gfid_self_heal/l1_dir.2/l2_dir.5/l3_dir.4/file.4

Brick 10.70.34.119:/home/export
Number of entries: 0

Brick 10.70.34.118:/home/export
Number of entries: 0

Brick 10.70.34.102:/home/export
Number of entries: 1
at                    path on brick
-----------------------------------
2012-09-25 05:08:10 /test_gfid_self_heal/l1_dir.2/l2_dir.5/l3_dir.2/file.6


The "/test_gfid_self_heal/l1_dir.2/l2_dir.5/l3_dir.2/file.6" file actually was not self-healed or synced from brick3 (Brick 10.70.34.118:/home/export) to brick4 (Brick 10.70.34.102:/home/export) . But still it's showed in the info healed output.
Comment 2 vsomyaju 2012-11-23 09:36:49 EST

*** This bug has been marked as a duplicate of bug 863068 ***