Bug 860548 - "gluster volume heal <vol_name> info healed" command output should list the files that are really healed and shouldn't list the files that were attempted to self-heal
Summary: "gluster volume heal <vol_name> info healed" command output should list the f...
Keywords:
Status: CLOSED DUPLICATE of bug 863068
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Vijay Bellur
QA Contact: spandura
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-09-26 06:34 UTC by spandura
Modified: 2012-11-23 14:36 UTC
CC: 4 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-11-23 14:36:49 UTC



Description spandura 2012-09-26 06:34:38 UTC
Description of problem:
------------------------
"gluster volume heal <vol_name> info healed" command currently lists all the files that were attempted to self-heal + files that were really self-healed by the self-heal daemon process. 

But not all the files in the ".glusterfs/indices/xattrop" directory requires self-heal as they are written to indices directory during the writes from the mount point. By the time the self-heal daemon sees the files in indices directory and tries to self-heal, there might not be anything to self-heal at all. But these files are also listed in the "info healed" output.This is a bit confusing. Hence it would be appropriate to only list files that were synced to bricks which needed self-heal. 
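For reference, the entries the self-heal daemon crawls can be inspected directly on a brick. A minimal sketch, assuming a brick path of /home/export as in the output below (the entries under xattrop are named by gfid, not by path):

# on a brick server: list the xattrop index entries that the
# self-heal daemon crawls; an entry here does not necessarily
# mean the file still needs healing
ls /home/export/.glusterfs/indices/xattrop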

Version-Release number of selected component (if applicable):
-----------------------------------------------------------
glusterfs 3.3.0rhs built on Sep 10 2012 00:49:11
(glusterfs-server-3.3.0rhs-28.el6rhs.x86_64)

Steps to Reproduce:
------------------
1. Create a distribute-replicate volume (2x2: 4 servers with 1 brick on each server).
2. Start the volume.
3. Create a fuse mount. Create files and directories from the mount point.
4. Bring down brick2 of replicate-0.
5. Create new files and directories from the mount point.
6. Bring back brick "brick2" of replicate-0.
7. Wait for the self-heal daemon to start the self-heal (it runs within 10 minutes), or execute "gluster volume heal <vol_name>" to start the self-heal immediately.
8. Execute "gluster volume heal <vol_name> info healed" (see the sample command sequence below).
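The following is a sample command sequence for the steps above; the hostnames (server1-server4), the brick path /home/export, and the mount point /mnt/vol-abcd are assumptions for illustration only:

# steps 1-3: create, start and mount a 2x2 distribute-replicate volume
gluster volume create vol-abcd replica 2 \
    server1:/home/export server2:/home/export \
    server3:/home/export server4:/home/export
gluster volume start vol-abcd
mount -t glusterfs server1:/vol-abcd /mnt/vol-abcd
# ... create files and directories under /mnt/vol-abcd ...

# step 4: bring down brick2 of replicate-0 by killing its brick process
#         (find the pid in the "gluster volume status" output)
gluster volume status vol-abcd
kill <pid-of-server2-brick-process>

# step 5: create new files and directories under /mnt/vol-abcd

# steps 6-7: restart the dead brick and trigger self-heal immediately
gluster volume start vol-abcd force
gluster volume heal vol-abcd

# step 8: check what was reported as healed
gluster volume heal vol-abcd info healed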
  
Actual results:
-----------------
The output lists the files that were healed to brick2 of replicate-0. It also lists a file as healed to brick4 of replicate-1, where no heal was actually performed.


[root@darrel arequal]# gluster volume heal vol-abcd info healed
Heal operation on volume vol-abcd has been successful

Brick 10.70.34.115:/home/export
Number of entries: 1
at                    path on brick
-----------------------------------
2012-09-25 05:08:13 /test_gfid_self_heal/l1_dir.2/l2_dir.5/l3_dir.4/file.4

Brick 10.70.34.119:/home/export
Number of entries: 0

Brick 10.70.34.118:/home/export
Number of entries: 0

Brick 10.70.34.102:/home/export
Number of entries: 1
at                    path on brick
-----------------------------------
2012-09-25 05:08:10 /test_gfid_self_heal/l1_dir.2/l2_dir.5/l3_dir.2/file.6


The "/test_gfid_self_heal/l1_dir.2/l2_dir.5/l3_dir.2/file.6" file actually was not self-healed or synced from brick3 (Brick 10.70.34.118:/home/export) to brick4 (Brick 10.70.34.102:/home/export) . But still it's showed in the info healed output.

Comment 2 vsomyaju 2012-11-23 14:36:49 UTC

*** This bug has been marked as a duplicate of bug 863068 ***

