Bug 857443

Summary: Output of "gluster volume heal <vol_name> info healed" command execution is inconsistent
Product: Red Hat Storage
Component: glusterfs
Version: 2.0
Status: CLOSED DUPLICATE
Severity: medium
Priority: medium
Reporter: spandura
Assignee: vsomyaju
QA Contact: spandura
CC: grajaiya, nsathyan, pkarampu, rhs-bugs, sdharane, shaines, vbellur
Keywords: Reopened
Hardware: Unspecified
OS: Unspecified
Type: Bug
Doc Type: Bug Fix
Last Closed: 2012-12-11 09:08:31 UTC
Attachments: Output of Case1 - Case4

Description spandura 2012-09-14 12:51:16 UTC
Created attachment 612854 [details]
Output of Case1 - Case4

Description of problem:
-----------------------

Executing "gluster volume heal <vol_name> info healed" reports a different number of self-healed files each time it is run.

Case 1: Healed 10 directories. "gluster volume heal <vol_name> info healed" listed the 10 directories that were self-healed.

Case 2: Healed 10 files. The command listed the 10 self-healed files along with the 10 directories previously self-healed in Case 1.

Case 3: Healed 15 files. The command listed only the 15 files that were self-healed in this case.

Case 4: Healed 1 file. The command listed the 1 file self-healed in Case 4 + 15 files of Case 3 + 10 files of Case 2 + 10 directories of Case 1, displaying 36 entries in total.

Why does the output differ every time the command is executed?
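
A sketch of one such case follows. The exact commands were not recorded in this bug, so the volume name, mount point, and heal-trigger method below are assumptions; the actual output is in the attachment:

[root@hicks ~]# mkdir /mnt/vol0/dir{1..10}            # create entries on the mount while brick2/brick3 are down
[root@hicks ~]# gluster volume start vol0 force       # bring the downed bricks back up
[root@hicks ~]# gluster volume heal vol0 full         # trigger self-heal of the new entries
[root@hicks ~]# gluster volume heal vol0 info healed  # list the entries that were self-healed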


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
[root@hicks ~]# gluster --version
glusterfs 3.3.0rhs built on Sep 10 2012 00:49:11 

(glusterfs-3.3.0rhs-28.el6rhs.x86_64)

How reproducible:
----------------
1/1

Steps to Reproduce:
1. Create a replicate volume with replica count 3 (1x3: brick1, brick2, brick3).
2. Start the volume.
3. Create a FUSE mount.
4. Bring down brick2 and brick3.
5. Follow Case 1 - Case 4 above to simulate self-heal (a command sketch for steps 1-4 follows this list).
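
A minimal command sketch for steps 1-4 (the hostname, brick paths, and the method of bringing bricks down are assumptions; killing the brick glusterfsd processes is one way to do it):

[root@hicks ~]# gluster volume create vol0 replica 3 hicks:/bricks/b1 hicks:/bricks/b2 hicks:/bricks/b3
[root@hicks ~]# gluster volume start vol0
[root@hicks ~]# mount -t glusterfs hicks:/vol0 /mnt/vol0
[root@hicks ~]# gluster volume status vol0            # note the PIDs of brick2 and brick3
[root@hicks ~]# kill <brick2-pid> <brick3-pid>        # bring down brick2 and brick3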
  
Actual results:
--------------
See the attached output.

Expected results:
---------------
The output should be consistent across the cases.

Comment 2 Pranith Kumar K 2012-09-17 08:44:55 UTC
The output is different because the command is designed to show the last 1K files that were healed. It prints the time-stamp of each self-heal so that runs can be compared.

I don't consider this a bug. Please feel free to re-open if you have any doubts that need clarification.
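
For reference, the "info healed" output has roughly the following shape (illustrative only, reconstructed from the description above; not verbatim from this system):

Brick hicks:/bricks/b1
Number of entries: 36
at                    path on brick
-----------------------------------
2012-09-14 12:49:05 /dir1
2012-09-14 12:49:05 /file1
...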

Comment 3 spandura 2012-09-17 12:02:08 UTC
Agreed. Each run has to show up to the last 1K healed entries; that would be consistent behavior.

In case 1 there were 10 directories to self-heal, hence "info healed" showed 10 entries.

In case 2 there were 10 files to self-heal, hence "info healed" showed 20 entries (10 from case 2 plus 10 from case 1).

In case 3 there were 15 files to self-heal, but "info healed" showed only 15 entries. The expected behavior is 35 entries in the output (the limit is 1K, yet only 15 were shown).

In case 4 there was only 1 file to self-heal, and "info healed" showed 36 entries (this is the expected behavior).
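
To summarize the four cases:

Case   Healed this run      Expected in "info healed"   Actually shown
1      10 directories       10                          10
2      10 files             20                          20
3      15 files             35                          15
4      1 file               36                          36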

Why was the "info healed" output in case 3 inconsistent?

For more reference, please see the attached output.

Comment 4 Pranith Kumar K 2012-09-17 15:39:07 UTC
Was the self-heal daemon re-started?

Comment 5 spandura 2012-09-20 05:30:35 UTC
No, the self-heal daemon wasn't re-started.

Comment 6 Pranith Kumar K 2012-09-20 06:03:06 UTC
Could you provide the EXACT steps to recreate the issue? Cases 1-4 are described at a high level; I need the actual commands that were executed to re-create it.


Thanks
Pranith

Comment 8 Pranith Kumar K 2012-12-11 09:08:31 UTC

*** This bug has been marked as a duplicate of bug 863068 ***