Bug 857443 - Output of "gluster volume heal <vol_name> info healed" command execution is inconsistent
Output of "gluster volume heal <vol_name> info healed" command execution is i...
Status: CLOSED DUPLICATE of bug 863068
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: vsomyaju
QA Contact: spandura
Keywords: Reopened
Depends On:
Blocks:
Reported: 2012-09-14 08:51 EDT by spandura
Modified: 2015-03-04 19:06 EST

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-11 04:08:31 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments
Output of Case1 - Case4 (7.14 KB, application/octet-stream)
2012-09-14 08:51 EDT, spandura

Description spandura 2012-09-14 08:51:16 EDT
Created attachment 612854 [details]
Output of Case1 - Case4

Description of problem:
-----------------------

The "gluster volume heal <vol_name> info healed" command reports a different number of self-healed files each time it is executed.

Case 1: Healed 10 directories. "gluster volume heal <vol_name> info healed" listed the 10 directories that were self-healed.

Case 2: Healed 10 files. "gluster volume heal <vol_name> info healed" listed the 10 self-healed files along with the 10 directories previously self-healed in Case 1.

Case 3: Healed 15 files. "gluster volume heal <vol_name> info healed" listed only the 15 files that were self-healed.

Case 4: Healed 1 file. "gluster volume heal <vol_name> info healed" listed the 1 file self-healed in Case 4 + the 15 files of Case 3 + the 10 files of Case 2 + the 10 directories of Case 1, displaying 36 entries in total.

Why is the output different every time the command is executed?
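
For reference, a rough sketch of how one of the cases was driven. The volume name "vol0" and the mount point /mnt/vol0 are placeholders for illustration, not the names from the actual run:

# Case 1 sketch (hypothetical names): with brick2 and brick3 down, create
# 10 directories on the fuse mount so that they need self-heal.
mkdir /mnt/vol0/dir{1..10}

# Bring the downed bricks back online and let self-heal run.
gluster volume start vol0 force
gluster volume heal vol0

# List what was healed; this is the output whose entry count varies per case.
gluster volume heal vol0 info healed

Repeating the last command after each subsequent case is how the per-case counts above were compared.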


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
[root@hicks ~]# gluster --version
glusterfs 3.3.0rhs built on Sep 10 2012 00:49:11 

(glusterfs-3.3.0rhs-28.el6rhs.x86_64)

How reproducible:
----------------
1/1

Steps to Reproduce:
1. Create a replicate volume with replica count 3 (1x3: brick1, brick2, brick3).
2. Start the volume.
3. Create a fuse mount.
4. Bring down brick2 and brick3.
5. Follow Case 1 through Case 4 above to exercise self-heal (see the command sketch below).
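
A rough command sketch of these steps; the host name "hicks" is taken from the version output above, but the brick paths, volume name "vol0", and mount point are assumptions for illustration only:

# Steps 1-2: create and start a 1x3 replicate volume (paths/names assumed).
gluster volume create vol0 replica 3 hicks:/bricks/brick1 hicks:/bricks/brick2 hicks:/bricks/brick3
gluster volume start vol0

# Step 3: create a fuse mount.
mount -t glusterfs hicks:/vol0 /mnt/vol0

# Step 4: bring down brick2 and brick3 by killing their brick processes;
# the PIDs can be read from the volume status output.
gluster volume status vol0
kill <pid-of-brick2-process> <pid-of-brick3-process>

# Step 5: perform the Case 1 - Case 4 operations on the mount, then run
# "gluster volume heal vol0 info healed" after each case.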
  
Actual results:
--------------
The output is attached (see the "Output of Case1 - Case4" attachment).

Expected results:
---------------
Should be consistent across the cases.
Comment 2 Pranith Kumar K 2012-09-17 04:44:55 EDT
The output is different because the command is designed to show the last 1K files that were healed. It prints the time-stamp of each self-heal so that the runs can be compared.

I don't consider this a bug. Please feel free to re-open it if you have any doubts that need clarification.
Comment 3 spandura 2012-09-17 08:02:08 EDT
Agreed. It should show the last 1K healed files every time; that would be consistent behavior.

In Case 1 there were 10 entries to self-heal, hence "info healed" showed 10 entries.

In Case 2 there were 10 files to self-heal, hence "info healed" showed 20 entries (the 10 new files plus the 10 directories from Case 1).

In Case 3 there were 15 files to self-heal, but "info healed" showed only those 15 files. The expected behavior is for 35 entries to be displayed (the limit is 1K, yet only 15 files were shown).

In Case 4 there was only 1 file to self-heal. "info healed" now showed 36 entries (this is the expected behavior).
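
Summarizing the counts above (the "expected" column assumes healed entries keep accumulating in the output up to the 1K limit):

Case   Newly healed     Expected in "info healed"   Actually shown
1      10 directories   10                          10
2      10 files         20                          20
3      15 files         35                          15
4       1 file          36                          36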

Why was the "info healed" output inconsistent in Case 3?

For more detail, please refer to the attached output.
Comment 4 Pranith Kumar K 2012-09-17 11:39:07 EDT
Was the self-heal daemon re-started?
Comment 5 spandura 2012-09-20 01:30:35 EDT
No, the self-heal daemon wasn't re-started.
Comment 6 Pranith Kumar K 2012-09-20 02:03:06 EDT
Could you provide the EXACT steps to recreate the issue? Cases 1-4 are described at a high level; I need the actual commands that were executed to re-create it.


Thanks
Pranith
Comment 8 Pranith Kumar K 2012-12-11 04:08:31 EST

*** This bug has been marked as a duplicate of bug 863068 ***
