Bug 1203581

Summary: Disperse volume: No output with gluster volume heal info
Product: [Community] GlusterFS
Reporter: Bhaskarakiran <byarlaga>
Component: disperse
Assignee: Pranith Kumar K <pkarampu>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: mainline
CC: bugs, byarlaga, gluster-bugs, mzywusko, pkarampu
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-05-14 17:29:22 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1186580

Description Bhaskarakiran 2015-03-19 07:58:39 UTC
Description of problem:
======================

After triggering a heal with gluster volume heal <volname> (or heal full), running gluster volume heal <volname> info does not list anything, even though the heal runs in the back-end. The command just returns to the prompt.

[root@ninja rhs]# gluster v heal dispersevol info
[root@ninja rhs]# 

[root@vertigo ~]# tail -f /var/log/glusterfs/glustershd.log  | more
[2015-03-19 07:58:11.301428] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:4a5d0f67-9c3c-4da1-917c-ddf54fb4a811>', gfid 4a5d0f67-9c3c-4da1-917c-ddf54fb4a811
[2015-03-19 07:58:11.304795] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:a13afab3-b68f-44a3-a51f-7e9a12938abc>', gfid a13afab3-b68f-44a3-a51f-7e9a12938abc
[2015-03-19 07:58:11.308316] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:2a613762-6949-4ddc-b73b-691c1ee064a8>', gfid 2a613762-6949-4ddc-b73b-691c1ee064a8
[2015-03-19 07:58:11.312788] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:dd2d5fe7-a7a8-477e-a840-26f92e5fcef7>', gfid dd2d5fe7-a7a8-477e-a840-26f92e5fcef7
[2015-03-19 07:58:11.321873] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:12321ba6-7a15-4e31-8d82-608249f24649>', gfid 12321ba6-7a15-4e31-8d82-608249f24649
[2015-03-19 07:58:11.326745] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:45c618f9-04c8-4180-9d48-452e4653f5be>', gfid 45c618f9-04c8-4180-9d48-452e4653f5be
[2015-03-19 07:58:11.331357] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:f9727f82-4dc6-4ed0-8e9a-6ca4132c6b5d>', gfid f9727f82-4dc6-4ed0-8e9a-6ca4132c6b5d
[2015-03-19 07:58:11.335237] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:99ab9045-650b-466a-a7a4-a39c0ef1b34b>', gfid 99ab9045-650b-466a-a7a4-a39c0ef1b34b
[2015-03-19 07:58:11.345449] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:b9ae3c14-6457-49cd-bb8a-e8b428c44135>', gfid b9ae3c14-6457-49cd-bb8a-e8b428c44135
[2015-03-19 07:58:11.351597] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:e19a3e42-3887-408f-b02c-32b4a7a4f8b8>', gfid e19a3e42-3887-408f-b02c-32b4a7a4f8b8
[2015-03-19 07:58:11.359056] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:b3c5245c-3a2a-47d7-94f1-146db32f8241>', gfid b3c5245c-3a2a-47d7-94f1-146db32f8241
[2015-03-19 07:58:11.362454] I [ec-heal.c:546:ec_heal_init] 0-ec: Healing '<gfid:dc38efee-cca1-4fa1-ba1a-34098a1c5392>', gfid dc38efee-cca1-4fa1-ba1a-34098a1c5392



Version-Release number of selected component (if applicable):
=============================================================
[root@ninja ~]# gluster --version
glusterfs 3.7dev built on Mar 17 2015 01:06:35
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@ninja ~]# 


How reproducible:
=================
100%


Steps to Reproduce:
1. FUSE or NFS mount a disperse volume (1 x (8 + 2)).
2. Bring down 2 of the bricks and continue to create files and directories.
3. Force start the volume to bring the bricks back up.
4. Trigger a heal with gluster volume heal <volname>.
5. Try to list the entries to be healed with gluster volume heal <volname> info (see the command sketch below).
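
A command-level sketch of these steps; the volume name, mount point, hostname, brick PIDs and file names below are illustrative, not taken from this report:

mount -t glusterfs ninja:/dispersevol /mnt/dispersevol   # step 1: FUSE mount (an NFS mount works as well)
gluster volume status dispersevol                        # note the PIDs of two bricks
kill -9 <brick-pid-1> <brick-pid-2>                      # step 2: bring down 2 bricks
mkdir /mnt/dispersevol/dir1                              # keep creating files and directories on the mount
for i in $(seq 1 100); do dd if=/dev/zero of=/mnt/dispersevol/dir1/file$i bs=1M count=1; done
gluster volume start dispersevol force                   # step 3: bring the downed bricks back up
gluster volume heal dispersevol                          # step 4: trigger the heal
gluster volume heal dispersevol info                     # step 5: list the entries that still need healing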

Actual results:
gluster volume heal <volname> info returns to the prompt without listing any entries, even though the self-heal daemon is healing files in the back-end.

Expected results:
The command lists the entries that still need to be healed.

Additional info:
================
Attaching the sosreports


Gluster v status & info:
========================


[root@ninja ~]# gluster v status
Status of volume: dispersevol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick vertigo:/rhs/brick1/b1                49152     49153      Y       12385
Brick ninja:/rhs/brick1/b1                  N/A       N/A        N       12527
Brick vertigo:/rhs/brick2/b2                49154     49155      Y       12398
Brick ninja:/rhs/brick2/b2                  N/A       N/A        N       12540
Brick vertigo:/rhs/brick3/b3                49156     49157      Y       12411
Brick ninja:/rhs/brick3/b3                  N/A       N/A        N       12553
Brick vertigo:/rhs/brick4/b4                49158     49159      Y       12424
Brick ninja:/rhs/brick4/b4                  N/A       N/A        N       12566
Brick vertigo:/rhs/brick1/b1-1              49160     49161      Y       12437
Brick ninja:/rhs/brick1/b1-1                49160     49161      Y       12579
Brick vertigo:/rhs/brick2/b2-1              49162     49163      Y       12450
Brick ninja:/rhs/brick2/b2-1                49162     49163      Y       12592
NFS Server on localhost                     2049      0          Y       12609
Quota Daemon on localhost                   N/A       N/A        Y       12758
NFS Server on 10.70.34.56                   2049      0          Y       12466
Quota Daemon on 10.70.34.56                 N/A       N/A        Y       12679
 
Task Status of Volume dispersevol
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@ninja ~]# 

[root@ninja rhs]# gluster v info
 
Volume Name: dispersevol
Type: Disperse
Volume ID: 379cdb77-0f53-4d4e-86d4-851ded4e7f79
Status: Started
Number of Bricks: 1 x (8 + 4) = 12
Transport-type: tcp
Bricks:
Brick1: vertigo:/rhs/brick1/b1
Brick2: ninja:/rhs/brick1/b1
Brick3: vertigo:/rhs/brick2/b2
Brick4: ninja:/rhs/brick2/b2
Brick5: vertigo:/rhs/brick3/b3
Brick6: ninja:/rhs/brick3/b3
Brick7: vertigo:/rhs/brick4/b4
Brick8: ninja:/rhs/brick4/b4
Brick9: vertigo:/rhs/brick1/b1-1
Brick10: ninja:/rhs/brick1/b1-1
Brick11: vertigo:/rhs/brick2/b2-1
Brick12: ninja:/rhs/brick2/b2-1
Options Reconfigured:
cluster.disperse-self-heal-daemon: enable
features.uss: on
features.quota: on
client.event-threads: 4
server.event-threads: 4
[root@ninja rhs]#

Comment 3 Anand Avati 2015-03-26 17:46:15 UTC
REVIEW: http://review.gluster.org/10020 (cluster/ec: Implement heal info for ec) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 4 Anand Avati 2015-03-31 06:38:59 UTC
COMMIT: http://review.gluster.org/10020 committed in master by Vijay Bellur (vbellur) 
------
commit f9ee09abd29002d8612bcdcbeaf4cf3e404b4cc6
Author: Pranith Kumar K <pkarampu>
Date:   Thu Mar 26 16:06:36 2015 +0530

    cluster/ec: Implement heal info for ec
    
    This also lists the files that have on-going I/O, which
    will be fixed later.
    
    Change-Id: Ib3f60a8b7e8798d068658cf38eaef2a904f9e327
    BUG: 1203581
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/10020
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Dan Lambright <dlambrig>
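
With this change, gluster volume heal dispersevol info is expected to print the pending entries per brick. The output below is only an illustration of that general format (the gfid is borrowed from the glustershd.log excerpt above; the bricks shown and the counts are made up):

Brick vertigo:/rhs/brick1/b1
<gfid:4a5d0f67-9c3c-4da1-917c-ddf54fb4a811>
Number of entries: 1

Brick ninja:/rhs/brick1/b1
Number of entries: 0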

Comment 5 Bhaskarakiran 2015-04-10 07:13:29 UTC
The output gets displayed only the first time. On subsequent runs it is either not displayed or the command just gets stuck. Moving back to ASSIGNED.

Comment 6 Bhaskarakiran 2015-04-10 07:14:39 UTC
There are entries that need to be healed, but they don't show up with the command from the second run onwards.

Comment 8 Bhaskarakiran 2015-05-13 10:24:33 UTC
Verified on the latest 3.7 beta2 build; the issue is not seen.

Comment 9 Niels de Vos 2015-05-14 17:29:22 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
