Bug 863084 - "gluster volume heal <vol_name> info healed" command execution unsuccessful
Summary: "gluster volume heal <vol_name> info healed" command execution unsuccessful
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Pranith Kumar K
QA Contact: spandura
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-10-04 12:06 UTC by spandura
Modified: 2016-09-17 12:13 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-11 05:25:48 UTC
Embargoed:


Attachments
glustershd log file. (2.32 MB, text/x-log)
2012-10-04 12:06 UTC, spandura

Description spandura 2012-10-04 12:06:27 UTC
Created attachment 621592 [details]
glustershd log file.

Description of problem:
-----------------------
"gluster volume heal <volume_name> info healed" command execution was unsuccessful 

Following are the glustershd log messages:-
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[2012-10-04 14:42:47.276313] W [dict.c:2339:dict_unserialize] (-->/lib64/libc.so.6() [0x322d243610] (-->/usr/lib64/libglusterfs.so.0(synctask_wrap+0x12) [0x3a9f24bd72] (-->/usr/sbin/glusterfs(glusterfs_handle_translator_op+0x16f) [0x40907f]))) 0-dict: buf is null!

[2012-10-04 14:42:47.276397] E [glusterfsd-mgmt.c:672:glusterfs_handle_translator_op] 0-glusterfs: failed to unserialize req-buffer to dictionary
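
The same pair of messages can be located by searching the self-heal daemon log; the path below is the standard location on an RHS install (an assumption for other setups):

grep -E 'dict_unserialize|failed to unserialize' /var/log/glusterfs/glustershd.log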

Version-Release number of selected component (if applicable):
-----------------------------------------------------------
[10/04/12 - 16:57:25 root@rhs-client7 ~]# rpm -qa | grep gluster
glusterfs-3.3.0rhsvirt1-6.el6rhs.x86_64
glusterfs-rdma-3.3.0rhsvirt1-6.el6rhs.x86_64
vdsm-gluster-4.9.6-14.el6rhs.noarch
gluster-swift-plugin-1.0-5.noarch
gluster-swift-container-1.4.8-4.el6.noarch
org.apache.hadoop.fs.glusterfs-glusterfs-0.20.2_0.2-1.noarch
glusterfs-fuse-3.3.0rhsvirt1-6.el6rhs.x86_64
glusterfs-geo-replication-3.3.0rhsvirt1-6.el6rhs.x86_64
gluster-swift-proxy-1.4.8-4.el6.noarch
gluster-swift-account-1.4.8-4.el6.noarch
gluster-swift-doc-1.4.8-4.el6.noarch
glusterfs-server-3.3.0rhsvirt1-6.el6rhs.x86_64
gluster-swift-1.4.8-4.el6.noarch
gluster-swift-object-1.4.8-4.el6.noarch

[10/04/12 - 16:57:31 root@rhs-client7 ~]# gluster --version
glusterfs 3.3.0rhsvirt1 built on Sep 25 2012 14:53:06

Steps to Reproduce:
--------------------
1. Create a pure replicate volume (1x2) with 2 servers and 1 brick on each server. This volume is the storage for the VMs. Start the volume.
2. Set up KVM to use the volume as the VM store.
3. Bring down brick1.
4. Create a VM.
5. Bring brick1 back up.
6. Execute "gluster volume heal <volume_name>".
7. Execute the "gluster volume heal <volume_name> info" and "gluster volume heal <volume_name> info healed" commands (a command sketch of these steps follows this list).
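
A minimal command sketch of the above steps, assuming a hypothetical volume name <volume_name> and bricks server1:/bricks/b1 and server2:/bricks/b2; the KVM setup and VM creation are environment-specific and elided:

# step 1: create and start a 1x2 pure replicate volume
gluster volume create <volume_name> replica 2 server1:/bricks/b1 server2:/bricks/b2
gluster volume start <volume_name>

# steps 2-5: set up KVM on the volume, bring down brick1, create a VM,
# bring brick1 back up (environment-specific)

# steps 6-7: trigger a heal, then query heal state
gluster volume heal <volume_name>
gluster volume heal <volume_name> info
gluster volume heal <volume_name> info healed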
  
Actual results:
---------------
[10/04/12 - 14:41:18 root@rhs-client7 ~]# gluster volume heal replicate-rhevh2 info
Heal operation on volume replicate-rhevh2 has been successful

Brick rhs-client6.lab.eng.blr.redhat.com:/replicate-disk
Number of entries: 0
Status: Brick is Not connected

Brick rhs-client7.lab.eng.blr.redhat.com:/replicate-disk
Number of entries: 6
/baa779f1-7935-4083-bd62-c33d43b242c3/dom_md/ids
/baa779f1-7935-4083-bd62-c33d43b242c3/images
/baa779f1-7935-4083-bd62-c33d43b242c3/images/1f41c42f-d8c8-4869-9466-9c0294f735c9
/baa779f1-7935-4083-bd62-c33d43b242c3/images/1f41c42f-d8c8-4869-9466-9c0294f735c9/a77e5a15-3492-4832-988d-40c96e71f624
/baa779f1-7935-4083-bd62-c33d43b242c3/images/1f41c42f-d8c8-4869-9466-9c0294f735c9/a77e5a15-3492-4832-988d-40c96e71f624.lease
/baa779f1-7935-4083-bd62-c33d43b242c3/images/1f41c42f-d8c8-4869-9466-9c0294f735c9/a77e5a15-3492-4832-988d-40c96e71f624.meta

[10/04/12 - 14:41:37 root@rhs-client7 ~]# 

[10/04/12 - 14:41:39 root@rhs-client7 ~]# gluster volume heal replicate-rhevh2 info
Heal operation on volume replicate-rhevh2 has been successful

Brick rhs-client6.lab.eng.blr.redhat.com:/replicate-disk
Number of entries: 0

Brick rhs-client7.lab.eng.blr.redhat.com:/replicate-disk
Number of entries: 6
/baa779f1-7935-4083-bd62-c33d43b242c3/dom_md/ids
/baa779f1-7935-4083-bd62-c33d43b242c3/images
/baa779f1-7935-4083-bd62-c33d43b242c3/images/1f41c42f-d8c8-4869-9466-9c0294f735c9
/baa779f1-7935-4083-bd62-c33d43b242c3/images/1f41c42f-d8c8-4869-9466-9c0294f735c9/a77e5a15-3492-4832-988d-40c96e71f624
/baa779f1-7935-4083-bd62-c33d43b242c3/images/1f41c42f-d8c8-4869-9466-9c0294f735c9/a77e5a15-3492-4832-988d-40c96e71f624.lease
/baa779f1-7935-4083-bd62-c33d43b242c3/images/1f41c42f-d8c8-4869-9466-9c0294f735c9/a77e5a15-3492-4832-988d-40c96e71f624.meta

[10/04/12 - 14:42:05 root@rhs-client7 ~]# gluster volume heal replicate-rhevh2 info healed
Heal operation on volume replicate-rhevh2 has been successful

Brick rhs-client6.lab.eng.blr.redhat.com:/replicate-disk
Number of entries: 0

Brick rhs-client7.lab.eng.blr.redhat.com:/replicate-disk
Number of entries: 0

[10/04/12 - 14:42:10 root@rhs-client7 ~]# gluster volume heal replicate-rhevh2
Heal operation on volume replicate-rhevh2 has been successful


[10/04/12 - 14:42:44 root@rhs-client7 ~]# gluster volume heal replicate-rhevh2 info healed
Heal operation on volume replicate-rhevh2 has been unsuccessful

Brick rhs-client6.lab.eng.blr.redhat.com:/replicate-disk
Number of entries: 0

Brick rhs-client7.lab.eng.blr.redhat.com:/replicate-disk
Number of entries: 0

Additional info:-
------------------
Check the log messages after the following message:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[2012-10-04 14:41:49.526305] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
[2012-10-04 14:41:49.526343] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2012-10-04 14:41:49.526439] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
[2012-10-04 14:41:49.526634] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1

Comment 4 spandura 2013-02-04 05:39:05 UTC
I was able to reproduce this issue once again:

Steps to Reproduce:
======================
1. Create a replicate volume (1 x 2). Start the volume.

2. Set "entry-self-heal", "metadata-self-heal", and "data-self-heal" to off.

3. Create a FUSE mount.

4. Bring down brick "brick1".

5. Create 50k files from the mount point.

6. Execute the "gluster volume heal <volume_name> info healed" command.

Actual Output:-
============

root@rhsauto015 [10:20:02]> gluster v heal `gluster v list` info healed
Heal operation on volume vol1 has been unsuccessful

Brick rhsauto015.lab.eng.blr.redhat.com:/brick/b
Number of entries: 0

Brick rhsauto016.lab.eng.blr.redhat.com:/brick/b
Number of entries: 0


Log File Messages:-
==================
[2013-02-04 10:20:06.971564] W [dict.c:2339:dict_unserialize] (-->/lib64/libc.so.6() [0x34ffa43610] (-->/usr/lib64/libglusterfs.so.0(synctask_wrap+0x12) [0x3500e4bc52] (-->/usr/sbin/glusterfs(glusterfs_handle_translator_op+0x16f) [0x40912f]))) 0-dict: buf is null!
[2013-02-04 10:20:06.971630] E [glusterfsd-mgmt.c:672:glusterfs_handle_translator_op] 0-glusterfs: failed to unserialize req-buffer to dictionary

Comment 5 spandura 2013-02-04 05:41:12 UTC
Steps to Reproduce:
======================
1. Create a replicate volume (1 x 2). Start the volume.

2. Set "entry-self-heal", "metadata-self-heal", and "data-self-heal" to off.

3. Create a FUSE mount.

4. Bring down brick "brick1".

5. Create 50k files from the mount point.

6. Bring brick "brick1" back online.

7. Execute the "gluster volume heal <volume_name> info healed" command (a command sketch of steps 2 and 5 follows this list).

I had forgotten to add step 6 in comment 4.

Comment 6 Scott Haines 2013-04-11 17:02:31 UTC
Per 04-10-2013 Storage bug triage meeting, targeting for Big Bend.

Comment 7 Scott Haines 2013-09-27 17:07:26 UTC
Targeting for 3.0.0 (Denali) release.

Comment 10 spandura 2014-06-11 05:25:48 UTC
The "gluster volume heal <volume_name> info healed" command is no longer supported, starting with the following gluster build:

"[root@rhs-client11 ~]# gluster --version
glusterfs 3.6.0.15 built on Jun  9 2014 11:03:54"

Refer to bug: https://bugzilla.redhat.com/show_bug.cgi?id=1104486
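
For reference, on builds where "info healed" has been removed, the heal queries that remain available are, to the best of my knowledge:

gluster volume heal <volume_name> info
gluster volume heal <volume_name> info split-brain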

Hence this bug is no longer valid. Moving it to the CLOSED state.

