Bug 1113778 - gluster volume heal info keeps reporting "Volume heal failed"
Summary: gluster volume heal info keeps reporting "Volume heal failed"
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.5.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1348894 1369454 1369455
 
Reported: 2014-06-26 22:09 UTC by Peter Auyeung
Modified: 2016-08-23 13:02 UTC
CC: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1348894
Environment:
Last Closed: 2016-06-17 16:23:41 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Peter Auyeung 2014-06-26 22:09:50 UTC
Description of problem:
gluster volume heal info keeps reporting "Volume heal failed" even after a fresh install of gluster 3.5.1 with a newly created replicated volume.


Version-Release number of selected component (if applicable):
gluster 3.5.1

How reproducible:
volume create <vol> replica 2 storage1:/brick1 storage2:/brick2

Steps to Reproduce:
1. volume create <vol> replica 2 storage1:/brick1 storage2:/brick2
2. volume heal <vol> info
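
A complete command sequence for the above (a minimal sketch; the volume name "testvol" is illustrative, and the start step, which the reporter confirms in comment 7, is added for completeness):

# gluster volume create testvol replica 2 storage1:/brick1 storage2:/brick2
# gluster volume start testvol
# gluster volume heal testvol info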


Actual results:
Volume heal failed

Expected results:
Brick: storage1:/brick1
Number of entries: 0

Additional info:

Comment 1 Peter Auyeung 2014-06-26 22:12:31 UTC
The volume appears to work fine and files are able to heal (they show up under "info healed").

Need to confirm whether that's only cosmetic.

Comment 2 Peter Auyeung 2014-06-26 22:32:26 UTC
cli.log gives the following when running heal info:
[2014-06-26 22:31:34.461244] W [cli-rl.c:106:cli_rl_process_line] 0-glusterfs: failed to process line
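
The CLI log quoted above is normally /var/log/glusterfs/cli.log (assuming the default log directory); to watch it while re-running the heal command:

# tail -f /var/log/glusterfs/cli.log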

Comment 3 Pranith Kumar K 2014-06-27 02:19:22 UTC
Peter, I just tried it on my machine and it works. Krutika (another developer) was wondering whether you have readline installed on your machine.

What output do you get when you execute:
root@localhost - ~ 
07:46:45 :) ⚡ rpm -qa | grep readline
readline-devel-6.2-8.fc20.x86_64
readline-6.2-8.fc20.x86_64

Pranith

Comment 4 Peter Auyeung 2014-06-27 02:49:04 UTC
I am on Ubuntu and do have readline installed:

# dpkg -l | grep readline
ii  libreadline5                     5.2-11                            GNU readline and history libraries, run-time libraries
ii  libreadline6                     6.2-8                             GNU readline and history libraries, run-time libraries
ii  readline-common                  6.2-8                             GNU readline and history libraries, common files

Comment 5 Pranith Kumar K 2014-06-27 02:54:13 UTC
Stupid question, but let me ask anyway: the steps to reproduce have no step about starting the volume. Did you start the volume? Does it give the info after starting the volume?

What version of Ubuntu are you using? I can probably install a VM and test it once. Also, give me the location of the debs you used for installing.

Comment 6 Peter Auyeung 2014-06-27 03:04:21 UTC
I am on 12.04

# cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.4 LTS"
NAME="Ubuntu"
VERSION="12.04.4 LTS, Precise Pangolin"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu precise (12.04.4 LTS)"
VERSION_ID="12.04"

I am using semiosis's ppa

add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.5

Thanks
Peter

Comment 7 Peter Auyeung 2014-06-27 03:05:25 UTC
And yes, I did start the volume, and it has live traffic over NFS.

Comment 8 craigyk 2014-06-30 23:21:05 UTC
I am having a similar problem. I had a brick fail in an x3 replica set, but while everything seems to have recovered, I still get "Volume heal failed" when running gluster volume heal <vol> info.

gluster volume heal <vol> info heal-failed 
and
gluster volume heal <vol> statistics

report no failures.

I'm running 3.5.1

Comment 9 Joe Julian 2014-07-07 15:38:48 UTC
According to Pranith, the 3.5 Ubuntu debs don't have the /usr/bin/glfsheal binary. Semiosis will repackage ASAP.
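
A quick way to verify whether a given install shipped the binary (a sketch; the package name glusterfs-client is an assumption and may differ):

# which glfsheal
# dpkg -L glusterfs-client | grep glfsheal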

Comment 10 craigyk 2014-07-17 17:25:03 UTC
Has this been done?  I've been checking for updates to the repo and haven't seen any.

Comment 11 Igor Biryulin 2015-02-18 18:48:18 UTC
This problem exists on gluster 3.6.2.
OS: Ubuntu 12.04.5 LTS

Comment 12 Igor Biryulin 2015-02-18 19:09:58 UTC
Sorry! I understand now that it is an Ubuntu problem. Try writing to their maintainers.

Comment 13 James Carson 2015-03-06 20:50:32 UTC
I have a similar problem. I upgraded to 3.6.2. As you can see below, I have a volume named "james_test", but when I try to get heal info on that volume it tells me the volume does not exist.

Note: heal statistics does work

[root@appdev0 glusterfs-3.6.2]# gluster volume heal james_test info
Volume james_test does not exist
Volume heal failed


[root@appdev0 glusterfs-3.6.2]# gluster volume info
 
Volume Name: james_test
Type: Replicate
Volume ID: 044ca3d6-1a89-49a1-b563-b1a2f6d15900
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: appdev0:/export/james_test
Brick2: appdev1:/export/james_test
Brick3: hpbxdev:/export/james_test

[root@appdev0 glusterfs-3.6.2]# gluster volume heal james_test statistics
Gathering crawl statistics on volume james_test has been successful 
------------------------------------------------

Crawl statistics for brick no 0
Hostname of brick appdev0
....
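
When heal info claims an existing volume does not exist, the heal helper's own log may say why; on a default install it is expected to log to /var/log/glusterfs/glfsheal-<volname>.log (path assumed from the standard log directory):

# tail -n 50 /var/log/glusterfs/glfsheal-james_test.log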

Comment 14 mailbox 2015-05-20 12:18:23 UTC
Just installed GlusterFS 3.6.2 from the Ubuntu PPA (http://ppa.launchpad.net/gluster/glusterfs-3.6/ubuntu) on Ubuntu trusty 14.04 and am experiencing the same issue. Readline is installed, glfsheal is installed under /usr/sbin; the 6x2 distributed-replicated volume is started.
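
Note that comment 9 refers to /usr/bin/glfsheal while this package puts it under /usr/sbin; a quick check of which path actually exists (a sketch assuming a standard filesystem layout):

# ls -l /usr/bin/glfsheal /usr/sbin/glfsheal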

Comment 15 Roger Lehmann 2015-06-10 14:30:34 UTC
Same problem here after updating from 3.6.1 to 3.6.3 on one of my three cluster nodes. Now I'm afraid to update the other ones. Using Debian Wheezy.

Comment 16 Niels de Vos 2016-06-17 16:23:41 UTC
This bug is being closed because the 3.5 series is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.

