Bug 1283956 - Self-heal triggered every couple of seconds on a 3-node 1-arbiter setup
Summary: Self-heal triggered every couple of seconds on a 3-node 1-arbiter setup
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: unclassified
Version: 3.7.6
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1285634
 
Reported: 2015-11-20 11:39 UTC by Adrian Gruntkowski
Modified: 2016-04-19 07:49 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.7.7
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1285634 (view as bug list)
Environment:
Last Closed: 2016-02-19 04:46:45 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
Gluster logs from all the nodes (4.77 MB, application/x-gzip)
2015-11-20 11:41 UTC, Adrian Gruntkowski

Description Adrian Gruntkowski 2015-11-20 11:39:34 UTC
Description of problem:

I have a 3 node setup with 1 arbiter brick for every volume. Every volume contains a couple of big files with KVM disk images.

Every couple of minutes or seconds (probably depending on activity), a self-heal operation is triggered on one or more files on the volumes. During that time, there is no noticeable loss of connectivity or anything like that.

How reproducible:

Run "gluster volume heal volume-name info" a couple of times and observe the output.


Actual results:

root@web-vm:~# gluster volume heal system_www1 info
Brick cluster-rep:/GFS/system/www1
/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal

Number of entries: 1

Brick web-rep:/GFS/system/www1
/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal

Number of entries: 1

Brick mail-rep:/GFS/system/www1
/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal

Number of entries: 1

Expected results:

Self-heal should not be triggered without a reason.

Additional info:

Setting "cluster.self-heal-daemon" to "off" on the volumes does not change the behavior.

Comment 1 Adrian Gruntkowski 2015-11-20 11:41:06 UTC
Created attachment 1097125 [details]
Gluster logs from all the nodes

Comment 2 Ravishankar N 2015-11-20 12:07:51 UTC
Had a quick look at one of the mount logs for the 'system_www1' volume, i.e. glusterfs_cluster-vm/mnt-pve-system_www1.log.1, where I do see disconnects from the bricks.

#grep -rne "disconnected from" mnt-pve-system_www1.log.1|tail -n3
2177:[2015-11-19 15:58:32.687248] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-system_www1-client-0: disconnected from system_www1-client-0. Client process will keep trying to connect to glusterd until brick's port is available
2283:[2015-11-19 15:58:43.486658] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-system_www1-client-0: disconnected from system_www1-client-0. Client process will keep trying to connect to glusterd until brick's port is available
2385:[2015-11-19 15:58:43.557338] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-system_www1-client-2: disconnected from system_www1-client-2. Client process will keep trying to connect to glusterd until brick's port is available

So it appears that there are network disconnects from the mount to the bricks. If I/O was happening during the disconnects, self-heal will get triggered when the connection is re-established.

Adrian, could you confirm, as you said on IRC, whether it could be an issue with your firewall/network? If yes, I'll close this as NOTABUG.

Comment 3 Ravishankar N 2015-11-20 12:11:14 UTC
(In reply to Adrian Gruntkowski from comment #0)

> Setting "cluster.self-heal-daemon" to "off" on the volumes does not change
> the behavior.

Clients (mounts) can also trigger self-heals, in addition to the self-heal daemon. If you want to disable client-side heals, you need to set cluster.metadata-self-heal, cluster.data-self-heal and cluster.entry-self-heal to off.
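For example, a sketch of turning all three client-side heal types off (again using system_www1 as the volume name):

gluster volume set system_www1 cluster.metadata-self-heal off
gluster volume set system_www1 cluster.data-self-heal off
gluster volume set system_www1 cluster.entry-self-heal off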

Comment 4 Adrian Gruntkowski 2015-11-20 13:06:01 UTC
The entries that you mentioned have timestamps from yesterday. I did a couple of server restarts and was fiddling with applying the patch, and so forth. The logs look clean for today in that regard.

I have double-checked the logs for interface flapping and firewall rules, but everything seems fine. Pings on the interfaces dedicated to Gluster between the nodes go through without any losses.

Ravishankar: Sure, I was changing that setting in the course of an experiment that Pranith wanted me to do. I just mentioned it for completeness.

Comment 5 Pranith Kumar K 2015-11-24 14:22:00 UTC
hi Adrian,
      I looked at the pcap files and found nothing unusual. So I think we are left with trying to re-create the problem. Do you think we can come up with a way to recreate this problem consistently?

Pranith

Comment 6 Adrian Gruntkowski 2015-11-24 14:30:06 UTC
My setup is pretty basic, save for a crossover configuration of two sets of volumes.
I have actually laid it out in my initial post about the issue on the mailing list:

http://www.gluster.org/pipermail/gluster-users/2015-October/024078.html

For the time being, I'm rolling back to a 2-node setup. I will also try to set up a cluster with an arbiter in a local test environment on VirtualBox-based VMs.

Adrian

Comment 7 Pranith Kumar K 2015-11-24 16:11:32 UTC
Adrian,
     So you don't see this without Arbiter?

Pranith

Comment 8 Adrian Gruntkowski 2015-11-25 09:00:23 UTC
Yes, I see it only in the arbiter setup.

Adrian

Comment 9 Vijay Bellur 2015-11-26 05:01:57 UTC
REVIEW: http://review.gluster.org/12755 (cluster/afr: change data self-heal size check for arbiter) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 10 Pranith Kumar K 2015-11-26 05:02:43 UTC
Adrian,
    I was able to recreate this issue.

Steps to recreate:
1) Create a volume with arbiter, start the volume and mount the volume
2) On the mount point execute "dd if=/dev/zero of=a.txt"
3) While the command above is running, execute "gluster volume heal <volname> info" in a loop. We will see pending entries to be healed.
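For reference, a rough shell sketch of these steps (the hostnames h1/h2/h3, brick paths and mount point are made up for illustration; an arbiter volume is created with "replica 3 arbiter 1", so the third brick becomes the arbiter):

# 1) Create, start and mount a replica-3 volume whose third brick is the arbiter
gluster volume create testvol replica 3 arbiter 1 h1:/bricks/testvol h2:/bricks/testvol h3:/bricks/testvol
gluster volume start testvol
mount -t glusterfs h1:/testvol /mnt/testvol

# 2) Keep writing to a file on the mount point
dd if=/dev/zero of=/mnt/testvol/a.txt

# 3) In another shell, poll heal-info while the dd is running
while true; do gluster volume heal testvol info; sleep 1; done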

With the patch in https://bugzilla.redhat.com/show_bug.cgi?id=1283956#c9
I don't see the issue anymore. Let me know how your testing goes with this patch.

Pranith

Comment 11 Vijay Bellur 2015-11-26 06:53:58 UTC
REVIEW: http://review.gluster.org/12755 (cluster/afr: change data self-heal size check for arbiter) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 12 Vijay Bellur 2015-11-26 18:25:01 UTC
REVIEW: http://review.gluster.org/12768 (cluster/afr: change data self-heal size check for arbiter) posted (#1) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)

Comment 13 Vijay Bellur 2015-11-27 10:17:38 UTC
REVIEW: http://review.gluster.org/12768 (cluster/afr: change data self-heal size check for arbiter) posted (#2) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)

Comment 14 Vijay Bellur 2016-01-23 05:20:48 UTC
REVIEW: http://review.gluster.org/12768 (cluster/afr: change data self-heal size check for arbiter) posted (#3) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)

Comment 15 Vijay Bellur 2016-01-27 02:46:38 UTC
REVIEW: http://review.gluster.org/12768 (cluster/afr: change data self-heal size check for arbiter) posted (#4) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)

Comment 16 Vijay Bellur 2016-01-28 14:19:14 UTC
REVIEW: http://review.gluster.org/12768 (cluster/afr: change data self-heal size check for arbiter) posted (#5) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)

Comment 17 Vijay Bellur 2016-01-31 02:27:03 UTC
COMMIT: http://review.gluster.org/12768 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu) 
------
commit 5907d0b4d097cc625c7205963197d9b7e9b40573
Author: Pranith Kumar K <pkarampu>
Date:   Thu Nov 26 10:27:37 2015 +0530

    cluster/afr: change data self-heal size check for arbiter
    
    The size-mismatch check should take into account that the arbiter brick will have a
    zero-size file, to prevent data self-heal from being spuriously triggered / assuming
    a need for self-heal.
    
     >Change-Id: I179775d604236b9c8abfa360657abbb36abae829
     >BUG: 1285634
     >Signed-off-by: Pranith Kumar K <pkarampu>
     >Reviewed-on: http://review.gluster.org/12755
     >Reviewed-by: Ravishankar N <ravishankar>
     >Tested-by: Gluster Build System <jenkins.com>
     >Tested-by: NetBSD Build System <jenkins.org>
     >(cherry picked from commit 8d2594d77127ba7ea07a0d68afca0939e1817e39)
    
    Change-Id: I90243c01d6d83f46475c975a9bd34d9de84b87da
    BUG: 1283956
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/12768
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
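
The idea behind the change: the arbiter brick stores only metadata, so its copy of a file is always zero bytes, and a size comparison that includes the arbiter replica will always look like a mismatch while writes are in flight. A hedged illustration against the bricks from the heal-info output above (assuming mail-rep hosts the arbiter brick, i.e. the third brick of the replica set; this is shell only, not the actual AFR code):

# On the data bricks the file has its real size
stat -c '%s %n' /GFS/system/www1/images/101/vm-101-disk-1.qcow2     # run on cluster-rep and web-rep

# On the arbiter brick the same path is a zero-byte file, so the
# data self-heal size check has to skip the arbiter replica
stat -c '%s %n' /GFS/system/www1/images/101/vm-101-disk-1.qcow2     # run on mail-rep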

Comment 18 Ravishankar N 2016-02-19 04:46:45 UTC
v3.7.7 contains the fix.

Comment 19 Kaushal 2016-04-19 07:49:02 UTC
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.7, please open a new bug report.

glusterfs-3.7.7 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-February/025292.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

