Description of problem:
I have a 3-node setup with one arbiter brick for every volume. Each volume contains a couple of big files holding KVM disk images. Every couple of minutes/seconds (probably depending on activity), a self-heal operation is triggered on one or more files on the volumes. During that time there is no noticeable loss of connectivity or anything like that.

How reproducible:
Run "gluster volume heal volume-name info" a couple of times and observe the output.

Actual results:
root@web-vm:~# gluster volume heal system_www1 info
Brick cluster-rep:/GFS/system/www1
/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal
Number of entries: 1

Brick web-rep:/GFS/system/www1
/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal
Number of entries: 1

Brick mail-rep:/GFS/system/www1
/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal
Number of entries: 1

Expected results:
Heal not being triggered without a reason.

Additional info:
Setting "cluster.self-heal-daemon" to "off" on the volumes does not change the behavior.
Created attachment 1097125 [details] Gluster logs from all the nodes
Had a quick look at one of the mount logs for the 'system_www1' volume, i.e. glusterfs_cluster-vm/mnt-pve-system_www1.log.1, where I do see disconnects to the bricks.

# grep -rne "disconnected from" mnt-pve-system_www1.log.1 | tail -n3
2177:[2015-11-19 15:58:32.687248] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-system_www1-client-0: disconnected from system_www1-client-0. Client process will keep trying to connect to glusterd until brick's port is available
2283:[2015-11-19 15:58:43.486658] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-system_www1-client-0: disconnected from system_www1-client-0. Client process will keep trying to connect to glusterd until brick's port is available
2385:[2015-11-19 15:58:43.557338] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-system_www1-client-2: disconnected from system_www1-client-2. Client process will keep trying to connect to glusterd until brick's port is available

So it appears that there are network disconnects from the mount to the bricks. If I/O was happening during the disconnects, self-heal will get triggered when the connection is re-established. Adrian, could you confirm, like you said on IRC, whether it could be an issue with your firewall/network? If yes, I'll close this as NOTABUG.
(In reply to Adrian Gruntkowski from comment #0)
> Setting "cluster.self-heal-daemon" to "off" on the volumes does not change
> the behavior.

Clients (mounts) can also trigger self-heals in addition to the self-heal daemon. If you want to disable client-side heal, you need to set cluster.metadata-self-heal, cluster.data-self-heal and cluster.entry-self-heal to off.
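For reference, disabling all client-side heals as described above looks roughly like this (a sketch; the volume name "system_www1" is taken from the original report, substitute your own):

```shell
# Disable client-side (mount-triggered) self-heals; all three options
# must be turned off, per the comment above.
gluster volume set system_www1 cluster.metadata-self-heal off
gluster volume set system_www1 cluster.data-self-heal off
gluster volume set system_www1 cluster.entry-self-heal off

# The self-heal daemon is controlled by a separate option:
gluster volume set system_www1 cluster.self-heal-daemon off
```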
The entries that you mentioned have timestamps from yesterday. I did a couple of server restarts and was fiddling with applying the patch and so forth. The logs look clean for today in that regard. I have double-checked the logs for interface flapping and firewall rules, but everything seems fine. The pings on the interfaces dedicated to gluster between the nodes go through without any losses.

Ravishankar: Sure, I was changing that setting in the course of an experiment that Pranith wanted me to do. Just mentioned it for completeness.
hi Adrian,

I looked at the pcap files and found nothing unusual. So I think we are left with trying to re-create the problem. Do you think we can come up with a way to recreate this problem consistently?

Pranith
My setup is pretty basic, save for the crossover configuration of 2 sets of volumes. I have actually laid it out in the initial post on the ML about the issue:

http://www.gluster.org/pipermail/gluster-users/2015-October/024078.html

For the time being, I'm rolling back to a 2-node setup. I will also try to set up a cluster with an arbiter in a local test env on VirtualBox-based VMs.

Adrian
Adrian, So you don't see this without Arbiter? Pranith
Yes, I see it only in the arbiter setup.

Adrian
REVIEW: http://review.gluster.org/12755 (cluster/afr: change data self-heal size check for arbiter) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
Adrian,

I was able to recreate this issue. Steps to recreate:
1) Create a volume with arbiter, start the volume and mount it
2) On the mount point, execute "dd if=/dev/zero of=a.txt"
3) While the command above is running, execute "gluster volume heal <volname> info" in a loop. We will see pending entries to be healed.

With the patch in https://bugzilla.redhat.com/show_bug.cgi?id=1283956#c9 I don't see the issue anymore. Let me know how your testing goes with this patch.

Pranith
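The reproduction steps above can be sketched as a shell session (illustrative only, since it needs a live cluster; the volume name "testvol", hosts, brick paths and mount point are placeholders, not from the report):

```shell
# 1) Create and start a replica-3 volume with one arbiter brick,
#    then mount it (hosts and paths are placeholders).
gluster volume create testvol replica 3 arbiter 1 \
    host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/arb
gluster volume start testvol
mount -t glusterfs host1:/testvol /mnt/testvol

# 2) Write continuously on the mount point.
dd if=/dev/zero of=/mnt/testvol/a.txt &

# 3) Poll heal info; without the fix, entries spuriously show up
#    as pending heals while the write is in progress.
while true; do
    gluster volume heal testvol info
    sleep 1
done
```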
REVIEW: http://review.gluster.org/12755 (cluster/afr: change data self-heal size check for arbiter) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/12768 (cluster/afr: change data self-heal size check for arbiter) posted (#1) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/12768 (cluster/afr: change data self-heal size check for arbiter) posted (#2) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/12768 (cluster/afr: change data self-heal size check for arbiter) posted (#3) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/12768 (cluster/afr: change data self-heal size check for arbiter) posted (#4) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/12768 (cluster/afr: change data self-heal size check for arbiter) posted (#5) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/12768 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu)
------
commit 5907d0b4d097cc625c7205963197d9b7e9b40573
Author: Pranith Kumar K <pkarampu>
Date:   Thu Nov 26 10:27:37 2015 +0530

    cluster/afr: change data self-heal size check for arbiter

    Size mismatch should consider that arbiter brick will have zero size
    file to prevent data self-heal from spuriously triggering/assuming
    need of self-heals.

    >Change-Id: I179775d604236b9c8abfa360657abbb36abae829
    >BUG: 1285634
    >Signed-off-by: Pranith Kumar K <pkarampu>
    >Reviewed-on: http://review.gluster.org/12755
    >Reviewed-by: Ravishankar N <ravishankar>
    >Tested-by: Gluster Build System <jenkins.com>
    >Tested-by: NetBSD Build System <jenkins.org>
    >(cherry picked from commit 8d2594d77127ba7ea07a0d68afca0939e1817e39)

    Change-Id: I90243c01d6d83f46475c975a9bd34d9de84b87da
    BUG: 1283956
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/12768
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
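The gist of the fix can be illustrated with a small sketch (Python for clarity; the real change is in the C code of the afr translator, and the function below is hypothetical, not from the patch): an arbiter brick stores only metadata and therefore always reports a zero-size file, so a size comparison across all bricks always "mismatches" and spuriously flags a data self-heal; the size check must skip the arbiter brick.

```python
# Hedged sketch of the size-mismatch logic, not the actual AFR code.

def sizes_need_heal(sizes, arbiter_index=None):
    """Return True if a data self-heal should be triggered based on a
    file-size mismatch across bricks.

    sizes         -- per-brick file sizes, one entry per brick
    arbiter_index -- index of the arbiter brick, which stores only
                     metadata and so always reports size 0 (or None
                     for a plain replica volume without an arbiter)
    """
    # Compare only the data bricks; the arbiter's zero size is expected
    # and must not count as a mismatch -- this is the gist of the fix.
    data_sizes = [s for i, s in enumerate(sizes) if i != arbiter_index]
    return len(set(data_sizes)) > 1
```

Before the fix, the equivalent check compared all three sizes, so a healthy replica 3 arbiter 1 volume with identical data bricks still looked mismatched because of the arbiter's 0-byte file.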
GlusterFS v3.7.7 contains a fix for this issue.
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.7, please open a new bug report.

glusterfs-3.7.7 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-February/025292.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user