Description of problem:
The gluster CLI hangs for tens of seconds when nameservers are configured in /etc/resolv.conf.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Add a nameserver to /etc/resolv.conf.
2. Execute the CLI command "gluster volume heal storage info split-brain".
3. The CLI command hangs for several seconds; the delay grows with the number of nameservers configured on the host.

Actual results:
# time gluster volume heal storage info split-brain
Brick 192.168.2.5:/.storage
Status: Connected
Number of entries in split-brain: 0

Brick 192.168.2.6:/.storage
Status: Connected
Number of entries in split-brain: 0

real    0m45.451s
user    0m0.044s
sys     0m0.027s

Expected results:
sbg_SC-1:~ # time gluster volume heal storage info split-brain
Brick 192.168.2.5:/.storage
Status: Connected
Number of entries in split-brain: 0

Brick 192.168.2.6:/.storage
Status: Connected
Number of entries in split-brain: 0

real    0m0.699s
user    0m0.058s
sys     0m0.011s

Additional info:
According to a tcpdump capture, glusterfs always sends a DNS query to the nameservers configured on the host, and the queried name is "/var/run/glusterd.socket". I believe these queries are what make the CLI command hang.
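For illustration only, the suspected failure mode can be sketched in Python (classify_endpoint, connect_endpoint, and port 24007 are hypothetical names used here, not glusterfs internals): a Unix socket path must be recognized before the string reaches the resolver, because handing a path to getaddrinfo() makes libc issue a DNS query for the literal string, which blocks until every nameserver in /etc/resolv.conf times out.

```python
import socket

def classify_endpoint(endpoint):
    """Return 'unix' for filesystem socket paths, 'inet' otherwise.

    A string such as "/var/run/glusterd.socket" should never reach
    getaddrinfo(); if it does, libc sends a DNS query for the literal
    path and blocks until every configured nameserver times out,
    matching the ~45 s delay reported above.
    """
    return "unix" if endpoint.startswith("/") else "inet"

def connect_endpoint(endpoint, port=24007):
    """Connect via AF_UNIX for paths, resolver + AF_INET otherwise."""
    if classify_endpoint(endpoint) == "unix":
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(endpoint)  # no DNS lookup for a filesystem path
        return sock
    # Hostname or IP literal: resolve first, then connect.
    family, socktype, proto, _, addr = socket.getaddrinfo(
        endpoint, port, type=socket.SOCK_STREAM)[0]
    sock = socket.socket(family, socktype, proto)
    sock.connect(addr)
    return sock
```

With this guard in place, "gluster volume heal ... info split-brain" style commands that talk to the local glusterd over its Unix socket would never touch DNS at all.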
# gluster --version
glusterfs 4.1.5
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.
Created attachment 1613025 [details] dns tcp dump
There are 2 workarounds we use now:
1. Delete the nameserver entries in /etc/resolv.conf.
2. Add "192.168.2.5 /var/run/glusterd.socket" to /etc/hosts.
Hi Tim,

I have a nameserver added to /etc/resolv.conf (generated by NetworkManager), and I am still able to execute all the CLI commands without any delay. I suspect it is a configuration issue on your setup. Please check and get back.

Thanks,
Sanju
As this is working in my setup without any issue, I'm closing this bug as worksforme. If you see this issue again, please feel free to re-open.