Description of problem:
'gluster peer status' shows "State: Peer in Cluster (Connected)" when one host is inaccessible (network down), and 'gluster volume info' shows an Online status of 'Y' for the remote bricks.

Version-Release number of selected component (if applicable):
Gluster 3.7.3

How reproducible:
Always

Steps to Reproduce:
1. Create a replica 2 cluster and bring the volume online
2. Run 'gluster peer status'
3. Take one of the two hosts offline (disconnect its network interfaces)
4. Run 'gluster peer status' from either host
5. Run 'gluster volume info'

Actual results:
State: Peer in Cluster (Connected), and the bricks on both nodes show an Online status of 'Y'.

Expected results:
State should be "Peer in Cluster (Disconnected)", and on each node the remote node's brick should show an Online status of 'N'.
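For illustration, a minimal reproduction sketch along the lines of the steps above. The hostnames (host1, host2), interface name (eth0), brick paths and volume name (testvol) are placeholders, not the actual setup; note that the per-brick Online (Y/N) column is printed by 'gluster volume status'.

# On host1: form the cluster and create a replica 2 volume
gluster peer probe host2
gluster volume create testvol replica 2 host1:/bricks/b1 host2:/bricks/b1
# (append 'force' if the brick directories sit on the root filesystem)
gluster volume start testvol

# Baseline state
gluster peer status
gluster volume status testvol     # Online column shows Y for both bricks

# On host2: simulate the outage by downing its network interface
ip link set eth0 down

# Back on host1, a minute or so later
gluster peer status               # actual result: still "Peer in Cluster (Connected)"
gluster volume status testvol     # actual result: remote brick still Online: Y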
Could you upload all glusterd log files?
Created attachment 1069586 [details] gluster logs node1
Created attachment 1069587 [details] gluster logs node2
Created attachment 1069588 [details] gluster logs node3
Logs attached. At ~13:27 PST (20:27 UTC) I took down the network interface on node3 (10.0.231.52). After about a minute:

node1 reports:

# gluster peer status
Number of Peers: 2

Hostname: 10.0.231.61
Uuid: 3ae42ed6-bf0a-4592-b96d-f799220781a9
State: Peer in Cluster (Connected)

Hostname: 10.0.231.52
Uuid: e5c2488b-69b3-419a-8ec3-210a3fc149e5
State: Peer in Cluster (Connected)

node2 reports:

# gluster peer status
Number of Peers: 2

Hostname: 10.0.231.60
Uuid: ba1f9caa-17a2-4484-a668-d63537defd2f
State: Peer in Cluster (Connected)

Hostname: 10.0.231.52
Uuid: e5c2488b-69b3-419a-8ec3-210a3fc149e5
State: Peer in Cluster (Connected)

node3 reports:

# gluster peer status
Number of Peers: 2

Hostname: 10.0.231.61
Uuid: 3ae42ed6-bf0a-4592-b96d-f799220781a9
State: Peer in Cluster (Disconnected)

Hostname: 10.0.231.60
Uuid: ba1f9caa-17a2-4484-a668-d63537defd2f
State: Peer in Cluster (Disconnected)
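For anyone re-testing this, a simple polling loop run from node1 makes the (missing) Connected -> Disconnected transition easy to spot. The address is taken from the output above; the 60-second interval is arbitrary:

while true; do
    date
    # print the Hostname/Uuid/State block for the downed peer
    gluster peer status | grep -A2 '10.0.231.52'
    sleep 60
done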
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS. If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.