Bug 1314373
Summary: | Peer information is not propagated to all the nodes in the cluster when the peer is probed with its second interface FQDN/IP | |
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | SATHEESARAN <sasundar> |
Component: | glusterd | Assignee: | Kaushal <kaushal> |
Status: | CLOSED ERRATA | QA Contact: | Byreddy <bsrirama> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | |
Version: | rhgs-3.1 | CC: | amukherj, asrivast, kaushal, rhinduja, rhs-bugs, storage-qa-internal, vbellur |
Target Milestone: | --- | Keywords: | Regression, ZStream |
Target Release: | RHGS 3.1.3 | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.7.9-2 | Doc Type: | Bug Fix |
Doc Text: |
Cause:
The fix for bug #1291386 reduced the number of updates exchanged between glusterd instances. This change inadvertently suppressed the updates that need to be sent when a peer probe command attaches a new address to an existing peer.
Consequence:
The newly attached address was known only to the peer on which the peer probe command was issued. This could cause gluster volume commands using the new address to fail.
Fix:
glusterd now sends updates to all other nodes when a peer probe attaches a new address to an existing peer.
Result:
The new address is available on all nodes, and commands using it no longer fail.
|
Story Points: | --- | |
Clone Of: | 1314366 | Environment: | |
Last Closed: | 2016-06-23 05:10:12 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Bug Depends On: | 1314366 | |
Bug Blocks: | 1299184 | |
Description SATHEESARAN 2016-03-03 13:18:11 UTC

This is a regression caused by the fix for BZ 1291386.

The fix for BZ 1291386 reduced the number of updates sent when a peer that is already in the befriended state establishes a connection with another peer. Before that fix, updates were sent to all other peers when this happened; the fix changed this so that updates are exchanged only between the two peers involved. This was done by changing the action taken for an ACC or LOCAL_ACC event in the BEFRIENDED state, in the state table used by the peer state machine.

This caused a regression when attaching additional names to a peer using peer probe. Attaching another name to a peer in the befriended state gives rise to a LOCAL_ACC event, which leads to the updates being exchanged only between the two peers involved. The other peers never receive an update with the newly attached name, which can lead to command failures later. (An illustrative sketch of this state-table change appears at the end of this report.)

Upstream patch http://review.gluster.org/13817 posted for review.

This bug was accidentally moved from POST to MODIFIED via an error in automation; please see mmccune with any questions.

Downstream patch https://code.engineering.redhat.com/gerrit/#/c/71313/ is now merged. Moving the status to MODIFIED.

Verified this bug using the build "glusterfs-3.7.9-2.el7rhgs".

Steps followed to verify this bug (a small model of the propagation check appears at the end of this report):
==================================
1. Set up 3 RHGS nodes with 3.1.3 (node1, node2, and node3).
2. Probed node2 from node1:
   - using the IP of node2
   - using the FQDN of node2
   - using the short name of node2
3. Checked peer status on node1 and node2. It was correct: node1's peer status listed the short name and FQDN of node2 under "Other names".
4. Probed node3 from node1:
   - using the IP of node3
   - using the FQDN of node3
   - using the short name of node3
5. Checked peer status on node1, node2, and node3. It was correct:
   - node1's peer status listed the short name and FQDN of node2 under "Other names", and the short name and FQDN of node3 under "Other names".
   - node2's peer status listed the short name and FQDN of node3 under "Other names".
   - node3's peer status listed the short name and FQDN of node2 under "Other names".

With the above details, moving this bug to the VERIFIED state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
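To make the state-table change concrete, here is a minimal, self-contained C sketch of the behaviour described in the report. It is not the actual glusterd source: the types, handler names, and the `probe_attached_new_name` flag are all hypothetical, and the linked upstream/downstream patches are the authoritative implementation. The sketch only models the idea that the handler for an ACC/LOCAL_ACC event in the befriended state was narrowed from "send updates to all peers" to "send updates only to the peer involved", and that the fix restores the broadcast when a probe attaches a new address.

```c
#include <stdio.h>

/* Hypothetical peer states and events, modeled on the description above.
 * This is an illustrative sketch, not glusterd code. */
typedef enum { ST_DEFAULT, ST_BEFRIENDED, ST_MAX } peer_state_t;
typedef enum { EV_ACC, EV_LOCAL_ACC, EV_MAX } friend_event_t;

typedef void (*event_handler_t)(const char *peer, const char *new_name);

/* Pre-#1291386 behaviour: updates went to every peer in the cluster. */
static void send_update_to_all(const char *peer, const char *new_name)
{
    printf("update (%s -> %s) sent to ALL peers\n", peer, new_name);
}

/* Post-#1291386 behaviour: updates only between the two peers involved.
 * Correct for a plain reconnect, but it drops the newly attached name
 * that every other peer still needs to learn. */
static void send_update_to_peer(const char *peer, const char *new_name)
{
    printf("update (%s -> %s) sent only to %s\n", peer, new_name, peer);
}

/* State table: one handler per (state, event) pair. The regression was,
 * in effect, the narrowing of these two entries: */
static event_handler_t state_table[ST_MAX][EV_MAX] = {
    [ST_BEFRIENDED][EV_ACC]       = send_update_to_peer,
    [ST_BEFRIENDED][EV_LOCAL_ACC] = send_update_to_peer, /* regression path */
};

/* The fix, roughly: when the LOCAL_ACC comes from a probe that attaches a
 * new address to an already-befriended peer, broadcast again. */
static void handle_event(peer_state_t st, friend_event_t ev,
                         int probe_attached_new_name,
                         const char *peer, const char *new_name)
{
    if (st == ST_BEFRIENDED && ev == EV_LOCAL_ACC && probe_attached_new_name)
        send_update_to_all(peer, new_name);     /* fixed behaviour */
    else
        state_table[st][ev](peer, new_name);    /* unchanged paths */
}

int main(void)
{
    /* Reconnect of a known peer: a peer-local update is enough. */
    handle_event(ST_BEFRIENDED, EV_LOCAL_ACC, 0, "node2", "node2");
    /* Probe that attaches a second name: must reach all peers. */
    handle_event(ST_BEFRIENDED, EV_LOCAL_ACC, 1, "node2", "node2.example.com");
    return 0;
}
```

Run as written, the first call prints a peer-local update (the optimization #1291386 intended for reconnects) and the second prints a broadcast (the behaviour the fix restores for newly attached names).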
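Similarly, the verification steps check that names attached on one node become visible on nodes that took no part in the probe. The following sketch models just that propagation property. It is a toy model, not glusterd's peer bookkeeping: the IP and FQDN are placeholders, and `struct peer_view` is invented for illustration.

```c
#include <stdio.h>
#include <string.h>

#define MAX_NAMES 4
#define NODES 3

/* Each node's view of the names it knows for peer "node2". */
struct peer_view {
    const char *node;               /* whose view this is */
    const char *names[MAX_NAMES];   /* names known for node2 */
    int n;
};

/* Record a name in one node's view, ignoring duplicates. */
static void learn(struct peer_view *v, const char *name)
{
    if (v->n >= MAX_NAMES)
        return;
    for (int i = 0; i < v->n; i++)
        if (strcmp(v->names[i], name) == 0)
            return;
    v->names[v->n++] = name;
}

/* With the fix, a probe that attaches a new name is broadcast, so every
 * node's view learns it -- not just the two peers involved in the probe. */
static void probe_attach(struct peer_view views[], const char *name)
{
    for (int i = 0; i < NODES; i++)
        learn(&views[i], name);
}

int main(void)
{
    struct peer_view views[NODES] = {
        { "node1" }, { "node2" }, { "node3" }
    };

    /* Step 2 of the verification: probe node2 by IP, FQDN, short name. */
    probe_attach(views, "10.0.0.2");          /* placeholder IP */
    probe_attach(views, "node2.example.com"); /* placeholder FQDN */
    probe_attach(views, "node2");             /* short name */

    /* Steps 3 and 5: every node, including node3, must list all names. */
    for (int i = 0; i < NODES; i++) {
        printf("%s knows:", views[i].node);
        for (int j = 0; j < views[i].n; j++)
            printf(" %s", views[i].names[j]);
        printf("\n");
    }
    return 0;
}
```

The point of the model is the loop in `probe_attach`: before the fix, only the probing and probed nodes would have learned the new names, which is exactly what the peer-status checks on node3 were verifying.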