Bug 1293273
Summary: [GlusterD]: Peer detach happening with a node which is hosting volume bricks
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Byreddy <bsrirama>
Component: glusterd
Assignee: Atin Mukherjee <amukherj>
Status: CLOSED ERRATA
QA Contact: Byreddy <bsrirama>
Severity: high
Priority: unspecified
Version: rhgs-3.1
CC: amukherj, asrivast, nlevinki, rhinduja, sasundar, vbellur
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.1.3
Hardware: x86_64
OS: Linux
Whiteboard: GlusterD
Fixed In Version: glusterfs-3.7.9-1
Doc Type: Bug Fix
Story Points: ---
Clone Of: 1293414 (view as bug list)
Last Closed: 2016-06-23 04:59:48 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Bug Blocks: 1293414, 1297305, 1299184
Description
Byreddy
2015-12-21 09:38:43 UTC
Looks like this is a day-zero bug, and here is why: when step 4 was executed, the probed node (say N3) imported the volumes from the probing node (N1), but at that point it had no membership information about the other node (N2), since the peer update happens after the volume updates. It therefore failed to resolve the uuid of N2's brick. Even after N2 later updated N3 about its membership, the brick's uuid was never populated. As a consequence, when N3 initiates a detach of N2, it checks whether the node being detached has any bricks configured by comparing uuids; since the brick's uuid is NULL in this case, the check finds no match, and the peer is removed when it should not be.

One fix would be to update the peer list before importing volume data so that these inconsistencies cannot arise, but that is itself a significant effort. Another option is to import the brick's uuid and simply update it instead of resolving it; this still needs validation.

http://review.gluster.org/13047 is posted for review upstream.

The fix is now available in the rhgs-3.1.3 branch, hence moving the state to Modified.

Verified this issue using the build "glusterfs-3.7.9-1". Repeated the reproducing steps mentioned in the description section; the fix works correctly and no longer allows detaching a node that is hosting bricks. Moving to Verified with the above details.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
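The flawed check described above can be modeled in a few lines. The sketch below is illustrative only, not glusterd's actual C implementation; all names (`Brick`, `peer_hosts_bricks`, the uuids) are hypothetical. It shows how a brick whose uuid was never resolved (left as `None`) slips past a uuid-only comparison, so the detach is wrongly allowed, and how a fallback comparison (in the spirit of the proposed "import the uuid / update instead of resolving" direction) would catch it.

```python
# Minimal model of the buggy peer-detach check. Hypothetical names;
# NOT glusterd's real code.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Brick:
    hostname: str
    uuid: Optional[str]  # stays None when resolution failed during import


def peer_hosts_bricks(peer_uuid: str, bricks: List[Brick]) -> bool:
    """Buggy check: compares only by uuid, so a brick whose uuid was
    never resolved (None) is silently skipped."""
    return any(b.uuid == peer_uuid for b in bricks)


def peer_hosts_bricks_fixed(peer_uuid: str, peer_hostname: str,
                            bricks: List[Brick]) -> bool:
    """Sketch of a safer check: fall back to the hostname when the
    brick's uuid was never resolved."""
    return any(
        b.uuid == peer_uuid or (b.uuid is None and b.hostname == peer_hostname)
        for b in bricks
    )


# N3 imported the volume before learning about N2, so the uuid of
# N2's brick was never resolved and remained None.
bricks = [Brick("N1", "uuid-n1"), Brick("N2", None)]

# Buggy path: no brick matches N2's uuid, so the detach is allowed
# even though N2 hosts a brick.
print(not peer_hosts_bricks("uuid-n2", bricks))          # True (detach proceeds)

# Fixed path: the hostname fallback finds N2's brick, blocking the detach.
print(not peer_hosts_bricks_fixed("uuid-n2", "N2", bricks))  # False (detach refused)
```

This mirrors the verified behavior: after the fix, a peer that still hosts bricks can no longer be detached.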