Bug 1090298
| Summary: | Addition of new server after upgrade from 3.3 results in peer rejected | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Awktane <bmackie> |
| Component: | core | Assignee: | Ravishankar N <ravishankar> |
| Status: | CLOSED WONTFIX | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.4.3 | CC: | gluster-bugs, kkeithle, ravishankar |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-05-21 14:54:37 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1095324 | | |
Description (Awktane, 2014-04-23 05:56:52 UTC)
REVIEW: http://review.gluster.org/7729 (glusterd: update op-version info during upgrades.) posted (#1) for review on release-3.4 by Ravishankar N (ravishankar)

---

The patch to fix this is being abandoned for reasons described in the review comments.

Proposed solution (sic): Once all the peers have been upgraded, the user must do a dummy volume set operation on all volumes. This ensures that the volume information and checksums are updated correctly, which allows probing new peers without any problem. For example:

    # gluster volume set <name> brick-log-level INFO

(This won't have any effect on the operation of the volume, since the default log level is already INFO, but it does update the volume info and checksums.)

---

Alright, for the existing folks like me who just added those two lines, are there any ramifications? I think I did do a volume set to remove lookup-unhashed, as it was causing a bunch of files/folders to error out. I assumed this was due to the re-balancing state.

---

(In reply to Awktane from comment #3)
> Alright, for the existing folks like me who just added those two lines, are
> there any ramifications? I think I did do a volume set to remove
> lookup-unhashed, as it was causing a bunch of files/folders to error out. I
> assumed this was due to the re-balancing state.

I don't think it should matter. If the peer status shows all peers in the connected state, then we are good.
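A minimal sketch of the proposed workaround as a shell loop, for anyone applying it across many volumes. It assumes Bash, root access on one of the upgraded peers, and that the `gluster volume list` subcommand is available in your build; if it is not, substitute an explicit list of volume names. It simply runs the dummy `volume set` against every volume so the volume info and checksums are rewritten.

```sh
#!/bin/bash
# Sketch of the dummy "volume set" workaround described above.
# Assumes: all peers have already been upgraded, and
# `gluster volume list` prints one volume name per line.
for vol in $(gluster volume list); do
    # Setting brick-log-level to its default (INFO) changes no behaviour,
    # but forces glusterd to regenerate the volume info and checksums.
    gluster volume set "$vol" brick-log-level INFO
done
```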
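And a short verification step reflecting the "all peers in connected state" check mentioned in the final reply; the hostname below is hypothetical:

```sh
# Confirm no peer is stuck in "Peer Rejected" before adding the new node;
# every peer listed should report a state ending in "(Connected)".
gluster peer status

# Only then probe the new server (hypothetical hostname):
gluster peer probe new-server.example.com
```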