Bug 1429117
| Field | Value |
| --- | --- |
| Summary | auth failure after upgrade to GlusterFS 3.10 |
| Product | [Community] GlusterFS |
| Reporter | Flexadot <news> |
| Component | core |
| Assignee | bugs <bugs> |
| Status | CLOSED CURRENTRELEASE |
| Severity | urgent |
| Priority | unspecified |
| Version | 3.10 |
| CC | amukherj, bordas.csaba, bugs, halgravity, hiscal, michalon, news |
| Keywords | Triaged |
| Hardware | Unspecified |
| OS | Linux |
| Fixed In Version | glusterfs-3.10.1 |
| Doc Type | If docs needed, set a value |
| Clones | 1433815 (view as bug list) |
| Last Closed | 2017-04-05 00:01:42 UTC |
| Type | Bug |
| Bug Blocks | 1427207, 1433815, 1437332 |
Description
Flexadot, 2017-03-04 16:10:22 UTC
Can you provide the entire logs, including bricks, glusterd and the glusterfs client? It will also be easier if you can take a tcpdump from the server and the client.

I am stumbling on the same problem. Setting the log level to DEBUG (gluster volume set volname diagnostics.brick-log-level DEBUG), the first interesting thing I got was:

    allowed = "192.168.122.186", received addr = "R"

Then some time afterwards:

    allowed = "192.168.122.186", received addr = "m"

So it looked like we were reading some random memory. And indeed, looking into the source code, between 3.9 and 3.10 the big switch/case filling peer_addr disappeared from xlators/protocol/auth/addr/src/addr.c. I think this is enough to tell that there is some problem here :)

auth failures need not be in glusterd; moving this to the core component.

I have the same issue. I think this is critical, please help.

On a fresh install of 3.10 I have the same symptoms. I'm running a single-node install and it's impossible to mount a share over the local network.

REVIEW: https://review.gluster.org/16967 (protocol : fix auth-allow regression) posted (#1) for review on release-3.10 by Atin Mukherjee (amukherj)

COMMIT: https://review.gluster.org/16967 committed in release-3.10 by Shyamsundar Ranganathan (srangana)

------

commit bbf83e34d78e064befe816edf71a9ee5c2c5c209
Author: Atin Mukherjee <amukherj>
Date:   Mon Mar 20 05:15:25 2017 +0530

    protocol : fix auth-allow regression

    One of the brick multiplexing patches (commit 1a95fc3) made changes in
    the gf_auth () and server_setvolume () functions which broke the
    auth-allow feature: a mount doesn't succeed even if the client is part
    of the auth-allow list.

    This fix does the following:
    1. Reintroduce the peer-info data in gf_auth () so that fnmatch has
       valid input and can decide on the result.
    2. The config-params dict should capture key-value pairs for all the
       bricks in case brick multiplexing is on.
    In case brick multiplexing isn't enabled, config-params should carry
    attributes from protocol/server such that all rpc-auth related
    attributes stay intact in the dictionary.

    > Reviewed-on: https://review.gluster.org/16920
    > Tested-by: Jeff Darcy <jeff.us>
    > Smoke: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Reviewed-by: Jeff Darcy <jeff.us>
    > Reviewed-by: MOHIT AGRAWAL <moagrawa>
    > (cherry picked from commit 0bd58241143e91b683a3e5c4335aabf9eed537fe)

    Change-Id: I007c4c6d78620a896b8858a29459a77de8b52412
    BUG: 1429117
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: https://review.gluster.org/16967
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.1, please open a new bug report.

glusterfs-3.10.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-April/030494.html
[2] https://www.gluster.org/pipermail/gluster-users/