Bug 1434412
Summary: | Brick Multiplexing: Volume gets unmounted when glusterd is restarted | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Nag Pavan Chilakam <nchilaka> |
Component: | core | Assignee: | Jeff Darcy <jeff> |
Status: | CLOSED EOL | QA Contact: | |
Severity: | urgent | Docs Contact: | |
Priority: | unspecified | ||
Version: | 3.10 | CC: | amukherj, bugs, jeff, joe, nchilaka, sasundar |
Target Milestone: | --- | Keywords: | Triaged |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | If docs needed, set a value | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2018-06-20 18:26:00 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Nag Pavan Chilakam
2017-03-21 13:07:03 UTC
I don't know what exactly made it into 3.10.0, but this looks like something that was already fixed: https://review.gluster.org/#/c/16886/. Please verify whether you have that patch in the version you're using.

*** Bug 1434617 has been marked as a duplicate of this bug. ***

That is a very partial fix(?) that prevents the client from closing. The rest of the problem is that the clients should get the complete list of volume member servers and be able to connect to any of them after the initial volfile retrieval. If the volume is changed, for example with an add-brick, replace-brick, or remove-brick, the list of known servers should also be updated. I suggest the volume members, as opposed to the peers, because there is no guarantee the client will have network access to all of the peers, nor should that be a requirement. There may be good reason for a peer group to allow access to one volume from one network, but not allow access to a different volume hosted by the same peer group, for management purposes.

Just to be clear, are you saying that's a feature that should be added, or a feature that used to exist but has regressed?

IMHO, it's a bug that I've been forgetting to file for years. It's critical because if the mount server fails and is replaced with a new one, the clients will never connect to a glusterd again unless remounted.

Isn't this why we have the backup-volfile-server option in place?

In a cloud environment, or on Kubernetes, you don't necessarily have control over which nodes are going to die or be replaced. The management connection really needs to be dynamic.

This bug is reported against a version of Gluster that is no longer maintained (it has been EOL'd). See https://www.gluster.org/release-schedule/ for the versions currently maintained. As a result, this bug is being closed. If the bug persists on a maintained version of Gluster or against the mainline Gluster repository, please request that it be reopened and that the Version field be updated appropriately.
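For context, the backup-volfile-server mechanism discussed above is supplied at mount time on the client. A minimal sketch of how it is typically used follows; the hostnames (`server1`, `server2`, `server3`), volume name (`myvol`), and mount point are hypothetical examples, not taken from this bug report:

```
# Mount a Gluster volume natively, listing fallback servers that the
# client may try for the *initial* volfile fetch if server1 is down.
# (Hostnames, volume name, and mount point are illustrative only.)
mount -t glusterfs -o backup-volfile-servers=server2:server3 \
    server1:/myvol /mnt/gluster

# Roughly equivalent /etc/fstab entry:
# server1:/myvol  /mnt/gluster  glusterfs  defaults,backup-volfile-servers=server2:server3  0 0
```

As the comments point out, this list is static: it only helps at volfile-retrieval time and is not refreshed when the set of volume servers changes, which is exactly the gap being argued about in this bug.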