Bug 1416251
| Summary: | [SNAPSHOT] With all USS plugin enable .snaps directory is not visible in cifs mount as well as windows mount | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Atin Mukherjee <amukherj> |
| Component: | glusterd | Assignee: | bugs <bugs> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | amukherj, bugs, rcyriac, rhinduja, rhs-bugs, rhs-smb, rjoseph, rtalur, sbhaloth, vdas |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.11.0 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1411270 | | |
| : | 1417521 (view as bug list) | Environment: | |
| Last Closed: | 2017-05-30 18:39:31 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1411270, 1417521 | | |
Comment 1
Atin Mukherjee
2017-01-25 04:35:34 UTC
RCA: The client gets its volfile from the server (glusterd), and based on the options in the volfile it connects to the bricks and other services (e.g. snapd). The volfile carries the information about which brick/service to connect to, including the hostname. The first thing a client does as part of a connection is resolve that hostname to an IP address. Resolution is done by a DNS server, by a local DNS cache (if the OS is configured with one), or by something primitive like /etc/hosts. A hostname can resolve to multiple IP addresses, including both IPv4 and IPv6.

Gluster also has a kind of internal DNS cache. All the IP addresses received during hostname resolution are kept in this cache, and each time we resolve the hostname, the IPs from this list are returned one after another. So if we get "::1" and "127.0.0.1" as addresses, the first call to resolve the hostname returns "::1" and the second call returns "127.0.0.1".

Now let's look at how a client connects to a brick or a service. First it connects to glusterd to get the port number of the brick/service; once it has the port number, it connects to the brick/service itself. So say we got the port number from glusterd and the client is now trying to connect to the brick/service. During hostname resolution we got the addresses "::1" and "127.0.0.1", so it first tries to reach the brick/service via "::1". This obviously fails, because we are not listening on that address. After the connection failure our state machine tries to reconnect with the next IP address, i.e. "127.0.0.1", but before reconnecting the state machine resets the target port to 0, i.e. it goes back to glusterd. This is done because the state machine assumes there are connection issues with the brick/service and contacts glusterd to get the correct state. The code was originally written to handle only IPv4 addresses.

Gluster has a volume option, "transport.address-family", which controls what kind of addresses we resolve to. Currently the default is AF_UNSPEC, i.e. it fetches both IPv4 and IPv6 addresses. As a workaround, during a cluster op-version change and at new volume creation time we explicitly set "transport.address-family" to "inet" (i.e. IPv4). But we have a bug in glusterd: when the cluster op-version is changed we only update the in-memory value of "transport.address-family" and fail to update the *.vol files. When a client then fetches the volfile from glusterd, the option is missing, which makes the client fall back to the default AF_UNSPEC.

So in short we have multiple issues here:
1) Glusterd should persist this option so that clients get the correct options during handshake.
2) On connection failure we should try all the IP addresses before changing the state machine.
3) Using AF_UNSPEC as the default address family is not very useful, since the majority of our setups are IPv4; it would be good to make AF_INET the default.

Also, this problem is not limited to snapd. As explained above, if a hostname resolves to more than one IP, bricks and other services will hit the same issue.
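The resolution behaviour described above can be reproduced outside of Gluster. Below is a minimal Python sketch (not Gluster code; the `resolve` helper is purely illustrative) showing that an AF_UNSPEC lookup of a name such as "localhost" typically yields both "::1" and "127.0.0.1", while an AF_INET lookup yields only the IPv4 address, which is what the "transport.address-family inet" workaround relies on.

```python
# Minimal illustration (not Gluster code): resolving a name with AF_UNSPEC
# can return both an IPv6 and an IPv4 address, while AF_INET restricts the
# result to IPv4.
import socket

def resolve(host, family):
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # we only care about the address family and the address string here.
    return [(socket.AddressFamily(f).name, sa[0])
            for f, _, _, _, sa in socket.getaddrinfo(host, None, family,
                                                     socket.SOCK_STREAM)]

if __name__ == "__main__":
    # On a typical host, "localhost" resolves to both ::1 and 127.0.0.1
    # with AF_UNSPEC, but only to 127.0.0.1 with AF_INET.
    print("AF_UNSPEC:", resolve("localhost", socket.AF_UNSPEC))
    print("AF_INET:  ", resolve("localhost", socket.AF_INET))
```

If the client ends up iterating over a list like the AF_UNSPEC one, the first "::1" attempt fails and the state-machine reset described above kicks in; with only the IPv4 entry there is nothing to trip over.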
REVIEW: https://review.gluster.org/16455 (glusterd: regenerate volfiles on op-version bump up) posted (#2) for review on master by Atin Mukherjee (amukherj)

REVIEW: https://review.gluster.org/16455 (glusterd: regenerate volfiles on op-version bump up) posted (#3) for review on master by Atin Mukherjee (amukherj)

COMMIT: https://review.gluster.org/16455 committed in master by Kaushal M (kaushal)

------

commit 33f8703a12dd97980c43e235546b04dffaf4afa0
Author: Atin Mukherjee <amukherj>
Date: Mon Jan 23 13:03:06 2017 +0530

    glusterd: regenerate volfiles on op-version bump up

    Change-Id: I2fe7a3ebea19492d52253ad5a1fdd67ac95c71c8
    BUG: 1416251
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: https://review.gluster.org/16455
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Prashanth Pai <ppai>
    Reviewed-by: Kaushal M <kaushal>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/
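For completeness, one rough way to check whether the regenerated client volfiles now carry the address-family option after bumping the cluster op-version. This is a hypothetical verification sketch, not part of the fix: the /var/lib/glusterd/vols/*/*.vol path and the presence of a "transport.address-family" line in regenerated volfiles are assumptions based on a typical glusterd installation.

```python
# Rough verification sketch (assumption: glusterd keeps its generated volfiles
# under /var/lib/glusterd/vols/<volname>/*.vol). Lists volfiles that do not
# mention transport.address-family at all.
import glob

VOLFILE_GLOB = "/var/lib/glusterd/vols/*/*.vol"  # assumed glusterd layout

def volfiles_missing_address_family():
    """Return paths of volfiles with no transport.address-family option."""
    missing = []
    for path in glob.glob(VOLFILE_GLOB):
        with open(path) as f:
            if "transport.address-family" not in f.read():
                missing.append(path)
    return missing

if __name__ == "__main__":
    for path in volfiles_missing_address_family():
        print("no transport.address-family option in", path)
```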