Bug 1856574 - shared storage volume fails to mount in IPV6 environment
Summary: shared storage volume fails to mount in IPV6 environment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 4
Assignee: Nikhil Ladha
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1856577
 
Reported: 2020-07-14 01:56 UTC by SATHEESARAN
Modified: 2021-04-29 07:20 UTC
CC: 15 users

Fixed In Version: glusterfs-6.0-52
Doc Type: Known Issue
Doc Text:
Previously, in a dual network environment, the shared storage volume did not auto-mount on a node reboot. To work around this issue, mount it manually with `mount -a`; this ensures that the shared storage is mounted safely in a dual network environment.
Clone Of:
Clones: 1856577
Environment:
Last Closed: 2021-04-29 07:20:36 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2021:1462 0 None None None 2021-04-29 07:20:53 UTC

Description SATHEESARAN 2020-07-14 01:56:55 UTC
Description of problem:
-----------------------
In an IPV6-only setup, enabling shared-storage creates the 'gluster_shared_storage' volume with IPV6 FQDNs, but no IPV6-specific mount options are added to the fstab entry, so the volume fails to mount.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHGS 3.5.2

How reproducible:
-----------------
Always

Steps to Reproduce:
--------------------
1. Create IPV6 only setup, no IPV4
2. Create geo-rep master volume using IPV6 FQDN
3. Enable shared-storage
4. Verify that the shared storage is mounted
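
For reference, steps 2-4 roughly correspond to the commands below (host and brick names are illustrative; the shared-storage command is the one used verbatim in Comment 15):

# gluster volume create mastervol replica 3 host1-ipv6.example.com:/bricks/brick1 host2-ipv6.example.com:/bricks/brick1 host3-ipv6.example.com:/bricks/brick1
# gluster volume start mastervol
# gluster volume set all cluster.enable-shared-storage enabled
# mount | grep gluster_shared_storage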

Actual results:
----------------
Shared storage fails to mount

Expected results:
------------------
Shared storage should get mounted

Additional info:
----------------
entry in fstab:
---------------
rhsqa-grafton7-ipv6.lab.eng.blr.redhat.com:/gluster_shared_storage /run/gluster/shared_storage glusterfs defaults 0 0

Error in mounting
-----------------
<snip>
[2020-07-13 09:54:09.316039] I [MSGID: 100030] [glusterfsd.c:2858:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 6.0 (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=rhsqa-grafton7-ipv6.lab.eng.blr.redhat.com --volfile-id=/gluster_shared_storage /run/gluster/shared_storage)
[2020-07-13 09:54:09.317134] I [glusterfsd.c:2567:daemonize] 0-glusterfs: Pid of current running process is 1879648
[2020-07-13 09:54:09.321866] E [MSGID: 101075] [common-utils.c:508:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)
[2020-07-13 09:54:09.321891] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host rhsqa-grafton7-ipv6.lab.eng.blr.redhat.com
[2020-07-13 09:54:09.322047] I [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: rhsqa-grafton7-ipv6.lab.eng.blr.redhat.com
[2020-07-13 09:54:09.322082] I [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2020-07-13 09:54:09.322124] I [MSGID: 101190] [event-epoll.c:688:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2020-07-13 09:54:09.322149] I [MSGID: 101190] [event-epoll.c:688:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2020-07-13 09:54:09.322171] W [glusterfsd.c:1581:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(+0xf5b3) [0x7fc5ab8935b3] -->/usr/sbin/glusterfs(+0x13c97) [0x564f0d191c97] -->/usr/sbin/glusterfs(cleanup_and_exit+0x58) [0x564f0d1866b8] ) 0-: received signum (1), shutting down
[2020-07-13 09:54:09.322195] I [fuse-bridge.c:6917:fini] 0-fuse: Unmounting '/run/gluster/shared_storage'.
[2020-07-13 09:54:09.334362] I [fuse-bridge.c:6922:fini] 0-fuse: Closing fuse connection to '/run/gluster/shared_storage'.
[2020-07-13 09:54:09.334496] W [glusterfsd.c:1581:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x82de) [0x7fc5aa66c2de] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xfd) [0x564f0d18686d] -->/usr/sbin/glusterfs(cleanup_and_exit+0x58) [0x564f0d1866b8] ) 0-: received signum (15), shutting down
</snip>
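
Note: "family:2" in the getaddrinfo error above is AF_INET, i.e. the mount process attempted an IPV4-only lookup. On an IPV6-only host this can be illustrated with getent (hostname taken from this report; results assume only AAAA records exist):

getent ahostsv4 rhsqa-grafton7-ipv6.lab.eng.blr.redhat.com    # expected to fail: no A record
getent ahostsv6 rhsqa-grafton7-ipv6.lab.eng.blr.redhat.com    # expected to succeed: AAAA record resolves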

Comment 1 SATHEESARAN 2020-07-14 01:59:03 UTC
The error goes away after adding the fstab mount option 'xlator-option=transport.address-family=inet6'.
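
Combined with the entry from the description, the working fstab line would then look like this (a sketch using the host and paths from this report):

rhsqa-grafton7-ipv6.lab.eng.blr.redhat.com:/gluster_shared_storage /run/gluster/shared_storage glusterfs defaults,xlator-option=transport.address-family=inet6 0 0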

Comment 11 SATHEESARAN 2021-01-04 16:16:02 UTC
Tested with glusterfs-6.0-51.el8rhgs by enabling IPV6 and found 2 issues in the IPV6 environment:

1. The shared storage volume is not even created.
2. No mount option is set with new entry in /etc/fstab

Comment 12 Nikhil Ladha 2021-01-07 07:36:35 UTC
Upstream patch link: https://github.com/gluster/glusterfs/pull/1972

Comment 15 SATHEESARAN 2021-02-08 07:49:32 UTC
Verified with the RHGS 3.5.4 interim build - glusterfs-6.0-52.el8rhgs, with the following steps repeated on the RHEL 7.9 and RHEL 8.3 platforms:

1. Create a new gluster cluster and volume using IPV4 FQDN
2. Enable shared storage using the command: # gluster volume set all cluster.enable-shared-storage enabled
3. Verify that the shared storage volume is created and fuse-mounted, and that it has the appropriate entry in /etc/fstab
4. Reboot one of the hosts, and make sure that the shared storage is mounted

Repeated the above test with cluster and volume created with IPV4 direct addresses
Repeated the above test with cluster and volume created with IPV6 direct addresses
Repeated the above test with cluster and volume created with IPV6 FQDNs

All the cases work as expected.
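
For reference, a minimal post-reboot check for step 4 (commands are illustrative; paths are the ones from this report):

grep gluster_shared_storage /etc/fstab      # entry should carry the address-family option on IPV6 setups
df -hT /run/gluster/shared_storage          # should show a fuse.glusterfs mount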

Comment 19 errata-xmlrpc 2021-04-29 07:20:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1462

