Bug 1715012 - Failure when glusterd is configured to bind a specific IPv6 address. If the bind-address is IPv6, *addr_len will be non-zero and execution falls into the ret = -1 branch, which eventually causes the listen to fail
Summary: Failure when glusterd is configured to bind specific IPv6 address. If bind-ad...
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: rpc
Version: 6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1713730
Blocks:
 
Reported: 2019-05-29 11:23 UTC by hari gowtham
Modified: 2019-07-11 09:11 UTC
CC List: 6 users

Fixed In Version:
Clone Of: 1713730
Environment:
Last Closed: 2019-06-03 04:08:31 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Gluster.org Gerrit 22787 0 None Merged If bind-address is IPv6 return it successfully 2019-06-03 04:08:30 UTC

Description hari gowtham 2019-05-29 11:23:06 UTC
+++ This bug was initially created as a clone of Bug #1713730 +++

Description of problem:

Failure when glusterd is configured to bind a specific IPv6 address. If the bind-address is IPv6, *addr_len will be non-zero and execution falls into the ret = -1 branch, which eventually causes the listen to fail.

Version-Release number of selected component (if applicable):


How reproducible:
Configure glusterd to use IPv6 only (transport.address-family inet6 with an IPv6 bind-address).

Steps to Reproduce:
1.
2.
3.
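
For reference, a management volfile matching the options visible in the trace log below might look like the following sketch. The bind address 2001:db8::e is a documentation-range placeholder, not the reporter's actual address:

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.address-family inet6
    option transport.socket.bind-address 2001:db8::e
    option transport.socket.listen-port 24007
end-volume
```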

Actual results:

log:
[2019-05-21 06:07:28.121877] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/6.1/xlator/mgmt/glusterd.so
[2019-05-21 06:07:28.123042] T [MSGID: 0] [xlator.c:286:xlator_dynload_apis] 0-xlator: management: method missing (reconfigure)
[2019-05-21 06:07:28.123061] T [MSGID: 0] [xlator.c:290:xlator_dynload_apis] 0-xlator: management: method missing (notify)
[2019-05-21 06:07:28.123069] T [MSGID: 0] [xlator.c:294:xlator_dynload_apis] 0-xlator: management: method missing (dumpops)
[2019-05-21 06:07:28.123075] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: management: method missing (dump_metrics)
[2019-05-21 06:07:28.123081] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: management: method missing (pass_through_fops), falling back to default
[2019-05-21 06:07:28.123100] T [MSGID: 0] [graph.y:218:volume_type] 0-parser: Type:management:mgmt/glusterd
[2019-05-21 06:07:28.123115] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:working-directory:/var/lib/glusterd
[2019-05-21 06:07:28.123124] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport-type:socket,rdma
[2019-05-21 06:07:28.123134] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.socket.keepalive-time:10
[2019-05-21 06:07:28.123142] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.socket.keepalive-interval:2
[2019-05-21 06:07:28.123153] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.socket.read-fail-log:off
[2019-05-21 06:07:28.123160] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.socket.listen-port:24007
[2019-05-21 06:07:28.123167] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.rdma.listen-port:24008
[2019-05-21 06:07:28.123175] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.address-family:inet6
[2019-05-21 06:07:28.123182] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:ping-timeout:0
[2019-05-21 06:07:28.123191] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:event-threads:1
[2019-05-21 06:07:28.123199] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.socket.bind-address:2001:db81234:e
[2019-05-21 06:07:28.123206] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.tcp.bind-address:2001:db81234:e
[2019-05-21 06:07:28.123223] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.rdma.bind-address:2001:db81234:e
[2019-05-21 06:07:28.123233] T [MSGID: 0] [graph.y:324:volume_end] 0-parser: end:management
[2019-05-21 06:07:28.123482] I [MSGID: 106478] [glusterd.c:1422:init] 0-management: Maximum allowed open file descriptors set to 65536
[2019-05-21 06:07:28.123541] I [MSGID: 106479] [glusterd.c:1478:init] 0-management: Using /var/lib/glusterd as working directory
[2019-05-21 06:07:28.123557] I [MSGID: 106479] [glusterd.c:1484:init] 0-management: Using /var/run/gluster as pid file working directory
[2019-05-21 06:07:28.123678] D [MSGID: 0] [glusterd.c:458:glusterd_rpcsvc_options_build] 0-glusterd: listen-backlog value: 1024
[2019-05-21 06:07:28.123710] T [rpcsvc.c:2815:rpcsvc_init] 0-rpc-service: rx pool: 64
[2019-05-21 06:07:28.123739] T [rpcsvc-auth.c:124:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_GLUSTERFS
[2019-05-21 06:07:28.123746] T [rpcsvc-auth.c:124:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_GLUSTERFS-v2
[2019-05-21 06:07:28.123750] T [rpcsvc-auth.c:124:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_GLUSTERFS-v3
[2019-05-21 06:07:28.123765] T [rpcsvc-auth.c:124:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_UNIX
[2019-05-21 06:07:28.123772] T [rpcsvc-auth.c:124:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_NULL
[2019-05-21 06:07:28.123777] D [rpcsvc.c:2835:rpcsvc_init] 0-rpc-service: RPC service inited.
[2019-05-21 06:07:28.123959] D [rpcsvc.c:2337:rpcsvc_program_register] 0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver: 1, Port: 0
[2019-05-21 06:07:28.123983] D [rpc-transport.c:293:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/6.1/rpc-transport/socket.so
[2019-05-21 06:07:28.127261] T [MSGID: 0] [options.c:141:xlator_option_validate_sizet] 0-management: no range check required for 'option transport.listen-backlog 1024'
[2019-05-21 06:07:28.127422] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-management: no range check required for 'option transport.socket.listen-port 24007'
[2019-05-21 06:07:28.127487] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-management: no range check required for 'option transport.socket.keepalive-interval 2'
[2019-05-21 06:07:28.127513] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-management: no range check required for 'option transport.socket.keepalive-time 10'
[2019-05-21 06:07:28.129213] D [socket.c:4505:socket_init] 0-socket.management: Configued transport.tcp-user-timeout=42
[2019-05-21 06:07:28.129231] D [socket.c:4523:socket_init] 0-socket.management: Reconfigued transport.keepalivecnt=9
[2019-05-21 06:07:28.129239] D [socket.c:4209:ssl_setup_connection_params] 0-socket.management: SSL support on the I/O path is NOT enabled
[2019-05-21 06:07:28.129244] D [socket.c:4212:ssl_setup_connection_params] 0-socket.management: SSL support for glusterd is NOT enabled
[2019-05-21 06:07:28.129268] W [rpcsvc.c:1991:rpcsvc_create_listener] 0-rpc-service: listening on transport failed

Expected results:
glusterd should bind to the configured IPv6 address and listen successfully.

Additional info:

The bug is in the snippet below. If the bind-address is IPv6, *addr_len will be non-zero, so execution falls into the ret = -1 branch, which eventually causes the listen to fail.
rpc/rpc-transport/socket/src/name.c

    /* IPV6 server can handle both ipv4 and ipv6 clients */
    for (rp = res; rp != NULL; rp = rp->ai_next) {
        if (rp->ai_addr == NULL)
            continue;
        if (rp->ai_family == AF_INET6) { /* (1) an IPv6 match sets *addr_len */
            memcpy(addr, rp->ai_addr, rp->ai_addrlen);
            *addr_len = rp->ai_addrlen;
        }
    }

    if (!(*addr_len) && res && res->ai_addr) {
        memcpy(addr, res->ai_addr, res->ai_addrlen);
        *addr_len = res->ai_addrlen;
    } else { /* (2) taken whenever (1) found an IPv6 address: ret = -1 */
        ret = -1;
    }

    freeaddrinfo(res);

Issue #667 was opened and a fix was submitted. This bug is to tag the Gerrit patch with a Bugzilla ID.

--- Additional comment from RHEL Product and Program Management on 2019-05-24 16:08:03 UTC ---

This bug is automatically being proposed for the next minor release of Red Hat Gluster Storage by setting the release flag 'rhgs-3.5.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Amgad on 2019-05-24 16:26:14 UTC ---

Issue should be #677 and PR is #678

--- Additional comment from Ravishankar N on 2019-05-24 16:35:31 UTC ---

You used the wrong Product type. Fixing it now.

--- Additional comment from Amgad on 2019-05-24 17:13:26 UTC ---

Let me know if any action on my side for code submission!

--- Additional comment from Ravishankar N on 2019-05-27 05:39:51 UTC ---

https://review.gluster.org/#/c/glusterfs/+/22769/

--- Additional comment from Worker Ant on 2019-05-28 17:10:44 UTC ---

REVIEW: https://review.gluster.org/22769 (If bind-address is IPv6 return it successfully) merged (#6) on master by Amar Tumballi

Comment 1 Worker Ant 2019-05-29 11:38:40 UTC
REVIEW: https://review.gluster.org/22787 (If bind-address is IPv6 return it successfully) posted (#2) for review on release-6 by Sunny Kumar

Comment 2 Worker Ant 2019-06-03 04:08:31 UTC
REVIEW: https://review.gluster.org/22787 (If bind-address is IPv6 return it successfully) merged (#2) on release-6 by Sunny Kumar

Comment 3 hari gowtham 2019-07-11 09:11:32 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.3, please open a new bug report.

glusterfs-6.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/gluster-users/2019-July/036790.html
[2] https://www.gluster.org/pipermail/gluster-users/

