Bug 1792855 - Memory corruption when sending events to an IPv6 host
Summary: Memory corruption when sending events to an IPv6 host
Alias: None
Product: GlusterFS
Classification: Community
Component: eventsapi
Version: 7
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
Depends On: 1790870
Reported: 2020-01-20 09:01 UTC by Xavi Hernandez
Modified: 2020-03-12 14:22 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1790870
Last Closed: 2020-03-12 14:22:28 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


System ID Private Priority Status Summary Last Updated
Gluster.org Gerrit 24184 0 None Open events: fix IPv6 memory corruption 2020-02-27 09:58:56 UTC

Description Xavi Hernandez 2020-01-20 09:01:28 UTC
+++ This bug was initially created as a clone of Bug #1790870 +++

Description of problem:

There's memory corruption when an event is sent to an IPv6 host.

Version-Release number of selected component (if applicable):

How reproducible:

Always, on a volume whose volfile server resolves to an IPv6 address.

Steps to Reproduce:

Actual results:

Expected results:

Additional info:

Backtrace of the crash:

Thread 1 (Thread 0xb2a57700 (LWP 1984)):
#0  __libc_do_syscall () at ../sysdeps/unix/sysv/linux/arm/libc-do-syscall.S:47
#1  0xb6cb8b32 in __libc_signal_restore_set (set=0xb2a567d4) at ../sysdeps/unix/sysv/linux/nptl-signals.h:80
#2  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:48
#3  0xb6cb982e in __GI_abort () at abort.c:79
#4  0xb6ce1460 in __libc_message (action=do_abort, fmt=<optimized out>) at ../sysdeps/posix/libc_fatal.c:181
#5  0xb6d386e8 in __GI___fortify_fail_abort (need_backtrace=need_backtrace@entry=false, 
    msg=0xb6d6e7ec "stack smashing detected") at fortify_fail.c:33
#6  0xb6d386c4 in __stack_chk_fail () at stack_chk_fail.c:29
#7  0xb6ea4c52 in _gf_event (event=event@entry=EVENT_AFR_SUBVOL_UP, fmt=0xb1870bcc "client-pid=%d; subvol=%s")
    at events.c:151
#8  0xb1857ddc in __afr_handle_child_up_event (this=this@entry=0xb21219f0, 
    child_xlator=child_xlator@entry=0xb2111ef0, idx=idx@entry=2, child_latency_msec=-1, 
    event=event@entry=0xb2a56c4c, call_psh=call_psh@entry=0xb2a56c54, up_child=up_child@entry=0xb2a56c58)
    at afr-common.c:6035
#9  0xb186916e in afr_notify (this=0xb21219f0, event=<optimized out>, data=data@entry=0x0, data2=<optimized out>)
    at afr-common.c:6341
#10 0xb1869674 in notify (this=<optimized out>, event=<optimized out>, data=0xb2111ef0) at afr.c:42
#11 0xb6e3ba72 in xlator_notify (xl=0xb21219f0, event=event@entry=5, data=0xb2111ef0) at xlator.c:699
#12 0xb6ed21f0 in default_notify (this=this@entry=0xb2111ef0, event=event@entry=5, data=0x0) at defaults.c:3388
#13 0xb189c7d0 in client_notify_dispatch (this=this@entry=0xb2111ef0, event=event@entry=5, data=0x0)
    at client.c:148
#14 0xb189c88a in client_notify_dispatch_uniq (this=0xb2111ef0, event=event@entry=5, data=0x0) at client.c:120
#15 0xb18b6d02 in client_notify_parents_child_up (this=this@entry=0xb2111ef0) at client-handshake.c:48
#16 0xb18b8c74 in client_post_handshake (frame=0xb170c614, this=0xb2111ef0) at client-handshake.c:699
#17 client_setvolume_cbk (req=<optimized out>, iov=<optimized out>, count=<optimized out>, myframe=0xb170c614)
    at client-handshake.c:889
#18 0xb6de9f6a in rpc_clnt_handle_reply (clnt=clnt@entry=0xb217d530, pollin=pollin@entry=0x4) at rpc-clnt.c:768
#19 0xb6dea1c6 in rpc_clnt_notify (trans=0xb217d870, mydata=0xb217d550, event=RPC_TRANSPORT_MSG_RECEIVED, 
    data=0xb2186fd8) at rpc-clnt.c:935
#20 0xb6de77a8 in rpc_transport_notify (this=this@entry=0xb217d870, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, 
    data=0xb2186fd8) at rpc-transport.c:545
#21 0xb2ae5124 in socket_event_poll_in_async (xl=<optimized out>, async=async@entry=0xb2187064) at socket.c:2601
#22 0xb2ae9fc2 in gf_async (cbk=0xb2ae510d <socket_event_poll_in_async>, xl=<optimized out>, async=0xb2187064)
    at ../../../../libglusterfs/src/glusterfs/async.h:189
#23 socket_event_poll_in (notify_handled=true, this=0xb217d870) at socket.c:2642
#24 socket_event_handler (fd=<optimized out>, idx=2, gen=4, data=0xb217d870, poll_in=1, poll_out=0, poll_err=0, 
    event_thread_died=0 '\000') at socket.c:3040
#25 0xb6e8a66a in event_dispatch_epoll_handler (event=0xb2a570d0, event_pool=0x4b42a0) at event-epoll.c:650
#26 event_dispatch_epoll_worker (data=0x4d58d8) at event-epoll.c:763
#27 0xb6d91614 in start_thread (arg=0x7da5495d) at pthread_create.c:463

Comment 1 Worker Ant 2020-02-27 09:56:01 UTC
REVIEW: https://review.gluster.org/24183 (events: fix IPv6 memory corruption) posted (#2) for review on release-6 by Xavi Hernandez

Comment 2 Worker Ant 2020-02-27 09:57:38 UTC
REVISION POSTED: https://review.gluster.org/24183 (events: fix IPv6 memory corruption) posted (#3) for review on release-6 by Xavi Hernandez

Comment 3 Worker Ant 2020-02-27 09:58:58 UTC
REVIEW: https://review.gluster.org/24184 (events: fix IPv6 memory corruption) posted (#1) for review on release-7 by Xavi Hernandez

Comment 4 Worker Ant 2020-03-12 14:22:28 UTC
This bug is moved to https://github.com/gluster/glusterfs/issues/1030, and will be tracked there from now on. Visit the GitHub issue URL for further details.
