Bug 1215161

Summary: rpc: Memory corruption because rpcsvc_register_notify interprets opaque mydata argument as xlator pointer
Product: [Community] GlusterFS
Reporter: Kotresh HR <khiremat>
Component: rpc
Assignee: Kotresh HR <khiremat>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: mainline
CC: bugs, gluster-bugs
Target Milestone: ---
Keywords: Reopened
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1218381 (view as bug list)
Environment:
Last Closed: 2016-06-16 12:55:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1218381

Description Kotresh HR 2015-04-24 13:10:17 UTC
Description of problem:
Memory corruption of the mydata argument passed while registering with rpc can occur, because the following routine interprets mydata as an xlator pointer:

int
rpcsvc_register_notify (rpcsvc_t *svc, rpcsvc_notify_t notify, void *mydata)


Version-Release number of selected component (if applicable):
mainline

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
Possible memory corruption of mydata.

Expected results:
mydata should be treated as opaque: the rpc layer must not interpret it as an xlator pointer or modify it.

Additional info:

Comment 1 Kotresh HR 2015-04-24 13:13:32 UTC
The following core is seen with geo-rep using changelog:

(gdb) 
#0  0x00007f2c79b373a0 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1  0x00007f2c5fdf70d1 in gf_changelog_process (data=0x7f2c6405c520)
    at /home/jenkins/root/workspace/smoke/xlators/features/changelog/lib/src/gf-changelog-journal-handler.c:592
#2  0x00007f2c79b359d1 in start_thread () from /lib64/libpthread.so.0
#3  0x00007f2c791f78fd in clone () from /lib64/libc.so.6

(gdb) p *((gf_changelog_journal_t *)0x7f2c6405c520)->jnl_proc
$6 = {lock = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, 
    __size = '\000' <repeats 39 times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0, 
      __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' <repeats 47 times>, __align = 0}, waiting = _gf_false, processor = 139828389721856, entries = {
    next = 0x7f2c64076b08, prev = 0x7f2c64076b08}}
(gdb)

Comment 2 Anand Avati 2015-04-24 13:20:13 UTC
REVIEW: http://review.gluster.org/10366 (rpc: Maintain separate xlator pointer in 'rpcsvc_state') posted (#1) for review on master by Kotresh HR (khiremat)

Comment 3 Anand Avati 2015-04-24 13:29:01 UTC
REVIEW: http://review.gluster.org/10366 (rpc: Maintain separate xlator pointer in 'rpcsvc_state') posted (#2) for review on master by Kotresh HR (khiremat)

Comment 4 Anand Avati 2015-04-25 11:19:55 UTC
REVIEW: http://review.gluster.org/10366 (rpc: Maintain separate xlator pointer in 'rpcsvc_state') posted (#3) for review on master by Kotresh HR (khiremat)

Comment 5 Anand Avati 2015-04-27 09:19:04 UTC
REVIEW: http://review.gluster.org/10366 (rpc: Maintain separate xlator pointer in 'rpcsvc_state') posted (#4) for review on master by Kotresh HR (khiremat)

Comment 6 Anand Avati 2015-04-30 11:38:57 UTC
REVIEW: http://review.gluster.org/10366 (rpc: Maintain separate xlator pointer in 'rpcsvc_state') posted (#5) for review on master by Kotresh HR (khiremat)

Comment 7 Anand Avati 2015-05-04 11:08:50 UTC
COMMIT: http://review.gluster.org/10366 committed in master by Raghavendra G (rgowdapp) 
------
commit dc0020c72d5c2d20328b89224b149ebb87002277
Author: Kotresh HR <khiremat>
Date:   Fri Apr 24 17:31:03 2015 +0530

    rpc: Maintain separate xlator pointer in 'rpcsvc_state'
    
    The structure 'rpcsvc_state', which maintains rpc server
    state, had no separate pointer to track the translator.
    It was using the mydata pointer itself, so callers were
    forced to send an xlator pointer as mydata, even though
    the function prototype declares it opaque (void pointer).
    
    'rpcsvc_register_init' sets svc->mydata to the xlator
    pointer. 'rpcsvc_register_notify' then overwrites
    svc->mydata with its mydata argument, while rpc internally
    still interprets svc->mydata as an xlator pointer. If a
    caller passes a non-xlator structure pointer to
    'rpcsvc_register_notify', as libgfchangelog currently does,
    mydata can be corrupted. Interpreting opaque mydata as an
    xlator pointer is incorrect, since the caller is free to
    pass any type of data to 'rpcsvc_register_notify'.
    
    Maintaining two different pointers in 'rpcsvc_state', one
    for the xlator and one for mydata, solves the issue.
    
    Change-Id: I7874933fefc68f3fe01d44f92016a8e4e9768378
    BUG: 1215161
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/10366
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>

Comment 8 Nagaprasad Sathyanarayana 2015-10-25 14:53:33 UTC
The fix for this bug has already been made in a GlusterFS release. The cloned BZ has details of the fix and the release. Hence closing this mainline BZ.

Comment 9 Nagaprasad Sathyanarayana 2015-10-25 15:12:24 UTC
The fix for this BZ is already present in a GlusterFS release. The clone of this BZ was fixed in a GlusterFS release and closed. Hence closing this mainline BZ as well.

Comment 10 Niels de Vos 2016-06-16 12:55:58 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user