Bug 1218381

Summary: rpc: Memory corruption because rpcsvc_register_notify interprets opaque mydata argument as xlator pointer

Product: [Community] GlusterFS
Component: rpc
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Kotresh HR <khiremat>
Assignee: Kotresh HR <khiremat>
CC: bugs, gluster-bugs
Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Clone Of: 1215161
Bug Depends On: 1215161
Last Closed: 2015-05-14 17:29:35 UTC
Type: Bug

Description Kotresh HR 2015-05-04 18:27:59 UTC
+++ This bug was initially created as a clone of Bug #1215161 +++

Description of problem:
Memory corruption can occur in the mydata argument passed while registering with rpc through the following routine, because rpc interprets the opaque mydata as an xlator pointer:

int
rpcsvc_register_notify (rpcsvc_t *svc, rpcsvc_notify_t notify, void *mydata)
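
A minimal sketch of the hazard, assuming the usual rpcsvc_notify_t callback shape (the structure, callback, and registration helper below are hypothetical, not taken from the glusterfs sources):

/* Caller-side state; nothing like an xlator_t. */
struct my_state {
        pthread_mutex_t lock;
        int             counter;
};

static int
my_notify (rpcsvc_t *svc, void *mydata, rpcsvc_event_t event, void *data)
{
        struct my_state *state = mydata;   /* caller's view: opaque */
        /* ... handle the event ... */
        return 0;
}

static int
register_with_rpc (rpcsvc_t *svc)
{
        static struct my_state state;

        /* mydata points at the caller's own structure, not an xlator. */
        return rpcsvc_register_notify (svc, my_notify, &state);
}

/* Pre-fix behaviour inside rpc: svc->mydata is later cast to
 * (xlator_t *) and its members are read and written, trampling
 * 'state'. */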


Version-Release number of selected component (if applicable):
mainline

Actual results:
Possible memory corruption of the structure passed as mydata.

Expected results:
mydata should be treated as opaque: it must not be interpreted as an xlator pointer, and rpc should not touch it.

--- Additional comment from Kotresh HR on 2015-04-24 09:13:32 EDT ---

The following core is seen with geo-rep using the changelog library.

(gdb) 
#0  0x00007f2c79b373a0 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1  0x00007f2c5fdf70d1 in gf_changelog_process (data=0x7f2c6405c520)
    at /home/jenkins/root/workspace/smoke/xlators/features/changelog/lib/src/gf-changelog-journal-handler.c:592
#2  0x00007f2c79b359d1 in start_thread () from /lib64/libpthread.so.0
#3  0x00007f2c791f78fd in clone () from /lib64/libc.so.6

(gdb) p *((gf_changelog_journal_t *)0x7f2c6405c520)->jnl_proc
$6 = {lock = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, 
    __size = '\000' <repeats 39 times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0, 
      __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' <repeats 47 times>, __align = 0}, waiting = _gf_false, processor = 139828389721856, entries = {
    next = 0x7f2c64076b08, prev = 0x7f2c64076b08}}
(gdb)
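
The corrupted journal above is consistent with the changelog consumer having registered its journal structure as mydata. A hypothetical sketch of that pattern (simplified; journal_notify and the initialisation are illustrative, not the actual libgfchangelog code):

/* libgfchangelog-style registration, simplified. */
gf_changelog_journal_t *jnl = /* ... allocate and set up journal ... */;
rpcsvc_register_notify (svc, journal_notify, jnl);

/* Pre-fix, rpc treats jnl as an xlator_t and reads/writes members
 * through that assumption, so fields the changelog code stored in
 * the journal (such as jnl_proc, dumped above) get clobbered, and
 * gf_changelog_process () eventually crashes in
 * pthread_mutex_lock (). */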

Comment 1 Anand Avati 2015-05-04 18:31:48 UTC
REVIEW: http://review.gluster.org/10534 (rpc: Maintain separate xlator pointer in 'rpcsvc_state') posted (#1) for review on release-3.7 by Kotresh HR (khiremat)

Comment 2 Anand Avati 2015-05-05 13:29:25 UTC
COMMIT: http://review.gluster.org/10534 committed in release-3.7 by Vijay Bellur (vbellur) 
------
commit 783d78de250ba4159e5c59cdf476305ccb0814ec
Author: Kotresh HR <khiremat>
Date:   Fri Apr 24 17:31:03 2015 +0530

    rpc: Maintain separate xlator pointer in 'rpcsvc_state'
    
    The structure 'rpcsvc_state', which maintains rpc server
    state, had no separate pointer to track the translator; it
    reused the mydata pointer for that purpose, so callers were
    forced to pass an xlator pointer as mydata even though the
    function prototype declares mydata opaque (a void pointer).
    
    'rpcsvc_register_init' sets svc->mydata to the xlator pointer,
    'rpcsvc_register_notify' then overwrites svc->mydata with the
    caller's mydata pointer, and rpc keeps interpreting svc->mydata
    internally as an xlator pointer. If a caller passes a pointer
    to a non-xlator structure to rpcsvc_register_notify, as
    libgfchangelog currently does, mydata can get corrupted.
    Interpreting the opaque mydata as an xlator pointer is
    incorrect, since it is the caller's choice what type of data
    to pass to 'rpcsvc_register_notify'.
    
    Maintaining two separate pointers in 'rpcsvc_state', one for
    the xlator and one for mydata, solves the issue.
    
    BUG: 1218381
    Change-Id: I4c28937a30845e3f41b6fc7a09036149c816659b
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/10366
    Reviewed-on: http://review.gluster.org/10534
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Vijay Bellur <vbellur>
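
In outline, the committed change gives 'rpcsvc_state' its own translator member, so the internal xlator pointer and the caller's opaque data no longer share a field (member names below are illustrative, not copied verbatim from rpcsvc.h):

struct rpcsvc_state {
        /* ... existing members ... */
        xlator_t *xl;      /* set by rpcsvc_register_init; the only
                            * pointer rpc dereferences internally   */
        void     *mydata;  /* set by rpcsvc_register_notify; opaque
                            * to rpc, handed back to the notify fn  */
};

With that split, rpcsvc_register_notify can store any caller-supplied pointer without rpc ever dereferencing it as an xlator.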

Comment 3 Niels de Vos 2015-05-14 17:29:35 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem persists with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
