Bug 1129708 - rdma: glusterfsd SEGV at volume start
Summary: rdma: glusterfsd SEGV at volume start
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: rdma
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1124981
Blocks: 1129710
 
Reported: 2014-08-13 14:01 UTC by Kaleb KEITHLEY
Modified: 2015-12-01 16:45 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1124981
Clones: 1129710
Environment:
Last Closed: 2015-05-14 17:27:07 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Kaleb KEITHLEY 2014-08-13 14:01:08 UTC
+++ This bug was initially created as a clone of Bug #1124981 +++

Description of problem: glusterfsd NULL ptr deref in proto/server: get_frame_from_request() on transport rdma volume


Version-Release number of selected component (if applicable):

3.6.0.25 and earlier


How reproducible:

Create a volume with "... transport rdma ...", then start it.


Steps to Reproduce:
1. Create a volume with "... transport rdma ..."
2. Start the volume
3. glusterfsd crashes (SEGV)

Actual results:

glusterfsd crashes with a NULL pointer dereference in proto/server: get_frame_from_request().

Expected results:

The volume starts and glusterfsd serves requests normally.

Additional info:

--- Additional comment from Kaleb KEITHLEY on 2014-07-30 15:18:13 EDT ---

(gdb) where
#0  get_frame_from_request (req=0x7f3157e9e04c) at server-helpers.c:435
#1  0x00007f315c5245ce in server3_3_statfs (req=0x7f3157e9e04c)
    at server-rpc-fops.c:6106
#2  0x00000035b3809995 in rpcsvc_handle_rpc_call (svc=<value optimized out>,
    trans=<value optimized out>, msg=0x7f3130000a00) at rpcsvc.c:680
#3  0x00000035b3809bd3 in rpcsvc_notify (trans=0x7f3151636070,
    mydata=<value optimized out>, event=<value optimized out>,
    data=0x7f3130000a00) at rpcsvc.c:774
#4  0x00000035b380b678 in rpc_transport_notify (this=<value optimized out>,
    event=<value optimized out>, data=<value optimized out>)
    at rpc-transport.c:512
#5  0x00007f3157c948e0 in gf_rdma_pollin_notify (peer=0x7f3151632c60,
    post=<value optimized out>) at rdma.c:3517
#6  0x00007f3157c94e14 in gf_rdma_recv_request (peer=0x7f3151632c60,
    wc=<value optimized out>) at rdma.c:3633
#7  gf_rdma_process_recv (peer=0x7f3151632c60, wc=<value optimized out>)
    at rdma.c:3734
#8  0x00007f3157c951c7 in gf_rdma_recv_completion_proc (data=0x7f3150019bc0)
    at rdma.c:3867
#9  0x00000035b20079d1 in start_thread () from /lib64/libpthread.so.0
#10 0x00000035b18e8b5d in clone () from /lib64/libc.so.6
(gdb)

Same crash and backtrace regardless of 3.6.0.22 or 3.6.0.25.

(gdb) print req->trans->xl
$6 = (void *) 0x0

related to this fragment of code near line 435 of server-helpers.c:

        ...
        this = req->trans->xl;
        priv = this->private;
        ...

--- Additional comment from Kaleb KEITHLEY on 2014-07-30 15:25:29 EDT ---

With this fix the RHS-glusterfs-3.6.0.25 glusterfsd no longer SEGVs, but a) this may not be the right place to do this, and b) I confess I'm puzzled as to why upstream 3.5.1 works without it. The .../rpc/... source tree is, apart from the addition of the SSL logic upstream, the same, and it works.

--- rpc/rpc-transport/rdma/src/rdma.c.orig	2014-07-30 15:19:17.931001471 -0400
+++ rpc/rpc-transport/rdma/src/rdma.c	2014-07-30 15:19:42.684999382 -0400
@@ -716,6 +716,7 @@
         this->name = gf_strdup (listener->name);
         this->notify = listener->notify;
         this->mydata = listener->mydata;
+        this->xl = listener->xl;
 
         this->myinfo.sockaddr_len = sizeof (cm_id->route.addr.src_addr);
         memcpy (&this->myinfo.sockaddr, &cm_id->route.addr.src_addr,


--- Additional comment from RHEL Product and Program Management on 2014-07-30 15:43:26 EDT ---

Since this issue was entered in bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

--- Additional comment from Kaleb KEITHLEY on 2014-07-30 16:08:39 EDT ---

https://code.engineering.redhat.com/gerrit/30050

Comment 1 Kaleb KEITHLEY 2014-08-13 14:06:16 UTC
--- Additional comment from Kaleb KEITHLEY on 2014-07-30 15:25:29 EDT ---

With this fix the glusterfs-3.7dev glusterfsd no longer SEGVs.

--- rpc/rpc-transport/rdma/src/rdma.c.orig	2014-07-30 15:19:17.931001471 -0400
+++ rpc/rpc-transport/rdma/src/rdma.c	2014-07-30 15:19:42.684999382 -0400
@@ -716,6 +716,7 @@
         this->name = gf_strdup (listener->name);
         this->notify = listener->notify;
         this->mydata = listener->mydata;
+        this->xl = listener->xl;
 
         this->myinfo.sockaddr_len = sizeof (cm_id->route.addr.src_addr);
         memcpy (&this->myinfo.sockaddr, &cm_id->route.addr.src_addr,

Comment 2 Anand Avati 2014-08-13 14:31:20 UTC
REVIEW: http://review.gluster.org/8479 (rdma: glusterfsd SEGV at volume start) posted (#1) for review on master by Kaleb KEITHLEY (kkeithle@redhat.com)

Comment 3 Anand Avati 2014-08-13 17:32:00 UTC
COMMIT: http://review.gluster.org/8479 committed in master by Vijay Bellur (vbellur@redhat.com) 
------
commit 37b31605c6a2495848d52270e37b5fa0a8b9fdd5
Author: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Date:   Wed Aug 13 10:27:47 2014 -0400

    rdma: glusterfsd SEGV at volume start
    
    glusterfsd NULL ptr deref in proto/server: get_frame_from_request()
    with 'transport rdma' volume
    
    no test case, our regression test framework doesn't have Infiniband.
    If it did, the test case would be to create a 'transport rdma' volume,
    start it, and create/write/read/delete files on the volume.
    
    Change-Id: I91a6956bdf8f61f3853e0c0951744460ba138576
    BUG: 1129708
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/8479
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

Comment 4 Niels de Vos 2015-05-14 17:27:07 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
