Bug 1197260

Summary: segfault trying to call ibv_dealloc_pd on a null pointer if ibv_alloc_pd failed
Product: GlusterFS (Community)
Component: rdma
Version: mainline
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Reporter: Mark <mlipscombe>
Assignee: bugs <bugs>
CC: bugs, gluster-bugs, hchiramm
Keywords: Triaged
Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-05-14 17:29:14 UTC

Description Mark 2015-02-27 23:33:09 UTC
Description of problem:
If creating an InfiniBand protection domain fails (ibv_alloc_pd() returns NULL), the subsequent cleanup path segfaults because ibv_dealloc_pd() is called on the NULL trav->pd.

Steps to Reproduce:
1. Attempt to create an RDMA connection under circumstances where the process cannot map enough memory.


Actual results:
Segfault

Expected results:
Failure without segfault

Additional info:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f28fa6a6700 (LWP 27142)]
0x00007f295897cbe0 in ibv_dealloc_pd () from /usr/lib/libibverbs.so.1
(gdb) bt
#0  0x00007f295897cbe0 in ibv_dealloc_pd () from /usr/lib/libibverbs.so.1
#1  0x00007f28fc39c86c in gf_rdma_get_device (this=this@entry=0x7f28a44118b0, ibctx=<optimized out>, device_name=device_name@entry=0x7f28a440acd8 "mthca0") at rdma.c:805
#2  0x00007f28fc39cd48 in gf_rdma_create_qp (this=this@entry=0x7f28a44118b0) at rdma.c:3089
#3  0x00007f28fc39d3a2 in gf_rdma_cm_handle_route_resolved (event=<optimized out>) at rdma.c:999
#4  gf_rdma_cm_event_handler (data=0x7f28a4412940) at rdma.c:1195
#5  0x00007f2957655182 in start_thread (arg=0x7f28fa6a6700) at pthread_create.c:312
#6  0x00007f295738247d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111


#1  0x00007f28fc39c86c in gf_rdma_get_device (this=this@entry=0x7f28a44118b0, ibctx=<optimized out>, device_name=device_name@entry=0x7f28a440acd8 "mthca0") at rdma.c:805
805	                ibv_dealloc_pd (trav->pd);
(gdb) print trav
$2 = (gf_rdma_device_t *) 0x7f28c0000b70
(gdb) print trav->pd
$3 = (struct ibv_pd *) 0x0

Comment 1 Anand Avati 2015-02-27 23:38:37 UTC
REVIEW: http://review.gluster.org/9774 (rdma: segfault trying to call ibv_dealloc_pd on a null pointer if ibv_alloc_pd failed) posted (#1) for review on master by Mark Lipscombe (mlipscombe)

Comment 2 Anand Avati 2015-03-03 12:45:36 UTC
REVIEW: http://review.gluster.org/9774 (rdma: segfault trying to call ibv_dealloc_pd on a null pointer if ibv_alloc_pd failed) posted (#2) for review on master by Vijay Bellur (vbellur)

Comment 3 Anand Avati 2015-03-03 12:46:41 UTC
COMMIT: http://review.gluster.org/9774 committed in master by Vijay Bellur (vbellur) 
------
commit 33214ef83684c3b025c773931c071f8af030242b
Author: Mark Lipscombe <mlipscombe>
Date:   Fri Feb 27 15:36:48 2015 -0800

    rdma: segfault trying to call ibv_dealloc_pd on a null pointer
    if ibv_alloc_pd failed
    
    If creating an ib protection domain fails, during the cleanup
    a segfault will occur because trav->pd is null.
    
    Bug: 1197260
    Change-Id: I21b867c204c4049496b1bf11ec47e4139610266a
    Signed-off-by: Mark Lipscombe <mlipscombe>
    Reviewed-on: http://review.gluster.org/9774
    Reviewed-by: Vijay Bellur <vbellur>
    Tested-by: Vijay Bellur <vbellur>

Comment 4 Niels de Vos 2015-05-14 17:29:14 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem persists with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
