Bug 1218653 - rdma: properly handle memory registration during network interruption
Summary: rdma: properly handle memory registration during network interruption
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: rdma
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1200704
Blocks:
 
Reported: 2015-05-05 13:36 UTC by Mohammed Rafi KC
Modified: 2015-12-01 16:45 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.7.0
Clone Of: 1200704
Environment:
Last Closed: 2015-05-14 17:29:37 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Mohammed Rafi KC 2015-05-05 13:36:29 UTC
+++ This bug was initially created as a clone of Bug #1200704 +++

Description of problem:
 
When the rdma.so library is unloaded because of any problem, we need to deregister every buffer registered with rdma. When rdma.so is loaded again, we need to perform the pre-registration again.
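
For illustration, a minimal C sketch of the register/deregister lifecycle described above, assuming libibverbs; the pool_buf structure and the pool_register/pool_deregister helpers are hypothetical names, not the GlusterFS code:

```c
/* Hypothetical sketch of pre-registering a buffer pool with RDMA and
 * tearing the registrations down when the transport goes away. */
#include <stdio.h>
#include <infiniband/verbs.h>

#define POOL_BUFS 4
#define BUF_SIZE  (128 * 1024)

struct pool_buf {
    void          *addr;
    struct ibv_mr *mr;   /* NULL while the rdma transport is not loaded */
};

/* Called when rdma.so is (re)loaded: pre-register every pool buffer so
 * requests do not have to register memory on the fast path. */
static int pool_register(struct pool_buf *bufs, struct ibv_pd *pd)
{
    for (int i = 0; i < POOL_BUFS; i++) {
        bufs[i].mr = ibv_reg_mr(pd, bufs[i].addr, BUF_SIZE,
                                IBV_ACCESS_LOCAL_WRITE);
        if (!bufs[i].mr) {
            fprintf(stderr, "registration failed for buffer %d\n", i);
            return -1;
        }
    }
    return 0;
}

/* Called when rdma.so is unloaded: drop every registration so the
 * buffers (and later the pool itself) can be freed safely. */
static void pool_deregister(struct pool_buf *bufs)
{
    for (int i = 0; i < POOL_BUFS; i++) {
        if (bufs[i].mr) {
            ibv_dereg_mr(bufs[i].mr);
            bufs[i].mr = NULL;
        }
    }
}
```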

Version-Release number of selected component (if applicable):

mainline

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Anand Avati on 2015-03-11 06:03:36 EDT ---

REVIEW: http://review.gluster.org/9854 (rdma:properly handle iobuf_pool when rdma transport is unloaded) posted (#1) for review on master by mohammed rafi  kc (rkavunga)

--- Additional comment from Anand Avati on 2015-03-11 10:27:03 EDT ---

REVIEW: http://review.gluster.org/9854 (rdma:properly handle iobuf_pool when rdma transport is unloaded) posted (#2) for review on master by mohammed rafi  kc (rkavunga)

--- Additional comment from Anand Avati on 2015-04-23 14:30:17 EDT ---

REVIEW: http://review.gluster.org/9854 (rdma:properly handle iobuf_pool when rdma transport is unloaded) posted (#3) for review on master by mohammed rafi  kc (rkavunga)

Comment 1 Anand Avati 2015-05-05 13:40:29 UTC
REVIEW: http://review.gluster.org/10585 (rdma:properly handle iobuf_pool when rdma transport is unloaded) posted (#1) for review on release-3.7 by mohammed rafi  kc (rkavunga)

Comment 2 Anand Avati 2015-05-06 08:19:24 UTC
REVIEW: http://review.gluster.org/10585 (rdma:properly handle iobuf_pool when rdma transport is unloaded) posted (#2) for review on release-3.7 by Vijay Bellur (vbellur)

Comment 3 Anand Avati 2015-05-07 10:52:27 UTC
COMMIT: http://review.gluster.org/10585 committed in release-3.7 by Vijay Bellur (vbellur) 
------
commit 14011cb0383ac19b98b02f0caec5a1977ecd7c35
Author: Mohammed Rafi KC <rkavunga>
Date:   Wed Mar 11 12:20:38 2015 +0530

    rdma:properly handle iobuf_pool when rdma transport is unloaded
    
             Back port of : http://review.gluster.org/9854
    
    We are registering iobuf_pool with rdma. When rdma transport is
    unloaded, we need to deregister all the buffers registered with
    rdma. Otherwise iobuf_arena destroy will fail.
    
    Also if rdma.so is loaded again, then register iobuf_pool with
    rdma
    
    Change-Id: Ic197721a44ba11dce41e03058e0a73901248c541
    BUG: 1218653
    Signed-off-by: Mohammed Rafi KC <rkavunga>
    Reviewed-on: http://review.gluster.org/9854
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra Talur <rtalur>
    Reviewed-on: http://review.gluster.org/10585
    Tested-by: NetBSD Build System
    Reviewed-by: Vijay Bellur <vbellur>

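As an illustration of the teardown ordering the commit above describes, a hedged C sketch (the arena type and arena_destroy helper are hypothetical, not the GlusterFS iobuf code): every memory region pointing into an arena has to be deregistered before the arena's memory is freed, otherwise the destroy step cannot complete cleanly.

```c
/* Hypothetical teardown-order sketch: deregister first, free second. */
#include <stdlib.h>
#include <infiniband/verbs.h>

struct arena {
    void          *mem;   /* backing memory handed out as iobufs        */
    struct ibv_mr *mr;    /* registration created while rdma was loaded */
};

static void arena_destroy(struct arena *a)
{
    /* Deregister before freeing: releasing memory that is still
     * registered with the RDMA device is what made the arena
     * destroy fail when the transport had not cleaned up. */
    if (a->mr) {
        ibv_dereg_mr(a->mr);
        a->mr = NULL;
    }
    free(a->mem);
    free(a);
}
```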
Comment 4 Niels de Vos 2015-05-14 17:29:37 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

