+++ This bug was initially created as a clone of Bug #1522651 +++

Description of problem:
In rdma.c, gf_rdma_device_t->all_mr is a list of __gf_rdma_arena_mr entries (each holding RDMA Memory Region (MR) content) in the rdma rpc-transport. The rdma rpc-transport adds and deletes items on gf_rdma_device_t->all_mr when MRs are registered, deregistered, and freed. Because gf_rdma_device_t->all_mr is accessed by different threads and is not mutex protected, the rdma transport may access obsolete items in it.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
Under heavy load, items in gf_rdma_device_t->all_mr can be released by one thread while another thread is still using them. As a result, the glusterfsd/glusterfs process will crash.

Expected results:
gf_rdma_device_t->all_mr must be mutex protected.

Additional info:
None
REVIEW: https://review.gluster.org/19033 (rpc-transport/rdma: Add a mutex for the list of RDMA Memory Region(MR) access) posted (#2) for review on release-3.13 by Shyamsundar Ranganathan
COMMIT: https://review.gluster.org/19033 committed in release-3.13 by "Yi Wang" <wangyi> with a commit message:

rpc-transport/rdma: Add a mutex for the list of RDMA Memory Region(MR) access

Problem: gf_rdma_device_t->all_mr is a list of __gf_rdma_arena_mr entries (each including MR content) in the rdma rpc-transport. The rdma rpc-transport adds and deletes items on the list when MRs are registered, deregistered, and freed. Because gf_rdma_device_t->all_mr is used by different threads and is not mutex protected, the rdma transport may access obsolete items in it.

Solution: Add mutex protection for gf_rdma_device_t->all_mr.

> Change-Id: I2b7de0f7aa516b90bb6f3c6aae3aadd23b243900
> BUG: 1522651
> Signed-off-by: Yi Wang <wangyi>
(cherry picked from commit 8483ed87165c1695b513e223549d33d2d63891d9)
Signed-off-by: Yi Wang <wangyi>
Change-Id: I2b7de0f7aa516b90bb6f3c6aae3aadd23b243900
BUG: 1527699
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.13.1, please open a new bug report.

glusterfs-3.13.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-devel/2017-December/054104.html
[2] https://www.gluster.org/pipermail/gluster-users/