Bug 1640489 - Invalid memory read after freed in dht_rmdir_readdirp_cbk
Summary: Invalid memory read after freed in dht_rmdir_readdirp_cbk
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kinglong Mee
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1654103
Reported: 2018-10-18 08:32 UTC by Kinglong Mee
Modified: 2019-03-25 16:31 UTC (History)
3 users

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1654103 (view as bug list)
Environment:
Last Closed: 2019-03-25 16:31:24 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Gluster.org Gerrit 21446 0 None Open dht: fix use after free in dht_rmdir_readdirp_cbk 2018-11-05 04:24:54 UTC

Description Kinglong Mee 2018-10-18 08:32:58 UTC
Description of problem:
valgrind shows,

==7734== Thread 13:
==7734== Invalid read of size 8
==7734==    at 0x15EE4B68: dht_rmdir_readdirp_cbk (dht-common.c:8697)
==7734==    by 0x15C332E2: client3_3_readdirp_cbk (client-rpc-fops.c:2660)
==7734==    by 0xAB96524: rpc_clnt_handle_reply (rpc-clnt.c:786)
==7734==    by 0xAB96ABD: rpc_clnt_notify (rpc-clnt.c:977)
==7734==    by 0xAB9275B: rpc_transport_notify (rpc-transport.c:543)
==7734==    by 0x15508220: socket_event_poll_in (socket.c:2541)
==7734==    by 0x15508868: socket_event_handler (socket.c:2690)
==7734==    by 0xA90987E: event_dispatch_epoll_handler (event-epoll.c:587)
==7734==    by 0xA909B8B: event_dispatch_epoll_worker (event-epoll.c:665)
==7734==    by 0x688EDC4: start_thread (in /usr/lib64/libpthread-2.17.so)
==7734==    by 0x71FA73C: clone (in /usr/lib64/libc-2.17.so)
==7734==  Address 0x29aa73a8 is 8 bytes inside a block of size 3,536 free'd
==7734==    at 0x4C28CDD: free (vg_replace_malloc.c:530)
==7734==    by 0xA8CA4B6: __gf_free (mem-pool.c:329)
==7734==    by 0xA8CA9F3: mem_put (mem-pool.c:579)
==7734==    by 0x15E798F4: dht_local_wipe (dht-helper.c:639)
==7734==    by 0x15EE4A4A: dht_rmdir_readdirp_done (dht-common.c:8663)
==7734==    by 0x15EE4C40: dht_rmdir_readdirp_do (dht-common.c:8733)
==7734==    by 0x15EE3A8B: dht_rmdir_cached_lookup_cbk (dht-common.c:8459)
==7734==    by 0x15C350E9: client3_3_lookup_cbk (client-rpc-fops.c:2955)
==7734==    by 0xAB96524: rpc_clnt_handle_reply (rpc-clnt.c:786)
==7734==    by 0xAB96ABD: rpc_clnt_notify (rpc-clnt.c:977)
==7734==    by 0xAB9275B: rpc_transport_notify (rpc-transport.c:543)
==7734==    by 0x15508220: socket_event_poll_in (socket.c:2541)
==7734==  Block was alloc'd at
==7734==    at 0x4C27BE3: malloc (vg_replace_malloc.c:299)
==7734==    by 0xA8C95B4: __gf_default_malloc (mem-pool.h:110)
==7734==    by 0xA8C9D5D: __gf_malloc (mem-pool.c:137)
==7734==    by 0xA8CA9D9: mem_get (mem-pool.c:475)
==7734==    by 0xA8CA984: mem_get0 (mem-pool.c:463)
==7734==    by 0x15E7993B: dht_local_init (dht-helper.c:650)
==7734==    by 0x15EE52D9: dht_rmdir_opendir_cbk (dht-common.c:8825)
==7734==    by 0x15C3484F: client3_3_opendir_cbk (client-rpc-fops.c:2859)
==7734==    by 0xAB96524: rpc_clnt_handle_reply (rpc-clnt.c:786)
==7734==    by 0xAB96ABD: rpc_clnt_notify (rpc-clnt.c:977)
==7734==    by 0xAB9275B: rpc_transport_notify (rpc-transport.c:543)
==7734==    by 0x15508220: socket_event_poll_in (socket.c:2541)
==7734==

and some messages in ganesha-gfapi.log:
[dht-common.c:8412:dht_rmdir_cached_lookup_cbk] 0-openfs1-dht: /nfs/tfile/p25/d5XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/l6XX found on cached subvol openfs1-client-0
[dht-common.c:8412:dht_rmdir_cached_lookup_cbk] 0-openfs1-dht: /nfs/tfile/p18/d0XXXXXXXXXXXXX/ddXXXXXXXXXXXXXXXXXX/d1aX/c1b found on cached subvol openfs1-client-0
[dht-common.c:8412:dht_rmdir_cached_lookup_cbk] 0-openfs1-dht: /nfs/tfile/p10/d32XXXXXXXXX/c33 found on cached subvol openfs1-client-0
[dht-common.c:8412:dht_rmdir_cached_lookup_cbk] 0-openfs1-dht: /nfs/tfile/p10/d32XXXXXXXXX/c33 found on cached subvol openfs1-client-0
[dht-common.c:8412:dht_rmdir_cached_lookup_cbk] 0-openfs1-dht: /nfs/tfile/p1b/d1X/d14XXXXXXXXXXXXXXXXXXXX/f1dXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX found on cached subvol openfs1-client-1
[dht-common.c:8412:dht_rmdir_cached_lookup_cbk] 0-openfs1-dht: /nfs/tfile/p2c/d0XX/d10XXXXXXXXXXXXX/d15XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/d16XXXXX/c19XXXX found on cached subvol openfs1-client-0
[dht-common.c:8412:dht_rmdir_cached_lookup_cbk] 0-openfs1-dht: /nfs/tfile/p2c/d0XX/d10XXXXXXXXXXXXX/d15XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/d16XXXXX/c19XXXX found on cached subvol openfs1-client-0


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Worker Ant 2018-10-18 10:25:37 UTC
REVIEW: https://review.gluster.org/21446 (dht: fix use after free in dht_rmdir_readdirp_cbk) posted (#1) for review on master by Kinglong Mee

Comment 2 Worker Ant 2018-11-05 04:24:52 UTC
REVIEW: https://review.gluster.org/21446 (dht: fix use after free in dht_rmdir_readdirp_cbk) posted (#6) for review on master by N Balachandran

Comment 3 Shyamsundar 2019-03-25 16:31:24 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

