+++ This bug was initially created as a clone of Bug #1281285 +++

Description of problem:
Regression test failed: https://build.gluster.org/job/rackspace-regression-2GB-triggered/15785/consoleFull failed on ./tests/bugs/fuse/many-groups-for-acl.t: 1 new core files

Version-Release number of selected component (if applicable):

How reproducible:
Rarely

Steps to Reproduce:
1. run regression tests

Actual results:
Core dump of the glusterfs-client process.

Expected results:
No core dumps

Additional info:

--- Additional comment from Vijay Bellur on 2015-11-12 11:03:34 CET ---

REVIEW: http://review.gluster.org/12575 (protocol/client: prevent use-after-free of frame->root) posted (#2) for review on master by Niels de Vos (ndevos)
REVIEW: http://review.gluster.org/12665 (protocol/client: prevent use-after-free of frame->root) posted (#1) for review on release-3.6 by Niels de Vos (ndevos)
COMMIT: http://review.gluster.org/12665 committed in release-3.6 by Raghavendra Bhat (raghavendra)
------
commit 5d264dbcb7cd08337105417014dccc8fda6f169a
Author: Niels de Vos <ndevos>
Date:   Thu Nov 19 16:20:40 2015 +0100

    protocol/client: prevent use-after-free of frame->root

    A regression failure generated a coredump on the glusterfs-client side:

        (gdb) f 0
        #0  0x00007fba6cd76432 in client_submit_request (this=0x7fba68006fc0,
            req=0x7fba6579aa70, frame=0x7fba5c0058cc,
            prog=0x7fba6cfb53c0 <clnt3_3_fop_prog>, procnum=41,
            cbkfn=0x7fba6cd9206d <client3_3_release_cbk>, iobref=0x0,
            rsphdr=0x0, rsphdr_count=0, rsp_payload=0x0,
            rsp_payload_count=0, rsp_iobref=0x0,
            xdrproc=0x7fba79801075 <xdr_gfs3_release_req>)
            at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/protocol/client/src/client.c:324
        324             frame->root->ngrps = ngroups;
        (gdb) l
        319                     gf_msg_debug (this->name, 0, "rpc_clnt_submit failed");
        320             }
        321
        322             if (!conf->send_gids) {
        323                     /* restore previous values */
        324                     frame->root->ngrps = ngroups;
        325                     if (ngroups <= SMALL_GROUP_COUNT)
        326                             frame->root->groups_small[0] = gid;
        327             }
        328
        (gdb) p *frame->root
        Cannot access memory at address 0x64185df000000000

    After looking at this in more detail, the flow is like this:

        client_submit_request()
        |
        '- rpc_clnt_submit()            // on line 314
           |
           '- cbkfn()                   // = client3_3_release_cbk
              |
              :- STACK_DESTROY (frame->root);
        |
        :- frame->root->ngrps = ngroups; // on line 324

    So, there is a use-after-free, and it is not needed to restore the
    previous groups in frame->root.
    Cherry picked from commit dc3aa7524e4974f9d02465e2e5dd6ed9b6d319e1:
    > Change-Id: I9e7d712183692ed92cfc2f75cd3c2781a9db20e2
    > BUG: 1281285 (was incorrect in original patch)
    > Signed-off-by: Niels de Vos <ndevos>
    > Reviewed-on: http://review.gluster.org/12575
    > Reviewed-by: Dan Lambright <dlambrig>
    > Tested-by: NetBSD Build System <jenkins.org>
    > Reviewed-by: Jeff Darcy <jdarcy>

    Change-Id: I9e7d712183692ed92cfc2f75cd3c2781a9db20e2
    BUG: 1283690
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/12665
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Dan Lambright <dlambrig>
    Reviewed-by: Raghavendra Bhat <raghavendra>
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.6.7, please open a new bug report.

glusterfs-3.6.7 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-devel/2015-December/047260.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user