+++ This bug was initially created as a clone of Bug #1428739 +++

Description of problem:
My patch at https://review.gluster.org/16419 is resulting in core dumps every time I run tests/features/nuke.t. It turns out that dht, upon successfully "nuking" a directory (an operation initiated through a setxattr), unwinds the operation with the rmdir fop signature. This causes readdir-ahead to cast a struct iatt (preparent) to dict_t, leading to a crash.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Worker Ant on 2017-03-03 04:42:08 EST ---

REVIEW: https://review.gluster.org/16836 (cluster/dht: Fix crash in "nuke-dir" feature) posted (#1) for review on release-3.10 by Krutika Dhananjay (kdhananj)
REVIEW: https://review.gluster.org/16840 (cluster/dht: Fix crash in "nuke-dir" feature) posted (#1) for review on release-3.8 by Krutika Dhananjay (kdhananj)
COMMIT: https://review.gluster.org/16840 committed in release-3.8 by Niels de Vos (ndevos)

------

commit 2b65e69bf7adcd6176c24ef1439bd06aa445deae
Author: Krutika Dhananjay <kdhananj>
Date:   Thu Mar 2 15:27:54 2017 +0530

    cluster/dht: Fix crash in "nuke-dir" feature

    Backport of: https://review.gluster.org/16829

    My patch at https://review.gluster.org/16419 is resulting in core
    dumps every time I run tests/features/nuke.t. It turns out that dht,
    upon successfully "nuking" a directory (an operation initiated
    through a setxattr), unwinds the operation with the rmdir fop
    signature. This causes readdir-ahead to cast a struct iatt
    (preparent) to dict_t, leading to a crash.

    Change-Id: Ib970b3198185a6c641092b00e115a672cb3f9111
    BUG: 1428743
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: https://review.gluster.org/16840
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Niels de Vos <ndevos>
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.10, please open a new bug report.

glusterfs-3.8.10 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-March/000068.html
[2] https://www.gluster.org/pipermail/gluster-users/