Description of problem:
While executing rsync to copy data within a sharded volume, I hit this crash. rsync internally copies data to a temporary file before renaming it to the actual bname. There is a .t specifically meant to test all rename and unlink scenarios in sharded volumes, so the question is: why didn't the tests at https://bit.ly/2LScry1 and https://bit.ly/2vbqO6S cause the process to crash? It turns out this is because of a bug in the script itself: the test cases were supposed to cover the scenario where the dst file doesn't exist, but in both tests the dst path did exist as a remnant of previous tests and was never cleaned up before running them. Both the bug and the test need to be fixed.

Version-Release number of selected component (if applicable):
Only the master branch

How reproducible:
Every time

Steps to Reproduce:
1. Create any volume with shard enabled.
2. Create a file f1 under the mount point.
3. Rename f1 to f2. The mount process crashes.

Actual results:
The mount process crashes on the rename.

Expected results:
The rename succeeds and the mount process stays up.

Additional info:
https://review.gluster.org/#/c/20623
REVIEW: https://review.gluster.org/20623 (features/shard: Fix crash and test case in RENAME fop) posted (#1) for review on master by Krutika Dhananjay
COMMIT: https://review.gluster.org/20623 committed in master by "Krutika Dhananjay" <kdhananj> with a commit message:

features/shard: Fix crash and test case in RENAME fop

Setting the refresh flag in the inode ctx in shard_rename_src_cbk() is applicable only when the dst file exists, is sharded, and has a hard-link count > 1 at the time of the rename. But this piece of code is exercised even when dst doesn't exist. In that case, the mount crashes because local->int_inodelk.loc.inode is NULL.

Change-Id: Iaf85a5ee3dff8b01a76e11972f10f2bb9dcbd407
Updates: bz#1611692
Signed-off-by: Krutika Dhananjay <kdhananj>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report. glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html [2] https://www.gluster.org/pipermail/gluster-users/