Description of problem:
While running regression tests, the glusterfsd process crashed due to a bit-rot bug.

Test script: tests/bitrot/bug-1244613.t

This is the backtrace of the core generated:

Thread 1 (Thread 0x7fc834073700 (LWP 130993)):
#0  0x00007fc842e431d7 in raise () from /lib64/libc.so.6
No symbol table info available.
#1  0x00007fc842e448c8 in abort () from /lib64/libc.so.6
No symbol table info available.
#2  0x00007fc842e3c146 in __assert_fail_base () from /lib64/libc.so.6
No symbol table info available.
#3  0x00007fc842e3c1f2 in __assert_fail () from /lib64/libc.so.6
No symbol table info available.
#4  0x00007fc834553a0f in __br_stub_can_trigger_release (inode=0x7fc7f0002208, ctx=0x7fc800014b70, version=0x7fc834072cb0)
    at /jenkins/workspace/glusterfs-release-6-10.10.117.24/xlators/features/bit-rot/src/stub/bit-rot-stub.h:294
        __PRETTY_FUNCTION__ = "__br_stub_can_trigger_release"
#5  0x00007fc834569229 in br_stub_release (this=0x7fc830010080, fd=0x7fc80c011e58)
    at /jenkins/workspace/glusterfs-release-6-10.10.117.24/xlators/features/bit-rot/src/stub/bit-rot-stub.c:3428
        ret = 0
        flags = 0
        inode = 0x7fc7f0002208
        releaseversion = 0
        ctx = 0x7fc800014b70
        tmp = 0
        br_stub_fd = 0x7fc8100680a0
        signinfo = 0
        __FUNCTION__ = "br_stub_release"
#6  0x00007fc84484c944 in fd_destroy (fd=0x7fc80c011e58, bound=true)
    at /jenkins/workspace/glusterfs-release-6-10.10.117.24/libglusterfs/src/fd.c:478
        xl = 0x7fc830010080
        i = 0
        old_THIS = 0x7fc83001df00
        __FUNCTION__ = "fd_destroy"
#7  0x00007fc84484cbb8 in fd_unref (fd=0x7fc80c011e58)
    at /jenkins/workspace/glusterfs-release-6-10.10.117.24/libglusterfs/src/fd.c:529
        refcount = 0
        bound = true
        __FUNCTION__ = "fd_unref"
#8  0x00007fc8448b4c64 in args_wipe (args=0x7fc80c00a0d0)
    at /jenkins/workspace/glusterfs-release-6-10.10.117.24/libglusterfs/src/default-args.c:1627
No locals.
#9  0x00007fc84484b1c4 in call_stub_wipe_args (stub=0x7fc80c00a088)
    at /jenkins/workspace/glusterfs-release-6-10.10.117.24/libglusterfs/src/call-stub.c:2515
No locals.
#10 0x00007fc84484b26e in call_stub_destroy (stub=0x7fc80c00a088)
    at /jenkins/workspace/glusterfs-release-6-10.10.117.24/libglusterfs/src/call-stub.c:2530
        __FUNCTION__ = "call_stub_destroy"
#11 0x00007fc84484b384 in call_resume (stub=0x7fc80c00a088)
    at /jenkins/workspace/glusterfs-release-6-10.10.117.24/libglusterfs/src/call-stub.c:2561
        old_THIS = 0x7fc83001df00
        __FUNCTION__ = "call_resume"
#12 0x00007fc82f5909d0 in iot_worker (data=0x7fc830063260)
    at /jenkins/workspace/glusterfs-release-6-10.10.117.24/xlators/performance/io-threads/src/io-threads.c:232
        conf = 0x7fc830063260
        this = 0x7fc83001df00
        stub = 0x7fc80c00a088
        sleep_till = {tv_sec = 1564137618, tv_nsec = 831503198}
        ret = 0
        pri = 2
        bye = false
        __FUNCTION__ = "iot_worker"
#13 0x00007fc843637dc5 in start_thread () from /lib64/libpthread.so.0

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
A few questions: which version were you running the test on, and which arch/OS?
This bug has been moved to https://github.com/gluster/glusterfs/issues/947 and will be tracked there from now on. Visit the GitHub issue for further details.