Bug 1203433
| Field | Value |
|---|---|
| Summary | Brick/glusterfsd crash when tried to rm a folder with IO error |
| Product | [Community] GlusterFS |
| Component | nfs |
| Version | 3.5.2 |
| Status | CLOSED DUPLICATE |
| Severity | urgent |
| Priority | unspecified |
| Reporter | Peter Auyeung \<pauyeung\> |
| Assignee | bugs \<bugs\> |
| CC | bugs, gluster-bugs, joe, ndevos |
| Target Milestone | --- |
| Target Release | --- |
| Hardware | Unspecified |
| OS | Unspecified |
| Doc Type | Bug Fix |
| Type | Bug |
| Last Closed | 2015-12-22 12:29:13 UTC |
Description
Peter Auyeung, 2015-03-18 19:37:23 UTC

Trying to remove another folder over GlusterFS crashed a brick as well. This time we got these errors:

```
rm: cannot remove `qa_trunk.old/qa_trunk/externals/shipserv_api/.svn/text-base': Transport endpoint is not connected
rm: cannot remove `qa_trunk.old/qa_trunk/externals/shipserv_api/.svn/prop-base': Transport endpoint is not connected
rm: cannot remove `qa_trunk.old/qa_trunk/externals/shipserv_api/.svn/props': Transport endpoint is not connected
rm: cannot remove `qa_trunk.old/qa_trunk/externals/shipserv_api/.svn/tmp/text-base': Transport endpoint is not connected
rm: cannot remove `qa_trunk.old/qa_trunk/externals/shipserv_api/.svn/tmp/prop-base': Transport endpoint is not connected
rm: cannot remove `qa_trunk.old/qa_trunk/externals/shipserv_api/.svn/tmp/props': Transport endpoint is not connected
```

Same crash pattern in the brick log:

```
[2015-03-18 19:38:14.507657] W [quota.c:3669:quota_statfs_validate_cbk] 0-sas01-quota: quota context is not present in inode (gfid:00000000-0000-0000-0000-000000000001)
pending frames:
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2015-03-18 19:38:17
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.5.2
/lib/x86_64-linux-gnu/libc.so.6(+0x36150)[0x7f62349c6150]
/lib/x86_64-linux-gnu/libc.so.6(+0x162761)[0x7f6234af2761]
/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/features/marker.so(mq_loc_fill_from_name+0x89)[0x7f622b393c59]
/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/features/marker.so(mq_readdir_cbk+0x21f)[0x7f622b39473f]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_readdir_cbk+0xc2)[0x7f62353b7292]
/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/performance/io-threads.so(iot_readdir_cbk+0xc2)[0x7f622b7b3c02]
/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/features/access-control.so(posix_acl_readdir_cbk+0xc2)[0x7f622bbe4ca2]
/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/storage/posix.so(posix_do_readdir+0x1b8)[0x7f62303688c8]
/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/storage/posix.so(posix_readdir+0x13)[0x7f6230368d43]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_readdir+0x88)[0x7f62353c0c58]
/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/features/access-control.so(posix_acl_readdir+0x23c)[0x7f622bbe708c]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_readdir+0x88)[0x7f62353c0c58]
/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/performance/io-threads.so(iot_readdir_wrapper+0x150)[0x7f622b7b7a80]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(call_resume+0x1c5)[0x7f62353d6235]
/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/performance/io-threads.so(iot_worker+0x146)[0x7f622b7bba66]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f6234d56e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6234a838bd]
```

Created attachment 1005506 [details]
Core file on brick crash on Mar 18
Should be fixed in recent versions.

*** This bug has been marked as a duplicate of bug 1144315 ***
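For context on the backtrace: the SIGSEGV occurs inside `mq_loc_fill_from_name` in the marker (quota accounting) translator, called from `mq_readdir_cbk` while iterating directory entries. A typical cause for this class of crash is dereferencing a parent location whose inode pointer is NULL. The sketch below is purely illustrative and hypothetical; the types (`loc_t`, `inode_t`) and the function name are simplified stand-ins, not the actual GlusterFS code, and the real fix is tracked in bug 1144315:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified, hypothetical stand-ins for GlusterFS's inode_t/loc_t. */
typedef struct inode { unsigned char gfid[16]; } inode_t;
typedef struct loc {
    inode_t *inode;
    char    *path;
} inode_loc_t;

/*
 * Illustrative version of the crash pattern: building a child location
 * from a parent location plus a directory-entry name.  If the parent's
 * inode or path is NULL (e.g. the entry is seen before the parent is
 * fully linked), dereferencing it unguarded faults with SIGSEGV.  The
 * guards below show the defensive shape of a fix; returning -1 lets the
 * caller skip the entry instead of crashing the brick process.
 */
static int loc_fill_from_name(inode_loc_t *child, inode_loc_t *parent,
                              const char *name)
{
    if (child == NULL || parent == NULL || name == NULL)
        return -1;                        /* guard: invalid arguments     */
    if (parent->inode == NULL || parent->path == NULL)
        return -1;                        /* guard: parent not yet linked */

    child->inode = parent->inode;         /* share the parent's inode     */

    size_t len = strlen(parent->path) + 1 + strlen(name) + 1;
    child->path = malloc(len);
    if (child->path == NULL)
        return -1;                        /* guard: allocation failure    */
    snprintf(child->path, len, "%s/%s", parent->path, name);
    return 0;
}
```

Without the `parent->inode == NULL` guard, the `rm` of a directory whose entries are in an inconsistent state (as after the reported IO error) would take exactly this path and bring the brick down.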