Bug 1262324
Summary: | Data Tiering: glusterd crashes while attaching tier | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Nag Pavan Chilakam <nchilaka>
Component: | tiering | Assignee: | Satish Mohan <smohan>
Status: | CLOSED EOL | QA Contact: | bugs <bugs>
Severity: | urgent | Docs Contact: |
Priority: | urgent | |
Version: | 3.7.4 | CC: | bugs, nchilaka, rkavunga, sankarshan, smohan
Target Milestone: | --- | Keywords: | Triaged
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-03-08 11:03:38 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Nag Pavan Chilakam
2015-09-11 12:22:18 UTC
Core and sosreports are available at rhsqe-repo.lab.eng.blr.redhat.com:/home/repo/sosreports/bug.1262324

etc log:

```
26d] -->/lib64/libgfrpc.so.0(rpcsvc_callback_submit+0x169) [0x7f9681254a49] -->/usr/lib64/glusterfs/3.7.4/rpc-transport/socket.so(+0x61c3) [0x7f9673cea1c3] ) 0-socket: invalid argument: this->private [Invalid argument]
[2015-09-11 12:10:38.843106] W [rpcsvc.c:1085:rpcsvc_callback_submit] 0-rpcsvc: transmission of rpc-request failed

pending frames:
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash:
2015-09-11 12:10:38
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7.4
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb2)[0x7f968148efd2]
/lib64/libglusterfs.so.0(gf_print_trace+0x31d)[0x7f96814ab45d]
/lib64/libc.so.6(+0x35650)[0x7f967fb7d650]
/lib64/libgfrpc.so.0(rpc_transport_submit_request+0x9)[0x7f96812593e9]
/lib64/libgfrpc.so.0(rpcsvc_callback_submit+0x169)[0x7f9681254a49]
/usr/lib64/glusterfs/3.7.4/xlator/mgmt/glusterd.so(glusterd_fetchspec_notify+0x5d)[0x7f9675f4b26d]
/usr/lib64/glusterfs/3.7.4/xlator/mgmt/glusterd.so(glusterd_op_perform_add_bricks+0x6da)[0x7f9675ffc2ea]
/usr/lib64/glusterfs/3.7.4/xlator/mgmt/glusterd.so(glusterd_op_add_brick+0x1f0)[0x7f9675ffe3c0]
/usr/lib64/glusterfs/3.7.4/xlator/mgmt/glusterd.so(glusterd_op_commit_perform+0x77b)[0x7f9675f7b09b]
/usr/lib64/glusterfs/3.7.4/xlator/mgmt/glusterd.so(gd_commit_op_phase+0xb9)[0x7f9676004a49]
/usr/lib64/glusterfs/3.7.4/xlator/mgmt/glusterd.so(gd_sync_task_begin+0x77d)[0x7f967600602d]
/usr/lib64/glusterfs/3.7.4/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x30)[0x7f9676006300]
/usr/lib64/glusterfs/3.7.4/xlator/mgmt/glusterd.so(__glusterd_handle_add_brick+0x888)[0x7f9675ff9c68]
/usr/lib64/glusterfs/3.7.4/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x30)[0x7f9675f65de0]
/lib64/libglusterfs.so.0(synctask_wrap+0x12)[0x7f96814d0d72]
/lib64/libc.so.6(+0x470f0)[0x7f967fb8f0f0]
---------
```

Comment from Gaurav

Hi Nag Pavan,

Could you give me some details on how I can access the log information for this bug? I tried both ssh and http against rhsqe-repo.lab.eng.blr.redhat.com:/home/repo/sosreports/bug.1262324.

~Gaurav

Comment from Gaurav

Hi,

This is a known issue. It is caused by multi-threaded epoll in glusterd. As a workaround, force the epoll thread count back to 1 by adding/modifying the following options in /usr/local/etc/glusterfs/glusterd.vol:

```
option ping-timeout 0
option event-threads 1
```

This sets the default epoll thread count to 1.

Thanks,
Gaurav

Comment

Nag, can you test with the changes suggested by Gaurav?

Comment

This bug is not yet fixed; only a workaround was given. Hence moving it back to ASSIGNED.

Comment

This bug is being closed because GlusterFS 3.7 has reached its end of life.

Note: this bug is being closed using a script. No verification has been performed to check whether it still exists on newer releases of GlusterFS. If this bug still exists in a newer GlusterFS release, please reopen it against that release.
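The report does not include the exact reproduction steps, but the backtrace shows the crash inside the add-brick path that attach-tier drives (__glusterd_handle_add_brick -> glusterd_op_add_brick -> glusterd_op_perform_add_bricks -> glusterd_fetchspec_notify). A typical trigger sequence on a 3.7.x cluster would look like the sketch below; the volume name, hosts, and brick paths are hypothetical:

```
# hypothetical hosts, volume name, and brick paths, for illustration only
gluster volume create testvol replica 2 server1:/bricks/cold1 server2:/bricks/cold2
gluster volume start testvol

# attach-tier adds the hot-tier bricks through the add-brick code path
# seen in the backtrace; on the affected build glusterd crashed here
gluster volume attach-tier testvol replica 2 server1:/bricks/hot1 server2:/bricks/hot2
```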
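For reference, here is how the suggested workaround maps onto glusterd's own volfile. This is a minimal sketch assuming the stock glusterd.vol layout of a 3.7.x source install (packaged builds usually keep the file at /etc/glusterfs/glusterd.vol rather than /usr/local/etc/glusterfs/glusterd.vol, and the surrounding options may differ on your system):

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    # workaround for this crash: fall back to a single epoll event thread
    option event-threads 1
    # disable glusterd's ping timer, as suggested above
    option ping-timeout 0
end-volume
```

glusterd must be restarted for volfile edits to take effect, e.g. `systemctl restart glusterd` on systemd-based distributions.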