Bug 1608507
| Field | Value |
|---|---|
| Summary | glusterd segfault - memcpy() at /usr/include/bits/string3.h:51 |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | glusterd |
| Version | rhgs-3.3 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED ERRATA |
| Severity | urgent |
| Priority | high |
| Keywords | ZStream |
| Reporter | John Strunk <jstrunk> |
| Assignee | Sanju <srakonde> |
| QA Contact | Bala Konda Reddy M <bmekala> |
| CC | amukherj, apaladug, jstrunk, kiyer, nbalacha, nravinas, rhinduja, rhs-bugs, sankarshan, sarora, sheggodu, srakonde, srangana, storage-qa-internal, vbellur, vdas |
| Target Release | RHGS 3.4.z Batch Update 3 |
| Fixed In Version | glusterfs-3.12.2-33 |
| Clones | 1615385 |
| Bug Blocks | 1615385 |
| Type | Bug |
| Last Closed | 2019-02-04 07:41:25 UTC |
Description (John Strunk, 2018-07-25 16:35:05 UTC)
I am OK with deferring if we have not been able to reproduce the issue. Would John be able to reproduce? Based on the comment that we have not been able to reproduce this after trying the same procedure as John, I am OK to defer it. This requires QE assessment.

I have now hit this in RHGS 3.4.0.

(In reply to John Strunk from comment #23)
> I have now hit this in RHGS 3.4.0.

John, in the email exchange you highlighted that the core wasn't available. Just trying to understand whether you have observed any patterns, so we can conclude we hit the same backtrace.

I was able to fix the config problem and get cores after my initial e-mail. Backtrace:

```
Core was generated by `/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f185e867128 in memcpy (__len=18446744072547383994, __src=0x7f18400760d0, __dest=0x7f18941ea6b0)
    at /usr/include/bits/string3.h:51
51  /usr/include/bits/string3.h: No such file or directory.
Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 elfutils-libelf-0.170-4.el7.x86_64 elfutils-libs-0.170-4.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-19.el7.x86_64 libattr-2.4.46-13.el7.x86_64 libcap-2.22-9.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64 libselinux-2.5-12.el7.x86_64 libsepol-2.5-8.1.el7.x86_64 pcre-8.32-17.el7.x86_64 systemd-libs-219-57.el7_5.1.x86_64 xz-libs-5.2.2-1.el7.x86_64
(gdb) bt
#0  0x00007f185e867128 in memcpy (__len=18446744072547383994, __src=0x7f18400760d0, __dest=0x7f18941ea6b0)
    at /usr/include/bits/string3.h:51
#1  data_to_int32_ptr (data=0x7f1840076020, val=val@entry=0x7f184ed9621c) at dict.c:1555
#2  0x00007f185e86a78d in dict_get_int32 (this=this@entry=0x7f181001f160, key=key@entry=0x7f1853412f5b "count", val=val@entry=0x7f184ed9621c) at dict.c:1813
#3  0x00007f1853335191 in glusterd_profile_volume_use_rsp_dict (aggr=0x7f181001f160, rsp_dict=0x7f184802fff0) at glusterd-utils.c:9950
#4  0x00007f185334a4ee in __glusterd_commit_op_cbk (req=req@entry=0x7f18380b3bf0, iov=iov@entry=0x7f18380b3c30, count=count@entry=1, myframe=myframe@entry=0x7f183809d5d0) at glusterd-rpc-ops.c:1436
#5  0x00007f185334c0ea in glusterd_big_locked_cbk (req=0x7f18380b3bf0, iov=0x7f18380b3c30, count=1, myframe=0x7f183809d5d0, fn=0x7f185334a070 <__glusterd_commit_op_cbk>) at glusterd-rpc-ops.c:223
#6  0x00007f185e638960 in rpc_clnt_handle_reply (clnt=clnt@entry=0x561848c16c50, pollin=pollin@entry=0x7f184801a820) at rpc-clnt.c:778
#7  0x00007f185e638d03 in rpc_clnt_notify (trans=<optimized out>, mydata=0x561848c16c80, event=<optimized out>, data=0x7f184801a820) at rpc-clnt.c:971
#8  0x00007f185e634a73 in rpc_transport_notify (this=this@entry=0x561848c17990, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7f184801a820) at rpc-transport.c:538
#9  0x00007f1850542566 in socket_event_poll_in (this=this@entry=0x561848c17990, notify_handled=notify_handled@entry=_gf_false) at socket.c:2315
#10 0x00007f1850545242 in socket_poller (ctx=0x561848c17990) at socket.c:2590
#11 0x00007f185d6cfdd5 in start_thread (arg=0x7f184ed97700) at pthread_create.c:308
#12 0x00007f185cf98b3d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
```

This appears to be caused by an interaction with other processes on the machine. Last night, glusterd was repeatedly hitting this, crashing every 5-10 minutes. I was able to stabilize glusterd by rebooting the server.
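The absurd `__len` in frame #0 is itself a clue: `memcpy()` takes a `size_t`, so a garbage negative 32-bit length sign-extends to an enormous unsigned count. The sketch below is an illustration only, not glusterd source; the buffer names and the guard at the end are assumptions about what a defensive `dict_get_int32()`-style copy could look like, not the actual fix. The value 0xBABABABA is chosen because, read as a signed int32, it is -1162167622, which converts to exactly the 18446744072547383994 seen in the backtrace.

```c
/* Illustration only (not glusterd source): how a corrupt 32-bit dict
 * length turns into the __len seen in frame #0 of the backtrace. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Garbage length read from a corrupt dict entry (value chosen to
     * match the crash). */
    int32_t corrupt_len = (int32_t)0xBABABABAu;

    /* memcpy()'s length parameter is size_t; the implicit conversion
     * turns the negative value into a gigantic unsigned one. */
    size_t memcpy_len = (size_t)corrupt_len;

    printf("corrupt_len = %" PRId32 "\n", corrupt_len); /* -1162167622 */
    printf("memcpy_len  = %zu\n", memcpy_len); /* 18446744072547383994 */

    /* Hypothetical guard: refuse any stored length that cannot be a
     * valid int32 payload before copying, instead of trusting it. */
    char stored[8] = {0}; /* stand-in for the dict entry's data buffer */
    int32_t value = 0;
    if (corrupt_len == (int32_t)sizeof(value))
        memcpy(&value, stored, sizeof(value));
    else
        fprintf(stderr, "bogus dict length, refusing to copy\n");
    return 0;
}
```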
```
# rpm -qa | grep gluster
glusterfs-fuse-3.12.2-18.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.12.2-18.el7rhgs.x86_64
glusterfs-rdma-3.12.2-18.el7rhgs.x86_64
python2-gluster-3.12.2-18.el7rhgs.x86_64
glusterfs-cli-3.12.2-18.el7rhgs.x86_64
glusterfs-api-3.12.2-18.el7rhgs.x86_64
glusterfs-server-3.12.2-18.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.7.x86_64
pcp-pmda-gluster-4.1.0-0.201805281909.git68ab4b18.el7.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
glusterfs-3.12.2-18.el7rhgs.x86_64
vdsm-gluster-4.19.43-2.3.el7rhgs.noarch
glusterfs-libs-3.12.2-18.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-18.el7rhgs.x86_64
glusterfs-debuginfo-3.12.2-18.el7rhgs.x86_64
```

Upstream patch: https://review.gluster.org/#/c/glusterfs/+/21736/

@Bala: here we need to ensure that profile commands work as expected while doing an in-service upgrade.

@Nithya, please add anything else you want tested.

(In reply to Sanju from comment #55)
> @Bala Here, we need to ensure that, while doing in-service upgrade, profile
> commands are working as expected.
>
> @Nithya, please add if you want anything else to be tested.

There is nothing else that I can think of.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0263
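For reference, the verification requested in the comments above maps onto the standard `gluster volume profile` CLI. A minimal sketch of that workflow follows; the volume name `testvol` is a placeholder, and the exact in-service-upgrade test matrix is up to QE.

```sh
# Exercise the profile response-aggregation path that crashed (frames
# #3-#4 of the backtrace) while cluster nodes are on mixed versions
# mid-upgrade.
gluster volume profile testvol start   # enable profiling on the volume
gluster volume profile testvol info    # aggregate per-brick stats from peers
gluster volume profile testvol stop    # disable profiling again
```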