Bug 1654161 - glusterd crashed with seg fault possibly during node reboot while volume creates and deletes were happening
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.z Batch Update 3
Assignee: Atin Mukherjee
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On:
Blocks: 1654270
 
Reported: 2018-11-28 07:32 UTC by Nag Pavan Chilakam
Modified: 2019-02-06 05:53 UTC
CC: 9 users

Fixed In Version: glusterfs-3.12.2-35
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1654270
Environment:
Last Closed: 2019-02-04 07:41:44 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1653742 0 medium CLOSED glusterd crashed while running volume status detail continuously from node N1 and restart glusterd on N2/N3 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2019:0263 0 None None None 2019-02-04 07:41:53 UTC

Internal Links: 1653742

Description Nag Pavan Chilakam 2018-11-28 07:32:44 UTC
Description of problem:
======================
On a 6-node cluster where volume creates and deletes were happening through heketi, I saw a glusterd crash, possibly while the node was being rebooted.

warning: core file may not match specified executable file.
Reading symbols from /usr/sbin/glusterfsd...Reading symbols from /usr/lib/debug/usr/sbin/glusterfsd.debug...done.
done.
Missing separate debuginfo for 
Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/16/3c2dc43405427478788bad0afd537a7acf7a13
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007ff22406a0ad in rcu_read_lock_bp () from /lib64/liburcu-bp.so.1
Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 device-mapper-event-libs-1.02.149-10.el7_6.1.x86_64 device-mapper-libs-1.02.149-10.el7_6.1.x86_64 elfutils-libelf-0.172-2.el7.x86_64 elfutils-libs-0.172-2.el7.x86_64 glibc-2.17-260.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-34.el7.x86_64 libaio-0.3.109-13.el7.x86_64 libattr-2.4.46-13.el7.x86_64 libblkid-2.23.2-59.el7.x86_64 libcap-2.22-9.el7.x86_64 libcom_err-1.42.9-13.el7.x86_64 libgcc-4.8.5-36.el7.x86_64 libselinux-2.5-14.1.el7.x86_64 libsepol-2.5-10.el7.x86_64 libuuid-2.23.2-59.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64 lvm2-libs-2.02.180-10.el7_6.1.x86_64 openssl-libs-1.0.2k-16.el7.x86_64 pcre-8.32-17.el7.x86_64 systemd-libs-219-62.el7.x86_64 userspace-rcu-0.7.9-2.el7rhgs.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x00007ff22406a0ad in rcu_read_lock_bp () from /lib64/liburcu-bp.so.1
#1  0x00007ff2246efdf0 in gd_peerinfo_find_from_hostname (hoststr=hoststr@entry=0x7ff208970510 "10.70.35.184") at glusterd-peer-utils.c:667
#2  0x00007ff2246f02cd in glusterd_peerinfo_find_by_hostname (hoststr=hoststr@entry=0x7ff208970510 "10.70.35.184") at glusterd-peer-utils.c:110
#3  0x00007ff2246f04b7 in glusterd_hostname_to_uuid (hostname=hostname@entry=0x7ff208970510 "10.70.35.184", uuid=uuid@entry=0x7ff215389300 "") at glusterd-peer-utils.c:154
#4  0x00007ff22462222c in glusterd_volume_brickinfo_get (uuid=uuid@entry=0x0, hostname=0x7ff208970510 "10.70.35.184", 
    path=0x7ff2092558f0 "/var/lib/heketi/mounts/vg_7700284475051d7a420513fffda25002/brick_60470452fbbb1e67e9d1ffe0f7021029/brick", volinfo=volinfo@entry=0x7ff20afa8e30, brickinfo=brickinfo@entry=0x7ff215389360) at glusterd-utils.c:1604
#5  0x00007ff224622409 in glusterd_is_brick_decommissioned (volinfo=volinfo@entry=0x7ff20afa8e30, hostname=<optimized out>, path=<optimized out>) at glusterd-utils.c:1667
#6  0x00007ff224670b1c in _xl_is_client_decommissioned (xl=0x7ff2096adf60, volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:3464
#7  0x00007ff224670d21 in _xl_has_decommissioned_clients (xl=<optimized out>, volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:3481
#8  0x00007ff224670cf4 in _xl_has_decommissioned_clients (xl=xl@entry=0x7ff2088bf840, volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:3490
#9  0x00007ff224670d89 in _graph_get_decommissioned_children (dht=dht@entry=0x7ff20837d1c0, volinfo=volinfo@entry=0x7ff20afa8e30, children=children@entry=0x7ff215389478) at glusterd-volgen.c:3513
#10 0x00007ff224670fa4 in volgen_graph_build_dht_cluster (is_quotad=_gf_false, child_count=1, volinfo=0x7ff20afa8e30, graph=0x7ff20837d1c0) at glusterd-volgen.c:3606
#11 volume_volgen_graph_build_clusters (graph=graph@entry=0x7ff215389740, volinfo=volinfo@entry=0x7ff20afa8e30, is_quotad=is_quotad@entry=_gf_false) at glusterd-volgen.c:3898
#12 0x00007ff224671801 in client_graph_builder (graph=0x7ff215389740, volinfo=0x7ff20afa8e30, set_dict=0x7ff208f475b0, param=<optimized out>) at glusterd-volgen.c:4265
#13 0x00007ff224668a32 in build_graph_generic (graph=graph@entry=0x7ff215389740, volinfo=volinfo@entry=0x7ff20afa8e30, mod_dict=mod_dict@entry=0x7ff20a4203c0, param=param@entry=0x0, 
    builder=builder@entry=0x7ff224671760 <client_graph_builder>) at glusterd-volgen.c:1066
#14 0x00007ff224669084 in build_client_graph (mod_dict=0x7ff20a4203c0, volinfo=0x7ff20afa8e30, graph=0x7ff215389740) at glusterd-volgen.c:4483
#15 generate_single_transport_client_volfile (volinfo=volinfo@entry=0x7ff20afa8e30, filepath=filepath@entry=0x7ff215389890 "/var/lib/glusterd/vols/D-4-5/trusted-D-4-5.tcp-fuse.vol", dict=dict@entry=0x7ff20a4203c0)
    at glusterd-volgen.c:5711
#16 0x00007ff2246743df in generate_client_volfiles (volinfo=volinfo@entry=0x7ff20afa8e30, client_type=client_type@entry=GF_CLIENT_TRUSTED) at glusterd-volgen.c:5923
#17 0x00007ff2246758bf in glusterd_create_volfiles (volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:6440
#18 0x00007ff2246759e9 in glusterd_create_volfiles_and_notify_services (volinfo=0x7ff20afa8e30) at glusterd-volgen.c:6468
#19 0x00007ff2246a8706 in glusterd_op_create_volume (dict=dict@entry=0x7ff209b23520, op_errstr=op_errstr@entry=0x7ff21538c098) at glusterd-volume-ops.c:2534
#20 0x00007ff224613153 in glusterd_op_commit_perform (op=GD_OP_CREATE_VOLUME, dict=dict@entry=0x7ff209b23520, op_errstr=op_errstr@entry=0x7ff21538c098, rsp_dict=rsp_dict@entry=0x7ff2090efd10) at glusterd-op-sm.c:6282
#21 0x00007ff22461cdc4 in glusterd_op_ac_commit_op (event=0x7ff2085fd510, ctx=0x7ff2087b6730) at glusterd-op-sm.c:6020
#22 0x00007ff224619d2f in glusterd_op_sm () at glusterd-op-sm.c:8393
#23 0x00007ff2245f3e52 in __glusterd_handle_commit_op (req=req@entry=0x7ff2145d5790) at glusterd-handler.c:1176
#24 0x00007ff2245fb9ce in glusterd_big_locked_handler (req=0x7ff2145d5790, actor_fn=0x7ff2245f3d30 <__glusterd_handle_commit_op>) at glusterd-handler.c:82
#25 0x00007ff22fbadb80 in synctask_wrap () at syncop.c:375
#26 0x00007ff22e1e6010 in ?? () from /lib64/libc.so.6
#27 0x0000000000000000 in ?? ()
(gdb) t a a bt

Thread 8 (Thread 0x7ff220234700 (LWP 24966)):
#0  0x00007ff22e9d7965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ff2246bc76b in hooks_worker (args=<optimized out>) at glusterd-hooks.c:529
#2  0x00007ff22e9d3dd5 in start_thread () from /lib64/libpthread.so.0
#3  0x00007ff22e29bead in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7ff21fa33700 (LWP 24967)):
#0  0x00007ff22e29c483 in epoll_wait () from /lib64/libc.so.6
#1  0x00007ff22fbd26f2 in event_dispatch_epoll_worker (data=0x55c63e161950) at event-epoll.c:649
#2  0x00007ff22e9d3dd5 in start_thread () from /lib64/libpthread.so.0
#3  0x00007ff22e29bead in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7ff225970700 (LWP 24787)):
#0  0x00007ff22e9d7d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ff22fbb0158 in syncenv_task (proc=proc@entry=0x55c63e147270) at syncop.c:603
#2  0x00007ff22fbb1020 in syncenv_processor (thdata=0x55c63e147270) at syncop.c:695
#3  0x00007ff22e9d3dd5 in start_thread () from /lib64/libpthread.so.0
#4  0x00007ff22e29bead in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7ff227173700 (LWP 24784)):
#0  0x00007ff22e9dae3d in nanosleep () from /lib64/libpthread.so.0
#1  0x00007ff22fb82c76 in gf_timer_proc (data=0x55c63e146a50) at timer.c:174
#2  0x00007ff22e9d3dd5 in start_thread () from /lib64/libpthread.so.0
#3  0x00007ff22e29bead in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7ff226972700 (LWP 24785)):
#0  0x00007ff223af8ff0 in __do_global_dtors_aux () from /lib64/liblvm2app.so.2.2
#1  0x00007ff22fe59fca in _dl_fini () from /lib64/ld-linux-x86-64.so.2
#2  0x00007ff22e1d7b69 in __run_exit_handlers () from /lib64/libc.so.6
#3  0x00007ff22e1d7bb7 in exit () from /lib64/libc.so.6
#4  0x000055c63d6084af in cleanup_and_exit (signum=15) at glusterfsd.c:1423
#5  0x000055c63d6085a5 in glusterfs_sigwaiter (arg=<optimized out>) at glusterfsd.c:2145
#6  0x00007ff22e9d3dd5 in start_thread () from /lib64/libpthread.so.0
#7  0x00007ff22e29bead in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7ff23005a780 (LWP 24783)):
#0  0x00007ff22e9d4f47 in pthread_join () from /lib64/libpthread.so.0
#1  0x00007ff22fbd2e58 in event_dispatch_epoll (event_pool=0x55c63e13f210) at event-epoll.c:746
#2  0x000055c63d605277 in main (argc=5, argv=<optimized out>) at glusterfsd.c:2583

Thread 2 (Thread 0x7ff226171700 (LWP 24786)):
#0  0x00007ff22e262e2d in nanosleep () from /lib64/libc.so.6
#1  0x00007ff22e262cc4 in sleep () from /lib64/libc.so.6
#2  0x00007ff22fb9d4ed in pool_sweeper (arg=<optimized out>) at mem-pool.c:481
#3  0x00007ff22e9d3dd5 in start_thread () from /lib64/libpthread.so.0
#4  0x00007ff22e29bead in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7ff22516f700 (LWP 24788)):
#0  0x00007ff22406a0ad in rcu_read_lock_bp () from /lib64/liburcu-bp.so.1
#1  0x00007ff2246efdf0 in gd_peerinfo_find_from_hostname (hoststr=hoststr@entry=0x7ff208970510 "10.70.35.184") at glusterd-peer-utils.c:667
#2  0x00007ff2246f02cd in glusterd_peerinfo_find_by_hostname (hoststr=hoststr@entry=0x7ff208970510 "10.70.35.184") at glusterd-peer-utils.c:110
#3  0x00007ff2246f04b7 in glusterd_hostname_to_uuid (hostname=hostname@entry=0x7ff208970510 "10.70.35.184", uuid=uuid@entry=0x7ff215389300 "") at glusterd-peer-utils.c:154
#4  0x00007ff22462222c in glusterd_volume_brickinfo_get (uuid=uuid@entry=0x0, hostname=0x7ff208970510 "10.70.35.184", 
    path=0x7ff2092558f0 "/var/lib/heketi/mounts/vg_7700284475051d7a420513fffda25002/brick_60470452fbbb1e67e9d1ffe0f7021029/brick", volinfo=volinfo@entry=0x7ff20afa8e30, brickinfo=brickinfo@entry=0x7ff215389360) at glusterd-utils.c:1604
#5  0x00007ff224622409 in glusterd_is_brick_decommissioned (volinfo=volinfo@entry=0x7ff20afa8e30, hostname=<optimized out>, path=<optimized out>) at glusterd-utils.c:1667
#6  0x00007ff224670b1c in _xl_is_client_decommissioned (xl=0x7ff2096adf60, volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:3464
#7  0x00007ff224670d21 in _xl_has_decommissioned_clients (xl=<optimized out>, volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:3481
#8  0x00007ff224670cf4 in _xl_has_decommissioned_clients (xl=xl@entry=0x7ff2088bf840, volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:3490
#9  0x00007ff224670d89 in _graph_get_decommissioned_children (dht=dht@entry=0x7ff20837d1c0, volinfo=volinfo@entry=0x7ff20afa8e30, children=children@entry=0x7ff215389478) at glusterd-volgen.c:3513
#10 0x00007ff224670fa4 in volgen_graph_build_dht_cluster (is_quotad=_gf_false, child_count=1, volinfo=0x7ff20afa8e30, graph=0x7ff20837d1c0) at glusterd-volgen.c:3606
#11 volume_volgen_graph_build_clusters (graph=graph@entry=0x7ff215389740, volinfo=volinfo@entry=0x7ff20afa8e30, is_quotad=is_quotad@entry=_gf_false) at glusterd-volgen.c:3898
#12 0x00007ff224671801 in client_graph_builder (graph=0x7ff215389740, volinfo=0x7ff20afa8e30, set_dict=0x7ff208f475b0, param=<optimized out>) at glusterd-volgen.c:4265
#13 0x00007ff224668a32 in build_graph_generic (graph=graph@entry=0x7ff215389740, volinfo=volinfo@entry=0x7ff20afa8e30, mod_dict=mod_dict@entry=0x7ff20a4203c0, param=param@entry=0x0, 
    builder=builder@entry=0x7ff224671760 <client_graph_builder>) at glusterd-volgen.c:1066
#14 0x00007ff224669084 in build_client_graph (mod_dict=0x7ff20a4203c0, volinfo=0x7ff20afa8e30, graph=0x7ff215389740) at glusterd-volgen.c:4483
#15 generate_single_transport_client_volfile (volinfo=volinfo@entry=0x7ff20afa8e30, filepath=filepath@entry=0x7ff215389890 "/var/lib/glusterd/vols/D-4-5/trusted-D-4-5.tcp-fuse.vol", dict=dict@entry=0x7ff20a4203c0)
    at glusterd-volgen.c:5711
#16 0x00007ff2246743df in generate_client_volfiles (volinfo=volinfo@entry=0x7ff20afa8e30, client_type=client_type@entry=GF_CLIENT_TRUSTED) at glusterd-volgen.c:5923
#17 0x00007ff2246758bf in glusterd_create_volfiles (volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:6440
#18 0x00007ff2246759e9 in glusterd_create_volfiles_and_notify_services (volinfo=0x7ff20afa8e30) at glusterd-volgen.c:6468
#19 0x00007ff2246a8706 in glusterd_op_create_volume (dict=dict@entry=0x7ff209b23520, op_errstr=op_errstr@entry=0x7ff21538c098) at glusterd-volume-ops.c:2534
#20 0x00007ff224613153 in glusterd_op_commit_perform (op=GD_OP_CREATE_VOLUME, dict=dict@entry=0x7ff209b23520, op_errstr=op_errstr@entry=0x7ff21538c098, rsp_dict=rsp_dict@entry=0x7ff2090efd10) at glusterd-op-sm.c:6282
#21 0x00007ff22461cdc4 in glusterd_op_ac_commit_op (event=0x7ff2085fd510, ctx=0x7ff2087b6730) at glusterd-op-sm.c:6020
#22 0x00007ff224619d2f in glusterd_op_sm () at glusterd-op-sm.c:8393
#23 0x00007ff2245f3e52 in __glusterd_handle_commit_op (req=req@entry=0x7ff2145d5790) at glusterd-handler.c:1176
#24 0x00007ff2245fb9ce in glusterd_big_locked_handler (req=0x7ff2145d5790, actor_fn=0x7ff2245f3d30 <__glusterd_handle_commit_op>) at glusterd-handler.c:82
#25 0x00007ff22fbadb80 in synctask_wrap () at syncop.c:375
#26 0x00007ff22e1e6010 in ?? () from /lib64/libc.so.6
#27 0x0000000000000000 in ?? ()
(gdb) q

Version-Release number of selected component (if applicable):
==============
3.12.2-29
heketi-7.0.0-15

How reproducible:
===============
hit it once

Steps to Reproduce:
==================
1. 6-node cluster.
2. Created about 11 volumes; I/O running on 8 of them (this I/O continues throughout the test cycle).
3. From heketi, started creating volumes in batches of 100, 25 times.
4. From another heketi terminal, deleted volumes one batch behind the creates (i.e., while volume create works on the 2nd batch of 100 volumes, volume delete removes the 1st batch, so there are no conflicts).
5. Rebooted 2 of the nodes, and saw that one node had a glusterd core.
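Steps 3-4 can be sketched roughly as the following loop. This is an illustrative reconstruction, not the exact script used: it assumes heketi-cli is installed and HEKETI_CLI_SERVER points at the heketi service, and the 1 GiB volume size is an arbitrary choice not taken from the report.

```shell
#!/bin/sh
# Sketch of steps 3-4: create volumes in batches of 100, 25 times, while
# a second terminal deletes the previous batch one batch behind.
# DRY_RUN defaults to 1 so the loop only prints the heketi-cli commands;
# set DRY_RUN= (empty) to actually execute them against a live heketi.

BATCH_SIZE=${BATCH_SIZE:-100}
BATCHES=${BATCHES:-25}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "$@"
    else
        "$@"
    fi
}

batch=1
while [ "$batch" -le "$BATCHES" ]; do
    i=1
    while [ "$i" -le "$BATCH_SIZE" ]; do
        run heketi-cli volume create --size=1
        i=$((i + 1))
    done
    # The deleting terminal stays one batch behind: while batch N is being
    # created here, batch N-1 is deleted (volume IDs for the deletes would
    # come from `heketi-cli volume list`).
    batch=$((batch + 1))
done
```

The one-batch lag is what keeps the creates and deletes from ever contending for the same volume, per step 4.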

Comment 2 Sanju 2018-11-28 10:16:44 UTC
From "t a a bt"

Thread 4 (Thread 0x7ff226972700 (LWP 24785)):
#0  0x00007ff223af8ff0 in __do_global_dtors_aux () from /lib64/liblvm2app.so.2.2
#1  0x00007ff22fe59fca in _dl_fini () from /lib64/ld-linux-x86-64.so.2
#2  0x00007ff22e1d7b69 in __run_exit_handlers () from /lib64/libc.so.6
#3  0x00007ff22e1d7bb7 in exit () from /lib64/libc.so.6
#4  0x000055c63d6084af in cleanup_and_exit (signum=15) at glusterfsd.c:1423
#5  0x000055c63d6085a5 in glusterfs_sigwaiter (arg=<optimized out>) at glusterfsd.c:2145
#6  0x00007ff22e9d3dd5 in start_thread () from /lib64/libpthread.so.0
#7  0x00007ff22e29bead in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7ff22516f700 (LWP 24788)):
#0  0x00007ff22406a0ad in rcu_read_lock_bp () from /lib64/liburcu-bp.so.1
#1  0x00007ff2246efdf0 in gd_peerinfo_find_from_hostname (hoststr=hoststr@entry=0x7ff208970510 "10.70.35.184") at glusterd-peer-utils.c:667
#2  0x00007ff2246f02cd in glusterd_peerinfo_find_by_hostname (hoststr=hoststr@entry=0x7ff208970510 "10.70.35.184") at glusterd-peer-utils.c:110
#3  0x00007ff2246f04b7 in glusterd_hostname_to_uuid (hostname=hostname@entry=0x7ff208970510 "10.70.35.184", uuid=uuid@entry=0x7ff215389300 "") at glusterd-peer-utils.c:154
#4  0x00007ff22462222c in glusterd_volume_brickinfo_get (uuid=uuid@entry=0x0, hostname=0x7ff208970510 "10.70.35.184", 
    path=0x7ff2092558f0 "/var/lib/heketi/mounts/vg_7700284475051d7a420513fffda25002/brick_60470452fbbb1e67e9d1ffe0f7021029/brick", volinfo=volinfo@entry=0x7ff20afa8e30, brickinfo=brickinfo@entry=0x7ff215389360) at glusterd-utils.c:1604
#5  0x00007ff224622409 in glusterd_is_brick_decommissioned (volinfo=volinfo@entry=0x7ff20afa8e30, hostname=<optimized out>, path=<optimized out>) at glusterd-utils.c:1667
#6  0x00007ff224670b1c in _xl_is_client_decommissioned (xl=0x7ff2096adf60, volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:3464
#7  0x00007ff224670d21 in _xl_has_decommissioned_clients (xl=<optimized out>, volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:3481
#8  0x00007ff224670cf4 in _xl_has_decommissioned_clients (xl=xl@entry=0x7ff2088bf840, volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:3490
#9  0x00007ff224670d89 in _graph_get_decommissioned_children (dht=dht@entry=0x7ff20837d1c0, volinfo=volinfo@entry=0x7ff20afa8e30, children=children@entry=0x7ff215389478) at glusterd-volgen.c:3513
#10 0x00007ff224670fa4 in volgen_graph_build_dht_cluster (is_quotad=_gf_false, child_count=1, volinfo=0x7ff20afa8e30, graph=0x7ff20837d1c0) at glusterd-volgen.c:3606
#11 volume_volgen_graph_build_clusters (graph=graph@entry=0x7ff215389740, volinfo=volinfo@entry=0x7ff20afa8e30, is_quotad=is_quotad@entry=_gf_false) at glusterd-volgen.c:3898
#12 0x00007ff224671801 in client_graph_builder (graph=0x7ff215389740, volinfo=0x7ff20afa8e30, set_dict=0x7ff208f475b0, param=<optimized out>) at glusterd-volgen.c:4265
#13 0x00007ff224668a32 in build_graph_generic (graph=graph@entry=0x7ff215389740, volinfo=volinfo@entry=0x7ff20afa8e30, mod_dict=mod_dict@entry=0x7ff20a4203c0, param=param@entry=0x0, 
    builder=builder@entry=0x7ff224671760 <client_graph_builder>) at glusterd-volgen.c:1066
#14 0x00007ff224669084 in build_client_graph (mod_dict=0x7ff20a4203c0, volinfo=0x7ff20afa8e30, graph=0x7ff215389740) at glusterd-volgen.c:4483
#15 generate_single_transport_client_volfile (volinfo=volinfo@entry=0x7ff20afa8e30, filepath=filepath@entry=0x7ff215389890 "/var/lib/glusterd/vols/D-4-5/trusted-D-4-5.tcp-fuse.vol", dict=dict@entry=0x7ff20a4203c0)
    at glusterd-volgen.c:5711
#16 0x00007ff2246743df in generate_client_volfiles (volinfo=volinfo@entry=0x7ff20afa8e30, client_type=client_type@entry=GF_CLIENT_TRUSTED) at glusterd-volgen.c:5923
#17 0x00007ff2246758bf in glusterd_create_volfiles (volinfo=volinfo@entry=0x7ff20afa8e30) at glusterd-volgen.c:6440
#18 0x00007ff2246759e9 in glusterd_create_volfiles_and_notify_services (volinfo=0x7ff20afa8e30) at glusterd-volgen.c:6468
#19 0x00007ff2246a8706 in glusterd_op_create_volume (dict=dict@entry=0x7ff209b23520, op_errstr=op_errstr@entry=0x7ff21538c098) at glusterd-volume-ops.c:2534
#20 0x00007ff224613153 in glusterd_op_commit_perform (op=GD_OP_CREATE_VOLUME, dict=dict@entry=0x7ff209b23520, op_errstr=op_errstr@entry=0x7ff21538c098, rsp_dict=rsp_dict@entry=0x7ff2090efd10) at glusterd-op-sm.c:6282
#21 0x00007ff22461cdc4 in glusterd_op_ac_commit_op (event=0x7ff2085fd510, ctx=0x7ff2087b6730) at glusterd-op-sm.c:6020
#22 0x00007ff224619d2f in glusterd_op_sm () at glusterd-op-sm.c:8393
#23 0x00007ff2245f3e52 in __glusterd_handle_commit_op (req=req@entry=0x7ff2145d5790) at glusterd-handler.c:1176
#24 0x00007ff2245fb9ce in glusterd_big_locked_handler (req=0x7ff2145d5790, actor_fn=0x7ff2245f3d30 <__glusterd_handle_commit_op>) at glusterd-handler.c:82
#25 0x00007ff22fbadb80 in synctask_wrap () at syncop.c:375
#26 0x00007ff22e1e6010 in ?? () from /lib64/libc.so.6
#27 0x0000000000000000 in ?? ()
(gdb) q

Thread 4 is going through the cleanup path. At the same time, thread 1 is trying to acquire a lock on resources that thread 4 has already freed as part of that cleanup, which resulted in the segmentation fault.

Thanks,
Sanju

Comment 3 Sanju 2018-11-28 11:27:34 UTC
upstream patch: https://review.gluster.org/#/c/glusterfs/+/21743

Comment 6 Nag Pavan Chilakam 2018-11-28 13:11:06 UTC
Logs, sosreport, health-check report, and core at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/nchilaka/bug.1654161

Comment 7 Atin Mukherjee 2018-12-01 09:33:11 UTC
upstream patch : https://review.gluster.org/#/c/glusterfs/+/21743

Comment 19 Nag Pavan Chilakam 2019-01-10 14:08:30 UTC
Ran the test mentioned in the description on 3.12.2-36 for about 3 days and didn't see any crash; hence moving to VERIFIED.

Comment 21 errata-xmlrpc 2019-02-04 07:41:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0263

