| Summary: | crash in 3.3.0qa10 | ||
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Saurabh <saurabh> |
| Component: | replicate | Assignee: | Vijay Bellur <vbellur> |
| Status: | CLOSED DUPLICATE | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | ||
| Version: | mainline | CC: | amarts, gluster-bugs, vijay |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | nfs |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Amar Tumballi
2011-09-23 10:55:35 UTC
The volume is a dist-rep and the sanity tests were executed over an NFS mount.

[root@Centos2 common]# ssh root.12.190 "/export/3.3.0/inst/sbin/gluster volume info"
root.12.190's password:
Volume Name: dist-rep
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.1.12.190:/export/nightly/data/d1
Brick2: 10.1.12.190:/export/nightly/data/r1
Brick3: 10.1.12.190:/export/nightly/data/d2
Brick4: 10.1.12.190:/export/nightly/data/r2

(gdb) bt
#0  synctask_wrap (task=0xffffffffb4000900) at syncop.c:104
#1  0x00000033fbc419c0 in ?? () from /lib64/libc.so.6
#2  0x0000000000000000 in ?? ()
(gdb)
(gdb) info thr
  4 Thread 3482  __iobuf_arena_init_iobufs (iobuf_arena=0x2aaab4000e30) at iobuf.c:53
  3 Thread 3483  0x00000033fc40e838 in do_sigwait () from /lib64/libpthread.so.0
  2 Thread 3485  0x00000033fbc9a541 in nanosleep () from /lib64/libc.so.6
* 1 Thread 3484  synctask_wrap (task=0xffffffffb4000900) at syncop.c:104
(gdb) t 1
[Switching to thread 1 (Thread 3484)]#0  synctask_wrap (task=0xffffffffb4000900) at syncop.c:104
104             ret = task->syncfn (task->opaque);
(gdb) p *task
Cannot access memory at address 0xffffffffb4000900
(gdb)
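The fault is at syncop.c:104, where synctask_wrap dereferences the task pointer it was handed; gdb cannot read memory at 0xffffffffb4000900, so the very first access of task->syncfn segfaults. Below is a minimal C sketch of that dispatch pattern only; struct synctask_sketch, dummy_job and synctask_wrap_sketch are placeholder names for illustration, not the GlusterFS sources.

```c
#include <stdio.h>

/* Placeholder for the real struct synctask in libglusterfs; only the two
 * fields touched at syncop.c:104 are modelled here. */
struct synctask_sketch {
        int   (*syncfn) (void *opaque);  /* the job the task runs       */
        void   *opaque;                  /* argument handed to the job  */
};

static int
dummy_job (void *opaque)
{
        printf ("running task, opaque=%p\n", opaque);
        return 0;
}

/* Analogue of synctask_wrap: the first thing it does is load
 * task->syncfn, so an unreadable task pointer (as in the backtrace)
 * faults right here. */
static void
synctask_wrap_sketch (struct synctask_sketch *task)
{
        int ret;

        ret = task->syncfn (task->opaque);   /* syncop.c:104 analogue */
        (void) ret;
}

int
main (void)
{
        struct synctask_sketch task = { .syncfn = dummy_job, .opaque = NULL };

        synctask_wrap_sketch (&task);
        return 0;
}
```

With a valid task the wrapper simply runs syncfn(opaque); the backtrace only shows that the pointer itself was already invalid by the time the wrapper started.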
Thread 4 (Thread 3482):
#0 __iobuf_arena_init_iobufs (iobuf_arena=0x2aaab4000e30) at iobuf.c:53
page_size = <value optimized out>
iobuf_cnt = 65536
iobuf = 0x2aaab8050fb0
offset = 1061632
i = 8294
__FUNCTION__ = "__iobuf_arena_init_iobufs"
#1 0x00002b343523e358 in __iobuf_arena_alloc (iobuf_pool=0x1cc1b70, page_size=128) at iobuf.c:164
iobuf_arena = 0x2aaab4000e30
arena_size = 8388608
rounded_size = <value optimized out>
__FUNCTION__ = "__iobuf_arena_alloc"
#2 0x00002b343523e591 in __iobuf_pool_add_arena (iobuf_pool=0x1cc1b70, page_size=<value optimized out>) at iobuf.c:231
iobuf_arena = 0x0
index = 7
rounded_size = 128
__FUNCTION__ = "__iobuf_pool_add_arena"
#3 0x00002b343523ee7e in iobuf_get2 (iobuf_pool=0x1cc1b70, page_size=<value optimized out>) at iobuf.c:477
iobuf = 0x0
iobuf_arena = <value optimized out>
rounded_size = 128
#4 0x00002aaaaab5feb3 in __socket_proto_state_machine (this=0x1cd5100, pollin=0x7fff2dbcf5f0) at socket.c:1533
ret = 0
iobuf = <value optimized out>
iobref = <value optimized out>
vector = {{iov_base = 0x1cd4f00, iov_len = 46912652709120}, {iov_base = 0x1cd5100, iov_len = 46912652709280}}
__FUNCTION__ = "__socket_proto_state_machine"
#5 0x00002aaaaab60345 in socket_proto_state_machine (this=0x1cd5100, pollin=0x7fff2dbcf5f0) at socket.c:1657
ret = <value optimized out>
__FUNCTION__ = "socket_proto_state_machine"
#6 0x00002aaaaab60424 in socket_event_poll_in (this=0x2aaab8051018) at socket.c:1672
ret = <value optimized out>
pollin = 0x0
#7 0x00002aaaaab605e8 in socket_event_handler (fd=<value optimized out>, idx=4, data=0x1cd5100, poll_in=1, poll_out=0, poll_err=0)
at socket.c:1790
this = 0x2aaab8051018
priv = 0x1cd5440
ret = -1207959552
__FUNCTION__ = "socket_event_handler"
#8 0x00002b343523af81 in event_dispatch_epoll_handler (event_pool=0x1cc13c0) at event.c:794
__FUNCTION__ = "event_dispatch_epoll_handler"
#9 event_dispatch_epoll (event_pool=0x1cc13c0) at event.c:856
events = 0x1cc69d0
i = 1
ret = -1207627856
__FUNCTION__ = "event_dispatch_epoll"
#10 0x0000000000405bb2 in main (argc=11, argv=0x7fff2dbcfd68) at glusterfsd.c:1592
ctx = 0x1cc1010
ret = 0
__FUNCTION__ = "main"
Thread 3 (Thread 3483):
#0 0x00000033fc40e838 in do_sigwait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00000033fc40e8dd in sigwait () from /lib64/libpthread.so.0
No symbol table info available.
#2 0x00000000004049ed in glusterfs_sigwaiter (arg=<value optimized out>) at glusterfsd.c:1319
set = {__val = {18947, 0 <repeats 15 times>}}
ret = -4
sig = 0
#3 0x00000033fc40673d in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00000033fbcd40cd in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 2 (Thread 3485):
#0 0x00000033fbc9a541 in nanosleep () from /lib64/libc.so.6
No symbol table info available.
#1 0x00000033fbccdb24 in usleep () from /lib64/libc.so.6
No symbol table info available.
#2 0x00002b34352290fd in gf_timer_proc (ctx=0x1cc1010) at timer.c:181
now = 1316780115848018
now_tv = {tv_sec = 1316780115, tv_usec = 848018}
event = 0x1cc76c0
reg = 0x1cc6800
__FUNCTION__ = "gf_timer_proc"
#3 0x00000033fc40673d in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00000033fbcd40cd in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 1 (Thread 3484):
#0 synctask_wrap (task=0xffffffffb4000900) at syncop.c:104
ret = <value optimized out>
#1 0x00000033fbc419c0 in ?? () from /lib64/libc.so.6
No symbol table info available.
#2 0x0000000000000000 in ?? ()
No symbol table info available.
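For reference, Thread 4 above appears to be partway through initializing a new iobuf arena rather than crashing: with arena_size = 8388608 and page_size = 128 there are 8388608 / 128 = 65536 iobufs to set up, and the captured locals (i = 8294, offset = 1061632 = 8294 * 128) are consistent with a loop that advances the offset by one page per iobuf. The sketch below only reproduces that arithmetic; iobuf_arena_sketch and its fields are placeholders, not the iobuf.c structures.

```c
#include <assert.h>
#include <stdio.h>

/* Placeholder for the arena being filled in Thread 4; only the two sizes
 * visible in the backtrace are modelled. */
struct iobuf_arena_sketch {
        size_t arena_size;   /* 8388608 in the backtrace */
        size_t page_size;    /* 128 in the backtrace     */
};

int
main (void)
{
        struct iobuf_arena_sketch arena = { .arena_size = 8388608,
                                            .page_size  = 128 };
        size_t iobuf_cnt = arena.arena_size / arena.page_size;
        size_t offset    = 0;
        size_t i;

        assert (iobuf_cnt == 65536);        /* matches iobuf_cnt in frame #0 */

        /* one iobuf per page_size slice of the arena */
        for (i = 0; i < iobuf_cnt; i++) {
                if (i == 8294)              /* the iteration gdb caught */
                        printf ("i=%zu offset=%zu\n", i, offset);
                offset += arena.page_size;
        }

        return 0;
}
```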
Any ETA for this bug? I am still not able to execute the sanity runs.