| Summary: | stripe with dd - file overwrite | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Lakshmipathi G <lakshmipathi> |
| Component: | stripe | Assignee: | Amar Tumballi <amarts> |
| Status: | CLOSED NOTABUG | QA Contact: | |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | mainline | CC: | anush, gluster-bugs, rabhat, vraman |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | RTNR | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Lakshmipathi G
2010-01-19 05:17:21 UTC
The following error is produced when running dd on a stripe mount point, overwriting an existing file of the same name with a larger size.
-bash-3.2# dd if=/dev/zero of=dd.txt bs=1024 count=10
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.011036 seconds, 928 kB/s
-bash-3.2# ls
dd.txt
-bash-3.2# dd if=/dev/zero of=dd.txt bs=1024 count=100
dd: writing `dd.txt': Software caused connection abort
dd: closing output file `dd.txt': Transport endpoint is not connected
-bash-3.2#
-bash-3.2# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 10321208 4388764 5408156 45% /
/dev/sdb 433455904 203068 411234532 1% /mnt
glusterfs#/root/cfg.vol
153899008 26702848 119378560 19% /opt
df: `/mnt/stripe_mnt': Transport endpoint is not connected
-------------
log files:
-------------
server1.log
+++++++++++++
[2010-01-19 01:58:50] D [glusterfsd.c:1345:main] glusterfs: running in pid 4803
[2010-01-19 01:58:50] D [transport.c:145:transport_load] transport: attempt to load file /opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/transport/socket.so
[2010-01-19 01:58:50] D [xlator.c:284:_volume_option_value_validate] server-tcp: no range check required for 'option transport.socket.listen-port 6996'
[2010-01-19 01:58:50] D [io-threads.c:2841:init] brick1: io-threads: Autoscaling: off, min_threads: 8, max_threads: 8
[2010-01-19 01:58:50] W [glusterfsd.c:540:_log_if_option_is_invalid] posix1: option 'transport.socket.listen-port' is not recognized
[2010-01-19 01:58:50] N [glusterfsd.c:1371:main] glusterfs: Successfully started
[2010-01-19 02:02:44] D [addr.c:190:gf_auth] brick1: allowed = "*", received addr = "10.212.187.79"
[2010-01-19 02:02:44] N [server-protocol.c:5809:mop_setvolume] server-tcp: accepted client from 10.212.187.79:1020
[2010-01-19 02:02:44] D [addr.c:190:gf_auth] brick1: allowed = "*", received addr = "10.212.187.79"
[2010-01-19 02:02:44] N [server-protocol.c:5809:mop_setvolume] server-tcp: accepted client from 10.212.187.79:1021
[2010-01-19 02:03:47] N [server-protocol.c:6742:notify] server-tcp: 10.212.187.79:1020 disconnected
[2010-01-19 02:03:47] N [server-protocol.c:6742:notify] server-tcp: 10.212.187.79:1021 disconnected
[2010-01-19 02:03:47] N [server-helpers.c:842:server_connection_destroy] server-tcp: destroyed connection of ip-10-212-187-79-4886-2010/01/19-02:02:44:674972-ip-10-212-187-79-1
---------------------
-------------
server2.log
+++++++++++++
[2010-01-19 02:00:41] D [glusterfsd.c:1345:main] glusterfs: running in pid 4836
[2010-01-19 02:00:41] D [transport.c:145:transport_load] transport: attempt to load file /opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/transport/socket.so
[2010-01-19 02:00:41] D [xlator.c:284:_volume_option_value_validate] server-tcp: no range check required for 'option transport.socket.listen-port 6997'
[2010-01-19 02:00:41] D [io-threads.c:2841:init] brick2: io-threads: Autoscaling: off, min_threads: 8, max_threads: 8
[2010-01-19 02:00:41] W [glusterfsd.c:540:_log_if_option_is_invalid] posix1: option 'transport.socket.listen-port' is not recognized
[2010-01-19 02:00:41] N [glusterfsd.c:1371:main] glusterfs: Successfully started
[2010-01-19 02:02:44] D [addr.c:190:gf_auth] brick2: allowed = "*", received addr = "10.212.187.79"
[2010-01-19 02:02:44] N [server-protocol.c:5809:mop_setvolume] server-tcp: accepted client from 10.212.187.79:1018
[2010-01-19 02:02:44] D [addr.c:190:gf_auth] brick2: allowed = "*", received addr = "10.212.187.79"
[2010-01-19 02:02:44] N [server-protocol.c:5809:mop_setvolume] server-tcp: accepted client from 10.212.187.79:1019
[2010-01-19 02:03:47] N [server-protocol.c:6742:notify] server-tcp: 10.212.187.79:1018 disconnected
[2010-01-19 02:03:47] N [server-protocol.c:6742:notify] server-tcp: 10.212.187.79:1019 disconnected
[2010-01-19 02:03:47] N [server-helpers.c:842:server_connection_destroy] server-tcp: destroyed connection of ip-10-212-187-79-4886-2010/01/19-02:02:44:674972-ip-10-212-187-79-2
------------
+++++++++++
client.log
+++++++++++
[2010-01-19 02:02:44] D [client-protocol.c:7006:notify] ip-10-212-187-79-4: got GF_EVENT_PARENT_UP, attempting connect on transport
[2010-01-19 02:02:44] N [glusterfsd.c:1371:main] glusterfs: Successfully started
[2010-01-19 02:02:44] D [client-protocol.c:7020:notify] ip-10-212-187-79-4: got GF_EVENT_CHILD_UP
[2010-01-19 02:02:44] D [client-protocol.c:7020:notify] ip-10-212-187-79-4: got GF_EVENT_CHILD_UP
[2010-01-19 02:02:44] D [client-protocol.c:7020:notify] ip-10-212-187-79-3: got GF_EVENT_CHILD_UP
[2010-01-19 02:02:44] D [client-protocol.c:7020:notify] ip-10-212-187-79-3: got GF_EVENT_CHILD_UP
[2010-01-19 02:02:44] D [client-protocol.c:7020:notify] ip-10-212-187-79-2: got GF_EVENT_CHILD_UP
[2010-01-19 02:02:44] D [client-protocol.c:7020:notify] ip-10-212-187-79-2: got GF_EVENT_CHILD_UP
[2010-01-19 02:02:44] D [client-protocol.c:7020:notify] ip-10-212-187-79-1: got GF_EVENT_CHILD_UP
[2010-01-19 02:02:44] D [client-protocol.c:7020:notify] ip-10-212-187-79-1: got GF_EVENT_CHILD_UP
[2010-01-19 02:02:44] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-2: Connected to 10.212.187.79:6997, attached to remote volume 'brick2'.
[2010-01-19 02:02:44] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-3: Connected to 10.212.187.79:6998, attached to remote volume 'brick3'.
[2010-01-19 02:02:44] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-3: Connected to 10.212.187.79:6998, attached to remote volume 'brick3'.
[2010-01-19 02:02:44] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-4: Connected to 10.212.187.79:6999, attached to remote volume 'brick4'.
[2010-01-19 02:02:44] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-4: Connected to 10.212.187.79:6999, attached to remote volume 'brick4'.
[2010-01-19 02:02:44] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-1: Connected to 10.212.187.79:6996, attached to remote volume 'brick1'.
[2010-01-19 02:02:44] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-2: Connected to 10.212.187.79:6997, attached to remote volume 'brick2'.
[2010-01-19 02:02:44] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-1: Connected to 10.212.187.79:6996, attached to remote volume 'brick1'.
[2010-01-19 02:02:44] D [fuse-bridge.c:3089:fuse_thread_proc] fuse: pthread_cond_timedout returned non zero value ret: 0 errno: 0
[2010-01-19 02:02:44] N [fuse-bridge.c:2939:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.8
[2010-01-19 02:03:07] D [stat-prefetch.c:3843:sp_release] statprefetch: cache hits: 0, cache miss: 0
[2010-01-19 02:03:43] D [stat-prefetch.c:3843:sp_release] statprefetch: cache hits: 1, cache miss: 0
pending frames:
frame : type(1) op(WRITE)
patchset: v3.0.0-28-ge6f074f
signal received: 11
time of crash: 2010-01-19 02:03:47
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.1rc1
/lib64/libc.so.6[0x2aaaab397280]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/cluster/stripe.so(stripe_writev+0x476)[0x2aaaac1619f2]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/write-behind.so(wb_writev+0x54f)[0x2aaaac36dd5f]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/read-ahead.so(ra_writev+0x288)[0x2aaaac5789b7]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/io-cache.so(ioc_writev+0x324)[0x2aaaac7848f3]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/quick-read.so(qr_writev+0x50a)[0x2aaaac993060]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/stat-prefetch.so(sp_writev+0x290)[0x2aaaacbaa016]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/mount/fuse.so[0x2aaaacdc0d4e]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/mount/fuse.so[0x2aaaacdc695c]
/lib64/libpthread.so.0[0x2aaaab152367]
/lib64/libc.so.6(clone+0x6d)[0x2aaaab439f7d]
----------------------
+++++++++++++ client log file, while running ffsb test +++++++++++++++
--------------
[2010-01-19 02:37:16] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-2: Connected to 10.212.187.79:6997, attached to remote volume 'brick2'.
[2010-01-19 02:37:16] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-3: Connected to 10.212.187.79:6998, attached to remote volume 'brick3'.
[2010-01-19 02:37:16] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-3: Connected to 10.212.187.79:6998, attached to remote volume 'brick3'.
[2010-01-19 02:37:16] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-4: Connected to 10.212.187.79:6999, attached to remote volume 'brick4'.
[2010-01-19 02:37:16] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-4: Connected to 10.212.187.79:6999, attached to remote volume 'brick4'.
[2010-01-19 02:37:16] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-1: Connected to 10.212.187.79:6996, attached to remote volume 'brick1'.
[2010-01-19 02:37:16] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-1: Connected to 10.212.187.79:6996, attached to remote volume 'brick1'.
[2010-01-19 02:37:16] D [fuse-bridge.c:3089:fuse_thread_proc] fuse: pthread_cond_timedout returned non zero value ret: 0 errno: 0
[2010-01-19 02:37:16] N [client-protocol.c:6225:client_setvolume_cbk] ip-10-212-187-79-2: Connected to 10.212.187.79:6997, attached to remote volume 'brick2'.
[2010-01-19 02:37:16] N [fuse-bridge.c:2939:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.8
[2010-01-19 02:37:55] D [stripe.c:463:stripe_stack_unwind_inode_cbk] stripe-0: ip-10-212-187-79-4 returned error File exists
[2010-01-19 02:37:55] D [stripe.c:463:stripe_stack_unwind_inode_cbk] stripe-0: ip-10-212-187-79-3 returned error File exists
pending frames:
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
patchset: v3.0.0-28-ge6f074f
signal received: 11
time of crash: 2010-01-19 02:37:57
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.1rc1
/lib64/libc.so.6[0x2aaaab397280]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/cluster/stripe.so(stripe_readv+0x5ba)[0x2aaaac160ed0]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/write-behind.so[0x2aaaac36e2d9]
/opt/glusterfs/3.0.1rc1/lib/libglusterfs.so.0[0x2aaaaacfa928]
/opt/glusterfs/3.0.1rc1/lib/libglusterfs.so.0(call_resume+0xc1)[0x2aaaaad009b6]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/write-behind.so(wb_resume_other_requests+0xcd)[0x2aaaac36d1b7]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/write-behind.so(wb_do_ops+0x60)[0x2aaaac36d261]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/write-behind.so(wb_process_queue+0x140)[0x2aaaac36d71f]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/write-behind.so(wb_readv+0x5aa)[0x2aaaac36e89e]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/read-ahead.so(ra_page_fault+0x280)[0x2aaaac57aa0e]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/read-ahead.so[0x2aaaac577227]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/read-ahead.so(ra_readv+0x566)[0x2aaaac577c66]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/io-cache.so(ioc_page_fault+0x3a2)[0x2aaaac787690]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/io-cache.so(ioc_dispatch_requests+0x4c3)[0x2aaaac783738]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/io-cache.so(ioc_readv+0xaae)[0x2aaaac78439e]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/quick-read.so(qr_readv+0xc58)[0x2aaaac9927cb]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/performance/stat-prefetch.so(sp_readv+0x256)[0x2aaaacba9c5c]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/mount/fuse.so[0x2aaaacdc068e]
/opt/glusterfs/3.0.1rc1/lib/glusterfs/3.0.1rc1/xlator/mount/fuse.so[0x2aaaacdc695c]
/lib64/libpthread.so.0[0x2aaaab152367]
/lib64/libc.so.6(clone+0x6d)[0x2aaaab439f7d]
---------
This is the backtrace from the core file that was generated:
(gdb) bt
#0 0x00002aaaac1619f2 in stripe_writev (frame=0x61dd40, this=0x611d20, fd=0x61ce00, vector=0x61ea20, count=1, offset=132096,
iobref=0x61f610) at stripe.c:3046
#1 0x00002aaaac3692ba in wb_sync (frame=0x61e688, file=0x61efe0, winds=0x41400b40) at write-behind.c:478
#2 0x00002aaaac36d247 in wb_do_ops (frame=0x61e688, file=0x61efe0, winds=0x41400b40, unwinds=0x41400b30, other_requests=0x41400b20)
at write-behind.c:1648
#3 0x00002aaaac36d71f in wb_process_queue (frame=0x61e688, file=0x61efe0, flush_all=0 '\0') at write-behind.c:1771
#4 0x00002aaaac36de82 in wb_writev (frame=0x61dc50, this=0x6126d0, fd=0x61ce00, vector=0x41401000, count=1, offset=262144,
iobref=0x61dda0) at write-behind.c:1890
#5 0x00002aaaac5789b7 in ra_writev (frame=0x61dd40, this=0x612fe0, fd=0x61ce00, vector=0x41401000, count=1, offset=262144,
iobref=0x61dda0) at read-ahead.c:665
#6 0x00002aaaac7848f3 in ioc_writev (frame=0x61deb0, this=0x613910, fd=0x61ce00, vector=0x41401000, count=1, offset=262144,
iobref=0x61dda0) at io-cache.c:1083
#7 0x00002aaaac993060 in qr_writev (frame=0x61cd60, this=0x614300, fd=0x61ce00, vector=0x41401000, count=1, off=262144,
iobref=0x61dda0) at quick-read.c:1077
#8 0x00002aaaacbaa016 in sp_writev (frame=0x61e4d0, this=0x614cf0, fd=0x61ce00, vector=0x41401000, count=1, off=262144,
iobref=0x61dda0) at stat-prefetch.c:2562
#9 0x00002aaaacdc0d4e in fuse_write (this=0x615650, finh=0x61eee0, msg=0x2aaaabe5f000) at fuse-bridge.c:1972
#10 0x00002aaaacdc695c in fuse_thread_proc (data=0x615650) at fuse-bridge.c:3179
#11 0x00002aaaab152367 in start_thread () from /lib64/libpthread.so.0
#12 0x00002aaaab439f7d in clone () from /lib64/libc.so.6
(gdb)
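The top frame is stripe_writev() at stripe.c:3046, which fans the write out to the striped subvolumes; the variable dump in the reply below shows that the subvolume slot it is about to use, fctx->xl_array[idx] with idx == 2, is NULL, so the stripe translator has no subvolume for that stripe index. A minimal, self-contained sketch of that failure mode follows (hypothetical names, not the actual GlusterFS source), assuming a round-robin stripe layout over a fixed array of subvolume pointers:

/* Sketch of the failure mode: a striped write fans out over an array of
 * subvolume pointers, and one slot is NULL because the volume files did
 * not define the expected brick. Dereferencing that slot is the SIGSEGV. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

typedef struct subvol { const char *name; } subvol_t;

typedef struct stripe_ctx {
        subvol_t **xl_array;      /* one slot per configured brick */
        int        stripe_count;
        size_t     stripe_size;   /* bytes per brick per stripe    */
} stripe_ctx_t;

/* stand-in for winding the write down to one brick */
static void subvol_writev(subvol_t *sv, off_t offset, size_t size)
{
        printf("write %zu bytes at offset %lld -> %s\n",
               size, (long long) offset, sv->name);
}

static int stripe_writev_sketch(stripe_ctx_t *fctx, off_t offset, size_t size)
{
        while (size > 0) {
                /* round-robin: which brick owns this stripe of the file */
                int    idx  = (offset / fctx->stripe_size) % fctx->stripe_count;
                size_t fill = fctx->stripe_size - (offset % fctx->stripe_size);
                if (fill > size)
                        fill = size;

                if (fctx->xl_array[idx] == NULL) {
                        /* the slot gdb showed as NULL (idx == 2); without
                         * this guard the next call dereferences NULL */
                        fprintf(stderr, "no subvolume for stripe index %d\n", idx);
                        return -1;
                }
                subvol_writev(fctx->xl_array[idx], offset, fill);

                offset += fill;
                size   -= fill;
        }
        return 0;
}

int main(void)
{
        subvol_t b1 = { "brick1" }, b2 = { "brick2" }, b4 = { "brick4" };
        /* slot 2 left NULL to mimic the broken configuration */
        subvol_t *bricks[4] = { &b1, &b2, NULL, &b4 };
        stripe_ctx_t fctx = { bricks, 4, 131072 };

        /* writing past two full 128 KiB stripes reaches stripe index 2 */
        return stripe_writev_sketch(&fctx, 0, 300 * 1024) ? EXIT_FAILURE : 0;
}

Compiled standalone, the sketch reports the missing subvolume at stripe index 2 and returns an error; the crash above shows what happens when such a slot ends up NULL and is dereferenced directly.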
(In reply to comment #2)
> This is the backtrace from the core file that was generated:
> [backtrace quoted above]

And these are the values of some of the variables:

(gdb) p fctx->xl_array[idx]
$8 = (xlator_t *) 0x0
(gdb) p idx
$9 = 2
(gdb) p *(stripe_local_t *)frame->local
$10 = {next = 0x0, orig_frame = 0x0, fctx = 0x0,
  stbuf = {st_dev = 0, st_ino = 0, st_nlink = 0, st_mode = 0, st_uid = 0, st_gid = 0, pad0 = 0, st_rdev = 0, st_size = 0, st_blksize = 0, st_blocks = 0, st_atim = {tv_sec = 0, tv_nsec = 0}, st_mtim = {tv_sec = 0, tv_nsec = 0}, st_ctim = {tv_sec = 0, tv_nsec = 0}, __unused = {0, 0, 0}},
  pre_buf = {st_dev = 5428695154401738754, st_ino = 50298893, st_nlink = 1, st_mode = 33188, st_uid = 0, st_gid = 0, pad0 = 0, st_rdev = 0, st_size = 132096, st_blksize = 4096, st_blocks = 24, st_atim = {tv_sec = 1263966670, tv_nsec = 0}, st_mtim = {tv_sec = 1263966689, tv_nsec = 0}, st_ctim = {tv_sec = 1263966689, tv_nsec = 0}, __unused = {0, 0, 0}},
  post_buf = {st_dev = 5428695154401738754, st_ino = 50298893, st_nlink = 1, st_mode = 33188, st_uid = 0, st_gid = 0, pad0 = 0, st_rdev = 0, st_size = 262144, st_blksize = 4096, st_blocks = 272, st_atim = {tv_sec = 1263966670, tv_nsec = 0}, st_mtim = {tv_sec = 1263966689, tv_nsec = 0}, st_ctim = {tv_sec = 1263966689, tv_nsec = 0}, __unused = {0, 0, 0}},
  preparent = {st_dev = 0, st_ino = 0, st_nlink = 0, st_mode = 0, st_uid = 0, st_gid = 0, pad0 = 0, st_rdev = 0, st_size = 0, st_blksize = 0, st_blocks = 0, st_atim = {tv_sec = 0, tv_nsec = 0}, st_mtim = {tv_sec = 0, tv_nsec = 0}, st_ctim = {tv_sec = 0, tv_nsec = 0}, __unused = {0, 0, 0}},
  postparent = {st_dev = 0, st_ino = 0, st_nlink = 0, st_mode = 0, st_uid = 0, st_gid = 0, pad0 = 0, st_rdev = 0, st_size = 0, st_blksize = 0, st_blocks = 0, st_atim = {tv_sec = 0, tv_nsec = 0}, st_mtim = {tv_sec = 0, tv_nsec = 0}, st_ctim = {tv_sec = 0, tv_nsec = 0}, __unused = {0, 0, 0}},
  stbuf_size = 0, prebuf_size = 0, postbuf_size = 0, preparent_size = 0, postparent_size = 0,
  stbuf_blocks = 0, prebuf_blocks = 0, postbuf_blocks = 0, preparent_blocks = 0, postparent_blocks = 0,
  replies = 0x0,
  statvfs_buf = {f_bsize = 0, f_frsize = 0, f_blocks = 0, f_bfree = 0, f_bavail = 0, f_files = 0, f_ffree = 0, f_favail = 0, f_fsid = 0, f_flag = 0, f_namemax = 0, __f_spare = {0, 0, 0, 0, 0, 0}},
  entry = 0x0,
  stats = {nr_files = 0, free_disk = 0, total_disk_size = 0, disk_usage = 0, disk_speed = 0, nr_clients = 0, write_usage = 0, read_usage = 0},
  revalidate = 0 '\0', failed = 0 '\0', unwind = 1 '\001',
  entry_count = 0, node_index = 0, call_count = 1, wind_count = 2, op_ret = 130048, op_errno = 0, count = 0, flags = 0,
  name = 0x0, inode = 0x0,
  loc = {path = 0x0, name = 0x0, ino = 0, inode = 0x0, parent = 0x0},
  loc2 = {path = 0x0, name = 0x0, ino = 0, inode = 0x0, parent = 0x0},
  dict = 0x0, offset = 0, stripe_size = 131072, xattr_self_heal_needed = 0, entry_self_heal_needed = 0, list = 0x0,
  lock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0},
  fd = 0x0, value = 0x0, iobref = 0x0}
(gdb) p *frame
$11 = {root = 0x61cfb0, parent = 0x61d038, next = 0x0, prev = 0x61deb0, local = 0x61f870, this = 0x611d20, ret = 0x2aaaac3689e9 <wb_sync_cbk>, ref_count = 1, lock = 1, cookie = 0x61dd40, complete = _gf_false}
(gdb)

The stripe dd test did not work because of a wrong specification in the vol files: the same directory was exported by two different server vol files. After correcting the vol files, it works fine now.
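For reference, the shape of the fix: each brick's server vol file must export its own backend directory. Below is a minimal sketch of a legacy 3.0-style server vol file for server1, reusing the volume names and listen-port seen in server1.log; the export path /data/export1 is an illustrative assumption, not taken from the report.

volume posix1
  type storage/posix
  # each brick must export its own, unique directory
  option directory /data/export1
end-volume

volume brick1
  type performance/io-threads
  subvolumes posix1
end-volume

volume server-tcp
  type protocol/server
  option transport-type tcp
  option transport.socket.listen-port 6996
  option auth.addr.brick1.allow *
  subvolumes brick1
end-volume

server2's vol file would look the same except for its own export directory (for example /data/export2), listen-port 6997, and the posix2/brick2 names. Pointing two server vol files at the same directory is the misconfiguration that produced the crashes above.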