Bug 848327 - [07c9a96627932ad3fc8c99193f8cfdae522ca9c1]: glusterfs client crashed trying to access NULL graph object
Summary: [07c9a96627932ad3fc8c99193f8cfdae522ca9c1]: glusterfs client crashed trying t...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: fuse
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Amar Tumballi
QA Contact: Sachidananda Urs
URL:
Whiteboard:
Depends On: 822485
Blocks:
 
Reported: 2012-08-15 09:32 UTC by Vidya Sakar
Modified: 2013-12-19 00:08 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 822485
Environment:
Last Closed: 2013-09-23 22:36:15 UTC
Embargoed:



Description Vidya Sakar 2012-08-15 09:32:18 UTC
+++ This bug was initially created as a clone of Bug #822485 +++

Description of problem:

On a 2-brick replicate volume, bonnie was started on the FUSE mount point and graph changes were made while it ran. The glusterfs client crashed with the backtrace below.

Core was generated by `/usr/local/sbin/glusterfs --volfile-id=vol --volfile-server=hyperspace /mnt/cli'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f399b6798e5 in __fd_ctx_set (fd=0x20aae7c, xlator=0x1fb0b20, value=34359456) at ../../../libglusterfs/src/fd.c:803
803	                new_xl_count = fd->xl_count + xlator->graph->xl_count;
(gdb) bt
#0  0x00007f399b6798e5 in __fd_ctx_set (fd=0x20aae7c, xlator=0x1fb0b20, value=34359456) at ../../../libglusterfs/src/fd.c:803
#1  0x00007f3999156d4d in __fuse_fd_ctx_check_n_create (this=0x1fb0b20, fd=0x20aae7c)
    at ../../../../../xlators/mount/fuse/src/fuse-bridge.c:43
#2  0x00007f3999156e06 in fuse_fd_ctx_check_n_create (this=0x1fb0b20, fd=0x20aae7c) at ../../../../../xlators/mount/fuse/src/fuse-bridge.c:67
#3  0x00007f399916c23e in fuse_handle_opened_fds (this=0x1fb0b20, old_subvol=0x2043340, new_subvol=0x7f3988016960)
    at ../../../../../xlators/mount/fuse/src/fuse-bridge.c:3681
#4  0x00007f399916c31f in fuse_graph_switch_task (data=0x202ebe0) at ../../../../../xlators/mount/fuse/src/fuse-bridge.c:3725
#5  0x00007f399b68e0cd in synctask_wrap (old_task=0x2032180) at ../../../libglusterfs/src/syncop.c:126
#6  0x00007f399a8fd1a0 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#7  0x0000000000000000 in ?? ()
(gdb) f 0
#0  0x00007f399b6798e5 in __fd_ctx_set (fd=0x20aae7c, xlator=0x1fb0b20, value=34359456) at ../../../libglusterfs/src/fd.c:803
803	                new_xl_count = fd->xl_count + xlator->graph->xl_count;
(gdb) p xlator->graph
$1 = (glusterfs_graph_t *) 0x0
(gdb) info thr
  7 Thread 8810  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
  6 Thread 8800  0x00007f399a99e6a3 in epoll_wait () at ../sysdeps/unix/syscall-template.S:82
  5 Thread 8801  do_sigwait (set=<value optimized out>, sig=0x7f399914ceb8)
    at ../nptl/sysdeps/unix/sysv/linux/../../../../../sysdeps/unix/sysv/linux/sigwait.c:65
  4 Thread 8811  0x00007f399afe9cbd in read () at ../sysdeps/unix/syscall-template.S:82
  3 Thread 8802  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
  2 Thread 8807  0x00007f399afea4bd in nanosleep () at ../sysdeps/unix/syscall-template.S:82
* 1 Thread 8803  0x00007f399b6798e5 in __fd_ctx_set (fd=0x20aae7c, xlator=0x1fb0b20, value=34359456) at ../../../libglusterfs/src/fd.c:803
(gdb) p *xl
No symbol "xl" in current context.
(gdb) p *xlator
$2 = {name = 0x1fb03a0 "fuse", type = 0x1fb06f0 "mount/fuse", next = 0x0, prev = 0x0, parents = 0x0, children = 0x0, 
  options = 0x7f399955c04c, dlhandle = 0x1fb14c0, fops = 0x7f39993781a0, cbks = 0x7f3999378440, dumpops = 0x7f3999378120, volume_options = {
    next = 0x1fb1a10, prev = 0x1fb1a10}, fini = 0x7f399916e83b <fini>, init = 0x7f399916d9ea <init>, reconfigure = 0, 
  mem_acct_init = 0x7f399916d80a <mem_acct_init>, notify = 0x7f399916d4b1 <notify>, loglevel = GF_LOG_NONE, latencies = {{min = 0, max = 0, 
      total = 0, std = 0, mean = 0, count = 0} <repeats 46 times>}, history = 0x0, ctx = 0x1f93010, graph = 0x0, itable = 0x0, 
  init_succeeded = 1 '\001', private = 0x1fb2770, mem_acct = {num_types = 98, rec = 0x1fb1b00}, winds = 0, switched = 0 '\000', 
  local_pool = 0x0}
(gdb) 
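
The faulting line dereferences xlator->graph, and the `p *xlator' output above shows graph = 0x0 for the mount/fuse xlator, so __fd_ctx_set() segfaults as soon as it is reached with that xlator during the graph switch. Below is a minimal standalone sketch (simplified stand-in structs and a hypothetical guard; not the glusterfs sources and not the eventual fix) of why fd.c:803 faults and the kind of NULL check that would avoid it:

/* Standalone sketch only -- simplified stand-ins for fd_t, xlator_t and
 * glusterfs_graph_t; not the glusterfs code. */
#include <stdio.h>

typedef struct { int xl_count; } graph_t;
typedef struct { const char *name; graph_t *graph; } xlator_t;
typedef struct { int xl_count; } fd_t;

/* Models the size computation at fd.c:803. */
static int
new_ctx_count (fd_t *fd, xlator_t *xl)
{
        /* Unguarded, this is fd->xl_count + xl->graph->xl_count and
         * dereferences NULL when xl->graph is unset, as it is for the
         * mount/fuse xlator in the core above. */
        if (xl->graph == NULL)
                return fd->xl_count;    /* hypothetical guard, for illustration */
        return fd->xl_count + xl->graph->xl_count;
}

int
main (void)
{
        fd_t     fd   = { .xl_count = 4 };
        xlator_t fuse = { .name = "fuse", .graph = NULL };  /* matches 'p *xlator' */

        printf ("new_xl_count = %d\n", new_ctx_count (&fd, &fuse));
        return 0;
}

The guard only illustrates where the NULL comes from; an actual fix could just as well live in the fuse graph-switch path (passing a subvolume xlator that has an owning graph) rather than in __fd_ctx_set() itself.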



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce (reconstructed from the description above; exact commands not recorded):
1. Create a 2-brick replicate volume and mount it with the FUSE client.
2. Start bonnie on the mount point.
3. While bonnie is running, change volume options so that the client performs a graph switch.
  
Actual results:
The glusterfs client crashes with SIGSEGV in __fd_ctx_set() during the graph switch.

Expected results:
The graph switch completes and the client keeps serving the mount without crashing.

Additional info:

 gluster volume info vol
 
Volume Name: vol
Type: Replicate
Volume ID: 08835951-9500-47fa-9c2d-4868a8d96736
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hyperspace:/mnt/sda8/export6
Brick2: hyperspace:/mnt/sda10/export3
Options Reconfigured:
features.limit-usage: /:22GB
features.quota: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.stat-prefetch: off
root@hyperspace:/home/raghu#
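
For reference, a sketch of standard gluster CLI commands that would recreate this configuration on the single host hyperspace (the exact invocations were not captured in the report, so treat this as an approximation):

# sketch only; reconstructs the "Options Reconfigured" state shown above
gluster volume create vol replica 2 transport tcp hyperspace:/mnt/sda8/export6 hyperspace:/mnt/sda10/export3
gluster volume start vol
gluster volume quota vol enable
gluster volume quota vol limit-usage / 22GB
gluster volume set vol diagnostics.latency-measurement on
gluster volume set vol diagnostics.count-fop-hits on
gluster volume set vol performance.stat-prefetch off
mount -t glusterfs hyperspace:/vol /mnt/cli

Changing any of the volume set options with I/O in progress is the kind of operation that triggers a client-side graph switch like the one in the backtrace.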

Comment 2 Amar Tumballi 2012-08-23 06:45:00 UTC
This bug is not seen on the current master branch (which will soon be branched as RHS 2.1.0). Before considering it for a fix, we want to confirm that the bug still exists on RHS servers. If it cannot be reproduced, we would like to close it.

Comment 3 Amar Tumballi 2012-10-05 17:09:49 UTC
Please re-open if this is seen again.

Comment 4 Sachidananda Urs 2013-01-07 12:20:34 UTC
The bug is not reproducible even with many instances of bonnie running and graph changes performed in a loop.

Comment 6 Scott Haines 2013-09-23 22:36:15 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

