Bug 960899 - [RHEV-RHS] Client crashed on hypervisor after rebalance on a distributed-replicate volume
Summary: [RHEV-RHS] Client crashed on hypervisor after rebalance on a distributed-replicate volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Pranith Kumar K
QA Contact: shylesh
URL:
Whiteboard:
Depends On:
Blocks: 978794 978802 986158
 
Reported: 2013-05-08 09:33 UTC by shylesh
Modified: 2013-09-23 22:35 UTC
6 users

Fixed In Version: glusterfs-3.4.0.5rhs-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 978794 (view as bug list)
Environment:
virt rhev integration
Last Closed: 2013-09-23 22:35:27 UTC
Embargoed:



Description shylesh 2013-05-08 09:33:46 UTC
Description of problem:
On a distributed-replicate volume, performing add-brick followed by rebalance caused the glusterfs FUSE client on the hypervisor to crash.

Version-Release number of selected component (if applicable):
[root@rhs1-bb rpm]# rpm -qa | grep gluster
glusterfs-fuse-3.4.0.4rhs-1.el6rhs.x86_64
glusterfs-3.4.0.4rhs-1.el6rhs.x86_64
glusterfs-debuginfo-3.4.0.4rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.4rhs-1.el6rhs.x86_64


How reproducible:


Steps to Reproduce:
1. Created a 6x2 distributed-replicate volume.
2. Repeatedly performed add-brick followed by rebalance.
3. After some time, the client crashed.
   

Additional info:

#0  afr_fd_has_witnessed_unstable_write (this=<value optimized out>, fd=0x33cd1f0) at afr-transaction.c:1408
1408                    if (fdctx->witnessed_unstable_write) {
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.107.el6.x86_64 keyutils-libs-1.4-4.el6.x86_64 krb5-libs-1.10.3-10.el6_4.2.x86_64 libcom_err-1.41.12-14.el6.x86_64 libgcc-4.4.7-3.el6.x86_64 libselinux-2.0.94-5.3.el6.x86_64 openssl-1.0.0-27.el6_4.2.x86_64 zlib-1.2.3-29.el6.x86_64
(gdb) bt
#0  afr_fd_has_witnessed_unstable_write (this=<value optimized out>, fd=0x33cd1f0) at afr-transaction.c:1408
#1  0x00007f1cf535de3c in afr_fsync (frame=0x7f1cf8e43830, this=0xef1100, fd=0x33cd1f0, datasync=0, xdata=0x0) at afr-common.c:2799
#2  0x00007f1cf50f6faf in dht_fsync (frame=0x7f1cf8e45e7c, this=<value optimized out>, fd=0x33cd1f0, datasync=0, xdata=0x0) at dht-inode-read.c:804
#3  0x00007f1cf4ebee59 in wb_fsync (frame=0x7f1cf8e478b0, this=0xef4940, fd=0x33cd1f0, datasync=0, xdata=0x0) at write-behind.c:1471
#4  0x00007f1cf4a9e667 in io_stats_fsync (frame=0x7f1cf8e3eaec, this=0xef54a0, fd=0x33cd1f0, flags=0, xdata=0x0) at io-stats.c:2134
#5  0x00007f1cfa032eae in syncop_fsync (subvol=0xef54a0, fd=0x33cd1f0, dataonly=0) at syncop.c:1657
#6  0x00007f1cf83ffeed in fuse_migrate_fd (this=0xa03b70, basefd=0xe5e090, old_subvol=0xef54a0, new_subvol=0x3454da0) at fuse-bridge.c:4083
#7  0x00007f1cf840009c in fuse_handle_opened_fds (this=0xa03b70, old_subvol=0xef54a0, new_subvol=0x3454da0) at fuse-bridge.c:4167
#8  0x00007f1cf8400149 in fuse_graph_switch_task (data=<value optimized out>) at fuse-bridge.c:4218
#9  0x00007f1cfa02ac02 in synctask_wrap (old_task=<value optimized out>) at syncop.c:144
#10 0x0000003627443b70 in ?? () from /lib64/libc.so.6
#11 0x0000000000000000 in ?? ()



Volume Name: vstore
Type: Distributed-Replicate
Volume ID: e8fe6a61-6345-41f0-9329-a802b051a026
Status: Started
Number of Bricks: 15 x 2 = 30
Transport-type: tcp
Bricks:
Brick1: 10.70.37.76:/brick1/vs1
Brick2: 10.70.37.133:/brick1/vs1
Brick3: 10.70.37.76:/brick2/vs2
Brick4: 10.70.37.133:/brick2/vs2
Brick5: 10.70.37.76:/brick3/vs3
Brick6: 10.70.37.133:/brick3/vs3
Brick7: 10.70.37.76:/brick4/vs4
Brick8: 10.70.37.133:/brick4/vs4
Brick9: 10.70.37.76:/brick5/vs5
Brick10: 10.70.37.133:/brick5/vs5
Brick11: 10.70.37.76:/brick6/vs6
Brick12: 10.70.37.133:/brick6/vs6
Brick13: 10.70.37.134:/brick1/vs1
Brick14: 10.70.37.59:/brick1/vs1
Brick15: 10.70.37.134:/brick2/vs7
Brick16: 10.70.37.59:/brick2/vs7
Brick17: 10.70.37.134:/brick3/vs8
Brick18: 10.70.37.59:/brick3/vs8
Brick19: 10.70.37.134:/brick4/vs9
Brick20: 10.70.37.59:/brick4/vs9
Brick21: 10.70.37.134:/brick5/vs10
Brick22: 10.70.37.59:/brick5/vs10
Brick23: 10.70.37.76:/brick6/vs11
Brick24: 10.70.37.133:/brick6/vs11
Brick25: 10.70.37.134:/brick6/vs12
Brick26: 10.70.37.59:/brick6/vs12
Brick27: 10.70.37.134:/brick7/vs13
Brick28: 10.70.37.59:/brick7/vs13
Brick29: 10.70.37.134:/brick7/vs14
Brick30: 10.70.37.59:/brick7/vs14
Options Reconfigured:
performance.open-behind: false
storage.owner-gid: 36
storage.owner-uid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: on





#0  afr_fd_has_witnessed_unstable_write (this=<value optimized out>, fd=0x33cd1f0) at afr-transaction.c:1408
        fdctx = 0x0
        witness = _gf_false
#1  0x00007f1cf535de3c in afr_fsync (frame=0x7f1cf8e43830, this=0xef1100, fd=0x33cd1f0, datasync=0, xdata=0x0) at afr-common.c:2799
        priv = 0xff1b40
        local = 0x7f1cdbb923a8
        ret = <value optimized out>
        i = 0
        call_count = 2
        op_errno = 0
        __FUNCTION__ = "afr_fsync"
#2  0x00007f1cf50f6faf in dht_fsync (frame=0x7f1cf8e45e7c, this=<value optimized out>, fd=0x33cd1f0, datasync=0, xdata=0x0) at dht-inode-read.c:804
        _new = 0x7f1cf8e43830
        old_THIS = 0xef3c40
        tmp_cbk = 0x7f1cf50f9460 <dht_fsync_cbk>
        subvol = 0xef1100
        op_errno = -1
        local = <value optimized out>
        __FUNCTION__ = "dht_fsync"
#3  0x00007f1cf4ebee59 in wb_fsync (frame=0x7f1cf8e478b0, this=0xef4940, fd=0x33cd1f0, datasync=0, xdata=0x0) at write-behind.c:1471
        _new = 0x7f1cf8e45e7c
        old_THIS = 0xef4940
        wb_inode = <value optimized out>
        stub = 0x0
        op_errno = 22
        __FUNCTION__ = "wb_fsync"
#4  0x00007f1cf4a9e667 in io_stats_fsync (frame=0x7f1cf8e3eaec, this=0xef54a0, fd=0x33cd1f0, flags=0, xdata=0x0) at io-stats.c:2134
        _new = 0x7f1cf8e478b0
        old_THIS = 0xef54a0
        tmp_cbk = 0x7f1cf4aa5aa0 <io_stats_fsync_cbk>
        __FUNCTION__ = "io_stats_fsync"
#5  0x00007f1cfa032eae in syncop_fsync (subvol=0xef54a0, fd=0x33cd1f0, dataonly=0) at syncop.c:1657
        _new = 0x7f1cf8e3eaec
        old_THIS = 0xa03b70
        tmp_cbk = 0x7f1cfa02b230 <syncop_fsync_cbk>
        task = 0x7f1ce4028180
        frame = 0x7f1cf8c3190c
        args = {op_ret = 0, op_errno = 0, iatt1 = {ia_ino = 0, ia_gfid = '\000' <repeats 15 times>, ia_dev = 0, ia_type = IA_INVAL, ia_prot = {
              suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {
                read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, 
            ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, 
            ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, iatt2 = {ia_ino = 0, ia_gfid = '\000' <repeats 15 times>, ia_dev = 0, 
            ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', 
                exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', 
                exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, 
            ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, xattr = 0x0, entries = {{list = {next = 0x0, 
                prev = 0x0}, {next = 0x0, prev = 0x0}}, d_ino = 0, d_off = 0, d_len = 0, d_type = 0, d_stat = {ia_ino = 0, 
              ia_gfid = '\000' <repeats 15 times>, ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', 
                owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {
                  read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, 
              ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, 
            dict = 0x0, inode = 0x0, d_name = 0x7f1ce4238fe0 ""}, statvfs_buf = {f_bsize = 0, f_frsize = 0, f_blocks = 0, f_bfree = 0, f_bavail = 0, 
            f_files = 0, f_ffree = 0, f_favail = 0, f_fsid = 0, f_flag = 0, f_namemax = 0, __f_spare = {0, 0, 0, 0, 0, 0}}, vector = 0x0, count = 0, 
          iobref = 0x0, buffer = 0x0, xdata = 0x0, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, 
              data = '\000' <repeats 1023 times>}}, uuid = '\000' <repeats 15 times>, errstr = 0x0, dict = 0x0, lock_dict = {__data = {__lock = 0, 
              __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, 
            __size = '\000' <repeats 39 times>, __align = 0}, barrier = {guard = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, 
                __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, cond = {__data = {
                __lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0}, 
              __size = '\000' <repeats 47 times>, __align = 0}, waitq = {next = 0x0, prev = 0x0}, count = 0}, task = 0x7f1ce4028180, mutex = {
            __data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, 
            __size = '\000' <repeats 39 times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, 
              __woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' <repeats 47 times>, __align = 0}, done = 0}
        __FUNCTION__ = "syncop_fsync"
#6  0x00007f1cf83ffeed in fuse_migrate_fd (this=0xa03b70, basefd=0xe5e090, old_subvol=0xef54a0, new_subvol=0x3454da0) at fuse-bridge.c:4083
        ret = -1
        create_in_progress = 0 '\000'
        basefd_ctx = <value optimized out>
        oldfd = 0x33cd1f0
        __FUNCTION__ = "fuse_migrate_fd"
#7  0x00007f1cf840009c in fuse_handle_opened_fds (this=0xa03b70, old_subvol=0xef54a0, new_subvol=0x3454da0) at fuse-bridge.c:4167
        priv = <value optimized out>
        fdentries = 0x7f1cdc001340
        count = 128
        fdtable = <value optimized out>
        i = <value optimized out>
        fd = 0xe5e090
        ret = <value optimized out>
        fdctx = <value optimized out>
#8  0x00007f1cf8400149 in fuse_graph_switch_task (data=<value optimized out>) at fuse-bridge.c:4218
        args = <value optimized out>
#9  0x00007f1cfa02ac02 in synctask_wrap (old_task=<value optimized out>) at syncop.c:144
        task = 0x7f1ce4028180
#10 0x0000003627443b70 in ?? ()
No symbol table info available.
#11 0x0000000000000000 in ?? ()
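
Analysis note: the locals for frame #0 show fdctx = 0x0, so the crash is a NULL-pointer dereference. During the graph switch triggered by add-brick/rebalance, fuse_migrate_fd() runs syncop_fsync() on each open fd of the old graph (frames #5 and #6), and afr_fd_has_witnessed_unstable_write() dereferences an AFR fd context that is NULL for the fd being migrated. The following is a minimal, self-contained sketch of that failure mode with a defensive guard; the types and the afr_fd_ctx_lookup() helper are simplified assumptions for illustration only, not the glusterfs sources, and not the shipped fix (see comment 3 below: the resolution was to revert the not-yet-GA-ready upstream changes).

/* Illustrative, self-contained sketch -- simplified names and types, not the
 * glusterfs code or the actual fix. It only demonstrates the failure mode
 * visible in frame #0, where "if (fdctx->witnessed_unstable_write)" is
 * evaluated with fdctx = 0x0. */

#include <stdio.h>
#include <stddef.h>

typedef struct {
        int witnessed_unstable_write;  /* set once a write is acked without a stable sync */
} afr_fd_ctx_sketch_t;

/* Stand-in for the per-fd context lookup: returns the AFR context stored on
 * the fd, or NULL when no context was ever created for it (the situation the
 * migrated fd was in during the graph switch). */
static afr_fd_ctx_sketch_t *
afr_fd_ctx_lookup (void *fd)
{
        return (afr_fd_ctx_sketch_t *) fd;   /* sketch only: the "fd" carries its ctx directly */
}

static int
afr_fd_has_witnessed_unstable_write_sketch (void *fd)
{
        afr_fd_ctx_sketch_t *fdctx = afr_fd_ctx_lookup (fd);

        /* The crashing code dereferenced fdctx with no NULL check; guarding
         * here avoids the SIGSEGV for fds that have no context. */
        if (!fdctx)
                return 0;

        return fdctx->witnessed_unstable_write ? 1 : 0;
}

int
main (void)
{
        afr_fd_ctx_sketch_t ctx = { .witnessed_unstable_write = 1 };

        /* An fd with a context behaves normally ... */
        printf ("with ctx:    %d\n", afr_fd_has_witnessed_unstable_write_sketch (&ctx));
        /* ... and an fd that never got one (fdctx == NULL) no longer crashes. */
        printf ("without ctx: %d\n", afr_fd_has_witnessed_unstable_write_sketch (NULL));
        return 0;
}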

Comment 3 Amar Tumballi 2013-05-09 07:07:25 UTC
This gets fixed by a series of patches: https://code.engineering.redhat.com/gerrit/#/c/729{5,6,7,8,9}

All of these are reverts of upstream fixes that were not completely ready for GA yet; those incomplete changes are what caused the issue.

Comment 4 shylesh 2013-05-17 11:02:48 UTC
Verified on 3.4.0.8rhs-1.el6rhs.x86_64

Comment 5 SATHEESARAN 2013-06-17 05:35:34 UTC
Tested the same on RHS 2.0.z U5 [glusterfs-3.3.0.10rhs-1.x86_64] and no issues were seen.

volume information
===================
1. gluster volume status
[root@localhost ~]# gluster volume status
Status of volume: dr-vmstore
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.124:/rhs/brick1/b1                       24009   Y       4193
Brick 10.70.37.73:/rhs/brick1/b1                        24009   Y       3627
Brick 10.70.37.124:/rhs/brick1/b2                       24010   Y       4198
Brick 10.70.37.73:/rhs/brick1/b2                        24010   Y       3633
Brick 10.70.37.124:/rhs/brick1/b3                       24011   Y       4204
Brick 10.70.37.73:/rhs/brick1/b3                        24011   Y       3638
Brick 10.70.37.217:/rhs/brick1/b1                       24009   Y       3559
Brick 10.70.37.166:/rhs/brick1/b1                       24009   Y       3550
Brick 10.70.37.217:/rhs/brick1/b2                       24010   Y       3565
Brick 10.70.37.166:/rhs/brick1/b2                       24010   Y       3555
Brick 10.70.37.217:/rhs/brick1/b3                       24011   Y       3570
Brick 10.70.37.166:/rhs/brick1/b3                       24011   Y       3561
Brick 10.70.37.124:/rhs/brick2/ab1                      24012   Y       29155
Brick 10.70.37.73:/rhs/brick2/ab1                       24012   Y       27537
Brick 10.70.37.124:/rhs/brick2/ab2                      24013   Y       1276
Brick 10.70.37.73:/rhs/brick2/ab2                       24013   Y       32061
Brick 10.70.37.217:/rhs/brick2/ab1                      24012   Y       10104
Brick 10.70.37.166:/rhs/brick2/ab1                      24012   Y       11212
Brick 10.70.37.217:/rhs/brick2/ab2                      24013   Y       14514
Brick 10.70.37.166:/rhs/brick2/ab2                      24013   Y       15622
Brick 10.70.37.124:/rhs/brick3/ab1                      24014   Y       20904
Brick 10.70.37.73:/rhs/brick3/ab1                       24014   Y       19192
NFS Server on localhost                                 38467   Y       20911
Self-heal Daemon on localhost                           N/A     Y       20917
NFS Server on 10.70.37.217                              38467   Y       18855
Self-heal Daemon on 10.70.37.217                        N/A     Y       18862
NFS Server on 10.70.37.166                              38467   Y       19938
Self-heal Daemon on 10.70.37.166                        N/A     Y       19944
NFS Server on 10.70.37.73                               38467   Y       19198
Self-heal Daemon on 10.70.37.73                         N/A     Y       19211

2. gluster volume info
[root@localhost ~]# gluster volume info

Volume Name: dr-vmstore
Type: Distributed-Replicate
Volume ID: 714ca532-5527-4c0e-8568-d9c5244f4927
Status: Started
Number of Bricks: 11 x 2 = 22
Transport-type: tcp
Bricks:
Brick1: 10.70.37.124:/rhs/brick1/b1
Brick2: 10.70.37.73:/rhs/brick1/b1
Brick3: 10.70.37.124:/rhs/brick1/b2
Brick4: 10.70.37.73:/rhs/brick1/b2
Brick5: 10.70.37.124:/rhs/brick1/b3
Brick6: 10.70.37.73:/rhs/brick1/b3
Brick7: 10.70.37.217:/rhs/brick1/b1
Brick8: 10.70.37.166:/rhs/brick1/b1
Brick9: 10.70.37.217:/rhs/brick1/b2
Brick10: 10.70.37.166:/rhs/brick1/b2
Brick11: 10.70.37.217:/rhs/brick1/b3
Brick12: 10.70.37.166:/rhs/brick1/b3
Brick13: 10.70.37.124:/rhs/brick2/ab1
Brick14: 10.70.37.73:/rhs/brick2/ab1
Brick15: 10.70.37.124:/rhs/brick2/ab2
Brick16: 10.70.37.73:/rhs/brick2/ab2
Brick17: 10.70.37.217:/rhs/brick2/ab1
Brick18: 10.70.37.166:/rhs/brick2/ab1
Brick19: 10.70.37.217:/rhs/brick2/ab2
Brick20: 10.70.37.166:/rhs/brick2/ab2
Brick21: 10.70.37.124:/rhs/brick3/ab1
Brick22: 10.70.37.73:/rhs/brick3/ab1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
network.remote-dio: on
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
nfs.disable: off

Comment 6 Scott Haines 2013-09-23 22:35:27 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

