Bug 1394131 - [md-cache]: All bricks crashed while performing symlink and rename from client at the same time
Summary: [md-cache]: All bricks crashed while performing symlink and rename from client at the same time
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: marker
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Poornima G
QA Contact:
URL:
Whiteboard:
Depends On: 1387204
Blocks: 1396414 1396418 1396419
 
Reported: 2016-11-11 06:55 UTC by Poornima G
Modified: 2017-03-06 17:33 UTC
CC: 13 users

Fixed In Version: glusterfs-3.10.0
Clone Of: 1387204
Cloned to: 1396414
Environment:
Last Closed: 2017-03-06 17:33:37 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Worker Ant 2016-11-11 09:06:58 UTC
REVIEW: http://review.gluster.org/15826 (marker: Fix inode value in loc, in setxattr fop) posted (#1) for review on master by Poornima G (pgurusid)

Comment 2 Poornima G 2016-11-14 10:13:52 UTC
All 6 bricks of the volume (3x2) crashed with the following upcall backtrace:

[root@dhcp37-58 ~]# file core.5895.1476956627.dump.1
core.5895.1476956627.dump.1: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from '/usr/sbin/glusterfsd -s 10.70.37.58 --volfile-id master.10.70.37.58.rhs-brick1-', real uid: 0, effective uid: 0, real gid: 0, effective gid: 0, execfn: '/usr/sbin/glusterfsd', platform: 'x86_64'
[root@dhcp37-58 ~]# 

(gdb) bt
#0  0x00007f9530adc210 in pthread_spin_lock () from /lib64/libpthread.so.0
#1  0x00007f951de3b129 in upcall_inode_ctx_get () from /usr/lib64/glusterfs/3.8.4/xlator/features/upcall.so
#2  0x00007f951de3055f in upcall_local_init () from /usr/lib64/glusterfs/3.8.4/xlator/features/upcall.so
#3  0x00007f951de3431a in up_setxattr () from /usr/lib64/glusterfs/3.8.4/xlator/features/upcall.so
#4  0x00007f9531d072a4 in default_setxattr_resume () from /lib64/libglusterfs.so.0
#5  0x00007f9531c9947d in call_resume () from /lib64/libglusterfs.so.0
#6  0x00007f951dc20743 in iot_worker () from /usr/lib64/glusterfs/3.8.4/xlator/performance/io-threads.so
#7  0x00007f9530ad7dc5 in start_thread () from /lib64/libpthread.so.0
#8  0x00007f953041c73d in clone () from /lib64/libc.so.6
(gdb)
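
The crash mode is a NULL-pointer dereference: upcall derives a per-inode context from loc->inode, and the setxattr that marker issues after the rename carries a NULL inode, so the first spinlock taken on that context faults (frames #0/#1 above). The following standalone C sketch reproduces the pattern; the types and the lookup function are illustrative stand-ins, not the actual upcall source.

#include <pthread.h>
#include <stdio.h>

/* Stand-ins for the xlator's inode and per-inode context types;
 * these are illustrative, not the real GlusterFS definitions. */
typedef struct inode {
        int dummy;
} inode_t;

typedef struct upcall_inode_ctx {
        pthread_spinlock_t lock;
} upcall_inode_ctx_t;

static upcall_inode_ctx_t the_ctx;

/* The context hangs off the inode; with inode == NULL there is
 * nothing to look up, so NULL comes back. */
static upcall_inode_ctx_t *
upcall_inode_ctx_get(inode_t *inode)
{
        if (inode == NULL)
                return NULL;
        return &the_ctx;
}

int
main(void)
{
        inode_t *inode = NULL; /* loc->inode on the rename-triggered setxattr */
        upcall_inode_ctx_t *ctx = upcall_inode_ctx_get(inode);

        /* Matches frames #0/#1 of the backtrace: locking through the
         * NULL context pointer segfaults. */
        pthread_spin_lock(&ctx->lock);
        printf("not reached\n");
        return 0;
}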




Steps Carried:
==============

This happened in a geo-rep setup, but all the master bricks crashed, and it looks like a more generic issue. Nevertheless, here are the full steps:

1. Create Master and Slave volumes (3x2), each from a 3-node cluster
2. Enable md-cache on master and slave
3. Create geo-rep between master and slave
4. Mount the Master volume (FUSE) three times on the same client at different locations
5. Create data on the Master volume from one client and keep running stat from the other client's mount path:
crefi -T 10 -n 10 --multi -d 10 -b 10 --random --max=5K --min=1K --fop=create /mnt/master/
find . | xargs stat
6. Let the data be synced to slave. Confirm via arequal checksum
7. Run chmod on the master volume from one client and keep running stat from the other client's mount path:
crefi -T 10 -n 10 --multi -d 10 -b 10 --random --max=5K --min=1K --fop=chmod /mnt/master/
find . | xargs stat
8. Let the data be synced to slave. Confirm via arequal checksum
9. Run chown on the master volume from one client and keep running stat from the other client's mount path:
crefi -T 10 -n 10 --multi -d 10 -b 10 --random --max=5K --min=1K --fop=chown /mnt/master/
find . | xargs stat
10. Let the data be synced to slave. Confirm via arequal checksum

11. Run chgrp on the master volume from one client and keep running stat from the other client's mount path:
crefi -T 10 -n 10 --multi -d 10 -b 10 --random --max=5K --min=1K --fop=chgrp /mnt/master/
find . | xargs stat
12. Let the data be synced to slave. Confirm via arequal checksum

13. Run symlink on the master volume from one client and rename from another client's mount path:
crefi -T 10 -n 10 --multi -d 10 -b 10 --random --max=5K --min=1K --fop=symlink /mnt/master/
crefi -T 10 -n 10 --multi -d 10 -b 10 --random --max=5K --min=1K --fop=rename /mnt/new_1

Comment 3 Worker Ant 2016-11-17 10:53:20 UTC
COMMIT: http://review.gluster.org/15826 committed in master by Rajesh Joseph (rjoseph) 
------
commit 46e5466850311ee69e6ae9a11c2bba2aabadd5de
Author: Poornima G <pgurusid>
Date:   Fri Nov 11 12:08:57 2016 +0530

    marker: Fix inode value in loc, in setxattr fop
    
    On receiving a rename fop, marker_rename() stores the
    oldloc and newloc in its 'local' struct. Once the rename
    is done, the xtime marker (last updated time) is set on
    the file by sending a setxattr fop. When upcall
    receives the setxattr fop, the loc->inode is NULL and
    it crashes. The loc->inode can be NULL in only one
    valid case, i.e. the rename case, where the inode of
    the new loc can be NULL. Hence, marker should fill the
    inode of the new_loc before issuing the setxattr.
    
    Change-Id: Id638f678c3daaf4a5c29b970b58929d377ae8977
    BUG: 1394131
    Signed-off-by: Poornima G <pgurusid>
    Reviewed-on: http://review.gluster.org/15826
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Kotresh HR <khiremat>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Rajesh Joseph <rjoseph>
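
For readers unfamiliar with the code path, here is a minimal, self-contained sketch of the fix pattern the commit message describes. It is illustrative only; the types, names, and main() driver are stand-ins, not the actual GlusterFS patch. The idea: after a rename, the new loc may legitimately carry a NULL inode, but it names the same file as the old loc, so marker fills it in before handing the loc to the setxattr fop.

#include <stdio.h>

/* Illustrative stand-ins for GlusterFS's inode_t/loc_t; not the
 * real definitions, and inode_ref() here is a bare refcount bump. */
typedef struct inode {
        int ref;
} inode_t;

typedef struct loc {
        inode_t *inode;
        const char *path;
} loc_t;

static inode_t *
inode_ref(inode_t *inode)
{
        if (inode)
                inode->ref++;
        return inode;
}

/* The essence of the fix: before marker issues the xtime setxattr
 * for the renamed file, populate the new loc's inode from the old
 * loc, since both locs refer to the same file after a rename. */
static void
fill_newloc_inode(loc_t *oldloc, loc_t *newloc)
{
        if (newloc->inode == NULL && oldloc->inode != NULL)
                newloc->inode = inode_ref(oldloc->inode);
}

int
main(void)
{
        inode_t ino = { .ref = 1 };
        loc_t oldloc = { .inode = &ino, .path = "/dir/old" };
        loc_t newloc = { .inode = NULL, .path = "/dir/new" };

        fill_newloc_inode(&oldloc, &newloc);
        printf("newloc inode filled: %s (refs=%d)\n",
               newloc.inode ? "yes" : "no", ino.ref);
        return 0;
}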

Comment 4 Worker Ant 2016-11-18 09:31:04 UTC
REVIEW: http://review.gluster.org/15877 (marker: Fix inode value in loc, in setxattr fop) posted (#1) for review on release-3.9 by Poornima G (pgurusid)

Comment 5 Shyamsundar 2017-03-06 17:33:37 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

