Bug 1048831 - [SNAPSHOT]: brick process crashed while creating snapshot when multiple file/directory creation is in progress from FUSE and NFS mounts
Summary: [SNAPSHOT]: brick process crashed while creating snapshot when multiple file/...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Joseph Elwin Fernandes
QA Contact: Rahul Hinduja
URL:
Whiteboard: SNAPSHOT
Depends On:
Blocks: 1048840 1049278
 
Reported: 2014-01-06 12:11 UTC by Rahul Hinduja
Modified: 2016-09-17 13:03 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.6.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-22 19:31:16 UTC




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:1278 0 normal SHIPPED_LIVE Red Hat Storage Server 3.0 bug fix and enhancement update 2014-09-22 23:26:55 UTC

Description Rahul Hinduja 2014-01-06 12:11:29 UTC
Description of problem:
=======================

When multiple snapshots of multiple volumes were created from different servers while high IO was in progress, the glusterfsd process crashed on each server.

My setup consists of:

1. Four servers: server1, server2, server3, and server4 in a cluster.
2. Four volumes: vol0, vol1, vol2, vol3.
3. One client (6.4) with U2 client bits.
4. Each volume is mounted (FUSE and NFS) on the client.

Scenario: 

While high IO (creation of files/directories) from the FUSE and NFS mounts was in progress, I tried to create snapshots of all the volumes at the same time from different servers in the cluster. Snapshot creation failed and glusterfsd crashed on all the servers.

Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.4.0.snap.dec30.2013git-1.el6.x86_64


Steps to Reproduce:
===================
1. Set up a cluster of 4 servers (server1, server2, server3, server4).
2. Create four volumes from these servers (vol0, vol1, vol2, vol3).
3. Mount the volumes on the client (FUSE and NFS mounts).
4. Create directories named f and n on each FUSE mount of the volumes.
5. cd into f on all FUSE mounts of the volumes.
6. cd into n on all NFS mounts of the volumes.
7. Start heavy IO (creation of files/directories) from the FUSE (f) and NFS (n) mounts of every volume.

I used: for i in {1..50} ; do cp -rvf /etc etc.$i ; done 

8. While IO is in progress, create snapshots of each volume from different servers (to avoid locking issues). Snapshot creation was observed to fail in many places.

I used:

From server1: for i in {1..200} ; do gluster snapshot create vol0 -n snap$i ; done
From server2: for i in {1..200} ; do gluster snapshot create vol1 -n snap$i ; done
From server3: for i in {1..200} ; do gluster snapshot create vol2 -n snap$i ; done
From server4: for i in {1..200} ; do gluster snapshot create vol3 -n snap$i ; done

Snapshot creation failed with:

[root@snapshot-10 ~]# for i in {1..200} ; do gluster snapshot create vol1 -n snap$i ; done 
snapshot create: failed: Commit failed on 10.70.42.220. Please check log file for details.
Commit failed on 10.70.43.186. Please check log file for details.
Commit failed on 10.70.43.70. Please check log file for details.
Snapshot command failed
snapshot create: failed: Commit failed on 10.70.43.70. Please check log file for details.




Actual results:
===============

Snapshot creation failed and glusterfsd crashed on all the servers.

[root@snapshot-09 ~]# file /core.20633
/core.20633: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from '/usr/sbin/glusterfsd -s 10.70.42.220 --volfile-id vol1.10.70.42.220.brick1-b1 -'

[root@snapshot-10 ~]# file /core.20108
/core.20108: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from '/usr/sbin/glusterfsd -s 10.70.43.20 --volfile-id vol1.10.70.43.20.brick1-b1 -p'

[root@snapshot-11 ~]# file /core.23898
/core.23898: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from '/usr/sbin/glusterfsd -s 10.70.43.186 --volfile-id vol3.10.70.43.186.brick3-b3 -'

[root@snapshot-12 ~]# file  /core.20672 
/core.20672: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from '/usr/sbin/glusterfsd -s 10.70.43.70 --volfile-id vol3.10.70.43.70.brick3-b3 -p'


Backtrace from one of the core files:
=====================================

(gdb) bt
#0  0x0000003213032925 in raise () from /lib64/libc.so.6
#1  0x0000003213034105 in abort () from /lib64/libc.so.6
#2  0x0000003213070837 in __libc_message () from /lib64/libc.so.6
#3  0x0000003213076166 in malloc_printerr () from /lib64/libc.so.6
#4  0x0000003213078c93 in _int_free () from /lib64/libc.so.6
#5  0x00007fd79127003a in ltable_delete_locks (ltable=0x7fd76c000ef0) at posix.c:2559
#6  0x00007fd791270466 in disconnect_cbk (this=<value optimized out>, client=<value optimized out>) at posix.c:2619
#7  0x0000003f09c63d2d in gf_client_disconnect (client=0x21578b0) at client_t.c:374
#8  0x00007fd7907e65f8 in server_connection_cleanup (this=0x211c4b0, client=0x21578b0, flags=<value optimized out>) at server-helpers.c:244
#9  0x00007fd7907e1e0c in server_rpc_notify (rpc=<value optimized out>, xl=0x211c4b0, event=<value optimized out>, data=0x216a0f0) at server.c:558
#10 0x0000003f0a407cc5 in rpcsvc_handle_disconnect (svc=0x211e3c0, trans=0x216a0f0) at rpcsvc.c:682
#11 0x0000003f0a409800 in rpcsvc_notify (trans=0x216a0f0, mydata=<value optimized out>, event=<value optimized out>, data=0x216a0f0) at rpcsvc.c:720
#12 0x0000003f0a40af18 in rpc_transport_notify (this=<value optimized out>, event=<value optimized out>, data=<value optimized out>) at rpc-transport.c:512
#13 0x00007fd7928ec761 in socket_event_poll_err (fd=<value optimized out>, idx=<value optimized out>, data=0x216a0f0, poll_in=<value optimized out>, poll_out=0, 
    poll_err=0) at socket.c:1071
#14 socket_event_handler (fd=<value optimized out>, idx=<value optimized out>, data=0x216a0f0, poll_in=<value optimized out>, poll_out=0, poll_err=0) at socket.c:2239
#15 0x0000003f09c66097 in event_dispatch_epoll_handler (event_pool=0x20eeee0) at event-epoll.c:384
#16 event_dispatch_epoll (event_pool=0x20eeee0) at event-epoll.c:445
#17 0x000000000040680a in main (argc=19, argv=0x7fffcf2f4c68) at glusterfsd.c:1964




Expected results:
=================

Snapshot creation should succeed, and no brick crash should be observed.

Comment 3 Joseph Elwin Fernandes 2014-02-20 12:29:11 UTC
Assigning the bug to myself

Comment 4 Joseph Elwin Fernandes 2014-03-06 06:05:44 UTC
As investigated, the reason for this behavior in glusterfsd is a race condition during cleanup of the client's inode_lk and entry_lk tables when a client disconnect happens. The two threads contending for the cleanup are the snap barrier thread and the epoll thread. One of the threads de-allocates the memory of the ltable structure but never sets the pointer to NULL. The second thread, which picks up the stale ltable pointer, never senses that the memory has already been de-allocated and tries to free ltable again.

This issue is addressed in an upstream patch by Anand Avati: http://review.gluster.org/#/c/6695/
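
To illustrate the race described above, here is a minimal standalone C sketch. The names (struct client, struct lock_table, cleanup_buggy, cleanup_fixed) are hypothetical; this is not the actual GlusterFS code or the patch linked above, it only shows the double-free pattern and one way to make the cleanup idempotent.

/*
 * Sketch of the race: two threads (snap barrier and epoll in the real bug)
 * both run cleanup on the same client during a disconnect. In the buggy
 * pattern the table is freed but the pointer is never cleared, so the
 * second thread frees the same memory again and glibc aborts in
 * _int_free(), as seen in the backtrace above.
 */
#include <pthread.h>
#include <stdlib.h>

struct lock_table { int placeholder; };   /* stand-in for the inode_lk/entry_lk table */

struct client {
        pthread_mutex_t    lock;
        struct lock_table *ltable;
};

/* Buggy pattern: free without clearing the pointer or serializing callers. */
static void cleanup_buggy (struct client *c)
{
        free (c->ltable);                 /* second caller double-frees here */
}

/* One possible fixed pattern: detach the pointer under the client lock so
 * exactly one thread ends up owning (and freeing) the table. */
static void cleanup_fixed (struct client *c)
{
        struct lock_table *table;

        pthread_mutex_lock (&c->lock);
        table = c->ltable;
        c->ltable = NULL;                 /* the losing thread now sees NULL */
        pthread_mutex_unlock (&c->lock);

        free (table);                     /* free(NULL) is a harmless no-op */
}

static void *cleanup_thread (void *arg)
{
        cleanup_fixed (arg);              /* safe even if both threads run it */
        return NULL;
}

int main (void)
{
        struct client c;
        pthread_t t1, t2;

        pthread_mutex_init (&c.lock, NULL);
        c.ltable = malloc (sizeof (*c.ltable));

        pthread_create (&t1, NULL, cleanup_thread, &c);
        pthread_create (&t2, NULL, cleanup_thread, &c);
        pthread_join (t1, NULL);
        pthread_join (t2, NULL);

        (void) cleanup_buggy;             /* shown only for contrast; calling it from both threads reproduces the crash */
        return 0;
}

The actual upstream fix may take a different shape (for example, transferring the table out of the client under the client lock before walking and destroying it); the sketch only captures the idea that the cleanup must be made safe against concurrent callers.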

Comment 6 Joseph Elwin Fernandes 2014-03-11 02:47:47 UTC
Applied the patch http://review.gluster.org/#/c/6695/ and tested with snapshot. The fix works.

Comment 7 Nagaprasad Sathyanarayana 2014-04-21 06:17:46 UTC
Marking snapshot BZs to RHS 3.0.

Comment 8 Nagaprasad Sathyanarayana 2014-05-19 10:56:30 UTC
Setting flags required to add BZs to RHS 3.0 Errata

Comment 10 Rahul Hinduja 2014-06-03 10:26:23 UTC
Verified with the build: glusterfs-3.6.0.11-1.el6rhs.x86_64

Didn't observe the glusterfsd crash. Moving the bug to verified state.

Comment 12 errata-xmlrpc 2014-09-22 19:31:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

