Bug 1216942 - glusterd crash when snapshot create was in progress on different volumes at the same time - job edited to create snapshots at the given time
Summary: glusterd crash when snapshot create was in progress on different volumes at the same time - job edited to create snapshots at the given time
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard: glusterd
Depends On: 1211640 1224231
Blocks: qe_tracker_everglades glusterfs-3.7.0
 
Reported: 2015-04-29 09:14 UTC by Anand Nekkunti
Modified: 2016-01-04 04:50 UTC (History)
CC: 9 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1211640
Environment:
Last Closed: 2015-05-22 09:20:34 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Anand Nekkunti 2015-04-29 09:14:41 UTC
+++ This bug was initially created as a clone of Bug #1211640 +++

Description of problem:
=======================
Edited 2 scheduler jobs to create snapshots on 2 different volumes at the same time; snapshot creation failed because glusterd crashed.

Version-Release number of selected component (if applicable):
=============================================================
gluster --version
glusterfs 3.7dev built on Apr 13 2015 07:14:27


How reproducible:
=================
1/1

Steps to Reproduce:
===================
1. Create 2 volumes (vol0 and vol1) and start them.
   Fuse and NFS mount the volumes and run some I/O.

2. Create a shared storage volume and fuse mount it on all nodes.

3. Initialise the snapshot scheduler on all nodes using snap_scheduler.py init

4. Enable the snap scheduler on one of the nodes using snap_scheduler.py enable

5. Add a job which creates a snapshot of vol0 at 18:20.
   Add another job which creates a snapshot of vol1 at 18:20:

snap_scheduler.py add "J1_vol0"  "20 18 * * * " "vol0"
snap_scheduler.py add "J1_vol1"  "20 18 * * * " "vol1"

Snapshots were created successfully on both volumes:

gluster snapshot list
Scheduled-J1_vol0-vol0_GMT-2015.04.14-12.50.01
Scheduled-J1_vol1-vol1_GMT-2015.04.14-12.50.01

6. Edit both jobs to create snapshots at 18:36:
snap_scheduler.py edit "J1_vol1"  "36 18 * * * " "vol1"
snap_scheduler.py edit "J1_vol0"  "36 18 * * * " "vol0"

Snapshot creation failed 

------------- Part of /var/log/glusterfs/gcron.log ----------

[2015-04-14 18:36:01,350 gcron.py:67 takeSnap] DEBUG Running command 'gluster snapshot create Scheduled-J1_vol0-vol0 vol0'
[2015-04-14 18:36:01,351 gcron.py:95 doJob] DEBUG /var/run/gluster/shared_storage/snaps/lock_files/J1_vol1 last modified at Tue Apr 14 18:20:26 2015
[2015-04-14 18:36:01,351 gcron.py:97 doJob] DEBUG Processing job Scheduled-J1_vol1-vol1
[2015-04-14 18:36:01,352 gcron.py:67 takeSnap] DEBUG Running command 'gluster snapshot create Scheduled-J1_vol1-vol1 vol1'
[2015-04-14 18:36:20,009 gcron.py:74 takeSnap] DEBUG Command 'gluster snapshot create Scheduled-J1_vol0-vol0 vol0' returned '1'
[2015-04-14 18:36:20,009 gcron.py:74 takeSnap] DEBUG Command 'gluster snapshot create Scheduled-J1_vol1-vol1 vol1' returned '1'
[2015-04-14 18:36:20,010 gcron.py:77 takeSnap] ERROR Snapshot of vol0 failed
[2015-04-14 18:36:20,014 gcron.py:78 takeSnap] ERROR Command output:
[2015-04-14 18:36:20,014 gcron.py:79 takeSnap] ERROR 
[2015-04-14 18:36:20,014 gcron.py:101 doJob] ERROR Job Scheduled-J1_vol0-vol0 failed
[2015-04-14 18:36:20,010 gcron.py:77 takeSnap] ERROR Snapshot of vol1 failed
[2015-04-14 18:36:20,019 gcron.py:78 takeSnap] ERROR Command output:
[2015-04-14 18:36:20,020 gcron.py:79 takeSnap] ERROR 
[2015-04-14 18:36:20,020 gcron.py:101 doJob] ERROR Job Scheduled-J1_vol1-vol1 failed
------------------------------------------------------------

Actual results:
==============
glusterd crashed

Expected results:
================
No crash should be observed.

Additional info:
================
[2015-04-14 12:38:44.527287] W [socket.c:642:__socket_rwv] 0-quotad: readv on /var/run/gluster/7089eb2213ea459a8a12ba56023bd163.socket failed (No data available)
[2015-04-14 12:38:47.918396] W [socket.c:642:__socket_rwv] 0-quotad: readv on /var/run/gluster/7089eb2213ea459a8a12ba56023bd163.socket failed (No data available)
The message "I [MSGID: 106006] [glusterd-snapd-svc.c:379:glusterd_snapdsvc_rpc_notify] 0-management: snapd has disconnected from glusterd." repeated 2 times between [2015-04-14 12:38:34.733354] and [2015-04-14 12:38:40.636734]
The message "I [MSGID: 106006] [glusterd-svc-mgmt.c:327:glusterd_svc_common_rpc_notify] 0-management: nfs has disconnected from glusterd." repeated 3 times between [2015-04-14 12:37:56.254712] and [2015-04-14 12:38:40.641386]
pending frames:
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 
2015-04-14 13:06:18
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7dev


(gdb) bt
#0  0x00007fe862f85d16 in rcu_read_unlock_bp () from /usr/lib64/liburcu-bp.so.1
#1  0x00007fe863283252 in glusterd_mgmt_v3_commit (op=GD_OP_SNAP, op_ctx=0x7fe800000003, req_dict=0x7fe85c344c4c, op_errstr=0x7fe85879ace8, txn_generation=3)
    at glusterd-mgmt.c:1232
#2  0x00007fe863287078 in glusterd_mgmt_v3_initiate_snap_phases (req=0x1345108, op=GD_OP_SNAP, dict=0x7fe85c45d87c) at glusterd-mgmt.c:1998
#3  0x00007fe863272a10 in glusterd_handle_snapshot_create (req=0x1345108, op=GD_OP_SNAP, dict=0x7fe85c45d87c, err_str=<value optimized out>, len=140635893514400)
    at glusterd-snapshot.c:3763
#4  0x00007fe86327e7c1 in glusterd_handle_snapshot_fn (req=0x1345108) at glusterd-snapshot.c:8305
#5  0x00007fe8631c9d7f in glusterd_big_locked_handler (req=0x1345108, actor_fn=0x7fe86327dfa0 <glusterd_handle_snapshot_fn>) at glusterd-handler.c:83
#6  0x0000003abac61c72 in synctask_wrap (old_task=<value optimized out>) at syncop.c:375
#7  0x0000003a964438f0 in ?? () from /lib64/libc.so.6
#8  0x0000000000000000 in ?? ()
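
For context on the frames above: liburcu's bulletproof ("bp") flavour requires that the rcu_read_lock() and rcu_read_unlock() of one read-side critical section run on the same thread. The standalone sketch below is not glusterd source; it is an illustrative program (assumed to be built with gcc example.c -lurcu-bp -lpthread) that shows the required pairing and marks where a synctask yield that resumes on a different worker thread would break it, which is what the rcu_read_unlock_bp() frame in the backtrace reflects.

/* Illustration only, not glusterd code.
 * Assumed build: gcc rcu_bp_pairing.c -o rcu_bp_pairing -lurcu-bp -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <urcu-bp.h>        /* liburcu "bulletproof" flavour; threads auto-register */
#include <urcu/pointer.h>   /* rcu_dereference() */

static int *shared_ptr;      /* stands in for glusterd's RCU-protected data */

static void *reader(void *arg)
{
        (void)arg;

        rcu_read_lock();                        /* begin read-side critical section */
        int *p = rcu_dereference(shared_ptr);   /* safe to read while the lock is held */
        if (p)
                printf("read value %d inside RCU critical section\n", *p);
        /*
         * If the task yielded here and was resumed on a different worker
         * thread (the synctask framework swaps threads when no worker
         * thread is available), the unlock below would run on a thread
         * other than the one that called rcu_read_lock() -- the situation
         * urcu-bp does not allow, hence the crash in rcu_read_unlock_bp().
         */
        rcu_read_unlock();                      /* must execute on the same thread */
        return NULL;
}

int main(void)
{
        int value = 42;
        pthread_t t;

        shared_ptr = &value;
        pthread_create(&t, NULL, reader, NULL);
        pthread_join(t, NULL);
        return 0;
}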

--- Additional comment from  on 2015-04-14 09:37:56 EDT ---

sosreport :
=========
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/snapshots/1211640/

--- Additional comment from Avra Sengupta on 2015-04-15 02:57:10 EDT ---

The crash is seen at rcu_read_unlock_bp(), which is not affected by changing snapshot schedules. Moving it to the glusterd core team.

--- Additional comment from Atin Mukherjee on 2015-04-15 11:12:15 EDT ---

http://review.gluster.org/10147 has introduced it, and we are working on finding a solution. In the worst case the mentioned patch will be reverted, which would solve this problem.

--- Additional comment from Anand Avati on 2015-04-17 05:39:14 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd:Implementation of sync lock as recurcive lock to avoid dead lock.) posted (#1) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-17 05:56:41 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd: Implementation of sync lock as recursive lock to avoid dead lock.) posted (#2) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-17 13:29:53 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd: Implementation of sync lock as recursive lock to avoid dead lock.) posted (#3) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-18 08:10:18 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd: Implementation of sync lock as recursive lock to avoid dead lock.) posted (#4) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-18 13:34:40 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd: Implementation of sync lock as recursive lock to avoid dead lock.) posted (#5) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from  on 2015-04-21 02:10:07 EDT ---

Facing the issue multiple times while testing Snapshots. 
Proposing this bug as a blocker

--- Additional comment from Anand Avati on 2015-04-22 00:46:30 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd: Implementation of sync lock as recursive lock to avoid crash.) posted (#6) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-22 05:18:54 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync lock as recursive lock to avoid crash.) posted (#7) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-22 07:25:22 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync lock as recursive lock to avoid crash.) posted (#8) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-22 08:27:46 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync lock as recursive lock to avoid crash.) posted (#9) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-22 13:10:57 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync lock as recursive lock to avoid crash.) posted (#10) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-23 05:44:02 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync lock as recursive lock to avoid crash.) posted (#11) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-24 06:29:18 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync lock as recursive lock to avoid crash.) posted (#12) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-27 04:44:50 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync lock as recursive lock to avoid crash.) posted (#13) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-04-28 04:53:31 EDT ---

COMMIT: http://review.gluster.org/10285 committed in master by Vijay Bellur (vbellur) 
------
commit ada6b3a8800867934af57a57d5312f5a5d8374f0
Author: anand <anekkunt>
Date:   Fri Apr 17 14:19:46 2015 +0530

    libglusterfs: Implementation of sync lock as recursive lock to avoid crash.
    
    Problem : In glusterd,we are using big lock which is implemented based on sync
    task frame work for thread synchronization and rcu lock for data consistency.
    sync task frame work swap the threads  if there is no worker poll threads
    available,due to this rcu lock and rcu unlock was happening in different threads
    (urcu-bp will not allow this),resulting into glusterd crash.
    
    fix : To avoid releasing the sync lock(big lock) in between rcu critical
    section,implemented sync lock as recursive lock.
    
    More details:
    link : http://www.spinics.net/lists/gluster-devel/msg14632.html
    
    Change-Id: I2b56c1caf3f0470f219b1adcaf62cce29cdc6b88
    BUG: 1211640
    Signed-off-by: anand <anekkunt>
    Reviewed-on: http://review.gluster.org/10285
    Reviewed-by: Atin Mukherjee <amukherj>
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Vijay Bellur <vbellur>
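
The "recursive lock" in the commit above means the big sync lock can be re-acquired by its current owner without blocking, so it never has to be dropped (and the task never has to yield) inside an RCU read-side critical section. Below is a minimal, standalone sketch of that idea using pthread primitives; the type and function names are made up for illustration, and this is not the actual libglusterfs synclock implementation.

/* Illustration only: a recursive lock built from a mutex, an owner and a count. */
#include <pthread.h>
#include <stdio.h>

struct rec_lock {
        pthread_mutex_t mutex;   /* protects owner/count */
        pthread_cond_t  cond;    /* signalled when the lock is fully released */
        pthread_t       owner;   /* current owner, valid while count > 0 */
        unsigned        count;   /* recursion depth */
};

static void rec_lock_init(struct rec_lock *l)
{
        pthread_mutex_init(&l->mutex, NULL);
        pthread_cond_init(&l->cond, NULL);
        l->count = 0;
}

static void rec_lock_acquire(struct rec_lock *l)
{
        pthread_mutex_lock(&l->mutex);
        if (l->count > 0 && pthread_equal(l->owner, pthread_self())) {
                l->count++;                      /* same owner: just nest deeper */
        } else {
                while (l->count > 0)             /* wait for a full release */
                        pthread_cond_wait(&l->cond, &l->mutex);
                l->owner = pthread_self();
                l->count = 1;
        }
        pthread_mutex_unlock(&l->mutex);
}

static void rec_lock_release(struct rec_lock *l)
{
        pthread_mutex_lock(&l->mutex);
        if (--l->count == 0)                     /* outermost release: wake waiters */
                pthread_cond_broadcast(&l->cond);
        pthread_mutex_unlock(&l->mutex);
}

int main(void)
{
        struct rec_lock l;

        rec_lock_init(&l);
        rec_lock_acquire(&l);    /* outer acquire */
        rec_lock_acquire(&l);    /* nested acquire by the same owner: no deadlock */
        rec_lock_release(&l);
        rec_lock_release(&l);
        printf("recursive acquire/release completed\n");
        return 0;
}

Because a nested acquire by the current owner only bumps the count, the self-deadlock that would otherwise force the lock to be dropped mid-operation disappears, and other waiters are admitted only when the outermost release brings the count back to zero.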

--- Additional comment from Anand Avati on 2015-04-28 12:58:01 EDT ---

REVIEW: http://review.gluster.org/10432 (libglusterfs: Implementation of sync lock as recursive lock to avoid crash.) posted (#1) for review on release-3.7 by Anand Nekkunti (anekkunt)

Comment 1 Anand Avati 2015-04-29 09:16:45 UTC
REVIEW: http://review.gluster.org/10432 (libglusterfs: Implementation of sync lock as recursive lock to avoid crash.) posted (#2) for review on release-3.7 by Anand Nekkunti (anekkunt)

Comment 2 Anand Avati 2015-04-30 11:47:51 UTC
COMMIT: http://review.gluster.org/10432 committed in release-3.7 by Vijay Bellur (vbellur) 
------
commit 7bafeda4641582b5254f0e1f49e853dd87fe93f5
Author: anand <anekkunt>
Date:   Fri Apr 17 14:19:46 2015 +0530

    libglusterfs: Implementation of sync lock as recursive lock to avoid crash.
    
    Problem : In glusterd,we are using big lock which is implemented based on sync
    task frame work for thread synchronization and rcu lock for data consistency.
    sync task frame work swap the threads  if there is no worker poll threads
    available,due to this rcu lock and rcu unlock was happening in different threads
    (urcu-bp will not allow this),resulting into glusterd crash.
    
    fix : To avoid releasing the sync lock(big lock) in between rcu critical
    section,implemented sync lock as recursive lock.
    
    More details:
    link : http://www.spinics.net/lists/gluster-devel/msg14632.html
    
    Change-Id: I2b56c1caf3f0470f219b1adcaf62cce29cdc6b88
    BUG: 1216942
    Signed-off-by: anand <anekkunt>
    Reviewed-on: http://review.gluster.org/10285
    Reviewed-by: Atin Mukherjee <amukherj>
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Vijay Bellur <vbellur>
    (cherry picked from commit ada6b3a8800867934af57a57d5312f5a5d8374f0)
    Reviewed-on: http://review.gluster.org/10432
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Vijay Bellur <vbellur>

