Bug 1307017 - snapd crash on the volume mounting node while accessing .snaps from NFS
Summary: snapd crash on the volume mounting node while accessing .snaps from NFS
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-12 13:07 UTC by Shashank Raj
Modified: 2018-04-07 19:09 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-07 19:09:02 UTC
Embargoed:



Description Shashank Raj 2016-02-12 13:07:22 UTC
Description of problem:
snapd crash on the volume mounting node while accessing .snaps from NFS

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-19

How reproducible:
Once

Steps to Reproduce:

The exact steps are not known; however, the crash occurred while accessing .snaps from an NFS mount.
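
For reference, browsing .snaps over NFS in such a setup generally involves the commands below; the volume name, server host, and mount point are illustrative assumptions, not values taken from this report:

# Enable User Serviceable Snapshots so that .snaps is served by snapd
gluster volume set testvol features.uss enable

# Mount the volume over gluster-NFS (NFSv3) and browse the snapshot contents
mount -t nfs -o vers=3 node1:/testvol /mnt/nfs
ls /mnt/nfs/.snaps
ls /mnt/nfs/.snaps/snap1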

Prior to seeing this, the following case was being run (a scripted sketch of these steps is given after the list):

1. Create 4 node cluster
2. Create a 2*2 volume
3. Create a directory (cp -rf /etc .)
4. Create a snapshot (snap1) of the volume
5. Bring down glusterd on one of the nodes (for example, node2)
6. Take the volume offline using "gluster volume stop vol"
7. Restore the volume to snap1. Restore should be successful
8. Start the volume
9. Delete the etc directory from the client (it is deleted from the bricks on the online nodes but not from the node where glusterd is down)
10. Start glusterd on node2
11. When glusterd is brought back online on node2, the self-heal daemon heals the volume and removes the directory (etc) from node2 as well.
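
The same sequence as a rough shell sketch; the node names, volume name, brick paths, and mount point (node1..node4, testvol, /bricks/brick1, /mnt/testvol) are assumptions for illustration, not the actual values used in this setup:

# 2. Create and start a 2x2 (distribute-replicate) volume
gluster volume create testvol replica 2 \
    node1:/bricks/brick1/testvol node2:/bricks/brick1/testvol \
    node3:/bricks/brick1/testvol node4:/bricks/brick1/testvol
gluster volume start testvol

# 3. Populate the volume from a client mount
cp -rf /etc /mnt/testvol/

# 4. Create a snapshot of the volume
gluster snapshot create snap1 testvol

# 5. Bring down glusterd on node2
ssh node2 systemctl stop glusterd

# 6-8. Stop the volume, restore it to snap1, then start it again
gluster volume stop testvol
gluster snapshot restore snap1
gluster volume start testvol

# 9. Delete the etc directory from the client mount
rm -rf /mnt/testvol/etc

# 10. Start glusterd on node2 again; self-heal then removes etc from node2 as well
ssh node2 systemctl start glusterd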


Actual results:
Snapd crash while accessing .snaps from NFS mount

Expected results:
Snapd should not crash


Additional info:

bt below:

#0  0x00007fc82c101210 in pthread_spin_lock () from /lib64/libpthread.so.0
#1  0x00007fc82d2b7059 in inode_ctx_get0 (inode=0x7fc800f4302c, xlator=xlator@entry=0x7fc8140226b0, value1=value1@entry=0x7fc81d9aaa20)
    at inode.c:2099
#2  0x00007fc82d2b70f5 in inode_needs_lookup (inode=0x7fc800f4302c, this=0x7fc8140226b0) at inode.c:1880
#3  0x00007fc81f835876 in __glfs_resolve_inode (fs=fs@entry=0x7fc8140008e0, subvol=subvol@entry=0x7fc7f00130e0, object=object@entry=0x7fc7f4001440)
    at glfs-resolve.c:1004
#4  0x00007fc81f83597b in glfs_resolve_inode (fs=fs@entry=0x7fc8140008e0, subvol=subvol@entry=0x7fc7f00130e0, object=object@entry=0x7fc7f4001440)
    at glfs-resolve.c:1030
#5  0x00007fc81f835f88 in pub_glfs_h_stat (fs=0x7fc8140008e0, object=0x7fc7f4001440, stat=stat@entry=0x7fc81d9aac90) at glfs-handleops.c:223
#6  0x00007fc81fa4aea3 in svs_stat (frame=0x7fc82ada0434, this=0x7fc818005e70, loc=0x7fc82a84606c, xdata=0x0) at snapview-server.c:1711
#7  0x00007fc82d2a75de in default_stat_resume (frame=0x7fc82ada002c, this=0x7fc818009830, loc=0x7fc82a84606c, xdata=0x0) at defaults.c:1686
#8  0x00007fc82d2c417d in call_resume (stub=0x7fc82a84602c) at call-stub.c:2576
#9  0x00007fc81eb74363 in iot_worker (data=0x7fc81801cc80) at io-threads.c:215
#10 0x00007fc82c0fcdc5 in start_thread () from /lib64/libpthread.so.0
#11 0x00007fc82ba431cd in clone () from /lib64/libc.so.6
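
The backtrace above can be regenerated from the snapd core file with gdb; the binary and core paths below are assumptions and should be adjusted to the actual locations on the affected node:

gdb /usr/sbin/glusterfsd /path/to/core.<pid>
(gdb) bt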

Comment 1 Shashank Raj 2016-02-12 13:12:54 UTC
sosreports are at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1307017

Comment 3 Avra Sengupta 2016-02-29 08:29:16 UTC
Removing devel_ack, as this might be a known issue related to the missed-snaps list.

Comment 4 RHEL Program Management 2016-02-29 08:36:00 UTC
Development Management has reviewed and declined this request.
You may appeal this decision by reopening this request.

