Bug 1151004 - [USS]: deletion and creation of snapshots with the same name causes problems
Summary: [USS]: deletion and creation of snapshots with the same name causes problems
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Raghavendra Bhat
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1157452 1157985 1158791
 
Reported: 2014-10-09 11:56 UTC by Raghavendra Bhat
Modified: 2015-05-14 17:44 UTC (History)
CC List: 1 user

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1157452 1157985 1158791
Environment:
Last Closed: 2015-05-14 17:27:57 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Raghavendra Bhat 2014-10-09 11:56:35 UTC
Description of problem:

Deleting a snapshot and then creating a new one with the same name causes problems when accessing the contents of the newly created snapshot: applications get ENOTCONN errors.


Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a file on the glusterfs mount point.
2. Create a snapshot "snap1".
3. Delete the file.
4. Delete the snapshot "snap1".
5. Create a new snapshot "snap1".
6. Try to access snap1 through the USS entry point.

Actual results:
Applications get ENOTCONN

Expected results:
Applications should not get ENOTCONN

Additional info:
When the snapshot "snap1" is created for the first time and accessed (<mount point>/<entry point>/snap1/<filename>), a new inode and dentry are created for snap1, with the entry point as the parent. The glfs_t instance used to access the snapshot and the handle used to access the inode in the snapshot world (the gfapi world in this case) are saved in the inode context. When the snapshot is deleted and recreated, a new glfs_t instance is established and the old one is destroyed. So to access snap1 now, another lookup has to be done and the new glfs_t instance and the new handle have to be saved in the inode context. Since that was not being done, the old glfs_t instance was still being used, which returned ENOTCONN because the corresponding snapshot volume no longer exists.
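
The fix, conceptually, is to validate the saved glfs_t instance before using it for any fop. Below is a minimal sketch of that check, assuming the libgfapi development headers are available; the context structure and helper names (snap_inode_ctx_t, needs_fresh_lookup, latest_fs) are hypothetical placeholders for illustration, not the actual snapview-server symbols.

#include <glusterfs/api/glfs.h>          /* glfs_t and other libgfapi types */
#include <glusterfs/api/glfs-handles.h>  /* struct glfs_object */
#include <stddef.h>

/* Hypothetical per-inode context: what snapview-server conceptually saves
 * when a snapshot directory such as "snap1" is first looked up. */
typedef struct {
        glfs_t             *fs;      /* glfs_t instance of the snapshot volume */
        struct glfs_object *handle;  /* handle of the inode in the gfapi world */
} snap_inode_ctx_t;

/* Returns 1 if the saved instance is stale (the snapshot was deleted and
 * recreated, so a new glfs_t now represents "snap1") and a fresh lookup is
 * needed before the fop can proceed; returns 0 if the saved state is usable. */
static int
needs_fresh_lookup (snap_inode_ctx_t *ctx, glfs_t *latest_fs)
{
        if (ctx->fs != latest_fs) {
                /* Stale reference: winding the fop with ctx->fs/ctx->handle
                 * would hit the destroyed instance and surface ENOTCONN to
                 * the application. Refresh the context and ask the caller to
                 * re-resolve the handle with a lookup on the new instance. */
                ctx->fs     = latest_fs;
                ctx->handle = NULL;
                return 1;
        }
        return 0;
}

The actual patch performs this kind of check in the snapview-server fop paths (see the commits below).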

Comment 1 Anand Avati 2014-10-09 12:30:19 UTC
REVIEW: http://review.gluster.org/8917 (features/snapview-server: check if the reference to the snapshot world is correct before doing any fop) posted (#1) for review on master by Raghavendra Bhat (raghavendra)

Comment 2 Anand Avati 2014-10-10 07:35:18 UTC
REVIEW: http://review.gluster.org/8917 (features/snapview-server: check if the reference to the snapshot world is correct before doing any fop) posted (#2) for review on master by Raghavendra Bhat (raghavendra)

Comment 3 Anand Avati 2014-10-28 07:08:14 UTC
COMMIT: http://review.gluster.org/8917 committed in master by Vijay Bellur (vbellur) 
------
commit 1fa3e87db77bb379173723a5e75b361a8e192f09
Author: Raghavendra Bhat <raghavendra>
Date:   Thu Oct 9 17:32:48 2014 +0530

    features/snapview-server: check if the reference to the snapshot world is
    correct before doing any fop
    
    The following operations might lead to problems:
    * Create a file on the glusterfs mount point
    * Create a snapshot (say "snap1")
    * Access the contents of the snapshot
    * Delete the file from the mount point
    * Delete the snapshot "snap1"
    * Create a new snapshot "snap1"
    
    Now accessing the new snapshot "snap1" gives problems, because the inode and
    dentry created for the old snap1 are not deleted when the snapshot is
    deleted (deletion of a snapshot is a gluster CLI operation, not a fop). So
    upon creation of a new snapshot with the same name, the previous inode and
    dentry themselves are reused. But the inode context still contains the old
    information about the glfs_t instance and the handle in the gfapi world.
    Directly accessing them without a proper check leads to ENOTCONN errors.
    Thus the glfs_t instance should be checked before it is accessed; if it is
    stale, the right instance should be obtained by doing a fresh lookup.
    
    Change-Id: Idca0c8015ff632447cea206a4807d8ef968424fa
    BUG: 1151004
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/8917
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 4 Anand Avati 2014-10-28 09:33:18 UTC
REVIEW: http://review.gluster.org/8986 (features/snapview-server: check if the reference to the snapshot world is correct before doing any fop) posted (#1) for review on release-3.6 by Raghavendra Bhat (raghavendra)

Comment 5 Anand Avati 2014-10-29 12:19:59 UTC
REVIEW: http://review.gluster.org/8999 (features/snapview-server: verify the fs instance in revalidated lookups as well) posted (#1) for review on master by Raghavendra Bhat (raghavendra)

Comment 6 Anand Avati 2014-10-30 08:49:42 UTC
COMMIT: http://review.gluster.org/8999 committed in master by Vijay Bellur (vbellur) 
------
commit 8ab61c18d5de81aa613130e8f65b2f420476c08e
Author: Raghavendra Bhat <raghavendra>
Date:   Wed Oct 29 17:47:48 2014 +0530

    features/snapview-server: verify the fs instance in revalidated lookups as well
    
    Change-Id: Id5f9d5a23eb5932a0a53520b08ffba258952e000
    BUG: 1151004
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/8999
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 7 Niels de Vos 2015-05-14 17:27:57 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
