Bug 1232428
| Summary: | [SNAPSHOT] : Snapshot delete fails with error - Snap might not be in an usable state | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | senaik |
| Component: | snapshot | Assignee: | Avra Sengupta <asengupt> |
| Status: | CLOSED ERRATA | QA Contact: | senaik |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | rhgs-3.1 | CC: | asengupt, hawk, nsathyan, rhs-bugs, sgraf, storage-qa-internal, vagarwal |
| Target Milestone: | --- | Keywords: | Regression, TestBlocker, Triaged |
| Target Release: | RHGS 3.1.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.1-4 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| Clones: | 1232430 (view as bug list) | Environment: | |
| Last Closed: | 2015-07-29 05:05:11 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1202842, 1223636, 1232430, 1232887 | | |
Description
senaik
2015-06-16 18:02:49 UTC
Snapshot delete worked in the previous build, glusterfs-3.7.1-2.el6rhs.x86_64. This is a regression introduced in the latest build, glusterfs-3.7.1-3.el6rhs.x86_64.

Mainline - http://review.gluster.org/#/c/11262/
3.7 - http://review.gluster.org/#/c/11294/
Downstream - https://code.engineering.redhat.com/gerrit/51013

Version: glusterfs-3.7.1-4.el6rhs.x86_64

Deleting a snapshot is successful. Marking the bug 'Verified'. Performed the following steps:

gluster snapshot create S1 vol0
snapshot create: success: Snap S1_GMT-2015.06.19-09.32.42 created successfully

[root@inception ~]# gluster snapshot activate S1_GMT-2015.06.19-09.32.42
Snapshot activate: S1_GMT-2015.06.19-09.32.42: Snap activated successfully

[root@inception ~]# gluster snapshot delete S1_GMT-2015.06.19-09.32.42
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: S1_GMT-2015.06.19-09.32.42: snap removed successfully

Performed a recursive restore on the volume and it was successful:

gluster snapshot restore S12_GMT-2015.06.19-09.38.37
Restore operation will replace the original volume with the snapshotted volume. Do you still want to continue? (y/n) y
Snapshot restore: S12_GMT-2015.06.19-09.38.37: Snap restored successfully

[root@inception ~]# gluster snapshot restore S12_GMT-2015.06.19-09.39.41
Restore operation will replace the original volume with the snapshotted volume. Do you still want to continue? (y/n) y
Snapshot restore: S12_GMT-2015.06.19-09.39.41: Snap restored successfully

I can confirm that the problem (still) exists in glusterfs 3.7.2.
Version: glusterfs-3.7.2-3.el7.x86_64 (gluster repo)
OS: CentOS 7.1 64bit

Steps to recreate:

# gluster snapshot create snap1 tvol1 description 'test snapshot'
snapshot create: success: Snap snap1_GMT-2015.07.16-11.16.03 created successfully

# gluster snapshot list
snap1_GMT-2015.07.16-11.16.03

# gluster snapshot delete snap1_GMT-2015.07.16-11.16.03
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: failed: Snapshot snap1_GMT-2015.07.16-11.16.03 might not be in an usable state.
Snapshot command failed

# gluster snapshot delete all
System contains 1 snapshot(s). Do you still want to continue and delete them? (y/n) y
snapshot delete: failed: Snapshot snap1_GMT-2015.07.16-11.16.03 might not be in an usable state.
Snapshot command failed

Richard, the build you are using looks to be the latest upstream gluster release, while this bug tracks the downstream RHS product. Can we please continue further investigation of the issue on the upstream bug (https://bugzilla.redhat.com/show_bug.cgi?id=1232430)? Could you also update that bug with the details of your setup: what kind of volume you are using, how many bricks are in the volume, and how many nodes are in the cluster? Please also attach to that bug all the logs present in /var/log/glusterfs/ from all the nodes in the cluster.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
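
For anyone gathering the information requested above before filing or updating the upstream bug, the following is a minimal shell sketch (not part of the original report). The snapshot name snap1_GMT-2015.07.16-11.16.03 and volume name tvol1 are taken from the reproduction steps above; the /tmp output path is an assumption.

```sh
#!/bin/bash
# Diagnostic sketch: run on each node in the cluster before retrying the delete.
# SNAP and VOL are placeholders; adjust them to your setup.

SNAP="snap1_GMT-2015.07.16-11.16.03"   # snapshot name from the report above
VOL="tvol1"                            # parent volume from the report above

# Volume layout: volume type, brick count, and the peers in the cluster.
gluster volume info "$VOL"
gluster pool list

# Snapshot state as seen by glusterd (activation state, per-brick status).
gluster snapshot info "$SNAP"
gluster snapshot status "$SNAP"

# Bundle this node's gluster logs for attachment to the upstream bug.
tar -czf "/tmp/glusterfs-logs-$(hostname -s).tar.gz" /var/log/glusterfs/
```

The resulting tarballs from all nodes, plus the command output, cover the details asked for in the comment above (volume type, brick count, node count, and /var/log/glusterfs/ logs).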