Bug 1113961
| Summary: | [SNAPSHOT] : Restoring the snap-volume once the volume is already restored results in a failure. | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Sachin Pandit <spandit> |
| Component: | snapshot | Assignee: | Avra Sengupta <asengupt> |
| Status: | CLOSED ERRATA | QA Contact: | Rahul Hinduja <rhinduja> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | rhgs-3.0 | CC: | nsathyan, rhs-bugs, sdharane, senaik, storage-qa-internal, vagarwal |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.0.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | SNAPSHOT | | |
| Fixed In Version: | glusterfs-3.6.0.24-1 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1113975 (view as bug list) | Environment: | |
| Last Closed: | 2014-09-22 19:42:59 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1113975 | | |
Description
Sachin Pandit
2014-06-27 10:58:55 UTC
Upstream fix at http://review.gluster.org/#/c/8192/

Version : glusterfs 3.6.0.22 built on Jun 23 2014
=======
A similar issue is seen on the glusterfs 3.6.0.22 build once the volume has already been restored: a consecutive restore operation on the same volume fails.

Steps to reproduce:
===================
1. Create a 2x2 distributed-replicate volume.
2. Fuse/NFS mount the volume.
3. Create 1000+ files (empty files) and 100+ directories, then stop the I/O.
4. Create 2-3 snapshots of the volume:

   [root@snapshot13 ~]# gluster snapshot create ss1 vol0
   snapshot create: success: Snap ss1 created successfully
   [root@snapshot13 ~]# gluster snapshot create ss2 vol0
   snapshot create: success: Snap ss2 created successfully
   [root@snapshot13 ~]# gluster snapshot create ss3 vol0
   snapshot create: success: Snap ss3 created successfully

5. Stop the volume:

   [root@snapshot13 ~]# gluster volume stop vol0
   Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
   volume stop: vol0: success

6. Restore the first snap --> the restore is successful:

   [root@snapshot13 ~]# gluster snapshot restore ss1
   Snapshot restore: ss1: Snap restored successfully

7. Restore the volume to another snap. It fails as below:

   [root@snapshot13 ~]# gluster snapshot restore ss2
   Snapshot command failed

Additional Info :
===============
- The CLI reports "Snapshot command failed" because the operation crossed the 2-minute CLI window; the snapshot is actually restored successfully after some time.
- During clean-up, a recursive remove of the files/directories is performed, which takes a long time. Until this clean-up completes, the user gets "Another transaction is in progress" for any other operation on the volume, since the volume lock is still held (see the sketch after this list).
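The slow clean-up, rather than the restore itself, is what pushes the operation past the CLI window. The following is only an illustrative sketch, not the actual glusterd_recursive_rmdir() code: a depth-first removal visits every one of the 1000+ files and 100+ directories one entry at a time (matching the per-entry "Removed ..." debug lines in the log below), so the total wall-clock time grows with the size of the tree and can easily exceed 2 minutes while glusterd still holds the volume lock.

```c
/*
 * Illustrative sketch only -- NOT the glusterd implementation.
 * Depth-first, one-entry-at-a-time removal of a directory tree,
 * similar in spirit to what the "Removed <name>" debug log shows.
 */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <stdlib.h>

/* Callback invoked for every entry; with FTW_DEPTH the contents of a
 * directory are visited (and removed) before the directory itself. */
static int
remove_entry (const char *path, const struct stat *sb,
              int typeflag, struct FTW *ftwbuf)
{
        int ret = remove (path);   /* unlink(2) for files, rmdir(2) for dirs */

        if (ret)
                fprintf (stderr, "Failed to remove %s\n", path);
        else
                printf ("Removed %s\n", path);

        return ret;                /* non-zero aborts the walk */
}

int
main (int argc, char *argv[])
{
        if (argc != 2) {
                fprintf (stderr, "usage: %s <directory>\n", argv[0]);
                return EXIT_FAILURE;
        }

        /* Every entry costs at least one syscall, so 1000+ files and
         * 100+ directories mean 1100+ sequential removals before the
         * clean-up (and the volume lock it runs under) is released. */
        return nftw (argv[1], remove_entry, 16, FTW_DEPTH | FTW_PHYS);
}
```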
-------------------- Part of the log --------------------
[2014-07-02 07:03:49.582498] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed manhoos-fuse.vol
[2014-07-02 07:03:49.582575] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed 10.70.35.240:-var-run-gluster-snaps-ac142178aafa40939e22f4fcb642b18a-brick1
[2014-07-02 07:03:49.582657] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed 10.70.35.172:-var-run-gluster-snaps-ac142178aafa40939e22f4fcb642b18a-brick2
[2014-07-02 07:03:49.582723] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed bricks
[2014-07-02 07:03:49.582773] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed node_state.info
[2014-07-02 07:03:49.582823] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed manhoos.10.70.35.172.var-run-gluster-snaps-ac142178aafa40939e22f4fcb642b18a-brick2.vol
[2014-07-02 07:03:49.582868] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed rbstate
[2014-07-02 07:03:49.582913] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed cksum
[2014-07-02 07:03:49.582965] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed vols-manhoos.deleted
[2014-07-02 07:03:49.583409] D [glusterd-utils.c:12640:glusterd_recursive_rmdir] 0-management: Failed to open directory /var/lib/glusterd/trash/vols-manhoos.deleted. Reason : No such file or directory
[2014-07-02 07:13:16.079409] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed xattrop
[2014-07-02 07:13:16.101493] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed indices
[2014-07-02 07:13:16.125600] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed htime
[2014-07-02 07:13:16.152486] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed changelogs
[2014-07-02 07:13:16.187373] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed 00000000-0000-0000-0000-000000000001
[2014-07-02 07:13:16.204389] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed 0000e5cb-cf24-4883-b39a-284adef3afe8
[2014-07-02 07:13:16.223420] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed 000098c4-c46c-42f1-b9c2-88a054e24093
[2014-07-02 07:13:16.251561] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed 0000aee4-37a8-4cdf-a008-32127a29e474
[2014-07-02 07:13:16.251646] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed 00
[2014-07-02 07:13:16.286690] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed 00b23374-9fee-498d-b05d-b3046a625033
[2014-07-02 07:13:16.305662] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed 00b246cc-dbc0-4bde-a8d5-17edbd60d360
[2014-07-02 07:13:16.305742] D [glusterd-utils.c:12667:glusterd_recursive_rmdir] 0-management: Removed b2

Verified with build: glusterfs-3.6.0.24-1
Working as expected and multiple restores are successful. Moving the bug to the verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html