Bug 1165648
Summary: | [USS]: If glusterd goes down on the originator node while snapshots are activated, after glusterd comes back up, accessing .snaps does not list any snapshots even if they are present | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | senaik
Component: | snapshot | Assignee: | Mohammed Rafi KC <rkavunga>
Status: | CLOSED ERRATA | QA Contact: | Anil Shah <ashah>
Severity: | urgent | Docs Contact: |
Priority: | unspecified | |
Version: | rhgs-3.0 | CC: | amukherj, asengupt, asrivast, rhinduja, rhs-bugs, rkavunga
Target Milestone: | --- | Keywords: | Triaged
Target Release: | RHGS 3.3.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | USS | |
Fixed In Version: | glusterfs-3.8.4-25 | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-09-21 04:25:52 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1448150, 1463512 | |
Bug Blocks: | 1417147 | |
Description
senaik
2014-11-19 12:28:38 UTC
Duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1122064. Upstream patch: http://review.gluster.org/#/c/9664/

Snapshots are not visible in the .snaps directory on the client served by the node whose glusterd was down when the snapshot was activated:

    [root@dhcp46-157 fuse]# cd .snaps
    [root@dhcp46-157 .snaps]# pwd
    /mnt/fuse/.snaps
    [root@dhcp46-157 .snaps]# ll
    total 0

Able to see snapshots from the other client:

    [root@dhcp47-13 .snaps]# pwd
    /mnt/fuse/.snaps
    [root@dhcp47-13 .snaps]# ls
    snap2

Snapshot info output from the node which was down when the snapshot was activated:

    [root@rhs-arch-srv2 ~]# gluster snapshot info snap2
    Snapshot                 : snap2
    Snap UUID                : 1421b902-bda3-4604-aa6b-9d2ef52832a1
    Created                  : 2017-05-03 08:15:20
    Snap Volumes:
        Snap Volume Name         : 1cd3bcaeea0447418f0b1ad80c3ec3b6
        Origin Volume name       : vol1
        Snaps taken for vol1     : 2
        Snaps available for vol1 : 254
        Status                   : Started

Able to reproduce this bug; hence marking this bug as failed QA.

Upstream master patch: https://review.gluster.org/17178
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/105517

    [root@rhs-arch-srv2 core]# gluster snapshot info snap0
    Snapshot                 : snap0
    Snap UUID                : 80f0b29d-b7da-419b-b595-8d216f1ffafc
    Created                  : 2017-06-20 06:58:05
    Snap Volumes:
        Snap Volume Name              : 6ce57c9284d34f828a1927c9aaeb14db
        Origin Volume name            : newvolume
        Snaps taken for newvolume     : 1
        Snaps available for newvolume : 255
        Status                        : Stopped

    [root@rhs-arch-srv2 core]# service glusterd stop
    [root@rhs-arch-srv1 core]# gluster snapshot activate snap0
    Snapshot activate: snap0: Snap activated successfully
    [root@rhs-arch-srv2 core]# service glusterd start
    Redirecting to /bin/systemctl start glusterd.service

    [root@rhs-arch-srv2 core]# gluster snapshot info snap0
    Snapshot                 : snap0
    Snap UUID                : 80f0b29d-b7da-419b-b595-8d216f1ffafc
    Created                  : 2017-06-20 06:58:05
    Snap Volumes:
        Snap Volume Name              : 6ce57c9284d34f828a1927c9aaeb14db
        Origin Volume name            : newvolume
        Snaps taken for newvolume     : 1
        Snaps available for newvolume : 255
        Status                        : Started

Bug verified on build glusterfs-3.8.4-28.el7rhgs.x86_64.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
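For convenience, the verification sequence shown in the transcripts above can be scripted roughly as follows. This is a minimal sketch, assuming a two-node trusted pool with passwordless SSH to the peer and the volume FUSE-mounted locally; the hostnames, volume name, snapshot name, and mount path are placeholders, not values from this report.

```sh
#!/bin/sh
# Sketch of the reproduce/verify steps described above.
# VOL, SNAP, PEER and MNT are placeholders, not taken from this bug report.
set -e

VOL=testvol            # origin volume
SNAP=snap_uss_test     # snapshot name
PEER=peer-node         # peer whose glusterd is stopped during activation
MNT=/mnt/fuse          # local FUSE mount of $VOL (assumed)

# Expose snapshots to clients through the .snaps directory (USS).
gluster volume set "$VOL" features.uss enable

# Create a snapshot; it starts out deactivated (Status: Stopped).
# no-timestamp keeps the name exactly $SNAP; drop it on releases that
# do not support the option.
gluster snapshot create "$SNAP" "$VOL" no-timestamp

# Stop glusterd on the peer so it misses the activation.
ssh "$PEER" systemctl stop glusterd

# Activate the snapshot from this (originator) node.
gluster snapshot activate "$SNAP"

# Bring the peer's glusterd back up; with the fix it should pick up the
# snapshot's Started status during its handshake with the other peers.
ssh "$PEER" systemctl start glusterd
sleep 10

# Verify: the peer reports Status: Started and the client mount lists
# the snapshot under .snaps.
ssh "$PEER" gluster snapshot info "$SNAP"
ls "$MNT/.snaps"
```

Before the fix, the final two commands would still show Status: Stopped on the re-started peer and an empty .snaps listing on clients connected to it; with the fix they should match the verified output above.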