Bug 1236696
| Field | Value |
| --- | --- |
| Summary | [New] - Restoring snapshot generates brick delete alerts on the dashboard |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | rhsc |
| Version | rhgs-3.1 |
| Status | CLOSED ERRATA |
| Severity | medium |
| Priority | unspecified |
| Reporter | RamaKasturi <knarra> |
| Assignee | Shubhendu Tripathi <shtripat> |
| QA Contact | Triveni Rao <trao> |
| CC | asriram, asrivast, bmohanra, dpati, nlevinki, rhs-bugs, rnachimu, sabose, sashinde, shtripat |
| Keywords | ZStream |
| Target Milestone | --- |
| Target Release | RHGS 3.1.1 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | rhsc-3.1.1-0.64 |
| Doc Type | Bug Fix |
| Doc Text | Previously, when a volume was restored to the state of one of its snapshots, the dashboard displayed brick delete alerts. This happened because a snapshot restore removes the existing bricks and adds new bricks with a new mount point, and the sync job generated an alert for that operation. With this fix, brick delete alerts are not generated after restoring a volume to the state of a snapshot. |
| Story Points | --- |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| Category | --- |
| oVirt Team | --- |
| Cloudforms Team | --- |
| Last Closed | 2015-10-05 09:22:08 UTC |
| Bug Blocks | 1216951, 1251815 |
| Attachments | snap_pic (attachment 1068620) |
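The brick replacement described in the doc text is visible from the gluster CLI. A minimal sketch, assuming a hypothetical volume `vol1` on hosts `server1`/`server2` (brick paths and the snapshot name are illustrative; restored bricks are served from GlusterFS's snapshot mount area under /run/gluster/snaps):

```sh
# Hypothetical brick layout before the restore
gluster volume info vol1
#   Brick1: server1:/rhgs/brick1/vol1
#   Brick2: server2:/rhgs/brick1/vol1

# gluster requires the volume to be stopped before a restore
gluster volume stop vol1
gluster snapshot restore snap1_GMT-2015.08.31-03.19.39
gluster volume start vol1

# After the restore the volume is served from the snapshot's brick
# mount points, so the old brick paths vanish and new ones appear --
# this is what the RHSC sync job treated as a brick deletion.
gluster volume info vol1
#   Brick1: server1:/run/gluster/snaps/<snap-volume-id>/brick1/vol1
#   Brick2: server2:/run/gluster/snaps/<snap-volume-id>/brick2/vol1
```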
Description
RamaKasturi 2015-06-29 18:22:03 UTC
As part of restoring a snapshot, the volume moves to the state of that snapshot, and this might be what causes the brick removal event logs; nothing specific happens as part of the snapshot restore itself.

Doc text is edited. Please sign off for it to be included in Known Issues.

Modified the doc text a little, and fixed the summary of the BZ, which was incorrect: it is not that bricks go DOWN; rather, brick delete alerts are generated during a snapshot restore.

Slight changes made to the doc text.

The doc text looks good. By mistake it was marked MODIFIED :-)

Modified to sync the restored bricks immediately after the restore, so that the sync job does not generate the brick delete alerts.

This bug is verified and no issues were found. Steps followed (the underlying CLI commands are sketched after this comment):
1. Create a volume and create a snapshot.
2. Activate the snapshot and check it from the backend.
3. Restore the snapshot.

Output:

[root@casino-vm3 ~]# gluster snapshot list
No snapshots present
[root@casino-vm3 ~]# gluster snapshot list
vol_snap1_GMT-2015.08.31-03.19.39
[root@casino-vm3 ~]#
[root@casino-vm3 ~]#
[root@casino-vm3 ~]# gluster snapshot list
No snapshots present
[root@casino-vm3 ~]# gluster snapshot list
vol1_sn2_GMT-2015.08.31-11.02.06
[root@casino-vm3 ~]#

[root@casino-vm5 ~]# rpm -qa | grep rhsc
rhsc-setup-plugin-ovirt-engine-3.1.1-0.64.el6.noarch
rhsc-doc-3.1.0-1.el6eng.noarch
rhsc-branding-rhs-3.1.0-1.el6rhs.noarch
rhsc-setup-base-3.1.1-0.64.el6.noarch
redhat-access-plugin-rhsc-3.0.0-3.el6rhs.noarch
rhsc-restapi-3.1.1-0.64.el6.noarch
rhsc-setup-plugins-3.1.0-3.el6rhs.noarch
rhsc-cli-3.0.0.0-0.2.el6rhs.noarch
rhsc-lib-3.1.1-0.64.el6.noarch
rhsc-webadmin-portal-3.1.1-0.64.el6.noarch
rhsc-dbscripts-3.1.1-0.64.el6.noarch
rhsc-extensions-api-impl-3.1.1-0.64.el6.noarch
rhsc-sdk-python-3.0.0.0-0.2.el6rhs.noarch
rhsc-setup-plugin-ovirt-engine-common-3.1.1-0.64.el6.noarch
rhsc-backend-3.1.1-0.64.el6.noarch
rhsc-setup-3.1.1-0.64.el6.noarch
rhsc-3.1.1-0.64.el6.noarch
rhsc-monitoring-uiplugin-0.2.4-1.el6rhs.noarch
rhsc-log-collector-3.1.0-1.0.el6rhs.noarch
rhsc-tools-3.1.1-0.64.el6.noarch
[root@casino-vm5 ~]#
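A minimal sketch of the gluster CLI behind the three verification steps, reusing the snapshot name from the output above; the volume layout and brick paths are assumptions (snapshot names automatically gain a GMT timestamp suffix):

```sh
# 1. Create a volume and take a snapshot of it (brick layout is hypothetical)
gluster volume create vol1 replica 2 server1:/rhgs/brick1/vol1 server2:/rhgs/brick1/vol1
gluster volume start vol1
gluster snapshot create vol_snap1 vol1

# 2. Activate the snapshot and check it from the backend
gluster snapshot activate vol_snap1_GMT-2015.08.31-03.19.39
gluster snapshot info vol_snap1_GMT-2015.08.31-03.19.39

# 3. Restore the snapshot (the volume must be stopped first);
#    the restore consumes the snapshot, hence "No snapshots present" above
gluster volume stop vol1
gluster snapshot restore vol_snap1_GMT-2015.08.31-03.19.39
gluster volume start vol1
gluster snapshot list
```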
Created attachment 1068620 [details]
snap_pic

The alerts were not seen on the dashboard.

Hi Shubhendu, the doc text is updated. Please review it and share your technical review comments. If it looks OK, then sign off on the same.

Modified the doc text a little.

Updated the text and changed the doc text flag to "+".

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1848.html