Bug 1236696

Summary: [New] - Restoring snapshot generates brick delete alerts on the dashboard
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: RamaKasturi <knarra>
Component: rhsc
Assignee: Shubhendu Tripathi <shtripat>
Status: CLOSED ERRATA
QA Contact: Triveni Rao <trao>
Severity: medium
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: asriram, asrivast, bmohanra, dpati, nlevinki, rhs-bugs, rnachimu, sabose, sashinde, shtripat
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.1.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: rhsc-3.1.1-0.64
Doc Type: Bug Fix
Doc Text:
Previously, when a volume was restored to the state of one of its snapshots, the dashboard used to display brick delete alerts. This happened as part of snapshot restore where the existing bricks were removed and new bricks were added with a new mount point. The sync job generated an alert for this operation. With this fix, brick delete alerts are not generated after restoring the volume to the state of a snapshot.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-05 09:22:08 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1216951, 1251815    
Attachments:
snap_pic (flags: none)

Description RamaKasturi 2015-06-29 18:22:03 UTC
Description of problem:
When a snapshot is restored, the UI removes the existing bricks and adds bricks with a new mount point. Because of this, alerts saying "detected brick is removed" are generated and shown in the Alerts tab on the Dashboard. The Dashboard should not display these alerts, as the bricks are not actually being removed from the volume.
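For illustration, this is roughly what the brick path change looks like from the gluster CLI after a restore; the host, volume, and brick paths below are placeholders and will differ on a real setup (a restore requires the volume to be stopped first):

[root@server1 ~]# gluster volume info vol1 | grep Brick1
Brick1: server1:/rhgs/brick1/vol1
[root@server1 ~]# gluster volume stop vol1
[root@server1 ~]# gluster snapshot restore snap1_GMT-<timestamp>
[root@server1 ~]# gluster volume start vol1
[root@server1 ~]# gluster volume info vol1 | grep Brick1
Brick1: server1:/run/gluster/snaps/<snap-volume-name>/brick1/vol1

The restored brick is served from the snapshot's mount point, so unless the console syncs the change immediately, its sync job treats the old brick path as removed and raises a brick delete alert.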

Version-Release number of selected component (if applicable):
rhsc-3.1.0-0.61.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create a volume and take a snapshot of the volume.
2. Restore the volume to that snapshot (example commands below).
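A minimal CLI sequence for these steps, assuming placeholder names vol1/snap1 and a brick at server1:/rhgs/brick1/vol1 (the brick must sit on a thinly provisioned LVM volume for snapshot create to work); the same operations can also be driven from the console UI:

# gluster volume create vol1 server1:/rhgs/brick1/vol1
# gluster volume start vol1
# gluster snapshot create snap1 vol1
# gluster snapshot list vol1        (note the generated name, snap1_GMT-<timestamp>)
# gluster volume stop vol1
# gluster snapshot restore snap1_GMT-<timestamp>
# gluster volume start vol1

After the restore, check the Alerts tab on the Dashboard.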

Actual results:
Brick removal events that occur during the restore process are shown as alerts on the dashboard.

Expected results:
Brick removals that happen during the restore process should not be raised as alerts, because the restore is not a destructive operation.

Additional info:

Comment 2 Shubhendu Tripathi 2015-06-30 04:14:48 UTC
As part of restoring a snapshot, the volume moves to the state of the snapshot, and this is likely what causes the brick removal event logs. Nothing specific to snapshot restore is being done beyond that.

Comment 3 monti lawrence 2015-07-22 19:31:28 UTC
The doc text is edited. Please sign off for it to be included in Known Issues.

Comment 4 Shubhendu Tripathi 2015-07-23 03:44:05 UTC
Modified the doc text a little and corrected the summary of the BZ, as the summary was incorrect:
brick delete alerts (not bricks DOWN alerts) are generated during snapshot restore.

Comment 5 monti lawrence 2015-07-23 14:39:13 UTC
Slight changes made to Doc text.

Comment 6 Shubhendu Tripathi 2015-07-23 14:41:19 UTC
doc-text looks good

Comment 7 Shubhendu Tripathi 2015-08-03 11:34:02 UTC
It was marked MODIFIED by mistake :-)

Comment 11 Shubhendu Tripathi 2015-08-24 03:37:35 UTC
Modified the code to sync the restored bricks immediately after the restore, so that the sync job does not generate brick delete alerts.

Comment 12 Triveni Rao 2015-08-31 11:06:13 UTC
This bug is verified; no issues were found.

Steps followed:
1. Create a volume and create a snapshot.
2. Activate the snapshot and check it from the backend.
3. Restore the snapshot (example commands below).
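A sketch of the corresponding backend commands, assuming the volume is named vol1 (a placeholder) and using the snapshot name visible in the output below; the create and restore steps can equally be driven from the console UI:

# gluster snapshot create vol1_sn2 vol1
# gluster snapshot list vol1
vol1_sn2_GMT-2015.08.31-11.02.06
# gluster snapshot activate vol1_sn2_GMT-2015.08.31-11.02.06
# gluster snapshot status vol1_sn2_GMT-2015.08.31-11.02.06
# gluster volume stop vol1
# gluster snapshot restore vol1_sn2_GMT-2015.08.31-11.02.06
# gluster volume start vol1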

Output:

[root@casino-vm3 ~]# gluster snapshot list
No snapshots present
[root@casino-vm3 ~]# gluster snapshot list
vol_snap1_GMT-2015.08.31-03.19.39
[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# 
[root@casino-vm3 ~]# gluster snapshot list
No snapshots present
[root@casino-vm3 ~]# gluster snapshot list
vol1_sn2_GMT-2015.08.31-11.02.06
[root@casino-vm3 ~]# 


[root@casino-vm5 ~]# rpm -qa | grep rhsc
rhsc-setup-plugin-ovirt-engine-3.1.1-0.64.el6.noarch
rhsc-doc-3.1.0-1.el6eng.noarch
rhsc-branding-rhs-3.1.0-1.el6rhs.noarch
rhsc-setup-base-3.1.1-0.64.el6.noarch
redhat-access-plugin-rhsc-3.0.0-3.el6rhs.noarch
rhsc-restapi-3.1.1-0.64.el6.noarch
rhsc-setup-plugins-3.1.0-3.el6rhs.noarch
rhsc-cli-3.0.0.0-0.2.el6rhs.noarch
rhsc-lib-3.1.1-0.64.el6.noarch
rhsc-webadmin-portal-3.1.1-0.64.el6.noarch
rhsc-dbscripts-3.1.1-0.64.el6.noarch
rhsc-extensions-api-impl-3.1.1-0.64.el6.noarch
rhsc-sdk-python-3.0.0.0-0.2.el6rhs.noarch
rhsc-setup-plugin-ovirt-engine-common-3.1.1-0.64.el6.noarch
rhsc-backend-3.1.1-0.64.el6.noarch
rhsc-setup-3.1.1-0.64.el6.noarch
rhsc-3.1.1-0.64.el6.noarch
rhsc-monitoring-uiplugin-0.2.4-1.el6rhs.noarch
rhsc-log-collector-3.1.0-1.0.el6rhs.noarch
rhsc-tools-3.1.1-0.64.el6.noarch
[root@casino-vm5 ~]#

Comment 13 Triveni Rao 2015-08-31 11:48:19 UTC
Created attachment 1068620 [details]
snap_pic

Comment 14 Triveni Rao 2015-08-31 11:48:53 UTC
The alerts were not seen on the dashboard.

Comment 15 Bhavana 2015-09-22 09:03:40 UTC
Hi Shubhendu,

The doc text is updated. Please review it and share your technical review comments. If it looks OK, please sign off on it.

Comment 16 Shubhendu Tripathi 2015-09-22 10:51:09 UTC
Modified the doc-text a little

Comment 17 Bhavana 2015-09-23 06:29:00 UTC
Updated the text and changed the doc text flag to "+".

Comment 19 errata-xmlrpc 2015-10-05 09:22:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1848.html