Bug 1460231 - snapshot: snapshot status command shows brick running as yes though snapshot is deactivated
Summary: snapshot: snapshot status command shows brick running as yes though snapshot is deactivated
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Mohammed Rafi KC
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1417151
 
Reported: 2017-06-09 12:25 UTC by Anil Shah
Modified: 2017-09-21 04:59 UTC
CC: 5 users

Fixed In Version: glusterfs-3.8.4-28
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-21 04:59:42 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2774 0 normal SHIPPED_LIVE glusterfs bug fix and enhancement update 2017-09-21 08:16:29 UTC

Description Anil Shah 2017-06-09 12:25:05 UTC
Description of problem:

After activating and deactivating a snapshot, the snapshot status command shows the Brick Running field as "Yes", even though the snapshot is deactivated.


Version-Release number of selected component (if applicable):

glusterfs-3.8.4-27.el7rhgs.x86_64

How reproducible:

100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume
2. Do a FUSE/NFS mount and start I/O
3. Create a snapshot
4. Activate and then deactivate the snapshot (see the command sketch below)
5. Check gluster snapshot status
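
A minimal command sequence along these lines should reproduce the issue; the volume name, hostnames, and brick paths are placeholders rather than the exact setup used in this report:

# Create a 2x2 distributed-replicate volume (hostnames and brick paths are placeholders)
gluster volume create topgun replica 2 server1:/bricks/brick0 server2:/bricks/brick1 server3:/bricks/brick2 server4:/bricks/brick3
gluster volume start topgun

# FUSE mount and start I/O
mount -t glusterfs server1:/topgun /mnt/topgun

# Take a snapshot, then activate and deactivate it
gluster snapshot create snap0 topgun no-timestamp
gluster snapshot activate snap0
gluster --mode=script snapshot deactivate snap0

# Every brick should now report "Brick Running : No"
gluster snapshot status snap0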

Actual results:

The snapshot status command shows the Brick Running field as "Yes".

Expected results:

The Brick Running field in the snapshot status output should be "No".


Additional info:



[root@dhcp46-47 ~]# gluster snapshot status snap0

Snap Name : snap0
Snap UUID : 8275cb0e-c6c5-46c5-b0f5-2b938689e6f3

	Brick Path        :   dhcp47-127.lab.eng.blr.redhat.com:/run/gluster/snaps/3ac5cfe7cdfd4905af259e4386a2f233/brick1/topgun_brick0
	Volume Group      :   RHS_vg5
	Brick Running     :   Yes
	Brick PID         :   N/A
	Data Percentage   :   22.51
	LV Size           :   14.92g


	Brick Path        :   dhcp46-181.lab.eng.blr.redhat.com:/run/gluster/snaps/3ac5cfe7cdfd4905af259e4386a2f233/brick2/topgun_brick1
	Volume Group      :   RHS_vg5
	Brick Running     :   Yes
	Brick PID         :   N/A
	Data Percentage   :   22.44
	LV Size           :   14.92g


	Brick Path        :   dhcp46-47.lab.eng.blr.redhat.com:/run/gluster/snaps/3ac5cfe7cdfd4905af259e4386a2f233/brick3/topgun_brick2
	Volume Group      :   RHS_vg4
	Brick Running     :   Yes
	Brick PID         :   N/A
	Data Percentage   :   18.16
	LV Size           :   14.92g


	Brick Path        :   dhcp47-140.lab.eng.blr.redhat.com:/run/gluster/snaps/3ac5cfe7cdfd4905af259e4386a2f233/brick4/topgun_brick3
	Volume Group      :   RHS_vg4
	Brick Running     :   Yes
	Brick PID         :   N/A
	Data Percentage   :   18.15
	LV Size           :   14.92g
========================================================

[root@dhcp46-47 ~]# gluster snapshot info snap0
Snapshot                  : snap0
Snap UUID                 : 8275cb0e-c6c5-46c5-b0f5-2b938689e6f3
Created                   : 2017-06-09 08:26:40
Snap Volumes:

	Snap Volume Name          : 3ac5cfe7cdfd4905af259e4386a2f233
	Origin Volume name        : topgun
	Snaps taken for topgun      : 8
	Snaps available for topgun  : 248
	Status                    : Stopped
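
As a cross-check (a rough sketch; the snap volume id below is taken from the status output above), the reported status can be compared against the actual brick processes on a node. With the snapshot deactivated, the info output shows the snap volume as Stopped and no glusterfsd process should be serving its bricks:

# Snap volume status as glusterd sees it (should read "Stopped" after deactivation)
gluster snapshot info snap0

# Rough process-level check on each node; with brick multiplexing the brick-to-process
# mapping differs, so treat a match here only as an indication, not proof
ps -ef | grep glusterfsd | grep 3ac5cfe7cdfd4905af259e4386a2f233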

Comment 3 Atin Mukherjee 2017-06-13 05:39:20 UTC
No logs/sosreport have been attached yet. Can these be made available asap?

Comment 4 Mohammed Rafi KC 2017-06-13 12:54:45 UTC
This is a regression caused by the brick multiplexing changes: with multiplexing, the brick status was not being updated even when the volume was stopped. This is already fixed as part of the changes in https://code.engineering.redhat.com/gerrit/108299, so moving this to Modified.

Comment 8 Anil Shah 2017-06-20 06:45:14 UTC
[root@rhs-arch-srv1 core]# gluster snapshot activate snap0
Snapshot activate: snap0: Snap activated successfully

[root@rhs-arch-srv1 core]# gluster snapshot deactivate snap0
Deactivating snap will make its data inaccessible. Do you want to continue? (y/n) y
Snapshot deactivate: snap0: Snap deactivated successfully
[root@rhs-arch-srv1 core]# gluster snapshot status snap0

Snap Name : snap0
Snap UUID : bf25e5fb-03ac-4c1c-82ab-0b792681a91f

	Brick Path        :   rhs-arch-srv3.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick1/newvolume_brick0
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.05
	LV Size           :   1.80t


	Brick Path        :   rhs-arch-srv2.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick2/newvolume_brick1
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.05
	LV Size           :   1.80t


	Brick Path        :   rhs-arch-srv4.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick3/newvolume_brick2
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.05
	LV Size           :   1.80t


	Brick Path        :   rhs-arch-srv1.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick4/newvolume_brick3
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.05
	LV Size           :   1.80t


	Brick Path        :   rhs-arch-srv3.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick5/newvolume_brick4
	Volume Group      :   RHS_vg1
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.05
	LV Size           :   1.80t


	Brick Path        :   rhs-arch-srv2.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick6/newvolume_brick5
	Volume Group      :   RHS_vg1
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.32
	LV Size           :   10.19g


	Brick Path        :   rhs-arch-srv4.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick7/newvolume_brick6
	Volume Group      :   RHS_vg1
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.05
	LV Size           :   1.80t


	Brick Path        :   rhs-arch-srv1.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick8/newvolume_brick7
	Volume Group      :   RHS_vg1
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.05
	LV Size           :   1.80t


	Brick Path        :   rhs-arch-srv3.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick9/newvolume_brick8
	Volume Group      :   RHS_vg2
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.05
	LV Size           :   1.80t


	Brick Path        :   rhs-arch-srv2.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick10/newvolume_brick9
	Volume Group      :   RHS_vg2
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.16
	LV Size           :   19.46g


	Brick Path        :   rhs-arch-srv4.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick11/newvolume_brick10
	Volume Group      :   RHS_vg2
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.05
	LV Size           :   1.80t


	Brick Path        :   rhs-arch-srv1.lab.eng.blr.redhat.com:/run/gluster/snaps/c4a4124d144d4cabaf485c6b814cf9b2/brick12/newvolume_brick11
	Volume Group      :   RHS_vg2
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   0.05
	LV Size           :   1.80t


Bug verified on build glusterfs-3.8.4-28.el7rhgs.x86_64
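
A small script along these lines could serve as an automated check for this regression (a sketch only; the snapshot name is a placeholder, and --mode=script suppresses the deactivate confirmation prompt):

#!/bin/bash
# Fail if a deactivated snapshot still reports any brick as running
SNAP=snap0
gluster snapshot activate "$SNAP"
gluster --mode=script snapshot deactivate "$SNAP"
if gluster snapshot status "$SNAP" | grep -q 'Brick Running.*Yes'; then
    echo "FAIL: deactivated snapshot $SNAP still reports running bricks"
    exit 1
fi
echo "PASS: all bricks of $SNAP report Brick Running : No"

One caveat: if the snapshot is already active, the activate step returns an error, so a real test would check the current state first.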

Comment 10 errata-xmlrpc 2017-09-21 04:59:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774

