Bug 1837926 - Snapshot clone fails with wrong error message.
Summary: Snapshot clone fails with wrong error message.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 3
Assignee: Srijan Sivakumar
QA Contact: Arthy Loganathan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-20 08:35 UTC by susgupta
Modified: 2020-12-17 04:52 UTC
CC: 7 users

Fixed In Version: glusterfs-6.0-347
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-17 04:51:50 UTC
Embargoed:




Links:
System ID: Red Hat Product Errata RHBA-2020:5603
Last Updated: 2020-12-17 04:52:24 UTC

Comment 1 Mohammed Rafi KC 2020-05-21 12:09:14 UTC
upstream patch: https://review.gluster.org/#/c/glusterfs/+/24478/

Comment 6 Arthy Loganathan 2020-11-04 06:09:11 UTC
I am still getting a different error message, and it does not indicate that the snapshot is not activated.

[root@dhcp47-141 ~]# gluster snapshot clone snap_clone_1 snap_1_GMT-2020.11.02-17.31.40
snapshot clone: failed: Post-validation failed on localhost. Please check log file for details
Snapshot command failed

Steps:
-------

Volume Name: vol2
Type: Disperse
Volume ID: 980e720f-3bed-44ae-9f1b-0faba30218ca
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.141:/bricks/brick2/vol2_brick2
Brick2: 10.70.47.41:/bricks/brick2/vol2_brick2
Brick3: 10.70.47.178:/bricks/brick2/vol2_brick2
Brick4: 10.70.46.186:/bricks/brick2/vol2_brick2
Brick5: 10.70.47.141:/bricks/brick3/vol2_brick3
Brick6: 10.70.47.41:/bricks/brick3/vol2_brick3
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

[root@dhcp47-141 ~]# gluster snapshot  create snap_1 vol2
snapshot create: success: Snap snap_1_GMT-2020.11.02-17.31.40 created successfully
[root@dhcp47-141 ~]# gluster snapshot info snap_1_GMT-2020.11.02-17.31.40
Snapshot                  : snap_1_GMT-2020.11.02-17.31.40
Snap UUID                 : 64cc16ac-32c1-4130-b28b-93fd407dcda1
Created                   : 2020-11-02 17:31:40
Snap Volumes:

	Snap Volume Name          : e87b65bf3c4641d6be6fc819e508222a
	Origin Volume name        : vol2
	Snaps taken for vol2      : 1
	Snaps available for vol2  : 255
	Status                    : Stopped
 
[root@dhcp47-141 ~]# gluster snapshot clone snap_clone_1 snap_1_GMT-2020.11.02-17.31.40
snapshot clone: failed: Post-validation failed on localhost. Please check log file for details
Snapshot command failed

Glusterd logs:

[2020-11-02 17:32:17.257816] E [MSGID: 106061] [glusterd-snapshot.c:1484:glusterd_snap_create_clone_pre_val_use_rsp_dict] 0-management: failed to get the volume count
[2020-11-02 17:32:17.257884] E [MSGID: 106061] [glusterd-snapshot.c:1763:glusterd_snap_pre_validate_use_rsp_dict] 0-management: Unable to use rsp dict
[2020-11-02 17:32:17.257903] E [MSGID: 106121] [glusterd-mgmt.c:839:glusterd_pre_validate_aggr_rsp_dict] 0-management: Failed to aggregate prevalidate response dictionaries.
[2020-11-02 17:32:17.257923] E [MSGID: 106121] [glusterd-mgmt.c:1104:glusterd_mgmt_v3_pre_validate] 0-management: Failed to aggregate response from  node/brick
[2020-11-02 17:32:17.257937] E [MSGID: 106121] [glusterd-mgmt.c:2698:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre Validation Failed
[2020-11-02 17:32:17.257960] E [MSGID: 106026] [glusterd-snapshot.c:8122:glusterd_snapshot_clone_postvalidate] 0-management: unable to find clone snap_clone_1 volinfo
[2020-11-02 17:32:17.257980] W [MSGID: 106444] [glusterd-snapshot.c:9179:glusterd_snapshot_postvalidate] 0-management: Snapshot create post-validation failed
[2020-11-02 17:32:17.257991] W [MSGID: 106120] [glusterd-mgmt.c:474:gd_mgmt_v3_post_validate_fn] 0-management: postvalidate operation failed
[2020-11-02 17:32:17.258004] E [MSGID: 106120] [glusterd-mgmt.c:1957:glusterd_mgmt_v3_post_validate] 0-management: Post Validation failed for operation Snapshot on local node
[2020-11-02 17:32:17.258017] E [MSGID: 106120] [glusterd-mgmt.c:2816:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Post Validation Failed


Build : glusterfs-server-6.0-46.el8rhgs.x86_64
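
For reference, a minimal sketch of the flow the corrected error message should point the user toward, assuming (as the fix implies) that a snapshot must be activated before it can be cloned. The snapshot name is the one from the reproduction above; no output is shown because this path was not captured in this run:

# activate the snapshot first, then retry the clone
gluster snapshot activate snap_1_GMT-2020.11.02-17.31.40
gluster snapshot clone snap_clone_1 snap_1_GMT-2020.11.02-17.31.40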

Comment 8 Arthy Loganathan 2020-11-06 09:50:59 UTC
[root@dhcp47-141 ~]# gluster vol info vol2
 
Volume Name: vol2
Type: Disperse
Volume ID: a8316233-465d-4602-8fae-3dc25891333a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.141:/bricks/brick2/vol2_brick2
Brick2: 10.70.47.41:/bricks/brick2/vol2_brick2
Brick3: 10.70.47.178:/bricks/brick2/vol2_brick2
Brick4: 10.70.46.186:/bricks/brick2/vol2_brick2
Brick5: 10.70.47.141:/bricks/brick3/vol2_brick3
Brick6: 10.70.47.41:/bricks/brick3/vol2_brick3
Options Reconfigured:
features.barrier: disable
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

Started IO on the mount point using: for i in {1..20}; do dd if=/dev/urandom of=FILE_$i bs=1024 count=102400 & done

[root@dhcp47-141 ~]# gluster snapshot create snap_1 vol2
snapshot create: success: Snap snap_1_GMT-2020.11.06-09.46.29 created successfully
[root@dhcp47-141 ~]# gluster snapshot info snap_1_GMT-2020.11.06-09.46.29
Snapshot                  : snap_1_GMT-2020.11.06-09.46.29
Snap UUID                 : 04fd9c7b-d411-42f5-960a-fea17077ceba
Created                   : 2020-11-06 09:46:29
Snap Volumes:

	Snap Volume Name          : 531f608f147c45a09b13e67209a7d741
	Origin Volume name        : vol2
	Snaps taken for vol2      : 1
	Snaps available for vol2  : 255
	Status                    : Stopped
 
[root@dhcp47-141 ~]# gluster snapshot clone snap_clone_1 snap_1_GMT-2020.11.06-09.46.29
snapshot clone: failed: Snapshot snap_1_GMT-2020.11.06-09.46.29 is not activated
Snapshot command failed


Verified the fix in,
glusterfs-server-6.0-47.el8rhgs.x86_64
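
As an additional positive-path check under the fixed build, a minimal sketch assuming activation is the only missing precondition reported by the new message. The commands are standard gluster CLI; output is omitted since this run was not captured here:

gluster snapshot activate snap_1_GMT-2020.11.06-09.46.29
gluster snapshot clone snap_clone_1 snap_1_GMT-2020.11.06-09.46.29
# the clone is created as a regular volume named snap_clone_1
gluster volume info snap_clone_1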

Comment 10 errata-xmlrpc 2020-12-17 04:51:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603

