Bug 1526822

Summary: [RFE] Allow snapshots with geo-replication
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: snapshot
Version: unspecified
Reporter: emahoney
Assignee: Patric Uebele <puebele>
QA Contact: Rahul Hinduja <rhinduja>
CC: adeshpan, amukherj, apaladug, bkunal, dmoessne, fshaikh, hgomes, puebele, rhs-bugs, rkavunga, skumar, storage-qa-internal, sunkumar
Status: CLOSED DEFERRED
Severity: low
Priority: low
Keywords: EasyFix, FutureFeature, Triaged
Hardware: Unspecified
OS: Unspecified
Whiteboard: Improvement
Type: Bug
Last Closed: 2019-09-23 06:32:37 UTC
Bug Blocks: 1481177

Description emahoney 2017-12-17 18:11:15 UTC
Description of problem: A customer has requested a way to geo-replicate bricks during snapshot operations, similar to how KVM performs live migration of VMs.

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Geo-replication.html

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Snapshots.html

The business need here is to allow geo-replication while also allowing backups (snapshots) of the CNS bricks in an OCP environment.

Comment 3 Sunny Kumar 2017-12-18 12:23:13 UTC
We discussed this RFE on the mailing list, and the following is the outcome:

Q- What are the exact technical issues with running scheduled snapshots on geo-replicated volumes?

A- The limitation exists to keep the master and slave volumes in the same state when a snapshot is restored. For example, suppose a snapshot is taken only on the master volume, without a corresponding snapshot of the slave volume. If the master volume is later restored to that old snapshot, the slave volume will be ahead of the master volume, and it is not possible to bring the slave back to the same state as the restored volume.

Conversely, if a slave snapshot is taken without a master snapshot and the slave volume is later restored, the geo-rep session will have no clue that the slave volume was restored and will not sync any old files.


-- So, we are planning to introduce a snapshot configuration option that allows taking a snapshot of a volume (master or slave) even while a geo-rep session is active.

-- Using the above option is advisable only if you do not plan to restore the volume (master or slave) in the future.
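To make the restore problem above concrete, here is a minimal standalone sketch (illustrative only, not Gluster code; integer "versions" stand in for volume contents) of the divergence that occurs when only the master is restored:

#include <stdio.h>

/* Toy model of the restore problem: master and slave start out in sync,
 * and only the master has a snapshot. */
int
main(void)
{
    int master = 1, slave = 1;   /* both volumes at version 1 */
    int master_snap = master;    /* snapshot taken on the master only */

    master = 2;                  /* new writes land on the master ... */
    slave = 2;                   /* ... and geo-rep syncs them over   */

    master = master_snap;        /* master restored to the old snapshot */

    /* The slave (version 2) is now ahead of the master (version 1),
     * and geo-rep has no mechanism to roll the slave back. */
    printf("master=%d slave=%d\n", master, slave);
    return 0;
}

The same asymmetry applies in reverse when only the slave is restored.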

Comment 5 Pan Ousley 2018-02-26 13:32:25 UTC
*** Bug 1512700 has been marked as a duplicate of this bug. ***

Comment 12 Mohammed Rafi KC 2018-11-19 04:44:08 UTC
If the use case is to take a snapshot for use as a backup, then we can introduce a flag that overrides the geo-replication check.

Steps to fix:

1) Pass a flag from the CLI to glusterd to allow snapshots during geo-replication.

2) Bypass the geo-replication session check (glusterd_snapshot_create_prevalidate) based on that flag.

3) Maybe store the information in the snapinfo and, during restore, throw a warning (see the sketch below).
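A minimal standalone sketch of these steps, assuming a hypothetical allow_georep_snapshot flag; the real change would live in glusterd_snapshot_create_prevalidate() and use glusterd's internal dict_t plumbing, which is simplified to plain structs here:

#include <stdio.h>
#include <stdbool.h>

/* Standalone model of the proposed change; the types and names below
 * are simplified placeholders, not the actual glusterd structures. */

struct volinfo {
    const char *name;
    bool        georep_active;      /* geo-replication session configured? */
};

struct snap_req {
    bool allow_georep_snapshot;     /* hypothetical flag passed from the CLI */
};

/* Returns 0 when the snapshot may proceed, -1 otherwise. */
static int
snapshot_prevalidate(const struct volinfo *vol, const struct snap_req *req,
                     bool *warn_on_restore)
{
    *warn_on_restore = false;

    /* Step 2: the existing geo-rep check, now bypassable via the flag. */
    if (vol->georep_active && !req->allow_georep_snapshot) {
        fprintf(stderr, "snapshot of %s rejected: geo-replication active\n",
                vol->name);
        return -1;
    }
    if (vol->georep_active) {
        /* Step 3: record that the check was bypassed so a later restore
         * can emit a warning. */
        *warn_on_restore = true;
    }
    return 0;
}

int
main(void)
{
    struct volinfo  vol   = { .name = "master-vol", .georep_active = true };
    struct snap_req deny  = { .allow_georep_snapshot = false };
    struct snap_req force = { .allow_georep_snapshot = true };
    bool warn = false;
    int  rc;

    rc = snapshot_prevalidate(&vol, &deny, &warn);   /* rejected: rc == -1 */
    printf("without flag: rc=%d\n", rc);

    rc = snapshot_prevalidate(&vol, &force, &warn);  /* allowed: rc == 0 */
    printf("with flag:    rc=%d warn_on_restore=%d\n", rc, warn);
    return 0;
}

Running it shows the snapshot rejected without the flag and allowed with it; the warn_on_restore output corresponds to the restore-time warning proposed in step 3.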

Comment 19 Red Hat Bugzilla 2023-09-14 04:14:27 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days