Bug 1349955 - After demotion/promotion, the image is again syncing from the beginning
Summary: After demotion/promotion, the image is again syncing from the beginning
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RBD
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: 2.1
Assignee: Jason Dillaman
QA Contact: Rachana Patel
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1322504 1383917
 
Reported: 2016-06-24 16:01 UTC by Tanay Ganguly
Modified: 2017-07-30 15:33 UTC (History)
CC: 7 users

Fixed In Version: RHEL: ceph-10.2.3-2.el7cp Ubuntu: ceph_10.2.3-3redhat1xenial
Doc Type: Bug Fix
Doc Text:
"rbd-mirror" no longer synchronizes images from the beginning after their demotion and promotion

With RADOS Block Device (RBD) mirroring enabled, an image can be demoted to non-primary on one cluster and promoted to primary on a peer cluster. Previously, when this happened, the `rbd-mirror` daemon started to synchronize the newly demoted image with the newly promoted image from the beginning, even though the image was already successfully synchronized. This behavior has been fixed, and `rbd-mirror` no longer synchronizes images from the beginning after their demotion and promotion.
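The failover workflow the fix applies to can be sketched with the `rbd mirror image` subcommands. This is a hedged sketch, not taken from the report: the cluster names (`master`, `slave`) and the pool/image name `rbd/qemuimage1` are illustrative assumptions.

```shell
# On the cluster currently holding the primary image (assumed name "master"):
rbd --cluster master mirror image demote rbd/qemuimage1

# On the peer cluster (assumed name "slave"), once the demotion has propagated:
rbd --cluster slave mirror image promote rbd/qemuimage1
```

With the fix, the image on the newly demoted side resumes replication without a full resynchronization.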
Clone Of:
Environment:
Last Closed: 2016-11-22 19:27:25 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Ceph Project Bug Tracker 16473 None None None 2016-06-25 12:32:41 UTC
Red Hat Product Errata RHSA-2016:2815 normal SHIPPED_LIVE Moderate: Red Hat Ceph Storage security, bug fix, and enhancement update 2017-03-22 02:06:33 UTC

Description Tanay Ganguly 2016-06-24 16:01:18 UTC
Description of problem:
I demoted an image on the master node, then promoted the same image on the slave node.

Both operations completed successfully, but the image sync then restarted from the beginning.


Version-Release number of selected component (if applicable):
10.2.2-5.el7cp

How reproducible:


Steps to Reproduce:
1. Create an image on the master node with journaling enabled.
2. Write lots of data (in this case, a VM was created on top of the image).
3. Let the sync process complete (verified the image was properly synced on the slave node), and do not run any further I/O.
4. Demote the image on the master node (wait for the demotion to complete).
5. Promote the same image on the slave node.
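The reproduction steps above can be sketched as shell commands. Cluster names (`master`, `slave`), the pool, and the image name are illustrative assumptions, not taken from the report.

```shell
# Step 1: create an image with journaling enabled on the master
# (journaling, and exclusive-lock which it depends on, are required for RBD mirroring).
rbd --cluster master create rbd/qemuimage1 --size 10G \
    --image-feature exclusive-lock,journaling

# Steps 2-3: write data to the image, then poll the slave until it reports
# the image as fully replicated before proceeding.
rbd --cluster slave mirror image status rbd/qemuimage1

# Steps 4-5: demote on the master, then promote on the slave.
rbd --cluster master mirror image demote rbd/qemuimage1
rbd --cluster slave mirror image promote rbd/qemuimage1
```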

Actual results:
After the successful promotion of the image on the slave side, the image is again being synced from the master node, even though the images were already fully synced before the demotion.

qemuimage1:
  global_id:   8b312690-8707-4dc3-8113-15a554ff3a26
  state:       up+syncing
  description: bootstrapping, IMAGE_COPY/COPY_OBJECT 43%
  last_update: 2016-06-24 21:29:12
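The status output above is what the mirroring status query prints; a sketch of the command that produces it (the pool name `rbd` is an assumption, since the report shows only the image name):

```shell
rbd mirror image status rbd/qemuimage1
```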

Expected results:
No resynchronization should occur; the image was already fully synced before the demotion.

Additional info:

Comment 9 Jason Dillaman 2016-08-12 12:10:02 UTC
Upstream pull request: https://github.com/ceph/ceph/pull/10703

Comment 15 Rachana Patel 2016-10-27 12:08:25 UTC
Verified with 10.2.3-10.el7cp.x86_64.
Working as expected, hence moving to VERIFIED.

Comment 19 errata-xmlrpc 2016-11-22 19:27:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2815.html

