Bug 1428634

Summary: [RFE] [rbd CLI]: ability to mass promote / demote all mirrored images within a pool
Product: Red Hat Ceph Storage
Reporter: Jason Dillaman <jdillama>
Component: RBD
Assignee: Jason Dillaman <jdillama>
Status: CLOSED ERRATA
QA Contact: Parikshith <pbyregow>
Severity: medium
Docs Contact: Bara Ancincova <bancinco>
Priority: high
Version: 2.2
CC: ceph-eng-bugs, flucifre, hnallurv, jdillama
Target Milestone: rc
Keywords: FutureFeature
Target Release: 3.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: RHEL: ceph-12.1.2-1.el7cp Ubuntu: ceph_12.1.2-2redhat1xenial
Doc Type: Enhancement
Doc Text:
.Promoting and demoting all images in a pool at once
You can now promote or demote all images in a pool at the same time by using the following commands:
----
rbd mirror pool promote <pool>
rbd mirror pool demote <pool>
----
This is especially useful in the event of a failover, when all non-primary images must be promoted to primary ones.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-12-05 23:32:37 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1494421

Description Jason Dillaman 2017-03-03 00:48:31 UTC
Description of problem:
In the event of a failover, non-primary images need to be promoted to primary. With OpenStack Cinder, this is currently done by iterating through each image in a pool. Could we have a single pool-level call that promotes all images of a pool to primary, possibly in parallel? The main goal is a process that is faster than iterating through each image individually.

Version-Release number of selected component (if applicable):

Comment 4 Jason Dillaman 2017-09-15 11:34:27 UTC
Use the new rbd CLI commands "rbd mirror pool promote <pool>" and "rbd mirror pool demote <pool>".
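A rough sketch of how these commands fit into a failover between two mirrored sites (the pool name "data" is a placeholder; exact behavior depends on the cluster and mirroring configuration):

----
# On the original primary site, if it is still reachable,
# demote every primary image in the pool in one call:
rbd mirror pool demote data

# On the secondary site, promote all non-primary images
# in the pool at once instead of looping over each image:
rbd mirror pool promote data

# If the original site is down and an orderly demotion is not
# possible, a forced promotion may be required (this can lead
# to split-brain and later requires resyncing the images):
rbd mirror pool promote --force data
----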

Comment 11 errata-xmlrpc 2017-12-05 23:32:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.