Bug 2109935
| Summary: | [RFE] [rbd-mirror] : mirror image promote : error message can be tuned when demotion is not completely propagated | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vasishta <vashastr> |
| Component: | RBD-Mirror | Assignee: | Ilya Dryomov <idryomov> |
| Status: | CLOSED ERRATA | QA Contact: | Vasishta <vashastr> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 5.2 | CC: | ceph-eng-bugs, cephqe-warriors, fkellehe, idryomov, kdreyer, ocs-bugs |
| Target Milestone: | --- | Keywords: | FutureFeature |
| Target Release: | 5.3 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | ceph-16.2.10-23.el9cp | Doc Type: | No Doc Update |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2023-01-11 17:40:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:0076
Description of problem:

Currently, an orderly failover of an image might fail with "image is still primary within a remote cluster" even though the mirror image status description reports:

>> description: remote image demoted

This can happen because propagation of the demotion is still in progress, so the error message could be tuned to convey that the demotion has not yet been propagated.

Version-Release number of selected component (if applicable):
<latest>

Steps to Reproduce:
1. Configure mirroring on an image between peer clusters.
2. While performing an orderly failover, promote the image on the secondary cluster right after the demote operation.

Actual results:

[ubuntu@magna031 ~]$ sudo rbd mirror image status test_mirror/set_2_image_20 --debug-rbd 0
set_2_image_20:
<... ..>
description: remote image demoted
<... ..>
description: remote image is not primary

[ubuntu@magna031 ~]$ sudo rbd mirror image promote test_mirror/set_2_image_20 --debug-rbd 0
2022-07-22T11:49:36.803+0000 7f63a7fff700 -1 librbd::mirror::PromoteRequest: 0x56162c7d0a70 handle_get_info: image is still primary within a remote cluster
rbd: error promoting image to primary
2022-07-22T11:49:36.803+0000 7f63bde42380 -1 librbd::api::Mirror: image_promote: failed to promote image

Expected results:
The error message should convey that the demotion has not yet been propagated.

Additional info:
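For context, the orderly failover flow being exercised here is: demote the image on the current primary cluster, wait for the demotion to propagate to the peer, then promote the image on the other cluster. The sketch below is a minimal illustration of that flow, not an official procedure: the pool/image name is taken from this report, the `--cluster` names (`site-a`, `site-b`) and the retry count/interval are assumptions, and the loop simply treats the "image is still primary within a remote cluster" error as transient while propagation completes.

```sh
# Minimal sketch of an orderly failover with a retry.
# Assumes peer clusters named "site-a" (current primary) and "site-b"
# (promotion target); adjust names and timings for your environment.
IMAGE=test_mirror/set_2_image_20

# Demote the image on the current primary cluster.
rbd --cluster site-a mirror image demote "$IMAGE"

# Promote on the peer, retrying because the promote can fail with
# "image is still primary within a remote cluster" until the demotion
# has propagated.
for attempt in 1 2 3 4 5; do
    rbd --cluster site-b mirror image status "$IMAGE"
    if rbd --cluster site-b mirror image promote "$IMAGE"; then
        echo "promoted on site-b"
        break
    fi
    sleep 10
done
```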