Bug 2361474 - Renamed mirror group on secondary cluster is not reflected on primary side and a subsequent planned failover results in up+error state
Summary: Renamed mirror group on secondary cluster is not reflected on primary side and a subsequent planned failover results in up+error state
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD-Mirror
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 9.0
Assignee: Vinay Bhaskar
QA Contact: Chaitanya
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-04-21 16:20 UTC by Chaitanya
Modified: 2026-01-29 06:55 UTC
CC List: 4 users

Fixed In Version: ceph-20.1.0-16
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2026-01-29 06:55:26 UTC
Embargoed:




Links
System ID                                 Last Updated
Red Hat Issue Tracker RHCEPH-11219        2025-04-21 16:21:53 UTC
Red Hat Product Errata RHSA-2026:1536     2026-01-29 06:55:29 UTC

Description Chaitanya 2025-04-21 16:20:07 UTC
Description of problem:
Renaming a mirrored group succeeds on the secondary side (it is unclear whether group renaming is supposed to be blocked on the secondary). The new name is reflected on the secondary cluster but not on the primary. A subsequent demote on the primary and promote on the secondary leaves the group in the up+error state (description: bootstrap failed).
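
In short, the failing sequence is the one below; these are the same commands shown with full output in the steps that follow (pool p1, group g1 renamed to g1_new):

  # On the secondary cluster: rename the mirrored group while it is replaying
  rbd group rename --pool p1 --group g1 --dest-pool p1 --dest-group g1_new

  # Planned failover: demote on the primary, then promote on the secondary
  rbd mirror group demote --pool p1 --group g1        # run on the primary cluster
  rbd mirror group promote --pool p1 --group g1_new   # run on the secondary cluster

  # Both sites then report up+error ("bootstrap failed")
  rbd mirror group status p1/g1_new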



Version-Release number of selected component (if applicable):

 19.2.1-138.el9cp


How reproducible:
Always

Steps to Reproduce:
 - Initial state of the cluster on secondary side:
   [root@ceph-rbd2-cd-cg126-dz8z92-node2 ~]#  rbd mirror group status p1/g1
g1:
  global_id:   a8a05ecc-38d3-4d3d-95d5-439cbe9c827d
  state:       up+replaying
  description: replaying, {"last_snapshot_bytes":0,"last_snapshot_complete_seconds":2,"local_snapshot_timestamp":1744984629,"remote_snapshot_timestamp":1744984629}
  service:     ceph-rbd2-cd-cg126-dz8z92-node5.wnxcfv on ceph-rbd2-cd-cg126-dz8z92-node5
  last_update: 2025-04-18 13:57:30
  images:
    image:       57/a1660cdd-f1ef-4c02-87c2-9de0148d8074
    state:       up+replaying
    description: replaying, {"bytes_per_second":0.0,"bytes_per_snapshot":0.0,"last_snapshot_bytes":0,"last_snapshot_sync_seconds":0,"local_snapshot_timestamp":1744984629,"remote_snapshot_timestamp":1744984629,"replay_state":"idle"}
  peer_sites:
    name: ceph-rbd1
    state: up+stopped
    description: 
    last_update: 2025-04-18 13:57:15
    images:
      image:       57/a1660cdd-f1ef-4c02-87c2-9de0148d8074
      state:       up+stopped
      description: local image is primary


- Rename the group on the secondary cluster and check the status on the secondary as well as the primary cluster. On the secondary cluster the new name is reflected; however, on the primary the group still has the old name.


[root@ceph-rbd2-cd-cg126-dz8z92-node2 ~]# rbd group rename --pool p1 --group g1 --dest-pool p1 --dest-group g1_new

[root@ceph-rbd2-cd-cg126-dz8z92-node2 ~]# rbd mirror group status p1/g1
rbd: failed to get mirror info for group: (2) No such file or directory
[root@ceph-rbd2-cd-cg126-dz8z92-node2 ~]# rbd mirror group status p1/g1_new
g1_new:
  global_id:   a8a05ecc-38d3-4d3d-95d5-439cbe9c827d
  state:       up+replaying
  description: replaying, {"last_snapshot_bytes":0,"last_snapshot_complete_seconds":2,"local_snapshot_timestamp":1744984629,"remote_snapshot_timestamp":1744984629}
  service:     ceph-rbd2-cd-cg126-dz8z92-node5.wnxcfv on ceph-rbd2-cd-cg126-dz8z92-node5
  last_update: 2025-04-18 14:01:00
  images:
    image:       57/a1660cdd-f1ef-4c02-87c2-9de0148d8074
    state:       up+replaying
    description: replaying, {"bytes_per_second":0.0,"bytes_per_snapshot":0.0,"last_snapshot_bytes":0,"last_snapshot_sync_seconds":0,"local_snapshot_timestamp":1744984629,"remote_snapshot_timestamp":1744984629,"replay_state":"idle"}
  peer_sites:
    name: ceph-rbd1
    state: up+stopped
    description: 
    last_update: 2025-04-18 14:00:43
    images:
      image:       57/a1660cdd-f1ef-4c02-87c2-9de0148d8074
      state:       up+stopped
      description: local image is primary


[root@ceph-rbd1-cd-cg126-dz8z92-node2 ~]# rbd mirror group status p1/g1_new
rbd: failed to get mirror info for group: (2) No such file or directory

[root@ceph-rbd1-cd-cg126-dz8z92-node2 ~]# rbd mirror group status p1/g1
g1:
  global_id:   a8a05ecc-38d3-4d3d-95d5-439cbe9c827d
  state:       up+stopped
  description: 
  service:     ceph-rbd1-cd-cg126-dz8z92-node5.ekkvnc on ceph-rbd1-cd-cg126-dz8z92-node5
  last_update: 2025-04-18 14:02:13
  images:
    image:       61/a1660cdd-f1ef-4c02-87c2-9de0148d8074
    state:       up+stopped
    description: local image is primary
  peer_sites:
    name: ceph-rbd2
    state: up+replaying
    description: replaying, {"last_snapshot_bytes":0,"last_snapshot_complete_seconds":2,"local_snapshot_timestamp":1744984629,"remote_snapshot_timestamp":1744984629}
    last_update: 2025-04-18 14:02:30
    images:
      image:       61/a1660cdd-f1ef-4c02-87c2-9de0148d8074
      state:       up+replaying
      description: replaying, {"bytes_per_second":0.0,"bytes_per_snapshot":0.0,"last_snapshot_bytes":0,"last_snapshot_sync_seconds":0,"local_snapshot_timestamp":1744984629,"remote_snapshot_timestamp":1744984629,"replay_state":"idle"}
  snapshots:
    .mirror.primary.a8a05ecc-38d3-4d3d-95d5-439cbe9c827d.1e6d23f9d8c48
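
Note that both sites still report the same global_id (a8a05ecc-38d3-4d3d-95d5-439cbe9c827d), so only the user-visible name has diverged. A quick way to see the divergence is to list the groups on each cluster; a sketch, assuming the standard rbd group listing command:

  # On the secondary cluster, only the new name is expected to be listed
  rbd group list --pool p1    # expected: g1_new

  # On the primary cluster, the group is still expected under the old name
  rbd group list --pool p1    # expected: g1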


- Now the group (g1) is demoted on the primary and the group (g1_new) is promoted on the secondary. This results in the up+error state.

[root@ceph-rbd1-cd-cg126-dz8z92-node2 ~]# rbd mirror group demote --pool p1 --group g1
Group demoted to non-primary

[root@ceph-rbd1-cd-cg126-dz8z92-node2 ~]# rbd mirror group status p1/g1
g1:
  global_id:   a8a05ecc-38d3-4d3d-95d5-439cbe9c827d
  state:       up+stopped
  description: 
  service:     ceph-rbd1-cd-cg126-dz8z92-node5.ekkvnc on ceph-rbd1-cd-cg126-dz8z92-node5
  last_update: 2025-04-18 14:07:13
  images:
    image:       61/a1660cdd-f1ef-4c02-87c2-9de0148d8074
    state:       up+stopped
    description: local image is primary
  peer_sites:
    name: ceph-rbd2
    state: up+unknown
    description: remote group is non-primary
    last_update: 2025-04-18 14:07:30
    images:


[root@ceph-rbd2-cd-cg126-dz8z92-node2 ~]# rbd mirror group promote --pool p1  --group g1_new
Group promoted to primary

[root@ceph-rbd2-cd-cg126-dz8z92-node2 ~]# rbd mirror group demote --pool p1 --group g1_new
Group demoted to non-primary

[root@ceph-rbd2-cd-cg126-dz8z92-node2 ~]# rbd mirror group status p1/g1_new
g1_new:
  global_id:   a8a05ecc-38d3-4d3d-95d5-439cbe9c827d
  state:       up+error
  description: bootstrap failed
  service:     ceph-rbd2-cd-cg126-dz8z92-node5.wnxcfv on ceph-rbd2-cd-cg126-dz8z92-node5
  last_update: 2025-04-18 14:12:00
  images:
  peer_sites:
    name: ceph-rbd1
    state: up+unknown
    description: remote group is non-primary
    last_update: 2025-04-18 14:11:43
    images:

[root@ceph-rbd1-cd-cg126-dz8z92-node2 ~]# rbd mirror group status p1/g1
g1:
  global_id:   a8a05ecc-38d3-4d3d-95d5-439cbe9c827d
  state:       up+error
  description: bootstrap failed
  service:     ceph-rbd1-cd-cg126-dz8z92-node5.ekkvnc on ceph-rbd1-cd-cg126-dz8z92-node5
  last_update: 2025-04-18 14:11:13
  images:
  peer_sites:
    name: ceph-rbd2
    state: up+stopped
    description: 
    last_update: 2025-04-18 14:11:00
    images:
      image:       61/a1660cdd-f1ef-4c02-87c2-9de0148d8074
      state:       up+stopped
      description: local image is primary



Actual results:
 The new group name is not reflected on the primary cluster, and after the failover operations both sites report an incorrect group status (up+error, "bootstrap failed").

Expected results:

 If group renaming is allowed on the secondary cluster, the new name should be reflected on the primary side, and the subsequent failover operations should report the correct status.
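
For comparison, a planned failover with consistent group names would be expected to leave the sites in the states shown at the top of this report (primary up+stopped, peer up+replaying) rather than up+error; a sketch of that sequence, using the same pool and group names:

  # On the current primary: demote the group
  rbd mirror group demote --pool p1 --group g1

  # On the peer, once the demotion has been picked up: promote the group
  rbd mirror group promote --pool p1 --group g1

  # Status on either site should settle back to up+stopped / up+replaying, not up+error
  rbd mirror group status p1/g1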

Additional info:

Comment 10 errata-xmlrpc 2026-01-29 06:55:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536

