Bug 2357127 - Renamed group reverts to old name in group mirroring
Summary: Renamed group reverts to old name in group mirroring
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD-Mirror
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 8.1
Assignee: Ilya Dryomov
QA Contact: Chaitanya
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-04-03 07:12 UTC by Chaitanya
Modified: 2025-06-26 12:22 UTC
CC: 3 users

Fixed In Version: ceph-19.2.1-74.el9cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2025-06-26 12:22:25 UTC
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-11046 — last updated 2025-04-03 07:13:09 UTC
Red Hat Product Errata RHSA-2025:9775 — last updated 2025-06-26 12:22:28 UTC

Description Chaitanya 2025-04-03 07:12:48 UTC
Description of problem:

When a group is renamed, the new name is reflected in the group list, but after a few seconds the group name reverts to the old name.

Initial group mirroring status of cluster:

[root@ceph-rbd1-cd-cg-x49tk0-node2 ~]# rbd mirror group status  p2/g2
g2:
  global_id:   e0e4a535-7635-4747-982b-3b1c82db7642
  state:       up+stopped
  description: 
  service:     ceph-rbd1-cd-cg-x49tk0-node5.qphwpz on ceph-rbd1-cd-cg-x49tk0-node5
  last_update: 2025-04-03 06:56:38
  images:
    image:       8/a9fe0700-a82d-4293-a75c-57d912b24143
    state:       up+stopped
    description: local image is primary
  peer_sites:
    name: ceph-rbd2
    state: up+replaying
    description: replaying
    last_update: 2025-04-03 06:56:39
    images:
      image:       8/a9fe0700-a82d-4293-a75c-57d912b24143
      state:       up+replaying
      description: replaying, {"bytes_per_second":0.0,"bytes_per_snapshot":0.0,"last_snapshot_bytes":0,"last_snapshot_sync_seconds":0,"remote_snapshot_timestamp":1743663396,"replay_state":"idle"}
  snapshots:
    .mirror.primary.e0e4a535-7635-4747-982b-3b1c82db7642.643f219647bd


Renaming the group from 'g2' to 'g2_new':

[root@ceph-rbd1-cd-cg-x49tk0-node2 ~]# rbd group rename --pool p2 --group g2 --dest-pool p2 --dest-group g2_new



Mirroring status of 'g2_new' immediately after the above step:

[root@ceph-rbd1-cd-cg-x49tk0-node2 ~]# rbd mirror group status  p2/g2_new
g2_new:
  global_id:   e0e4a535-7635-4747-982b-3b1c82db7642
  state:       up+stopped
  description: 
  service:     ceph-rbd1-cd-cg-x49tk0-node5.qphwpz on ceph-rbd1-cd-cg-x49tk0-node5
  last_update: 2025-04-03 06:57:20
  images:
    image:       8/a9fe0700-a82d-4293-a75c-57d912b24143
    state:       up+stopped
    description: local image is primary
  peer_sites:
    name: ceph-rbd2
    state: up+stopped
    description: 
    last_update: 2025-04-03 06:57:31
    images:
      image:       8/a9fe0700-a82d-4293-a75c-57d912b24143
      state:       up+stopped
      description: 
  snapshots:
    .mirror.primary.e0e4a535-7635-4747-982b-3b1c82db7642.643f219647bd



[root@ceph-rbd1-cd-cg-x49tk0-node2 ~]# rbd mirror group status  p2/g2
rbd: failed to get mirror info for group: (2) No such file or directory

[root@ceph-rbd1-cd-cg-x49tk0-node2 ~]# rbd group ls p2
g2_new



After a minute or so, 'g2' is back while 'g2_new' is no longer listed:

[root@ceph-rbd1-cd-cg-x49tk0-node2 ~]# rbd group ls p2
g2

[root@ceph-rbd1-cd-cg-x49tk0-node2 ~]# rbd mirror group status  p2/g2
g2:
  global_id:   e0e4a535-7635-4747-982b-3b1c82db7642
  state:       up+error
  description: bootstrap failed
  service:     ceph-rbd1-cd-cg-x49tk0-node5.qphwpz on ceph-rbd1-cd-cg-x49tk0-node5
  last_update: 2025-04-03 06:57:50
  images:
  peer_sites:
    name: ceph-rbd2
    state: up+replaying
    description: replaying
    last_update: 2025-04-03 06:57:51
    images:
      image:       8/a9fe0700-a82d-4293-a75c-57d912b24143
      state:       up+replaying
      description: replaying, {"bytes_per_second":0.0,"bytes_per_snapshot":0.0,"last_snapshot_bytes":0,"last_snapshot_sync_seconds":0,"local_snapshot_timestamp":1743663396,"remote_snapshot_timestamp":1743663396,"replay_state":"idle"}
  snapshots:
    .mirror.primary.e0e4a535-7635-4747-982b-3b1c82db7642.643f219647bd
[root@ceph-rbd1-cd-cg-x49tk0-node2 ~]# rbd mirror group status  p2/g2_new
rbd: failed to get mirror info for group: (2) No such file or directory



Version-Release number of selected component (if applicable):
ceph version 19.2.1-57.el9cp (25ca432e5c2874ac833d0f13057a1b7d98913317) squid (stable)

How reproducible:
Frequently

Steps to Reproduce:
1. Rename a mirrored group with `rbd group rename`
2. Check `rbd group ls` and `rbd mirror group status` after about a minute
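
The steps above can be consolidated into a short shell sketch. This is a minimal reproduction outline, not an exact test script: it assumes a primary cluster with pool `p2` and an already-mirrored group `g2` (names taken from this report), and a 90-second wait standing in for "after a minute or so".

```shell
#!/usr/bin/env sh
# Reproduction sketch for the rename-revert issue.
# Assumes: pool p2, mirrored group g2, group mirroring already enabled
# between the two clusters (setup as described in this report).

# Rename the mirrored group.
rbd group rename --pool p2 --group g2 --dest-pool p2 --dest-group g2_new

# Immediately after the rename, the new name is visible.
rbd group ls p2                      # shows: g2_new
rbd mirror group status p2/g2_new

# Give the rbd-mirror daemon time to refresh its view of the group.
sleep 90

# With the bug present, the old name has come back.
rbd group ls p2                      # buggy: g2   (expected: g2_new)
rbd mirror group status p2/g2        # buggy: up+error, "bootstrap failed"
```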

Actual results:
The group name reverts to the old name, and the group's mirror status goes to up+error with description "bootstrap failed".


Expected results:
The group should continue to exist under its new name:

`rbd mirror group status p2/g2_new` should show the status,
and
`rbd group ls p2` should list 'g2_new' instead of 'g2'

Comment 5 errata-xmlrpc 2025-06-26 12:22:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775

