Bug 2248176 - [CephFS-Mirror] - snapshot mirror peer-list shows only 1 Mon IP instead of all the Mon Host IP's
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 7.0z2
Assignee: Jos Collin
QA Contact: Hemanth Kumar
Docs Contact: Disha Walvekar
URL:
Whiteboard:
Duplicates: 2248175 (view as bug list)
Depends On:
Blocks: 2270485
 
Reported: 2023-11-06 17:52 UTC by Hemanth Kumar
Modified: 2024-05-09 17:12 UTC (History)
9 users

Fixed In Version: ceph-18.2.0-178.el9cp
Doc Type: Bug Fix
Doc Text:
Previously, the snapshot mirror peer-list showed more information than just the peer list. This output caused confusion, because it displayed only one MON IP when all the MON host IPs were expected. With this fix, mon_host is removed from the fs snapshot mirror peer_list command output, and the target mon_host details are removed from the peer list and mirror daemon status.
Clone Of:
: 2277143 2277144 (view as bug list)
Environment:
Last Closed: 2024-05-07 12:10:08 UTC
Embargoed:
amk: needinfo+
hyelloji: needinfo-
hyelloji: needinfo-




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 63614 0 None None None 2023-11-23 05:37:14 UTC
Red Hat Issue Tracker RHCEPH-7857 0 None None None 2023-11-06 17:53:58 UTC
Red Hat Product Errata RHBA-2024:2743 0 None None None 2024-05-07 12:10:12 UTC

Description Hemanth Kumar 2023-11-06 17:52:18 UTC
Description of problem:
-----------------------

The secondary cluster is configured with 3 MONs. When a peer connection is established, the peer list/snapshot mirror status always displays only one MON host instead of the details of all MONs.

[root@ceph1-hk-m-4p31kb-node7 ~]# ceph fs snapshot mirror peer_list cephfs
{"c4f15c31-8d2b-445e-8fa7-3137a1b638ca": {"client_name": "client.mirror_remote", "site_name": "remote_site", "fs_name": "cephfs", "mon_host": "[v2:10.0.97.224:3300,v1:10.0.97.224:6789]"}}
[root@ceph1-hk-m-4p31kb-node7 ~]#

The secondary cluster is configured with 3 MON hosts:
[root@ceph2-hk-m-4p31kb-node6 ~]# ceph mon dump
epoch 3
fsid b9052a36-7c88-11ee-aee7-fa163e9f50e7
last_changed 2023-11-06T09:45:25.866973+0000
created 2023-11-06T09:42:13.026736+0000
min_mon_release 18 (reef)
election_strategy: 1
0: [v2:10.0.97.224:3300/0,v1:10.0.97.224:6789/0] mon.ceph2-hk-m-4p31kb-node1-installer
1: [v2:10.0.97.14:3300/0,v1:10.0.97.14:6789/0] mon.ceph2-hk-m-4p31kb-node3
2: [v2:10.0.99.147:3300/0,v1:10.0.99.147:6789/0] mon.ceph2-hk-m-4p31kb-node2
dumped monmap epoch 3
[root@ceph2-hk-m-4p31kb-node6 ~]#
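The mismatch above can be checked programmatically. A minimal sketch (using the peer_list JSON captured in this report; the helper name is illustrative, not part of any Ceph API) parses the mon_host field of each peer and counts the distinct MON IPs it contains:

```python
import json
import re

def mon_addresses(mon_host):
    """Extract the unique MON IPs from a mon_host string such as
    '[v2:10.0.97.224:3300,v1:10.0.97.224:6789]'."""
    return sorted({ip for ip in re.findall(r'v[12]:([\d.]+):\d+', mon_host)})

# Output of `ceph fs snapshot mirror peer_list cephfs` as shown above
peer_list = json.loads(
    '{"c4f15c31-8d2b-445e-8fa7-3137a1b638ca": {"client_name": '
    '"client.mirror_remote", "site_name": "remote_site", '
    '"fs_name": "cephfs", "mon_host": '
    '"[v2:10.0.97.224:3300,v1:10.0.97.224:6789]"}}'
)
for peer in peer_list.values():
    # Only one MON IP appears, although the secondary cluster has three
    print(mon_addresses(peer["mon_host"]))
```

Comparing this result against the three entries in `ceph mon dump` makes the discrepancy explicit.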


Version-Release number of selected component (if applicable):
---------
ceph version 18.2.0-113.el9cp

How reproducible:
----------------
Always


Actual results:
--------------
ceph fs snapshot mirror peer-list displays only 1 MON host instead of all MON hosts.

Expected results:
----------------
ceph fs snapshot mirror peer-list must display the details of all MON hosts of the secondary cluster.

Comment 1 Hemanth Kumar 2023-11-07 05:30:55 UTC
*** Bug 2248175 has been marked as a duplicate of this bug. ***

Comment 6 Amarnath 2023-11-09 06:23:45 UTC
Hi Venky,


Source cluster : 
[root@ceph1-amk-fs-tc-nmj42e-node7 ~]# ceph fs snapshot mirror peer_list cephfs
{"88d9f42f-a22d-4e4f-bb01-e2ae11c248a0": {"client_name": "client.mirror_remote_caps", "site_name": "remote_site_caps", "fs_name": "cephfs", "mon_host": "[v2:10.0.209.6:3300,v1:10.0.209.6:6789]"}}


Target cluster:
[root@ceph2-amk-fs-tc-nmj42e-node6 ~]# ceph-conf --show-config mon | grep mon_host
mon_host = [v2:10.0.209.6:3300/0,v1:10.0.209.6:6789/0] [v2:10.0.210.190:3300/0,v1:10.0.210.190:6789/0] [v2:10.0.206.59:3300/0,v1:10.0.206.59:6789/0]

The global conf also has 3 MON IPs listed. Is there any other place I need to check?


Regards,
Amarnath
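As a sanity check on the target side, the mon_host string from `ceph-conf` quoted above can be split into its bracketed address groups, one per monitor, confirming that the configuration itself does list all three MONs (a small sketch over the string captured in this comment):

```python
import re

# mon_host value from `ceph-conf --show-config mon` on the target cluster
conf_mon_host = (
    "[v2:10.0.209.6:3300/0,v1:10.0.209.6:6789/0] "
    "[v2:10.0.210.190:3300/0,v1:10.0.210.190:6789/0] "
    "[v2:10.0.206.59:3300/0,v1:10.0.206.59:6789/0]"
)
# Each bracketed group is one monitor's address vector
groups = re.findall(r'\[[^\]]+\]', conf_mon_host)
print(len(groups))  # 3
```

So the single-IP peer_list output is not explained by the target configuration; the truncation happens on the mirror/peer side.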

Comment 17 Amarnath 2024-04-16 05:48:56 UTC
Hi Venky,

Do we require doc text for this BZ?

Regards,
Amarnath

Comment 21 errata-xmlrpc 2024-05-07 12:10:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:2743

