Bug 1859226 - [RFE] how to separate management from data traffic
Summary: [RFE] how to separate management from data traffic
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Juan Miguel Olmo
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-07-21 13:42 UTC by Tim Wilkinson
Modified: 2021-08-30 08:26 UTC
CC: 8 users

Fixed In Version: rhceph:ceph-5.0-rhel-8-containers-candidate-54312-20210519174049
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:26:28 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph pull 38911 0 None closed cephadm: Add cluster network to bootstrap 2021-04-23 06:51:39 UTC
Red Hat Issue Tracker RHCEPH-1074 0 None None None 2021-08-27 05:46:54 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:26:45 UTC

Comment 14 Juan Miguel Olmo 2021-04-23 06:51:42 UTC
@Vikhyat: You are right. Implemented in https://github.com/ceph/ceph/pull/38911
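For reference, the bootstrap flag added by that PR is used as follows (a minimal sketch; the monitor IP and cluster CIDR below are taken from the verification in comment 15 and would differ in other environments):

```shell
# Sketch: separate management from data traffic at bootstrap time.
# --mon-ip lives on the management (public) network; replication/data
# traffic is pinned to the CIDR passed to --cluster-network.
cephadm bootstrap \
    --mon-ip 10.8.129.101 \
    --cluster-network 172.20.20.0/24
```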

Comment 15 Vasishta 2021-05-20 12:22:07 UTC
Bootstrapped using:
#cephadm -v --image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-54312-20210519174049 bootstrap --mon-ip 10.8.129.101 --cluster-network 172.20.20.0/24

Added OSDs
[ceph: root@pluto001 /]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...

Checked osd config
[ceph: root@pluto001 /]# ceph config show osd.1 cluster_network
172.20.20.0/24
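On a cluster that was bootstrapped without the flag, the same setting can be applied afterwards through the config store (a sketch using the standard `ceph config` CLI; note that OSDs only start using a changed cluster_network after a restart):

```shell
# Set the cluster network for all daemons, then read it back for an OSD.
ceph config set global cluster_network 172.20.20.0/24
ceph config get osd cluster_network
```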


Checked netstat
[ubuntu@pluto001 ~]$ sudo netstat -lntp|grep osd
tcp        0      0 172.20.20.233:6821      0.0.0.0:*               LISTEN      424600/ceph-osd     
tcp        0      0 0.0.0.0:6822            0.0.0.0:*               LISTEN      424600/ceph-osd     
tcp        0      0 0.0.0.0:6823            0.0.0.0:*               LISTEN      424600/ceph-osd     
tcp        0      0 172.20.20.233:6824      0.0.0.0:*               LISTEN      424600/ceph-osd     
tcp        0      0 172.20.20.233:6825      0.0.0.0:*               LISTEN      424600/ceph-osd     
...
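The netstat output above shows each OSD binding some listeners on the cluster network (172.20.20.x) and others on the public side, which is what a separated setup should look like. A small sketch of that check, run here against the captured output rather than a live node (on a node you would pipe `netstat -lntp | grep osd` instead):

```shell
# Count OSD listeners bound to the cluster network prefix.
# Sample lines are copied from the netstat output above.
CLUSTER_NET_PREFIX="172.20.20."
netstat_output='tcp 0 0 172.20.20.233:6821 0.0.0.0:* LISTEN 424600/ceph-osd
tcp 0 0 0.0.0.0:6822 0.0.0.0:* LISTEN 424600/ceph-osd
tcp 0 0 172.20.20.233:6824 0.0.0.0:* LISTEN 424600/ceph-osd'

# Field 4 is the local address; count lines where it starts with the prefix.
cluster_listeners=$(printf '%s\n' "$netstat_output" |
    awk -v p="$CLUSTER_NET_PREFIX" 'index($4, p) == 1 { n++ } END { print n+0 }')
echo "listeners on cluster network: $cluster_listeners"
```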

Moving to VERIFIED state.
Please let me know if I have missed anything.

Regards,
Vasishta Shastry
QE, Ceph

Comment 19 errata-xmlrpc 2021-08-30 08:26:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

