
Bug 1859226

Summary: [RFE] how to separate management from data traffic
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Tim Wilkinson <twilkins>
Component: Cephadm
Assignee: Juan Miguel Olmo <jolmomar>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: high
Docs Contact: Karen Norteman <knortema>
Priority: high
Version: 5.0
CC: jharriga, jolmomar, nojha, racpatel, sewagner, tserlin, vereddy, vumrao
Target Milestone: ---
Keywords: FutureFeature
Target Release: 5.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: rhceph:ceph-5.0-rhel-8-containers-candidate-54312-20210519174049
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-08-30 08:26:28 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Comment 14 Juan Miguel Olmo 2021-04-23 06:51:42 UTC
@Vikhyat: You are right. Implemented in https://github.com/ceph/ceph/pull/38911

Comment 15 Vasishta 2021-05-20 12:22:07 UTC
Bootstrapped using:
#cephadm -v --image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-54312-20210519174049 bootstrap --mon-ip 10.8.129.101 --cluster-network 172.20.20.0/24

Added OSDs
[ceph: root@pluto001 /]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...

Checked osd config
[ceph: root@pluto001 /]# ceph config show osd.1 cluster_network
172.20.20.0/24
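The expectation being verified here can be sketched with the standard-library `ipaddress` module: the mon IP used at bootstrap (10.8.129.101) belongs to the public network and should fall outside the cluster (replication) network, while the OSD addresses seen later in the netstat output should fall inside it. This is only an illustrative address check, not part of the verification commands above.

```python
import ipaddress

# Cluster network passed to `cephadm bootstrap --cluster-network` above.
cluster_net = ipaddress.ip_network("172.20.20.0/24")

# Mon IP from the bootstrap command; it sits on the public network.
mon_ip = ipaddress.ip_address("10.8.129.101")
# OSD listen address seen in the netstat output later in this comment.
osd_ip = ipaddress.ip_address("172.20.20.233")

print(mon_ip in cluster_net)  # False: public traffic stays off the cluster net
print(osd_ip in cluster_net)  # True: OSD replication binds to the cluster net
```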


Checked netstat
[ubuntu@pluto001 ~]$ sudo netstat -lntp|grep osd
tcp        0      0 172.20.20.233:6821      0.0.0.0:*               LISTEN      424600/ceph-osd     
tcp        0      0 0.0.0.0:6822            0.0.0.0:*               LISTEN      424600/ceph-osd     
tcp        0      0 0.0.0.0:6823            0.0.0.0:*               LISTEN      424600/ceph-osd     
tcp        0      0 172.20.20.233:6824      0.0.0.0:*               LISTEN      424600/ceph-osd     
tcp        0      0 172.20.20.233:6825      0.0.0.0:*               LISTEN      424600/ceph-osd     
...
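The netstat output above shows the expected split: some OSD sockets bind specifically to the cluster-network address (172.20.20.233), while others listen on all interfaces (0.0.0.0). A small sketch of that check, using the listen-address column from the sample above (hypothetical helper, standard-library only):

```python
import ipaddress

cluster_net = ipaddress.ip_network("172.20.20.0/24")

# Listen addresses copied from the netstat sample above.
listen_addrs = [
    "172.20.20.233:6821",
    "0.0.0.0:6822",
    "0.0.0.0:6823",
    "172.20.20.233:6824",
    "172.20.20.233:6825",
]

def on_cluster_net(addr_port: str) -> bool:
    # Split off the port and test the host part against the cluster CIDR.
    host, _, _port = addr_port.rpartition(":")
    return ipaddress.ip_address(host) in cluster_net

bound = [a for a in listen_addrs if on_cluster_net(a)]
print(len(bound))  # 3: the sockets bound specifically to the cluster network
```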

Moving to VERIFIED state.
Please let me know if I have missed anything.

Regards,
Vasishta Shastry
QE, Ceph

Comment 19 errata-xmlrpc 2021-08-30 08:26:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294