@Vikhyat: You are right. Implemented in https://github.com/ceph/ceph/pull/38911
Bootstrapped using:

# cephadm -v --image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-54312-20210519174049 bootstrap --mon-ip 10.8.129.101 --cluster-network 172.20.20.0/24

Added OSDs:

[ceph: root@pluto001 /]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...

Checked the OSD config:

[ceph: root@pluto001 /]# ceph config show osd.1 cluster_network
172.20.20.0/24

Checked netstat:

[ubuntu@pluto001 ~]$ sudo netstat -lntp|grep osd
tcp        0      0 172.20.20.233:6821      0.0.0.0:*               LISTEN      424600/ceph-osd
tcp        0      0 0.0.0.0:6822            0.0.0.0:*               LISTEN      424600/ceph-osd
tcp        0      0 0.0.0.0:6823            0.0.0.0:*               LISTEN      424600/ceph-osd
tcp        0      0 172.20.20.233:6824      0.0.0.0:*               LISTEN      424600/ceph-osd
tcp        0      0 172.20.20.233:6825      0.0.0.0:*               LISTEN      424600/ceph-osd
...

Moving to VERIFIED state. Please let me know if I have missed anything.

Regards,
Vasisha Shastry
QE, Ceph
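For anyone re-running this verification, two additional read-only checks (a sketch using standard Ceph CLI commands; osd.1 and the addresses are this cluster's, adjust as needed) can confirm the option propagated and that the OSD bound its back side to the cluster network:

[ceph: root@pluto001 /]# ceph config get osd cluster_network
172.20.20.0/24
[ceph: root@pluto001 /]# ceph osd metadata 1 | grep -E '"(front|back)_addr"'

The first reads the option back from the mon config store and should print the network passed at bootstrap; the second prints the OSD's reported front_addr/back_addr, where back_addr should fall within 172.20.20.0/24.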
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294