This bug was initially created as a copy of Bug #2104947

I am copying this bug because: a lot of automation runs fail because of this issue, and fixing it would help downstream automation runs.

Description of problem:
Wanted to add a monitor with address 10.1.xxx.xxx to a cluster that had 10.8.xxx.0/21 as its public_network. Changed the public network to 10.0.0.0/8 and updated the placement, but the mgr logs said:

>> Filtered out host depxxxx003.xxxx.rxxxxx.com: does not belong to mon public_network (10.0.0.0/8)

cephadm list-networks had listed the following network:

>> "10.1.xxx.0/23"

After updating mon public_network to (10.8.xxx.0/21,10.1.xxx.0/23) and reapplying the placement, the mon was added successfully.

This RFE is to make the orchestrator lenient enough to accept 10.0.0.0/8 as a valid public_network as-is, since the host's network 10.1.xxx.0/23 is contained within it.

Version-Release number of selected component (if applicable):
<latest>
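For illustration only, a minimal sketch of the difference between a strict network match and the lenient subnet-containment check this RFE asks for. This is not the actual cephadm/mgr code; the function names are hypothetical, and 10.1.0.0/23 stands in for the masked 10.1.xxx.0/23 from the report:

import ipaddress

def matches_strict(host_network: str, public_network: str) -> bool:
    # Behavior described in the report: the host is kept only if its
    # network string matches the configured public_network exactly.
    return host_network == public_network

def matches_lenient(host_network: str, public_network: str) -> bool:
    # Requested behavior: also keep the host when its network is a
    # subnet of the configured public_network, e.g. 10.1.0.0/23
    # inside 10.0.0.0/8.
    host = ipaddress.ip_network(host_network)
    public = ipaddress.ip_network(public_network)
    return host.version == public.version and host.subnet_of(public)

# With the strict check the host from the report is filtered out;
# with the lenient check it is accepted.
print(matches_strict("10.1.0.0/23", "10.0.0.0/8"))   # False
print(matches_lenient("10.1.0.0/23", "10.0.0.0/8"))  # True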
Missed the 5.3 z1 window. Moving to 6.1. Please advise if this is a problem.
(In reply to Scott Ostapovicz from comment #4)
> Missed the 5.3 z1 window. Moving to 6.1. Please advise if this is a
> problem.

There's already another BZ tracking this for 6 (see the linked BZ in https://bugzilla.redhat.com/show_bug.cgi?id=2145119#c0). This was just a copy to get this into RHCS 5. Moving to 5.3z2.
Yes, Adam. We can move this BZ to ON_QA.
Based on comment #10, comment #13, and comment #14, moving this BZ to the VERIFIED state, as this has already been tested with the RHCS 5.3z2 builds and the code fixes are already live.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3259