Bug 2145119 - [RFE] [orchestrator] : orchestrator can be lenient in matching subnets : says 10.1.xxx.xxx doesn't belong to 10.0.0.0/8
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.3z3
Assignee: Adam King
QA Contact: Manisha Saini
Docs Contact: lysanche
URL:
Whiteboard:
Depends On:
Blocks: 2203283
 
Reported: 2022-11-23 09:35 UTC by Vasishta
Modified: 2023-05-23 00:19 UTC (History)
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
.Cephadm checks whether an interface is within a network
Previously, if a host had an interface like 10.1.1.0/24 and the `public_network` was set to 10.0.0.0/8, `cephadm` would consider the host not to be on the public network, because the interface did not match the public network exactly. With this fix, `cephadm` properly checks whether an interface is within a network. Users can set a less specific `public_network` than the interfaces on the hosts, and `cephadm` does not filter out those hosts when deciding whether monitor daemons can be placed there.
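A minimal sketch of the containment check that doc text describes, using Python's standard ipaddress module; this only illustrates the idea and is not the actual cephadm code:

import ipaddress

public_network = ipaddress.ip_network("10.0.0.0/8")
interface = ipaddress.ip_network("10.1.1.0/24")  # example interface from the doc text

# Previous behaviour (roughly): the interface had to equal public_network exactly.
exact_match = interface == public_network        # False -> host filtered out

# Fixed behaviour (roughly): the interface only needs to lie within public_network.
contained = interface.subnet_of(public_network)  # True -> host eligible for a mon

print(exact_match, contained)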
Clone Of:
Environment:
Last Closed: 2023-05-23 00:19:10 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-5674 0 None None None 2022-11-23 10:24:08 UTC
Red Hat Product Errata RHBA-2023:3259 0 None None None 2023-05-23 00:19:46 UTC

Description Vasishta 2022-11-23 09:35:21 UTC
This bug was initially created as a copy of Bug #2104947

I am copying this bug because: 
A lot of automation runs fail because of this issue; fixing it would help downstream automation runs.


Description of problem:
Wanted to add a monitor with address 10.1.xxx.xxx to a cluster which had 10.8.xxx.0/21 as its public_network.

Changed public_network to 10.0.0.0/8 and updated the placement.
The mgr logs said:
>> Filtered out host depxxxx003.xxxx.rxxxxx.com: does not belong to mon public_network (10.0.0.0/8)

cephadm list-networks had listed the following as the host's network:
>>  "10.1.xxx.0/23"

Updated the mon public_network to (10.8.xxx.0/21,10.1.xxx.0/23) and reapplied the placement.

The mon was then added successfully.

This RFE is to make the orchestrator lenient enough to accept 10.0.0.0/8 as a valid public_network as-is.
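For illustration, a hedged sketch (not cephadm code) contrasting the explicit comma-separated public_network used as the workaround above with the broader 10.0.0.0/8 this RFE asks the orchestrator to accept; the concrete octets are made-up stand-ins for the "xxx" placeholders in this report:

import ipaddress

mon_addr = ipaddress.ip_address("10.1.2.10")   # stand-in for 10.1.xxx.xxx

explicit = "10.8.0.0/21,10.1.2.0/23"           # workaround: list both subnets explicitly
lenient = "10.0.0.0/8"                         # what this RFE wants honoured

def in_public_network(addr, public_network):
    """True if addr falls inside any subnet of a comma-separated network list."""
    return any(addr in ipaddress.ip_network(net.strip())
               for net in public_network.split(","))

print(in_public_network(mon_addr, explicit))  # True -- works today
print(in_public_network(mon_addr, lenient))   # True -- should also be accepted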

Version-Release number of selected component (if applicable):
<latest>

Comment 4 Scott Ostapovicz 2023-02-06 17:00:01 UTC
 Missed the 5.3 z1 window.  Moving to 6.1.  Please advise if this is a problem.

Comment 5 Adam King 2023-02-21 14:08:58 UTC
(In reply to Scott Ostapovicz from comment #4)
>  Missed the 5.3 z1 window.  Moving to 6.1.  Please advise if this is a
> problem.

There's already another BZ tracking this for RHCS 6 (see the linked BZ in https://bugzilla.redhat.com/show_bug.cgi?id=2145119#c0). This was just a copy to get the fix into RHCS 5. Moving to 5.3z2.

Comment 16 Manisha Saini 2023-05-04 19:20:17 UTC
Yes, Adam. We can move this BZ to ON_QA.

Comment 20 Manisha Saini 2023-05-12 20:31:28 UTC
Based on comment #10, comment #13, and comment #14, moving this BZ to the verified state, as this was already tested with RHCS 5.3z2 builds and the code fixes are already live for the same.

Comment 25 errata-xmlrpc 2023-05-23 00:19:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3259

