Bug 2145119
| Summary: | [RFE] [orchestrator] : orchestrator can be lenient in matching subnets : says 10.1.xxx.xxx doesn't belong to 10.0.0.0/8 | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vasishta <vashastr> |
| Component: | Cephadm | Assignee: | Adam King <adking> |
| Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
| Severity: | medium | Docs Contact: | lysanche |
| Priority: | unspecified | ||
| Version: | 5.3 | CC: | adking, cephqe-warriors, kdreyer, msaini, rmandyam, sostapov, tserlin |
| Target Milestone: | --- | Keywords: | FutureFeature, TestOnly |
| Target Release: | 5.3z3 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | Bug Fix | |
| Doc Text: |
.Cephadm checks the interface within a network
Previously, if a host had an interface like 10.1.1.0/24 and the `public_network` was set to 10.0.0.0/8, `cephadm` would consider the host not to be on the public network, because the interface did not match the public network exactly.
With this fix, `cephadm` properly checks whether an interface is within a network. Users can set a less specific `public_network` than the interface on the hosts, and `cephadm` does not filter out those hosts when deciding whether the monitor daemons can be placed there.
|
Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2023-05-23 00:19:10 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 2203283 | ||
|
Description
Vasishta
2022-11-23 09:35:21 UTC
Missed the 5.3 z1 window. Moving to 6.1. Please advise if this is a problem.

(In reply to Scott Ostapovicz from comment #4)
> Missed the 5.3 z1 window. Moving to 6.1. Please advise if this is a problem.

There is already another BZ tracking this for RHCS 6 (see the linked BZ in https://bugzilla.redhat.com/show_bug.cgi?id=2145119#c0). This one was just a copy to get the fix into RHCS 5. Moving to 5.3z2.

Yes Adam. We can move this BZ to ON_QA.

Based on comment #10, comment #13, and comment #14, moving this BZ to the verified state, as this was already tested with the RHCS 5.3z2 builds and the code fixes are already live for the same.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3259
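The containment check described in the Doc Text above can be sketched with Python's standard `ipaddress` module (a minimal illustration of the idea, not cephadm's actual implementation; the function name `host_on_network` is hypothetical):

```python
import ipaddress


def host_on_network(interface_cidr: str, public_network: str) -> bool:
    """Return True if the interface's network lies within public_network.

    The old behavior was effectively an exact-match comparison, which
    rejected 10.1.1.0/24 against a public_network of 10.0.0.0/8 even
    though the interface is contained in that network. The fixed
    behavior is a subnet-containment check.
    """
    iface = ipaddress.ip_network(interface_cidr, strict=False)
    public = ipaddress.ip_network(public_network, strict=False)
    return iface.subnet_of(public)


# Exact match would say these networks differ; containment says True:
print(host_on_network("10.1.1.0/24", "10.0.0.0/8"))      # True
# An interface genuinely outside the public network is still rejected:
print(host_on_network("192.168.1.0/24", "10.0.0.0/8"))   # False
```

With a check like this, a host whose interface is 10.1.1.0/24 is no longer filtered out when the operator sets the broader `public_network` of 10.0.0.0/8, so monitor daemons can be placed on it.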