Bug 1842456
| Summary: | [Docs] W/A of bz #1840539 to be added to the node replacement procedure | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Container Storage | Reporter: | Pratik Surve <prsurve> |
| Component: | documentation | Assignee: | Erin Donnelly <edonnell> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Pratik Surve <prsurve> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.4 | CC: | asriram, bkunal, ebenahar, jarrpa, madam, nberry, ocs-bugs, tnielsen |
| Target Milestone: | --- | | |
| Target Release: | OCS 4.4.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
Doc Text:

.Node replacement no longer leads to `Ceph HEALTH_WARN` state
Previously, after node replacement, the Ceph CRUSH map tree still contained the stale hostname entry of the removed node under its original rack. If a node with the same old hostname was later added back to the cluster while a node in a different rack was being replaced, the `ocs-operator` assigned it a new rack label, but the node was inserted at its old location in the CRUSH map, leaving the cluster in an indefinite `Ceph HEALTH_WARN` state. With this release, the bug has been fixed and node replacement behaves as expected.
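The workaround referenced in the summary (for releases still affected by bz #1840539) amounts to removing the stale host bucket from the CRUSH map by hand. A minimal sketch, assuming access to the Ceph toolbox and that `<old-hostname>` is the placeholder for the removed node's stale entry — this is an illustrative command sequence, not the official procedure text:

```
# Inspect the CRUSH hierarchy and locate the stale host bucket
# left behind under the old rack after the node was removed.
ceph osd crush tree

# Remove the stale (empty) host bucket so a re-added node with the
# same hostname is placed under its new rack label instead.
ceph osd crush rm <old-hostname>

# Confirm the cluster returns to HEALTH_OK once the map settles.
ceph health
```

The `ceph osd crush rm` call only succeeds on an empty bucket, so the stale host entry must no longer contain OSDs before it can be removed.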
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-06-10 10:02:25 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1840539, 1848455 | | |
| Bug Blocks: | 1826040, 1826482, 1859307 | | |
Comment 10
Travis Nielsen
2020-06-02 16:46:54 UTC