Bug 2042602
| Summary: | [RHCS 5] Performing a `ceph orch restart mgr` results in an endless restart loop | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Mustafa Aydın <maydin> |
| Component: | Cephadm | Assignee: | Redouane Kachach Elhichou <rkachach> |
| Status: | CLOSED ERRATA | QA Contact: | Manasa <mgowri> |
| Severity: | medium | Docs Contact: | Akash Raj <akraj> |
| Priority: | unspecified | | |
| Version: | 5.0 | CC: | adking, akraj, asriram, gjose, kdreyer, rkachach, vereddy |
| Target Milestone: | --- | Keywords: | Rebase, Regression |
| Target Release: | 5.2 | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.8-2.el8cp | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-08-09 17:37:27 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2102272 | | |

Doc Text
The `ceph orch redeploy mgr` command redeploys the active Ceph Manager daemon last
Previously, the `ceph orch redeploy mgr` command caused the Ceph Manager daemons to continually redeploy themselves without clearing the scheduled redeploy action, which resulted in the Ceph Manager daemons flapping endlessly.
With this release, the redeployment order has been adjusted so that the active Ceph Manager daemon is always redeployed last, and the `ceph orch redeploy mgr` command now redeploys each Ceph Manager daemon only once.
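
The Doc Text above describes the fix as a change in ordering: the standby Ceph Manager daemons are redeployed first and the active Manager daemon last, presumably so that failing over the active Manager does not interrupt the orchestrator before the scheduled redeploy action is cleared. The snippet below is a minimal sketch of that ordering idea only; it is not the cephadm code from the linked PR, and the daemon names and the `redeploy_order` helper are hypothetical.

```python
# Minimal sketch of the "active mgr last" ordering described in the Doc Text.
# Illustrative only, not the actual cephadm implementation; the daemon names
# and the `redeploy_order` helper are made up for the example.

from typing import List


def redeploy_order(mgr_daemons: List[str], active_mgr: str) -> List[str]:
    """Return mgr daemons ordered so the active mgr is redeployed last."""
    standbys = [d for d in mgr_daemons if d != active_mgr]
    return standbys + ([active_mgr] if active_mgr in mgr_daemons else [])


if __name__ == "__main__":
    mgrs = ["mgr.host1.aaaaaa", "mgr.host2.bbbbbb", "mgr.host3.cccccc"]
    # Restarting the active mgr first would fail it over mid-operation, which
    # is the kind of interruption the reordering is meant to avoid.
    print(redeploy_order(mgrs, active_mgr="mgr.host1.aaaaaa"))
    # ['mgr.host2.bbbbbb', 'mgr.host3.cccccc', 'mgr.host1.aaaaaa']
```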
Description
Mustafa Aydın 2022-01-19 19:33:15 UTC
This issue seems to be fixed in recent releases. I tried to reproduce it using both 16.2.7 and master and could not observe this behavior in either branch. There is a PR (already merged) that appears to fix this issue: https://github.com/ceph/ceph/pull/41002.

Docs look good to me.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997
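
For anyone re-verifying the behavior, one rough way to spot the flapping is to issue `ceph orch restart mgr` (or `ceph orch redeploy mgr`) and then poll the orchestrator's daemon list: each Manager should restart once and then stay up. The sketch below assumes a reachable cluster with the `ceph` CLI on the PATH; the JSON field names (`daemon_name`, `daemon_id`, `started`) and the exact flag spelling are assumptions that can vary between releases, so treat it as illustrative rather than a supported tool.

```python
# Rough verification sketch, not a supported tool: poll the mgr daemon list
# and report whenever a daemon's start time changes (i.e. it restarted).
# Assumes the `ceph` CLI is available; field names and the --daemon-type flag
# spelling (--daemon_type on some releases) may differ.

import json
import subprocess
import time


def mgr_start_times() -> dict:
    out = subprocess.check_output(
        ["ceph", "orch", "ps", "--daemon-type", "mgr", "--format", "json"]
    )
    daemons = json.loads(out)
    # Map each mgr daemon to its reported start time.
    return {d.get("daemon_name", d.get("daemon_id")): d.get("started") for d in daemons}


if __name__ == "__main__":
    previous = mgr_start_times()
    # Run `ceph orch restart mgr` now; a single restart per mgr is expected,
    # while repeated changes afterwards would indicate the endless-restart bug.
    for _ in range(30):
        time.sleep(10)
        current = mgr_start_times()
        for name, started in current.items():
            if previous.get(name) != started:
                print(f"{name} restarted (started={started})")
        previous = current
```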