Bug 2229829
| Summary: | [GSS] Upgrade: Using RHCS version with multisite regressions. Check "ceph health detail" | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | kelwhite |
| Component: | Cephadm | Assignee: | Adam King <adking> |
| Status: | CLOSED CANTFIX | QA Contact: | Mohit Bisht <mobisht> |
| Severity: | high | Docs Contact: | |
| Priority: | high | ||
| Version: | 5.1 | CC: | adking, bhubbard, bkunal, ceph-eng-bugs, cephqe-warriors, knakai, lithomas, nojha, vumrao |
| Target Milestone: | --- | ||
| Target Release: | 7.1 | ||
| Hardware: | All | ||
| OS: | All | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-08-08 18:55:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Yes, they plan on getting up to RHCS 5.X. However, they can't at this point, since they are trying to run the cephadm adopt playbook first and then upgrade. They don't use multisite, nor do they use RGW, so I don't understand why this error would even show; it might be a fault in the way we check whether they are running multisite. Since this is outside the scope of this BZ, I'll create a new BZ for that fault in the error checking.
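As an aside (not from the original report), a minimal way to sanity-check whether a cluster actually has any RGW/multisite configuration before treating the warning as a false positive might be the following; both are standard Ceph CLI commands, and no deployed RGW services plus an empty realm list would suggest multisite is not in use:

```
# List RGW services managed by cephadm; no output means no RGW daemons are deployed
ceph orch ls rgw

# List configured RGW realms; multisite requires at least one realm
radosgw-admin realm list
```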
Description of problem:

When upgrading from RHCS 5.1 to RHCS 5.2 (or any RHCS 5.X version), if you're running multisite you're greeted with the following error:

    health: HEALTH_ERR
            Upgrade: Using RHCS version with multisite regressions. Check "ceph health detail"

Reviewing the "ceph health detail" output, we can see it has a lot of grammatical errors and typos:

    [ERR] USING_VERSION_WITH_MULTISITE_REGRESSIONS: Upgrade: Using RHCS version with multisite regressions. Check "ceph health detail"
        Please check the release notes for more information on this build and any potential multisite issues. If you do not plan to use RGW multisite or you want to continue using this release with multisite you may set "config set mgr mgr/cephadm/yes_i_know true" to return cephadm to normal operation. If that is the case and you are seeing this after an upgrade, follow setting the config option with "ceph orch upgrade stop" and then "ceph orch upgrade start <image-name>" where <image-name> is the same image used for upgrade before to coninue upgrade
        If this is a functional cluster you have upgraded from 5.x, you wish to use multisite, and are not okay with the regressions, you can attempt to downgrade via the command "ceph cephadm fallback" followed by "ceph orch daemon redeploy <daemon-name> <previous-image-name>" for each ceph daemon showing the newer version in "ceph orch ps" (will likely be just one mgr)

Can we get these resolved?
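For context, a hedged sketch of the acknowledge-and-resume path that the health message describes, assuming the cluster does not use RGW multisite; `<image-name>` is a placeholder for the same container image originally passed to the upgrade, not a real image reference:

```
# Acknowledge the multisite warning so cephadm returns to normal operation
ceph config set mgr mgr/cephadm/yes_i_know true

# Stop the in-progress upgrade, then restart it with the same image as before
ceph orch upgrade stop
ceph orch upgrade start <image-name>

# Confirm the HEALTH_ERR has cleared
ceph health detail
```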