Bug 2152963

Summary: ceph cluster upgrade failure/handling report with offline hosts needs to be improved
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Vasishta <vashastr>
Component: Cephadm Assignee: Adam King <adking>
Status: CLOSED ERRATA QA Contact: Manisha Saini <msaini>
Severity: medium Docs Contact: Akash Raj <akraj>
Priority: unspecified    
Version: 5.3 CC: adking, akraj, cephqe-warriors, vereddy
Target Milestone: ---   
Target Release: 6.1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-17.2.6-5.el9cp Doc Type: Enhancement
Doc Text:
.Cephadm now raises a specific health warning `UPGRADE_OFFLINE_HOST` when a host goes offline during upgrade
Previously, when upgrades failed due to a host going offline, a generic `UPGRADE_EXCEPTION` health warning would be raised that was too ambiguous for users to understand. With this release, when an upgrade fails due to a host being offline, Cephadm raises a specific health warning, `UPGRADE_OFFLINE_HOST`, and the issue is now made transparent to the user.
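The new check can be distinguished from the generic one programmatically. The following is a minimal sketch, assuming the usual `ceph health --format json` output shape (a top-level `"checks"` object keyed by check name); the sample document below is hand-written for illustration, not real cluster output.

```python
import json


def offline_host_failure(health_json: str) -> bool:
    """Return True if parsed `ceph health --format json` output contains
    the UPGRADE_OFFLINE_HOST check introduced by this enhancement.

    Assumes the shape {"status": ..., "checks": {<CHECK_NAME>: {...}}}.
    """
    health = json.loads(health_json)
    return "UPGRADE_OFFLINE_HOST" in health.get("checks", {})


# Hand-written sample document (not captured from a real cluster):
sample = json.dumps({
    "status": "HEALTH_WARN",
    "checks": {
        "UPGRADE_OFFLINE_HOST": {
            "severity": "HEALTH_WARN",
            "summary": {"message": "Upgrade: failed due to an offline host"},
        }
    },
})
print(offline_host_failure(sample))  # True
```

A monitoring script keyed on the specific check name can page an operator to fix host connectivity, rather than treating every `UPGRADE_EXCEPTION` as an opaque failure.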
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-06-15 09:16:25 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 2180567    
Bug Blocks: 2192813    

Description Vasishta 2022-12-13 15:58:14 UTC
Description of problem:
`ceph orch upgrade` was stuck without starting in a cluster where one of the nodes had a stale mount and `ceph-volume inventory` was hanging.

The affected node was rebooted.

upgrade started but cluster ended up in 
>>>>
health: HEALTH_ERR
            Upgrade: failed due to an unexpected exception
    Unexpected exception occurred during upgrade process: Failed to connect to <hostname with P>.
Please make sure that the host is reachable and accepts connections using the cephadm SSH key
>>>>

The orchestrator had tried upgrading all daemons (crash and OSDs) on that node, but the cluster status remained HEALTH_ERR because the node was unreachable. (The OSDs being down could have had a different cause.)

Tried 
>> ceph orch upgrade start <same-old-image>

The cluster state was refreshed and it returned to HEALTH_OK.
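The recovery step above can be sketched as shell commands. This is a sketch under stated assumptions: the image name is a placeholder (substitute the image the cluster is already running), and the guard lets the snippet no-op on machines without a live cluster.

```shell
# Placeholder image; substitute the image the cluster is already running.
IMAGE="registry.redhat.io/rhceph/rhceph-5-rhel8:latest"  # hypothetical tag

# Guard: these orchestrator commands need a live cluster with cephadm enabled.
if command -v ceph >/dev/null 2>&1; then
    # Re-issuing the upgrade against the same image refreshed the stale state:
    ceph orch upgrade start --image "$IMAGE"

    # Confirm the health state and upgrade progress afterwards:
    ceph health detail
    ceph orch upgrade status
fi
```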

Version-Release number of selected component (if applicable):
16.2.10-82.el8cp

How reproducible:
Tried once.

Steps to Reproduce:
<Explained above>

Actual results:
There is scope for improvement in how cluster upgrade failures are reported and handled; the error shown may be a stale error report.

Expected results:
Cluster status should be updated based on actual upgrade progress, even when hosts are offline.

Comment 13 errata-xmlrpc 2023-06-15 09:16:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623