Bug 2368271

Summary: "Module 'cephadm' has failed: grace tool failed: Failure" - during upgrade
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Cephadm
Version: 8.1
Target Release: 8.1
Status: CLOSED ERRATA
Severity: urgent
Priority: unspecified
Keywords: Regression, TestBlocker
Reporter: Amarnath <amk>
Assignee: Adam King <adking>
QA Contact: Manisha Saini <msaini>
CC: cephqe-warriors, msaini, spunadik, tserlin
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-19.2.1-210.el9cp; nfs-ganesha-6.5-15.el9cp
Type: Bug
Last Closed: 2025-06-26 12:32:17 UTC

Description Amarnath 2025-05-23 15:34:18 UTC
Description of problem:
Please refer to BZ https://bugzilla.redhat.com/show_bug.cgi?id=2367952.

We are hitting the same issue in the 8.1 build 19.2.1-209.el9cp.

[root@ceph-upgrade-81-i3z1xi-node8 ~]# ceph -s
  cluster:
    id:     c526df88-37aa-11f0-bba4-fa163e06f05b
    health: HEALTH_ERR
            Module 'cephadm' has failed: grace tool failed: Failure: -126
 
  services:
    mon: 3 daemons, quorum ceph-upgrade-81-i3z1xi-node1-installer,ceph-upgrade-81-i3z1xi-node3,ceph-upgrade-81-i3z1xi-node2 (age 91m)
    mgr: ceph-upgrade-81-i3z1xi-node2.spnvin(active, since 52m), standbys: ceph-upgrade-81-i3z1xi-node1-installer.wghwjt
    mds: 3/3 daemons up, 2 standby
    osd: 16 osds: 16 up (since 86m), 16 in (since 87m)
 
  data:
    volumes: 2/2 healthy
    pools:   6 pools, 161 pgs
    objects: 10.90k objects, 3.1 GiB
    usage:   12 GiB used, 227 GiB / 240 GiB avail
    pgs:     161 active+clean
 
  io:
    client:   170 B/s rd, 0 op/s rd, 0 op/s wr
 
[root@ceph-upgrade-81-i3z1xi-node8 ~]# ceph versions
{
    "mon": {
        "ceph version 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)": 3
    },
    "mgr": {
        "ceph version 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)": 1,
        "ceph version 19.2.1-209.el9cp (6e681cff7741a4f197e910b0b49596dac71e3f2b) squid (stable)": 1
    },
    "osd": {
        "ceph version 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)": 16
    },
    "mds": {
        "ceph version 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)": 5
    },
    "overall": {
        "ceph version 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)": 25,
        "ceph version 19.2.1-209.el9cp (6e681cff7741a4f197e910b0b49596dac71e3f2b) squid (stable)": 1
    }
}
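
As a triage aid (not part of the original report), here is a minimal sketch of follow-up commands that could be used to pull more detail on the failed cephadm module. It assumes shell access to a node with the admin keyring; the mgr daemon name is taken from the ceph -s output above.

# Full health detail, including the mgr module failure message
ceph health detail

# List mgr modules and confirm cephadm is the one reporting the failure
ceph mgr module ls

# Recent entries from the cephadm log channel
# (an exit status of 126 conventionally means "command found but not executable")
ceph log last 100 debug cephadm

# Journal of the active mgr daemon (run on the host carrying the active mgr),
# where a traceback from the grace tool failure would be expected to appear
cephadm logs --name mgr.ceph-upgrade-81-i3z1xi-node2.spnvin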



Version-Release number of selected component (if applicable):
ceph-19.2.1-209.el9cp (upgrade target; cluster upgrading from ceph-18.2.0-192.el9cp)

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
During the upgrade the cluster goes to HEALTH_ERR with "Module 'cephadm' has failed: grace tool failed: Failure: -126".

Expected results:
The upgrade proceeds without the cephadm mgr module failing.

Additional info:

Comment 9 errata-xmlrpc 2025-06-26 12:32:17 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775
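
Once the cluster is on the fixed builds listed above (ceph-19.2.1-210.el9cp and nfs-ganesha-6.5-15.el9cp), a minimal re-check along the lines of the original report might look like the following. This is a sketch, not the formal verification procedure.

# Health should no longer report the cephadm module failure
ceph -s
ceph health detail

# After the upgrade completes, all daemons should report the fixed ceph build
ceph versions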