Bug 2368271 - "Module 'cephadm' has failed: grace tool failed: Failure - during upgrade
Summary: "Module 'cephadm' has failed: grace tool failed: Failure - during upgrade
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 8.1
Assignee: Adam King
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-05-23 15:34 UTC by Amarnath
Modified: 2025-06-26 12:32 UTC
CC List: 4 users

Fixed In Version: ceph-19.2.1-210.el9cp; nfs-ganesha-6.5-15.el9cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2025-06-26 12:32:17 UTC
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-11471 (last updated 2025-05-23 15:35:11 UTC)
Red Hat Product Errata RHSA-2025:9775 (last updated 2025-06-26 12:32:21 UTC)

Description Amarnath 2025-05-23 15:34:18 UTC
Description of problem:
Please refer to BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2367952

We are hitting the same issue in the 8.1 build 19.2.1-209.el9cp:

[root@ceph-upgrade-81-i3z1xi-node8 ~]# ceph -s
  cluster:
    id:     c526df88-37aa-11f0-bba4-fa163e06f05b
    health: HEALTH_ERR
            Module 'cephadm' has failed: grace tool failed: Failure: -126
 
  services:
    mon: 3 daemons, quorum ceph-upgrade-81-i3z1xi-node1-installer,ceph-upgrade-81-i3z1xi-node3,ceph-upgrade-81-i3z1xi-node2 (age 91m)
    mgr: ceph-upgrade-81-i3z1xi-node2.spnvin(active, since 52m), standbys: ceph-upgrade-81-i3z1xi-node1-installer.wghwjt
    mds: 3/3 daemons up, 2 standby
    osd: 16 osds: 16 up (since 86m), 16 in (since 87m)
 
  data:
    volumes: 2/2 healthy
    pools:   6 pools, 161 pgs
    objects: 10.90k objects, 3.1 GiB
    usage:   12 GiB used, 227 GiB / 240 GiB avail
    pgs:     161 active+clean
 
  io:
    client:   170 B/s rd, 0 op/s rd, 0 op/s wr
 
[root@ceph-upgrade-81-i3z1xi-node8 ~]# ceph versions
{
    "mon": {
        "ceph version 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)": 3
    },
    "mgr": {
        "ceph version 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)": 1,
        "ceph version 19.2.1-209.el9cp (6e681cff7741a4f197e910b0b49596dac71e3f2b) squid (stable)": 1
    },
    "osd": {
        "ceph version 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)": 16
    },
    "mds": {
        "ceph version 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)": 5
    },
    "overall": {
        "ceph version 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)": 25,
        "ceph version 19.2.1-209.el9cp (6e681cff7741a4f197e910b0b49596dac71e3f2b) squid (stable)": 1
    }
}
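
For anyone triaging a similar failure, the commands below give more detail on why the cephadm mgr module marked itself failed and on the state of the upgrade. This is a minimal sketch, not taken from this cluster's session, and assumes the commands are run on a node with an admin keyring:

# Full health detail, including the module failure message
ceph health detail

# Recent entries in the cephadm cluster log channel
ceph log last cephadm

# Enabled mgr modules and any reported as failed
ceph mgr module ls

# State of the orchestrated upgrade (progressing, paused, or stopped)
ceph orch upgrade status

# Fail over to the standby mgr, which restarts the cephadm module
ceph mgr fail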



Version-Release number of selected component (if applicable):
ceph-19.2.1-209.el9cp

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
During the upgrade, the cluster goes to HEALTH_ERR with "Module 'cephadm' has failed: grace tool failed: Failure: -126".

Expected results:
The upgrade proceeds without the cephadm mgr module failing and the cluster stays healthy.

Additional info:

Comment 9 errata-xmlrpc 2025-06-26 12:32:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775
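
As a rough sketch of how the fix is consumed (the container image reference below is a placeholder, not taken from the advisory): once a build containing ceph-19.2.1-210.el9cp and nfs-ganesha-6.5-15.el9cp is available to the cluster, the orchestrated upgrade can be pointed at the fixed image and the result verified:

# Point the upgrade at the fixed container image (placeholder registry/tag)
ceph orch upgrade start --image <registry>/rhceph/rhceph-8-rhel9:<fixed-tag>

# Watch progress and confirm all daemons report the fixed version
ceph orch upgrade status
ceph versions
ceph health detail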

