Bug 2367952 - "Module 'cephadm' has failed: grace tool failed: Failure - during upgrade
Summary: "Module 'cephadm' has failed: grace tool failed: Failure - during upgrade
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 8.0z4
Assignee: Adam King
QA Contact: hacharya
URL:
Whiteboard:
Depends On:
Blocks: 2365634
 
Reported: 2025-05-22 10:03 UTC by hacharya
Modified: 2025-05-28 13:19 UTC
CC: 7 users

Fixed In Version: ceph-19.2.0-137.el9cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2025-05-28 13:19:38 UTC
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-11464 (last updated 2025-05-22 10:05:59 UTC)
Red Hat Product Errata RHBA-2025:8259 (last updated 2025-05-28 13:19:45 UTC)

Description hacharya 2025-05-22 10:03:50 UTC
Description of problem:
"Module 'cephadm' has failed: grace tool failed: Failure - during upgrade from 7.x to 8.0z4

Version-Release number of selected component (if applicable):
upgrade from "ceph version 18.2.1-329.el9cp -> ceph version 19.2.0-136.el9cp

How reproducible:
1/1

Steps to Reproduce:
1. Upgrade the cluster from 7.x (ceph 18.2.1-329.el9cp) to 8.0z4 (ceph 19.2.0-136.el9cp), as sketched below.
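
Not in the original report: the orchestrator commands behind step 1, sketched against the staging image visible in the progress output below (substitute the appropriate registry/image for your environment):

# ceph orch upgrade check --image cp.stg.icr.io/cp/ibm-ceph/ceph-8-rhel9:8-139
# ceph orch upgrade start --image cp.stg.icr.io/cp/ibm-ceph/ceph-8-rhel9:8-139
# ceph orch upgrade status

`ceph orch upgrade check` only verifies the target image is pullable and reports which daemons would be upgraded; `upgrade start` begins the rolling upgrade that cephadm then drives.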

Actual results:
"Module 'cephadm' has failed: grace tool failed: Failure -  upgrade failed

Expected results:
Upgrade shouldn't fail

Additional info:
http://magna002.ceph.redhat.com/ceph-qe-logs/hacharya/misc/upgrade_7.1_to_8.0/part5/upgrade_logs_7to8oz4_bugverification/

http://magna002.ceph.redhat.com/ceph-qe-logs/hacharya/misc/upgrade_7.1_to_8.0/part3/upgrade-7_to_8oz4/


# ceph -s
  cluster:
    id:     608a7682-36da-11f0-bd6d-fa163e76d160
    health: HEALTH_ERR
            Module 'cephadm' has failed: grace tool failed: Failure: -126
            noout,noscrub,nodeep-scrub flag(s) set
  services:
    mon: 3 daemons, quorum ceph-harish-upgrade-5eqqrl-node1-installer,ceph-harish-upgrade-5eqqrl-node3,ceph-harish-upgrade-5eqqrl-node2 (age 2h)
    mgr: ceph-harish-upgrade-5eqqrl-node2.fdcnsx(active, since 2h), standbys: ceph-harish-upgrade-5eqqrl-node1-installer.nrygpu
    mds: 3/3 daemons up, 2 standby
    osd: 16 osds: 16 up (since 2h), 16 in (since 2h)
         flags noout,noscrub,nodeep-scrub
  data:
    volumes: 2/2 healthy
    pools:   6 pools, 641 pgs
    objects: 11.02k objects, 3.6 GiB
    usage:   16 GiB used, 224 GiB / 240 GiB avail
    pgs:     641 active+clean
  io:
    client:   85 B/s rd, 0 op/s rd, 0 op/s wr
  progress:
    Upgrade to cp.stg.icr.io/cp/ibm-ceph/ceph-8-rhel9:8-139 (0s)
      [............................]
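
Not part of the report: the noout/noscrub/nodeep-scrub flags above are consistent with the documented pre-upgrade steps (ceph osd set noout, etc.) and are expected while an upgrade is in flight. The "Failure: -126", if read as a negated Linux errno, would be ENOKEY ("Required key not available"); that interpretation is an inference, not something stated in this bug. Once the failed module is addressed (here, by the fix in ceph-19.2.0-137.el9cp), a typical recovery sequence is to fail over the mgr, which restarts the always-on cephadm module, and then resume the upgrade:

# ceph mgr fail
# ceph orch upgrade status
# ceph orch upgrade resume

`ceph mgr fail` hands control to a standby mgr; the "Module 'cephadm' has failed" state clears on the new active mgr if the underlying error is gone.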

Comment 1 Storage PM bot 2025-05-22 10:03:58 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 9 errata-xmlrpc 2025-05-28 13:19:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 bug fix updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2025:8259

