
Bug 2091153

Summary: [consulting] Predictive failures - SMART data is available, but Ceph does not predict failures based on critical status
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Cephadm
Version: 5.1
Reporter: Tupper Cole <tcole>
Assignee: Adam King <adking>
QA Contact: Manasa <mgowri>
CC: mobisht, saraut
Status: CLOSED DEFERRED
Severity: high
Priority: medium
Target Release: 9.1
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2025-08-28 10:03:37 UTC

Description Tupper Cole 2022-05-27 16:59:51 UTC
Description of problem: Cephadm appears to have access to SMART data and can report on drive health status, but it takes no action when a drive reports a "critical" status.
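For context, Ceph already exposes this data through the mgr device-health commands. A minimal sketch of how to confirm what the cluster has scraped, assuming device monitoring is enabled (the device ID below is a placeholder; take a real one from 'ceph device ls'):

    # List devices known to the cluster and the daemons that use them
    ceph device ls

    # Dump the raw SMART/health metrics stored for one device
    ceph device get-health-metrics VENDOR_MODEL_SERIAL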


Version-Release number of selected component (if applicable):
5.1

How reproducible:
Consistent

Steps to Reproduce:
1. Wait until a drive reports "critical" status in SMART tools.
2. Run a health check to confirm that cephadm has this data (see the sketch after this list).
3. Observe that no action is taken.
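A minimal sketch of the health check in step 2, assuming the devicehealth mgr module is active (exact output varies by release):

    # Cluster-wide health summary; device-health warnings would surface here
    ceph health detail

    # Ask the mgr to re-scrape device health immediately
    ceph device check-health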

Actual results:
No action taken

Expected results:
The OSD backed by the failing drive should be marked out.
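For reference, a minimal sketch of the settings that should drive this behavior, based on the upstream mgr devicehealth module options (the option names and threshold value here are assumptions; check the release's documentation):

    # Enable periodic SMART scraping by the mgr
    ceph device monitoring on

    # Allow the devicehealth module to act on failure predictions
    ceph config set mgr mgr/devicehealth/self_heal true

    # Mark an OSD out when its device's predicted life expectancy drops
    # below this threshold (2419200 s = 4 weeks, shown as an example)
    ceph config set mgr mgr/devicehealth/mark_out_threshold 2419200

    # Manual fallback: mark the affected OSD out by hand (osd id 0 is a placeholder)
    ceph osd out 0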

Additional info:

Comment 1 RHEL Program Management 2022-05-27 16:59:56 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 7 Sahina Bose 2025-08-28 10:03:37 UTC
Closing this bug as part of a bulk closure of bugs that have been open for more than two years without any significant updates in the last three months. Please reopen with justification if you believe this bug is still relevant and needs to be addressed in an upcoming release.