Bug 2370370
| Summary: | [8.x Backport] - ceph fs snap-schedule command is erroring with EIO: disk I/O error | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Hemanth Kumar <hyelloji> |
| Component: | CephFS | Assignee: | Milind Changire <mchangir> |
| Status: | CLOSED ERRATA | QA Contact: | Hemanth Kumar <hyelloji> |
| Severity: | medium | Docs Contact: | Rivka Pollack <rpollack> |
| Priority: | unspecified | | |
| Version: | 8.0 | CC: | ceph-eng-bugs, cephqe-warriors, gfarnum, mchangir, ngangadh, rpollack, vshankar |
| Target Milestone: | --- | | |
| Target Release: | 8.1z1 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-19.2.1-229 | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2025-08-18 14:02:03 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |

Doc Text

.Improved handling of `fs_map` notifications after file system removal
Previously, after a Ceph File System (CephFS) was removed from the cluster, the `fs_map` notification about the change was not handled properly. This oversight caused the `snap_schedule` manager module to continue accessing the associated `snap_schedule` SQLite database in the metadata pool. As a result, disk I/O errors occurred.
With this fix, all timers related to the removed CephFS are canceled and the SQLite database connection is closed after deletion, helping ensure no invalid metadata pool references remain.
NOTE: A small window still exists between CephFS deletion and notification processing, during which a snapshot schedule could run for a recently deleted CephFS and occasionally report disk I/O errors in the manager logs or at the console.
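The fix described in the Doc Text can be sketched as follows. This is a minimal illustration, not the actual `snap_schedule` module source: the class and attribute names (`SnapSchedClient`, `fs_timers`, `sqlite_connections`) are assumptions chosen for clarity. The idea is that when an `fs_map` notification arrives, the module drops timers and database handles for any file system that no longer exists.

```python
import sqlite3
import threading

class SnapSchedClient:
    """Illustrative sketch only; names here are hypothetical,
    not the real mgr/snap_schedule internals."""

    def __init__(self):
        self.fs_timers = {}            # fs_name -> list of threading.Timer
        self.sqlite_connections = {}   # fs_name -> sqlite3.Connection

    def drop_removed_fs(self, available_fs_names):
        """On an fs_map notification, cancel schedule timers and close
        the SQLite connection for any file system gone from the map,
        so no stale metadata-pool references remain."""
        for fs_name in list(self.fs_timers):
            if fs_name not in available_fs_names:
                # Cancel pending snapshot-schedule callbacks first, so
                # none fire against a deleted file system.
                for timer in self.fs_timers.pop(fs_name):
                    timer.cancel()
                # Then release the database handle backed by the
                # (now removed) metadata pool.
                conn = self.sqlite_connections.pop(fs_name, None)
                if conn is not None:
                    conn.close()
```

As the NOTE above points out, ordering alone cannot close the race entirely: a schedule that fires between the file system's deletion and the notification being processed can still hit a disk I/O error once.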
Description
Hemanth Kumar
2025-06-05 03:24:47 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 8.1 security and bug fix updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2025:14015