Bug 2267040

Summary: [CephFS - Consistency Group] - MDS slowness occurs when quiesce started on pre-upgrade subvolumes while IO was in-progress
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: sumr
Component: CephFS
Assignee: Patrick Donnelly <pdonnell>
Status: CLOSED ERRATA
QA Contact: sumr
Severity: high
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 7.1
CC: akraj, amk, ceph-eng-bugs, cephqe-warriors, hyelloji, lusov, ngangadh, pdonnell, tserlin
Target Release: 7.1   
Hardware: All   
OS: Linux   
Fixed In Version: ceph-18.2.1-58.el9cp
Doc Type: No Doc Update
Last Closed: 2024-06-13 14:28:25 UTC
Type: Bug
Bug Blocks: 2267614, 2298578, 2298579    

Description sumr 2024-02-29 11:19:51 UTC
Description of problem:
Starting a quiesce operation on 5 pre-upgrade subvolumes while IO is in progress results in MDS slowness. Snapshots could not be created while the subvolumes were quiesced, and no 'ceph fs' commands could be processed. Ceph status reports '1 MDSs report slow requests'.

Version-Release number of selected component (if applicable):
ceph version 18.2.1-40.el9cp (a842b5916501030e0c0be773e91c33daf38fd65f) reef (stable)


How reproducible:


Steps to Reproduce:
1. Set up a Ceph 6.1 cluster. Create 5 subvolumes across the default and a non-default subvolume group.
2. Add a dataset to all subvolumes and create snapshots.
3. Upgrade Ceph to 7.1.
4. Run IO on the subvolumes. Start quiesce on the 5 pre-upgrade subvolumes created in step 1.
5. While the quiesce set is in the QUIESCED state, create a snapshot on the subvolumes (see the command sketch after this list).
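
For reference, a rough outline of the commands involved, assuming a volume named "cephfs" and placeholder group, subvolume, snapshot, and set-id names; the exact member syntax accepted by 'ceph fs quiesce' (subvolume names vs. paths, and how non-default-group members are written) should be confirmed with 'ceph fs quiesce --help' on the installed build:

    # on 6.1, before the upgrade: one non-default group, subvolumes in default and non-default groups
    ceph fs subvolumegroup create cephfs svg_1
    ceph fs subvolume create cephfs sv_1
    ceph fs subvolume create cephfs sv_2 --group_name svg_1
    # (repeat until 5 subvolumes exist across both groups)

    # add data, then take pre-upgrade snapshots
    ceph fs subvolume snapshot create cephfs sv_1 snap_pre
    ceph fs subvolume snapshot create cephfs sv_2 snap_pre --group_name svg_1

    # after the upgrade to 7.1, with client IO still running, quiesce the pre-upgrade subvolumes
    ceph fs quiesce cephfs sv_1 sv_2 --set-id cg_set_1 --await

    # while the set is QUIESCED, attempt the snapshot that hangs in this bug
    ceph fs subvolume snapshot create cephfs sv_1 snap_post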


Actual results: The snapshot create request does not proceed; it remains hung. Ceph status reports '1 MDSs report slow requests', and no 'ceph fs' commands get processed (see the diagnostic commands below).
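
A few commands typically used to confirm this state (the volume name, quiesce set-id, and MDS rank below are placeholders carried over from the sketch above):

    # health warning and MDS state
    ceph health detail
    ceph fs status cephfs

    # state of the quiesce set started before the snapshot attempt
    ceph fs quiesce cephfs --set-id cg_set_1 --query

    # requests stuck on the active MDS
    ceph tell mds.cephfs:0 dump_ops_in_flight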


Expected results: Snapshot creation should succeed.
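
Under the same placeholder names, the expected flow is that the snapshot returns promptly while the set is quiesced, after which the set is released so client IO can resume (a sketch, not verified output):

    ceph fs subvolume snapshot create cephfs sv_1 snap_post
    ceph fs quiesce cephfs --set-id cg_set_1 --release --await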


Additional info:

Comment 12 errata-xmlrpc 2024-06-13 14:28:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925