Bug 2266223

Summary: [CephFS - Consistency Group] - Quiesce doesn't succeed on mixed set of subvolumes
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: CephFS
Version: 7.1
Target Release: 7.1
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: sumr
Assignee: Patrick Donnelly <pdonnell>
QA Contact: sumr
Docs Contact: Akash Raj <akraj>
CC: akraj, amk, ceph-eng-bugs, cephqe-warriors, hyelloji, lusov, ngangadh, pdonnell, tserlin
Flags: lusov: needinfo+
Fixed In Version: ceph-18.2.1-58.el9cp
Doc Type: No Doc Update
Type: Bug
Last Closed: 2024-06-13 14:27:48 UTC
Bug Blocks: 2267614, 2298578, 2298579

Description sumr 2024-02-27 06:36:06 UTC
Description of problem:

Quiesce doesn't succeed on a mixed set of subvolumes, i.e. subvolumes drawn from both the default and a non-default subvolume group.

The quiesce set remains in the QUIESCING state until the 300-second timeout expires and is then marked TIMEDOUT.
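A minimal sketch of how the stuck set's state can be observed, assuming the upstream "ceph fs quiesce" interface; the volume name "cephfs" and the set id "cg_test" are hypothetical placeholders:

  # Query the current state of the quiesce set (expected to show QUIESCING,
  # then TIMEDOUT once the 300s timeout expires).
  ceph fs quiesce cephfs --set-id cg_test --query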


Version-Release number of selected component (if applicable):
ceph version 18.2.1-38.el9cp (c709c61e19be87249f04b05dec1e586c1f9dd7b0) reef (stable)


How reproducible:


Steps to Reproduce:
1. Run quiesce on a set of subvolumes spanning the default and a non-default subvolume group, with the timeout, expiration, and await parameters set (see the sketch below this list)
2. Wait for the quiesce to succeed
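A minimal reproduction sketch, assuming the upstream "ceph fs quiesce" and "ceph fs subvolume getpath" interfaces; the volume name "cephfs", the subvolume and group names, and the set id "cg_test" are hypothetical placeholders, and the exact quiesce option spellings may differ in your build:

  # Resolve the subvolume root paths, one from the default group and one
  # from a non-default group.
  ceph fs subvolume getpath cephfs sv_default_1
  ceph fs subvolume getpath cephfs sv_group_1 --group_name svgroup_1

  # Quiesce both paths as a single set with a timeout/expiration, awaiting
  # the result; the op is expected to reach QUIESCED but instead times out.
  ceph fs quiesce cephfs <path_from_default_group> <path_from_nondefault_group> \
      --set-id cg_test --timeout 300 --expiration 300 --await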


Actual results: The quiesce op ended in TIMEDOUT and did not succeed


Expected results: Quiesce should succeed


Additional info:
> No I/O was run in parallel with the quiesce op
> Update on previous quiesce ops on the same set of subvolumes:
  include and exclude ops were performed on the same set of subvolumes, and the quiesce set was then released (see the sketch below)
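A minimal sketch of those prior operations, again assuming the upstream "ceph fs quiesce" interface and the same hypothetical placeholders as above:

  # Temporarily exclude one member from the quiesce set, then include it again.
  ceph fs quiesce cephfs <path_from_nondefault_group> --set-id cg_test --exclude
  ceph fs quiesce cephfs <path_from_nondefault_group> --set-id cg_test --include

  # Release the whole quiesce set once done.
  ceph fs quiesce cephfs --set-id cg_test --release --await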

Comment 13 errata-xmlrpc 2024-06-13 14:27:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925