Bug 2266227 - [CephFS - Consistency Group] - subvolume include with --await to existing quiesced set doesn't succeed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 7.1
Assignee: Patrick Donnelly
QA Contact: sumr
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2267614 2298578 2298579
 
Reported: 2024-02-27 07:16 UTC by sumr
Modified: 2024-07-18 07:59 UTC
CC: 9 users

Fixed In Version: ceph-18.2.1-58.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-06-13 14:27:56 UTC
Embargoed:
Flags: lusov: needinfo+




Links:
Red Hat Issue Tracker RHCEPH-8383 (last updated 2024-02-27 07:16:49 UTC)
Red Hat Product Errata RHSA-2024:3925 (last updated 2024-06-13 14:28:03 UTC)

Description sumr 2024-02-27 07:16:18 UTC
Description of problem:

A subvolume that was previously excluded from a quiesce set cannot be included back into the same set when the include is issued with the --await option.

The quiesce include operation times out (TIMEDOUT) after 5 minutes.

Version-Release number of selected component (if applicable):
ceph version 18.2.1-38.el9cp (c709c61e19be87249f04b05dec1e586c1f9dd7b0) reef (stable)

How reproducible:


Steps to Reproduce:
1. Quiesce a set of subvolumes, one from the default subvolume group and one from a non-default group, and verify the command response.
2. Exclude the subvolume from the default group and verify the set status.
3. Include the subvolume excluded in step 2 back into the set with the --await option and wait for the command response (see the command sketch below).
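
For illustration, a minimal sketch of these steps, assuming a filesystem named cephfs, a quiesce set id cg1, and subvolumes sv1 (default group) and g1/sv2 (all names are hypothetical, and the [group/]subvolume member syntax is an assumption based on the mgr volumes plugin):

  # Step 1: quiesce both subvolumes into a new set and wait for it to reach QUIESCED
  ceph fs quiesce cephfs sv1 g1/sv2 --set-id cg1 --await
  # Step 2: exclude the default-group subvolume from the set
  ceph fs quiesce cephfs sv1 --set-id cg1 --exclude
  # Step 3: include the same subvolume back with --await; this is the call that times out
  ceph fs quiesce cephfs sv1 --set-id cg1 --include --await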

Actual results: The quiesce include operation hits TIMEDOUT after 5 minutes, with the subvolume stuck in the QUIESCING state.
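
The state of the stuck member can be inspected by querying the set (a sketch, using the same hypothetical names as above):

  # Query the set; the re-included member reports QUIESCING instead of reaching QUIESCED
  ceph fs quiesce cephfs --set-id cg1 --query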


Expected results: The quiesce include of the subvolume should succeed.


Additional info:

Comment 14 errata-xmlrpc 2024-06-13 14:27:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

