Bug 2240727 - [cephfs] subvolume group delete not allowed.
Summary: [cephfs] subvolume group delete not allowed.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.3z6
Assignee: Neeraj Pratap Singh
QA Contact: Hemanth Kumar
Docs Contact: Ranjini M N
URL:
Whiteboard:
Depends On: 2240729 2240138
Blocks: 2258797
 
Reported: 2023-09-26 07:01 UTC by Neeraj Pratap Singh
Modified: 2024-02-08 16:55 UTC (History)
11 users

Fixed In Version: ceph-16.2.10-231.el8cp
Doc Type: Bug Fix
Doc Text:
.The `ENOTEMPTY` output is detected and the message is displayed correctly
Previously, when running the `subvolume group rm` command, the `ENOTEMPTY` output was not detected in the volumes plugin, causing a generalized error message instead of a specific one. With this fix, the `ENOTEMPTY` output is detected for the `subvolume group rm` command when a subvolume is present inside the subvolume group, and the message is displayed correctly.
Clone Of: 2240138
Environment:
Last Closed: 2024-02-08 16:55:48 UTC
Embargoed:
ngangadh: needinfo+




Links
System ID  Last Updated
Ceph Project Bug Tracker 62968  2023-09-26 07:01:49 UTC
Red Hat Issue Tracker RHCEPH-7546  2023-09-26 07:04:12 UTC
Red Hat Product Errata RHSA-2024:0745  2024-02-08 16:55:53 UTC

Description Neeraj Pratap Singh 2023-09-26 07:01:50 UTC
+++ This bug was initially created as a clone of Bug #2240138 +++

Description of problem:

Failed to execute unknown task
Failed to delete subvolume group jul_group: error in rmdir /volumes/jul_group
9/21/23 4:35:27 PM

Version-Release number of selected component (if applicable):


How reproducible:

Try to delete a subvolume group in the Ceph Dashboard while it still contains a subvolume.

Steps to Reproduce:
1. Create a subvolume group.
2. Create a subvolume inside that group.
3. Attempt to delete the subvolume group from the Ceph Dashboard.

Actual results:

Failed to delete the subvolume group; only the generic `error in rmdir` message is shown.

Expected results:

Deleting the subvolume group should go through.

Additional info:
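
The generic `error in rmdir` message comes from the underlying directory removal failing with `ENOTEMPTY`. A minimal sketch (plain Python, not Ceph code) showing the errno the volumes plugin receives when the group directory still has an entry in it:

```python
import errno
import os
import tempfile

# Create a directory with a child entry, then try to rmdir the parent.
# os.rmdir on a non-empty directory fails with errno.ENOTEMPTY, which is
# the same error the volumes plugin gets back when a subvolume group
# still contains a subvolume.
parent = tempfile.mkdtemp()
os.mkdir(os.path.join(parent, "subvolume"))

try:
    os.rmdir(parent)
except OSError as e:
    assert e.errno == errno.ENOTEMPTY
    print("rmdir failed:", errno.errorcode[e.errno])
```

This is only an illustration of the errno involved; the actual removal in Ceph goes through the cephfs bindings, not `os.rmdir`.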

--- Additional comment from  on 2023-09-21 23:45:32 UTC ---

It only happens when there is a subvolume in the subvolume group.


The error should say to empty the subvolume group before removing it; the current generic message causes customer confusion.

--- Additional comment from Venky Shankar on 2023-09-22 06:25:17 UTC ---

Neeraj, please create a tracker and link it in this bz.

--- Additional comment from Nizamudeen on 2023-09-22 09:01:37 UTC ---

Ceph Dashboard shows the error message we get from the respective call which is made to remove the subvolume group. Since we get the message as `error in rmdir /volumes/jul_group` we show that in the UI. Could you please attach mgr logs to the bz at the time of removal to see what are all the messages that we get from the volumes module? Based on that we can decide whether the fix should go in Dashboard or the Volume module.

--- Additional comment from Nizamudeen on 2023-09-22 12:46:24 UTC ---

In any case, this could be an RFE so moving it out to 7.1. If we have the fix earlier, maybe it could be targeted to 7.0z1

--- Additional comment from Neeraj Pratap Singh on 2023-09-25 09:29:21 UTC ---

(In reply to Nizamudeen from comment #3)
> Ceph Dashboard shows the error message we get from the respective call which
> is made to remove the subvolume group. Since we get the message as `error in
> rmdir /volumes/jul_group` we show that in the UI. Could you please attach
> mgr logs to the bz at the time of removal to see what are all the messages
> that we get from the volumes module? Based on that we can decide whether the
> fix should go in Dashboard or the Volume module.

Hi,
I reproduced the issue and found that we are not catching the `ENOTEMPTY` error
in the volumes plugin, which is why we get the generalized rmdir message
instead of a customized one. Will create a tracker under CephFS.
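
The fix described here amounts to special-casing `ENOTEMPTY` where the volumes plugin maps errnos to user-facing messages. A hedged sketch of the idea; the function name, message text, and use of `os.rmdir` are illustrative assumptions, not the actual Ceph volumes-plugin code:

```python
import errno
import os


def remove_group_dir(path):
    """Remove a subvolume group directory, mapping ENOTEMPTY to a
    specific, user-facing message instead of the generic rmdir error.

    Returns (return_code, error_message), mimicking the mgr module
    convention of a negative errno on failure.
    """
    try:
        os.rmdir(path)
        return 0, ""
    except OSError as e:
        if e.errno == errno.ENOTEMPTY:
            # Before the fix, this case fell through to the generic
            # branch below and produced "error in rmdir <path>".
            return -errno.ENOTEMPTY, (
                "subvolume group {0} contains subvolume(s); "
                "remove them before removing the group".format(path))
        return -e.errno, "error in rmdir {0}".format(path)
```

With a specific message returned for `ENOTEMPTY`, the Dashboard can surface it directly instead of the generic rmdir error.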

--- Additional comment from Neeraj Pratap Singh on 2023-09-25 09:33:11 UTC ---

(In reply to Nizamudeen from comment #3)
> Ceph Dashboard shows the error message we get from the respective call which
> is made to remove the subvolume group. Since we get the message as `error in
> rmdir /volumes/jul_group` we show that in the UI. Could you please attach
> mgr logs to the bz at the time of removal to see what are all the messages
> that we get from the volumes module? Based on that we can decide whether the
> fix should go in Dashboard or the Volume module.

It seems that the issue is not in the Ceph Dashboard, but in the volumes plugin.
@Nizamudeen, please re-assign to me if you are OK with that!

--- Additional comment from Nizamudeen on 2023-09-25 10:25:36 UTC ---

Thank you Neeraj, here you go!

--- Additional comment from Venky Shankar on 2023-09-26 05:55:37 UTC ---

Neeraj, please clone this BZ to RHCS 5/6 releases.

Comment 11 errata-xmlrpc 2024-02-08 16:55:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 Security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:0745

