Bug 2240727

Summary: [cephfs] subvolume group delete is not allowed when the group contains a subvolume
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Neeraj Pratap Singh <neesingh>
Component: CephFS
Assignee: Neeraj Pratap Singh <neesingh>
Status: CLOSED ERRATA
QA Contact: Hemanth Kumar <hyelloji>
Severity: medium
Docs Contact: Ranjini M N <rmandyam>
Priority: unspecified
Version: 5.3
CC: ceph-eng-bugs, cephqe-warriors, hyelloji, julpark, neesingh, ngangadh, nia, rmandyam, tserlin, vereddy, vshankar
Target Milestone: ---
Flags: ngangadh: needinfo+
Target Release: 5.3z6
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-16.2.10-231.el8cp
Doc Type: Bug Fix
Doc Text:
.The `ENOTEMPTY` output is detected and the message is displayed correctly
Previously, when running the `subvolume group rm` command, the `ENOTEMPTY` output was not detected in the volumes plugin, causing a generalized error message instead of a specific one. With this fix, the `ENOTEMPTY` output is detected for the `subvolume group rm` command when a subvolume is present inside the subvolume group, and the message is displayed correctly.
Story Points: ---
Clone Of: 2240138
Environment:
Last Closed: 2024-02-08 16:55:48 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2240729, 2240138    
Bug Blocks: 2258797    

Description Neeraj Pratap Singh 2023-09-26 07:01:50 UTC
+++ This bug was initially created as a clone of Bug #2240138 +++

Description of problem:

Deleting a subvolume group from the Ceph Dashboard fails with the following error:

Failed to execute unknown task
Failed to delete subvolume group jul_group: error in rmdir /volumes/jul_group
9/21/23 4:35:27 PM

Version-Release number of selected component (if applicable):


How reproducible:

Always, when the subvolume group contains a subvolume. Try to delete the subvolume group in the Dashboard.

Steps to Reproduce:
1. Create a subvolume group.
2. Create a subvolume inside that group.
3. Try to delete the subvolume group from the Ceph Dashboard.

Actual results:

Deleting the subvolume group fails with the generic message "error in rmdir /volumes/jul_group".

Expected results:

Deleting the subvolume group should go through, or the error should clearly state that the group must be emptied first.

Additional info:

--- Additional comment from  on 2023-09-21 23:45:32 UTC ---

It only happens when there is a subvolume in the subvolume group.


The error should tell the user to empty the subvolume group before removing it; the current generic message causes customer confusion.

--- Additional comment from Venky Shankar on 2023-09-22 06:25:17 UTC ---

Neeraj, please create a tracker and link it in this bz.

--- Additional comment from Nizamudeen on 2023-09-22 09:01:37 UTC ---

Ceph Dashboard shows the error message returned by the call made to remove the subvolume group. Since we get the message `error in rmdir /volumes/jul_group`, that is what we show in the UI. Could you please attach the mgr logs from the time of the removal to this BZ so we can see all the messages coming from the volumes module? Based on that we can decide whether the fix should go in the Dashboard or the volumes module.

--- Additional comment from Nizamudeen on 2023-09-22 12:46:24 UTC ---

In any case, this could be an RFE, so moving it out to 7.1. If we have the fix earlier, maybe it could be targeted to 7.0z1.

--- Additional comment from Neeraj Pratap Singh on 2023-09-25 09:29:21 UTC ---

(In reply to Nizamudeen from comment #3)
> Ceph Dashboard shows the error message we get from the respective call which
> is made to remove the subvolume group. Since we get the message as `error in
> rmdir /volumes/jul_group` we show that in the UI. Could you please attach
> mgr logs to the bz at the time of removal to see what are all the messages
> that we get from the volumes module? Based on that we can decide whether the
> fix should go in Dashboard or the Volume module.

Hi,
I reproduced the issue and found that we are not catching the `ENOTEMPTY` error
in the volumes plugin, which is why we get the generalised rmdir message
instead of a customized one. I will create a tracker under CephFS.
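
For illustration only, here is a minimal sketch of the kind of handling this needs; the names VolumeException and remove_subvolume_group are hypothetical stand-ins, not the exact code in the volumes plugin:

import errno

class VolumeException(Exception):
    # Hypothetical stand-in for the error type the volumes plugin raises.
    def __init__(self, error_code, error_message):
        super().__init__(error_message)
        self.errno = error_code
        self.error_str = error_message

def remove_subvolume_group(groupname, do_rmdir):
    # Illustrative wrapper: map a raw ENOTEMPTY failure from rmdir to a
    # specific, user-facing message instead of the generic
    # "error in rmdir /volumes/<group>" text seen in this bug.
    path = "/volumes/" + groupname
    try:
        do_rmdir(path)
    except OSError as e:
        if e.errno == errno.ENOTEMPTY:
            # Specific message, per the Doc Text above (wording illustrative).
            raise VolumeException(
                -errno.ENOTEMPTY,
                "subvolume group {0} contains subvolume(s); remove them "
                "before removing the group".format(groupname)) from e
        # Anything else keeps the generic behaviour.
        raise VolumeException(-e.errno, "error in rmdir " + path) from e

With handling along these lines, the Dashboard, which simply surfaces whatever message the volumes module returns (see comment #3), would show the specific text instead of the raw rmdir error.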

--- Additional comment from Neeraj Pratap Singh on 2023-09-25 09:33:11 UTC ---

(In reply to Nizamudeen from comment #3)
> Ceph Dashboard shows the error message we get from the respective call which
> is made to remove the subvolume group. Since we get the message as `error in
> rmdir /volumes/jul_group` we show that in the UI. Could you please attach
> mgr logs to the bz at the time of removal to see what are all the messages
> that we get from the volumes module? Based on that we can decide whether the
> fix should go in Dashboard or the Volume module.

It seems that the issue is not in the Ceph Dashboard but in the volumes plugin.
@Nizamudeen, please re-assign it to me if that is OK with you!

--- Additional comment from Nizamudeen on 2023-09-25 10:25:36 UTC ---

Thank you Neeraj, here you go!

--- Additional comment from Venky Shankar on 2023-09-26 05:55:37 UTC ---

Neeraj, please clone this BZ to RHCS 5/6 releases.

Comment 11 errata-xmlrpc 2024-02-08 16:55:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 Security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:0745