Bug 2033545 - [RFE] Quota support for subvolume group
Summary: [RFE] Quota support for subvolume group
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.1
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: ---
Target Release: 6.0
Assignee: Kotresh HR
QA Contact: Amarnath
Docs Contact: Eliska
URL:
Whiteboard:
Depends On:
Blocks: 2126050 2138087
 
Reported: 2021-12-17 07:57 UTC by Kesavan
Modified: 2023-03-20 18:56 UTC
CC List: 12 users

Fixed In Version: ceph-17.2.3-32.el9cp
Doc Type: Enhancement
Doc Text:
.Users can now set and manage quotas on a subvolume group
Previously, the user could only apply quotas to individual subvolumes. With this release, the user can set, apply, and manage quotas for a given subvolume group, which is especially useful in a multi-tenant environment.
Clone Of:
Clones: 2138087
Environment:
Last Closed: 2023-03-20 18:55:34 UTC
Embargoed:


Attachments: None


Links (System ID, Last Updated):
Ceph Project Bug Tracker 53509 (2021-12-17 08:02:12 UTC)
Red Hat Issue Tracker RHCEPH-2815 (2021-12-17 07:58:14 UTC)
Red Hat Product Errata RHBA-2023:1360 (2023-03-20 18:56:13 UTC)

Description Kesavan 2021-12-17 07:57:42 UTC
Today, we can apply a quota to an individual subvolume. However, when working in a multi-tenant environment, the storage admin wants to provision a quota for a given subvolumegroup, i.e. one level above the subvolume.
It would be nice if we could set a desired quota per subvolumegroup.
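For context, this is roughly how a quota is applied to an individual subvolume today, using the ceph.quota.max_bytes extended attribute on the subvolume directory (the mount point and path below are only placeholders):

# set a 10 GiB quota on a single subvolume directory (placeholder path)
setfattr -n ceph.quota.max_bytes -v 10737418240 /mnt/cephfs/volumes/_nogroup/subvol_1
# confirm the quota
getfattr --only-values -n ceph.quota.max_bytes /mnt/cephfs/volumes/_nogroup/subvol_1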

Comment 1 RHEL Program Management 2021-12-17 07:57:49 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 3 Sébastien Han 2022-03-09 09:42:48 UTC
Hi Kotresh,

The workflow you describe is correct.

The deletion of the subvolumegroup will only happen on cluster uninstallation, and the controller will block if subvolumes still exist.
Can you elaborate on the subvolumegroup deletion behavior and why an immediate deletion could be an issue?

Thanks.

Comment 5 Kotresh HR 2022-03-14 05:58:11 UTC
Thanks Sebastien and Orit for the answers. Reply inline.

(In reply to Sébastien Han from comment #3)
> Hi Kotresh,
> 
> The workflow you describe is correct.
> 
> The deletion of the subvolumegroup will only happen on cluster
> uninstallation and the controller will block if Subvolumes still exists.
> Can you elaborate on the subvolumegroup deletion behavior and why an
> immediate deletion could be an issue?

When a subvolume is deleted, it is moved to a trash directory and the call
returns success to the user. The actual deletion happens asynchronously.
The trash directory of the subvolumes has to reside inside the subvolumegroup when quota
is enabled, because of [1]. If the subvolumegroup deletion request is received immediately after
the subvolumes are deleted, the caller would receive 'EAGAIN' until the trash directory is empty.
So I think this behavior is fine?

[1] https://tracker.ceph.com/issues/16884

> 
> Thanks.

Comment 6 Sébastien Han 2022-03-14 10:14:37 UTC
(In reply to Kotresh HR from comment #5)
> Thanks Sebastien and Orit for the answers. Reply inline.
> 
> (In reply to Sébastien Han from comment #3)
> > Hi Kotresh,
> > 
> > The workflow you describe is correct.
> > 
> > The deletion of the subvolumegroup will only happen on cluster
> > uninstallation and the controller will block if Subvolumes still exists.
> > Can you elaborate on the subvolumegroup deletion behavior and why an
> > immediate deletion could be an issue?
> 
> When a subvolume is deleted, it is moved to a trash directory and the call
> returns success to the user. The actual deletion happens asynchronously.
> The trash directory of the subvolumes has to reside inside the subvolumegroup
> when quota is enabled, because of [1]. If the subvolumegroup deletion request
> is received immediately after the subvolumes are deleted, the caller would
> receive 'EAGAIN' until the trash directory is empty.
> So I think this behavior is fine?
> 
> [1] https://tracker.ceph.com/issues/16884
> 
> > 
> > Thanks.

If the deletion fails with EAGAIN we can easily catch it, but the controller will retry on failure anyway, so this behavior is fine.
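For what it's worth, a minimal sketch of that retry, assuming the group is removed with 'ceph fs subvolumegroup rm' and that the EAGAIN case surfaces as a non-zero exit code (the volume and group names below are placeholders):

# keep retrying while the asynchronous purge of trashed subvolumes completes
for i in $(seq 1 10); do
    ceph fs subvolumegroup rm cephfs mygroup && break
    echo "group removal deferred (trash not yet empty), retrying..."
    sleep 5
done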

Comment 21 Amarnath 2022-10-02 17:21:52 UTC
Hi @khiremat,

I am able to set a file quota on a subvolumegroup.
Here is the scenario I tried:

Created a subvolumegroup
Set the file quota to 3

With this, I am still able to create more than three subvolumes; I tried to create four and it did not throw any error.
When I try to create a file inside the subvolume, it fails with the quota exceeded error.

Can you comment on the above Scenario?
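
For reference, a minimal sketch of the steps, assuming the file quota was set through the ceph.quota.max_files extended attribute on the group directory (the mount point and paths below are from my setup and only illustrative):

# set a file quota of 3 on the subvolumegroup directory
setfattr -n ceph.quota.max_files -v 3 /mnt/cephfs/volumes/subvolgroup_1
# creating a 4th subvolume still succeeded
ceph fs subvolume create cephfs subvol_4 --group_name subvolgroup_1
# but creating a file inside one of the subvolumes fails with 'Disk quota exceeded'
touch /mnt/cephfs/volumes/subvolgroup_1/subvol_1/file_1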


Regards,
Amarnath

Comment 22 Amarnath 2022-10-04 03:28:37 UTC
Hi Kotresh,

Below are the scenarios that have been tested with respect to the subvolumegroup bytes quota.
Could you please point out if we have to validate anything else.


Subvolume allows setting a larger quota value than its subvolumegroup -- Failed

[root@ceph-amk-bz-8be6sz-node7 volumes]# setfattr -n ceph.quota.max_bytes -v 10737418240 subvolgroup_1/
[root@ceph-amk-bz-8be6sz-node7 volumes]# getfattr --only-values -n ceph.quota.max_bytes subvolgroup_1/
10737418240[root@ceph-amk-bz-8be6sz-node7 volumes]# 


[root@ceph-amk-bz-8be6sz-node7 volumes]#  setfattr -n ceph.quota.max_bytes -v 20737418240 subvolgroup_1/subvol_1/
[root@ceph-amk-bz-8be6sz-node7 volumes]# getfattr --only-values -n ceph.quota.max_bytes subvolgroup_1/subvol_1/
20737418240[root@ceph-amk-bz-8be6sz-node7 volumes]# 

Validate subvolumegroup bytes quota takes precedence -- passed

[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_3
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_4
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_5
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_6
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_7
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_8
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_9
cp: error copying 'rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_9': Disk quota exceeded
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# ls -lrt
total 10516288
-rw-r--r--. 1 root root 1131413504 Jul 13 22:42 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  3 23:02 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  3 23:08 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  3 23:10 rhcos-live.x86_64.iso_9

Increase subvolumegroup quota
[root@ceph-amk-bz-8be6sz-node7 volumes]# setfattr -n ceph.quota.max_bytes -v 20737418240 subvolgroup_1/
[root@ceph-amk-bz-8be6sz-node7 volumes]# 
[root@ceph-amk-bz-8be6sz-node7 volumes]# getfattr --only-values -n ceph.quota.max_bytes subvolgroup_1/
20737418240[root@ceph-amk-bz-8be6sz-node7 volumes]# 
[root@ceph-amk-bz-8be6sz-node7 volumes]# 

[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# ls -lrt
total 10516288
-rw-r--r--. 1 root root 1131413504 Jul 13 22:42 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  3 23:02 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  3 23:08 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  3 23:10 rhcos-live.x86_64.iso_9
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_10
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_11
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_12
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_13
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_14
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_15
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_16
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_17
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_18
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_19
cp: error copying 'rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_18': Disk quota exceeded
cp: error copying 'rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_19': Disk quota exceeded
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# ls -lrt
total 20391296
-rw-r--r--. 1 root root 1131413504 Jul 13 22:42 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  3 23:02 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  3 23:08 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  3 23:10 rhcos-live.x86_64.iso_9
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_10
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_11
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_12
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_13
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_14
-rw-r--r--. 1 root root 1131413504 Oct  3 23:16 rhcos-live.x86_64.iso_15
-rw-r--r--. 1 root root 1131413504 Oct  3 23:16 rhcos-live.x86_64.iso_16
-rw-r--r--. 1 root root 1131413504 Oct  3 23:16 rhcos-live.x86_64.iso_17
-rw-r--r--. 1 root root 1060700160 Oct  3 23:16 rhcos-live.x86_64.iso_18
-rw-r--r--. 1 root root          0 Oct  3 23:16 rhcos-live.x86_64.iso_19


Decrease subvolumegroup quota
[root@ceph-amk-bz-8be6sz-node7 volumes]# setfattr -n ceph.quota.max_bytes -v 10737418240 subvolgroup_1/
[root@ceph-amk-bz-8be6sz-node7 volumes]# 
[root@ceph-amk-bz-8be6sz-node7 volumes]# 
[root@ceph-amk-bz-8be6sz-node7 volumes]# getfattr --only-values -n ceph.quota.max_bytes subvolgroup_1/
10737418240[root@ceph-amk-bz-8be6sz-node7 volumes]# 
[root@ceph-amk-bz-8be6sz-node7 volumes]# cd subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# ls -lrt
total 20391296
-rw-r--r--. 1 root root 1131413504 Jul 13 22:42 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  3 23:02 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  3 23:08 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  3 23:10 rhcos-live.x86_64.iso_9
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_10
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_11
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_12
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_13
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_14
-rw-r--r--. 1 root root 1131413504 Oct  3 23:16 rhcos-live.x86_64.iso_15
-rw-r--r--. 1 root root 1131413504 Oct  3 23:16 rhcos-live.x86_64.iso_16
-rw-r--r--. 1 root root 1131413504 Oct  3 23:16 rhcos-live.x86_64.iso_17
-rw-r--r--. 1 root root 1060700160 Oct  3 23:16 rhcos-live.x86_64.iso_18
-rw-r--r--. 1 root root          0 Oct  3 23:16 rhcos-live.x86_64.iso_19
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_20
cp: error copying 'rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_20': Disk quota exceeded
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# 




Copying directly into the subvolumegroup folder and the subvolume folder -- Passed

[root@ceph-amk-bz-8be6sz-node7 subvolgroup_1]# cp subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso rhcos-live.x86_64.iso_20
cp: error copying 'subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_20': Disk quota exceeded
[root@ceph-amk-bz-8be6sz-node7 subvolgroup_1]# cd subvol_1/
[root@ceph-amk-bz-8be6sz-node7 subvol_1]# cp b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso rhcos-live.x86_64.iso_20
cp: error copying 'b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_20': Disk quota exceeded
[root@ceph-amk-bz-8be6sz-node7 subvol_1]# 


Ceph Versions
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# ceph versions
{
    "mon": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 20
    }
}
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# 

Regards
Amarnath

Comment 25 Amarnath 2022-10-05 18:33:39 UTC
Hi Kotresh,

I was thinking this was related to the attributes quota.
Thanks for the PR details.
Now I have tested it as a parameter that has been introduced as part of subvolumegroup creation.
----------------------------------------------------------------------------------------------------------------------------
Create subvolumegroup with size parameter --passed
----------------------------------------------------------------------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 x86_64]# ceph fs subvolumegroup create cephfs subvolgroup_2 10737418240 
[root@ceph-amk-bz-8be6sz-node7 x86_64]# ceph fs subvolumegroup info cephfs subvolgroup_2 
{
    "atime": "2022-10-05 18:00:39",
    "bytes_pcent": "0.00",
    "bytes_quota": 10737418240,
    "bytes_used": 0,
    "created_at": "2022-10-05 18:00:39",
    "ctime": "2022-10-05 18:00:39",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.211.147:6789",
        "10.0.210.100:6789",
        "10.0.208.197:6789"
    ],
----------------------------------------------------------------------------------------------------------------------------
Without size parameter -- passed
----------------------------------------------------------------------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup info cephfs subvolgroup_3
{
    "atime": "2022-10-05 18:27:56",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-10-05 18:27:56",
    "ctime": "2022-10-05 18:27:56",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.211.147:6789",
        "10.0.210.100:6789",
        "10.0.208.197:6789"
    ],
    "mtime": "2022-10-05 18:27:56",
    "uid": 0
}
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ls -lrt
total 10516288
-rw-r--r--. 1 root root 1131413504 Oct  5 14:04 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  5 14:07 rhcos-live.x86_64.iso_9
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_10 .
cp: error copying '../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_10' to './rhcos-live.x86_64.iso_10': Disk quota exceeded
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]#
----------------------------------------------------------------------------------------------------------------------------
ceph fs subvolumegroup resize cephfs subvolgroup_2 20737418240
----------------------------------------------------------------------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 20737418240
[
    {
        "bytes_used": 10768679044
    },
    {
        "bytes_quota": 20737418240
    },
    {
        "bytes_pcent": "51.93"
    }
]
----------------------------------------------------------------------------------------------------------------------------
Reduce to a smaller size
----------------------------------------------------------------------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 10737418240
[
    {
        "bytes_used": 11900092548
    },
    {
        "bytes_quota": 10737418240
    },
    {
        "bytes_pcent": "110.83"
    }
]
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# 
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_10 .
cp: overwrite './rhcos-live.x86_64.iso_10'? y
cp: error copying '../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_10' to './rhcos-live.x86_64.iso_10': Disk quota exceeded
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ls -lrt
total 10516288
-rw-r--r--. 1 root root 1131413504 Oct  5 14:04 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  5 14:07 rhcos-live.x86_64.iso_9
-rw-r--r--. 1 root root          0 Oct  5 14:14 rhcos-live.x86_64.iso_10
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# 
----------------------------------------------------------------------------------------------------------------------------
No Shrink -- passed
----------------------------------------------------------------------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 10768679044 --no_shrink
[
    {
        "bytes_used": 10768679044
    },
    {
        "bytes_quota": 10768679044
    },
    {
        "bytes_pcent": "100.00"
    }
]
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 10768679043 --no_shrink
Error EINVAL: Can't resize the subvolume group. The new size '10768679043' would be lesser than the current used size '10768679044'

[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 30737418240 --no_shrink
[
    {
        "bytes_used": 10768679044
    },
    {
        "bytes_quota": 30737418240
    },
    {
        "bytes_pcent": "35.03"
    }
]
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 20737418240 --no_shrink
[
    {
        "bytes_used": 10768679044
    },
    {
        "bytes_quota": 20737418240
    },
    {
        "bytes_pcent": "51.93"
    }
]

[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 infinite
[
    {
        "bytes_used": 10768679044
    },
    {
        "bytes_quota": 0
    },
    {
        "bytes_pcent": "undefined"
    }
]
----------------------------------------------------------------------------------------------------------------------------
Reset size and add data to the subvolumegroup -- passed
----------------------------------------------------------------------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup info cephfs subvolgroup_2
{
    "atime": "2022-10-05 18:00:39",
    "bytes_pcent": "51.85",
    "bytes_quota": 20768679043,
    "bytes_used": 10768679044,
    "created_at": "2022-10-05 18:00:39",
    "ctime": "2022-10-05 18:21:26",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.211.147:6789",
        "10.0.210.100:6789",
        "10.0.208.197:6789"
    ],
    "mtime": "2022-10-05 18:01:25",
    "uid": 0
}
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_11 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_12 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_13 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_14 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_15 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_16 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_17 .
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ls -lrt
total 18250560
-rw-r--r--. 1 root root 1131413504 Oct  5 14:04 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  5 14:07 rhcos-live.x86_64.iso_9
-rw-r--r--. 1 root root          0 Oct  5 14:14 rhcos-live.x86_64.iso_10
-rw-r--r--. 1 root root 1131413504 Oct  5 14:22 rhcos-live.x86_64.iso_11
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_12
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_13
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_14
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_15
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_16
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_17

Regards,
Amarnath

Comment 41 errata-xmlrpc 2023-03-20 18:55:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360

