Bug 2033545
Summary: [RFE] Quota support for subvolume group

Product: [Red Hat Storage] Red Hat Ceph Storage
Component: CephFS
Version: 5.1
Target Milestone: ---
Target Release: 6.0
Hardware: All
OS: All
Status: CLOSED ERRATA
Severity: high
Priority: high
Keywords: FutureFeature
Reporter: Kesavan <kvellalo>
Assignee: Kotresh HR <khiremat>
QA Contact: Amarnath <amk>
Docs Contact: Eliska <ekristov>
CC: ceph-eng-bugs, ekristov, gfarnum, khiremat, mlungu, owasserm, seb, shan, sostapov, tserlin, vereddy, vshankar
Fixed In Version: ceph-17.2.3-32.el9cp
Doc Type: Enhancement
Doc Text:
.Users can now set and manage quotas on a subvolume group
Previously, users could apply quotas only to individual subvolumes.
With this release, users can set, apply, and manage quotas for a given subvolume group, which is especially useful in multi-tenant environments.
Story Points: ---
Clones: 2138087 (view as bug list)
Last Closed: 2023-03-20 18:55:34 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Bug Blocks: 2126050, 2138087
Description (Kesavan, 2021-12-17 07:57:42 UTC)
Comment 3 (Sébastien Han):

Hi Kotresh,

The workflow you describe is correct. The deletion of the subvolumegroup will only happen on cluster uninstallation, and the controller will block if subvolumes still exist. Can you elaborate on the subvolumegroup deletion behavior and why an immediate deletion could be an issue?

Thanks.

Comment 5 (Kotresh HR, in reply to Sébastien Han, comment #3):

Thanks Sebastien and Orit for the answers.

When a subvolume is deleted, it is moved to a trash directory and the call returns success to the caller; the actual deletion happens asynchronously. Because of [1], the trash directory of the subvolumes has to reside inside the subvolumegroup when quota is enabled. If a subvolumegroup deletion request is received immediately after the subvolumes are deleted, the caller receives 'EAGAIN' until the trash directory is empty. So I think this behavior is fine?

[1] https://tracker.ceph.com/issues/16884

Comment (Sébastien Han, in reply to Kotresh HR, comment #5):

If the deletion fails with EAGAIN we can easily catch it, and the controller will retry on failure anyway, so this behavior is fine.

Comment (Amarnath):

Hi @khiremat,

I am able to set a file quota on a subvolumegroup. Here is the scenario I tried:
- Created a subvolumegroup and set its file quota to 3.
- I was still able to create more than three subvolumes; creating a fourth did not throw any error.
- Creating a file inside a subvolume, however, fails with the quota-exceeded error.

Can you comment on the above scenario?

Regards,
Amarnath

Comment (Amarnath):

Hi Kotresh,

Below are the scenarios that have been tested for the subvolumegroup bytes quota. Could you please point out if we have to validate anything else?
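Regarding the EAGAIN discussion above: a minimal sketch of the controller-side retry, in Python. Here `remove_subvolumegroup` is a hypothetical stand-in for whatever removal call the controller makes, not a real Ceph or Rook API.

```python
import errno
import time

def delete_group_with_retry(remove_subvolumegroup, retries=5, delay=0.0):
    """Call the (hypothetical) removal function, retrying while it raises
    EAGAIN, i.e. while the group's trash directory is still being emptied
    by the asynchronous purge of deleted subvolumes."""
    for _ in range(retries):
        try:
            remove_subvolumegroup()
            return True
        except OSError as e:
            if e.errno != errno.EAGAIN:
                raise  # a real failure, not the transient trash-not-empty case
            time.sleep(delay)
    return False

# Stub that fails twice with EAGAIN, then succeeds.
calls = {"n": 0}
def stub():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError(errno.EAGAIN, "trash directory not empty")

assert delete_group_with_retry(stub) is True
assert calls["n"] == 3
```

This matches the behavior agreed in the thread: the controller simply retries on failure, so the transient EAGAIN is harmless.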
Setting a larger bytes quota on a subvolume than on its subvolumegroup is allowed -- Failed

[root@ceph-amk-bz-8be6sz-node7 volumes]# setfattr -n ceph.quota.max_bytes -v 10737418240 subvolgroup_1/
[root@ceph-amk-bz-8be6sz-node7 volumes]# getfattr --only-values -n ceph.quota.max_bytes subvolgroup_1/
10737418240
[root@ceph-amk-bz-8be6sz-node7 volumes]# setfattr -n ceph.quota.max_bytes -v 20737418240 subvolgroup_1/subvol_1/
[root@ceph-amk-bz-8be6sz-node7 volumes]# getfattr --only-values -n ceph.quota.max_bytes subvolgroup_1/subvol_1/
20737418240

Validate that the subvolumegroup bytes quota takes precedence -- Passed

[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_3
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_4
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_5
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_6
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_7
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_8
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_9
cp: error copying 'rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_9': Disk quota exceeded
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# ls -lrt
total 10516288
-rw-r--r--. 1 root root 1131413504 Jul 13 22:42 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  3 23:02 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  3 23:08 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  3 23:10 rhcos-live.x86_64.iso_9

Increase the subvolumegroup quota

[root@ceph-amk-bz-8be6sz-node7 volumes]# setfattr -n ceph.quota.max_bytes -v 20737418240 subvolgroup_1/
[root@ceph-amk-bz-8be6sz-node7 volumes]# getfattr --only-values -n ceph.quota.max_bytes subvolgroup_1/
20737418240
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_10
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_11
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_12
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_13
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_14
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_15
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_16
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_17
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_18
cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_19
cp: error copying 'rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_18': Disk quota exceeded
cp: error copying 'rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_19': Disk quota exceeded
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# ls -lrt
total 20391296
-rw-r--r--. 1 root root 1131413504 Jul 13 22:42 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  3 23:02 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  3 23:08 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  3 23:09 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  3 23:10 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  3 23:10 rhcos-live.x86_64.iso_9
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_10
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_11
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_12
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_13
-rw-r--r--. 1 root root 1131413504 Oct  3 23:15 rhcos-live.x86_64.iso_14
-rw-r--r--. 1 root root 1131413504 Oct  3 23:16 rhcos-live.x86_64.iso_15
-rw-r--r--. 1 root root 1131413504 Oct  3 23:16 rhcos-live.x86_64.iso_16
-rw-r--r--. 1 root root 1131413504 Oct  3 23:16 rhcos-live.x86_64.iso_17
-rw-r--r--. 1 root root 1060700160 Oct  3 23:16 rhcos-live.x86_64.iso_18
-rw-r--r--. 1 root root          0 Oct  3 23:16 rhcos-live.x86_64.iso_19

Decrease the subvolumegroup quota

[root@ceph-amk-bz-8be6sz-node7 volumes]# setfattr -n ceph.quota.max_bytes -v 10737418240 subvolgroup_1/
[root@ceph-amk-bz-8be6sz-node7 volumes]# getfattr --only-values -n ceph.quota.max_bytes subvolgroup_1/
10737418240
[root@ceph-amk-bz-8be6sz-node7 volumes]# cd subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/
[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# cp rhcos-live.x86_64.iso rhcos-live.x86_64.iso_20
cp: error copying 'rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_20': Disk quota exceeded

Copying directly into the subvolumegroup folder and the subvolume folder -- Passed

[root@ceph-amk-bz-8be6sz-node7 subvolgroup_1]# cp subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso rhcos-live.x86_64.iso_20
cp: error copying 'subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_20': Disk quota exceeded
[root@ceph-amk-bz-8be6sz-node7 subvolgroup_1]# cd subvol_1/
[root@ceph-amk-bz-8be6sz-node7 subvol_1]# cp b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso rhcos-live.x86_64.iso_20
cp: error copying 'b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso' to 'rhcos-live.x86_64.iso_20': Disk quota exceeded

Ceph versions

[root@ceph-amk-bz-8be6sz-node7 b735d599-c424-4b25-b3a8-8f5c54cc4ade]# ceph versions
{
    "mon": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 20
    }
}
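The precedence behavior verified in the scenarios above (the subvolumegroup quota capping writes even when a subvolume carries a larger `ceph.quota.max_bytes` of its own) can be sketched as follows. This is illustrative Python, not Ceph code.

```python
def effective_limit(group_quota: int, subvol_quota: int) -> int:
    """Return the byte limit that actually applies inside a subvolume.

    Writes are limited by whichever quota is smaller, so a subvolumegroup
    quota caps its subvolumes even when a subvolume advertises a larger
    quota. A value of 0 means "infinite" (no limit), matching the
    bytes_quota of 0 reported after resizing a group to "infinite".
    """
    limits = [q for q in (group_quota, subvol_quota) if q > 0]
    return min(limits) if limits else 0

# Group quota 10 GiB, subvolume quota 20 GiB: the group quota wins,
# which is why the copies above fail at roughly 10 GiB of usage.
assert effective_limit(10737418240, 20737418240) == 10737418240
# No group quota: only the subvolume quota applies.
assert effective_limit(0, 20737418240) == 20737418240
```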
Regards,
Amarnath

Comment (Amarnath):

Hi Kotresh,

I thought this was related to the quota attributes; thanks for the PR details. I have now tested the quota as the size parameter that was introduced as part of subvolumegroup creation.

------------------------------------------------------------
Create subvolumegroup with size parameter -- Passed
------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 x86_64]# ceph fs subvolumegroup create cephfs subvolgroup_2 10737418240
[root@ceph-amk-bz-8be6sz-node7 x86_64]# ceph fs subvolumegroup info cephfs subvolgroup_2
{
    "atime": "2022-10-05 18:00:39",
    "bytes_pcent": "0.00",
    "bytes_quota": 10737418240,
    "bytes_used": 0,
    "created_at": "2022-10-05 18:00:39",
    "ctime": "2022-10-05 18:00:39",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.211.147:6789",
        "10.0.210.100:6789",
        "10.0.208.197:6789"
    ],

------------------------------------------------------------
Without size parameter -- Passed
------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup info cephfs subvolgroup_3
{
    "atime": "2022-10-05 18:27:56",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-10-05 18:27:56",
    "ctime": "2022-10-05 18:27:56",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.211.147:6789",
        "10.0.210.100:6789",
        "10.0.208.197:6789"
    ],
    "mtime": "2022-10-05 18:27:56",
    "uid": 0
}
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ls -lrt
total 10516288
-rw-r--r--. 1 root root 1131413504 Oct  5 14:04 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  5 14:07 rhcos-live.x86_64.iso_9
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_10 .
cp: error copying '../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_10' to './rhcos-live.x86_64.iso_10': Disk quota exceeded

------------------------------------------------------------
Resize the subvolumegroup
------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 20737418240
[
    { "bytes_used": 10768679044 },
    { "bytes_quota": 20737418240 },
    { "bytes_pcent": "51.93" }
]

------------------------------------------------------------
Reduce to a smaller size
------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 10737418240
[
    { "bytes_used": 11900092548 },
    { "bytes_quota": 10737418240 },
    { "bytes_pcent": "110.83" }
]
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_10 .
cp: overwrite './rhcos-live.x86_64.iso_10'? y
cp: error copying '../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_10' to './rhcos-live.x86_64.iso_10': Disk quota exceeded
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ls -lrt
total 10516288
-rw-r--r--. 1 root root 1131413504 Oct  5 14:04 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  5 14:07 rhcos-live.x86_64.iso_9
-rw-r--r--. 1 root root          0 Oct  5 14:14 rhcos-live.x86_64.iso_10

------------------------------------------------------------
No shrink -- Passed
------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 10768679044 --no_shrink
[
    { "bytes_used": 10768679044 },
    { "bytes_quota": 10768679044 },
    { "bytes_pcent": "100.00" }
]
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 10768679043 --no_shrink
Error EINVAL: Can't resize the subvolume group. The new size '10768679043' would be lesser than the current used size '10768679044'
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 30737418240 --no_shrink
[
    { "bytes_used": 10768679044 },
    { "bytes_quota": 30737418240 },
    { "bytes_pcent": "35.03" }
]
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 20737418240 --no_shrink
[
    { "bytes_used": 10768679044 },
    { "bytes_quota": 20737418240 },
    { "bytes_pcent": "51.93" }
]
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup resize cephfs subvolgroup_2 infinite
[
    { "bytes_used": 10768679044 },
    { "bytes_quota": 0 },
    { "bytes_pcent": "undefined" }
]

------------------------------------------------------------
Reset size and add data to the subvolumegroup -- Passed
------------------------------------------------------------
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ceph fs subvolumegroup info cephfs subvolgroup_2
{
    "atime": "2022-10-05 18:00:39",
    "bytes_pcent": "51.85",
    "bytes_quota": 20768679043,
    "bytes_used": 10768679044,
    "created_at": "2022-10-05 18:00:39",
    "ctime": "2022-10-05 18:21:26",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.211.147:6789",
        "10.0.210.100:6789",
        "10.0.208.197:6789"
    ],
    "mtime": "2022-10-05 18:01:25",
    "uid": 0
}
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_11 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_12 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_13 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_14 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_15 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_16 .
cp ../../../subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade/rhcos-live.x86_64.iso_17 .
[root@ceph-amk-bz-8be6sz-node7 baee011e-85e4-4978-b62b-c9118b3f3e27]# ls -lrt
total 18250560
-rw-r--r--. 1 root root 1131413504 Oct  5 14:04 rhcos-live.x86_64.iso
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_1
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_2
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_3
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_4
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_5
-rw-r--r--. 1 root root 1131413504 Oct  5 14:06 rhcos-live.x86_64.iso_6
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_7
-rw-r--r--. 1 root root 1131413504 Oct  5 14:07 rhcos-live.x86_64.iso_8
-rw-r--r--. 1 root root  585957376 Oct  5 14:07 rhcos-live.x86_64.iso_9
-rw-r--r--. 1 root root          0 Oct  5 14:14 rhcos-live.x86_64.iso_10
-rw-r--r--. 1 root root 1131413504 Oct  5 14:22 rhcos-live.x86_64.iso_11
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_12
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_13
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_14
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_15
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_16
-rw-r--r--. 1 root root 1131413504 Oct  5 14:23 rhcos-live.x86_64.iso_17

Regards,
Amarnath

Closing comment:

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update) and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360
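As a footnote to the resize tests above, the semantics of the `--no_shrink` check and the `bytes_pcent` reporting can be sketched as follows. This is an illustrative Python model, not the actual mgr/volumes implementation.

```python
def resize_group(bytes_used: int, new_quota: int, no_shrink: bool = False) -> dict:
    """Model of `ceph fs subvolumegroup resize` as observed in the tests:
    with --no_shrink, a resize below the current usage is rejected, and
    bytes_pcent is reported as used/quota in percent (a quota of 0 means
    infinite, reported as "undefined")."""
    if no_shrink and new_quota < bytes_used:
        raise ValueError(
            "Can't resize the subvolume group. The new size '%d' would be "
            "lesser than the current used size '%d'" % (new_quota, bytes_used))
    if new_quota == 0:
        pcent = "undefined"
    else:
        pcent = "%.2f" % (bytes_used * 100.0 / new_quota)
    return {"bytes_used": bytes_used, "bytes_quota": new_quota, "bytes_pcent": pcent}

# Matches the outputs recorded above.
assert resize_group(10768679044, 20737418240)["bytes_pcent"] == "51.93"
assert resize_group(10768679044, 30737418240)["bytes_pcent"] == "35.03"
assert resize_group(10768679044, 0)["bytes_pcent"] == "undefined"
try:
    resize_group(10768679044, 10768679043, no_shrink=True)
    raise AssertionError("expected the no_shrink check to reject this resize")
except ValueError:
    pass
```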