Bug 2251192
| Summary: | [CephFS Vol Management] - Edit Subvolume Group is not working. | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Hemanth Kumar <hyelloji> |
| Component: | Ceph-Dashboard | Assignee: | Ivo Almeida <ialmeida> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | high | Docs Contact: | Akash Raj <akraj> |
| Priority: | unspecified | ||
| Version: | 7.0 | CC: | akraj, amk, ceph-eng-bugs, cephqe-warriors, ialmeida, khiremat, nia, pegonzal, saraut, tserlin, vshankar |
| Target Milestone: | --- | ||
| Target Release: | 7.1 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | ceph-18.2.1-79.el9cp | Doc Type: | Bug Fix |
| Doc Text: | Unset subvolume size is no longer set as 'infinite'. Previously, the unset subvolume size was set to 'infinite', resulting in the failure of the update. With this fix, the code that sets the size to 'infinite' is removed and the update works as expected. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2024-06-13 14:23:32 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 2267614, 2298578, 2298579 | ||
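To illustrate the pattern described in the Doc Text above, here is a minimal, hypothetical Python sketch; it is not the actual Dashboard code, and the helper name and argument handling are assumptions. The buggy behavior substituted the literal string 'infinite' for an unset size, which the volumes module rejects ("invalid size specified: 'infinite'"); the fix is to simply omit the size argument when the user has not set one.

def build_subvolumegroup_update_args(vol_name, group_name, size=None, mode=None,
                                     uid=None, gid=None):
    """Hypothetical helper: assemble arguments for a subvolume group update.

    Only fields the user actually set are included, so an unset size is
    never translated into the invalid literal 'infinite'.
    """
    args = {'vol_name': vol_name, 'group_name': group_name}

    # Buggy pattern (per the Doc Text): force a value even when size is unset.
    #     args['size'] = size if size else 'infinite'   # -> "invalid size specified"
    #
    # Fixed pattern: leave 'size' out entirely when it is not specified, so the
    # existing quota is preserved instead of being overwritten.
    if size is not None:
        args['size'] = int(size)
    if mode is not None:
        args['mode'] = mode
    if uid is not None:
        args['uid'] = uid
    if gid is not None:
        args['gid'] = gid
    return args

# Example: editing only the permissions should not touch the (unset) quota.
print(build_subvolumegroup_update_args('cephfs', 'upgrade_svg_0', mode='777'))
# {'vol_name': 'cephfs', 'group_name': 'upgrade_svg_0', 'mode': '777'}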
Hi Nizamudeen,

When we try to edit a subvolume group with only the permissions changed, we observe the error below:

"Failed to execute unknown task Failed to create subvolume group upgrade_svg_0: invalid size specified: 'infinite' 19/3/24 10:23 AM"

Detailed screenshots: https://docs.google.com/document/d/1wtSkM4LiUELa0SIzsLS0zrhrZxQLobOB8u9YoLfoO2c/edit

Regards,
Amarnath

Hi Nizam,

The same operation works fine from the CLI:
[ceph: root@ceph-upgarde-5-7-zg6iut-node1-installer /]# ceph fs subvolumegroup ls cephfs
[
    {
        "name": "upgrade_svg_0"
    },
    {
        "name": "upgrade_svg_1"
    }
]
[ceph: root@ceph-upgarde-5-7-zg6iut-node1-installer /]# ceph fs subvolumegroup info cephfs upgrade_svg_0
{
    "atime": "2024-03-16 13:04:39",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 1258291776,
    "created_at": "2024-03-16 13:04:39",
    "ctime": "2024-03-16 13:05:25",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.206.241:6789",
        "10.0.207.26:6789",
        "10.0.205.69:6789"
    ],
    "mtime": "2024-03-16 13:05:25",
    "uid": 0
}
[ceph: root@ceph-upgarde-5-7-zg6iut-node1-installer /]# ceph fs subvolumegroup create cephfs upgrade_svg_0 --mode 777
[ceph: root@ceph-upgarde-5-7-zg6iut-node1-installer /]# ceph fs subvolumegroup info cephfs upgrade_svg_0
{
    "atime": "2024-03-16 13:04:39",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 1258291776,
    "created_at": "2024-03-16 13:04:39",
    "ctime": "2024-03-21 05:16:21",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16895,
    "mon_addrs": [
        "10.0.206.241:6789",
        "10.0.207.26:6789",
        "10.0.205.69:6789"
    ],
    "mtime": "2024-03-16 13:05:25",
    "uid": 0
}
[ceph: root@ceph-upgarde-5-7-zg6iut-node1-installer /]#
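A short note on reading the output above: the mode field is reported in decimal. 16877 is octal 040755 (drwxr-xr-x) and 16895 is octal 040777 (drwxrwxrwx), which confirms that re-running subvolumegroup create with --mode 777 on the existing group updated its permissions (note the changed ctime) while leaving the quota untouched. A quick check in Python:

# Convert the decimal 'mode' values reported by 'subvolumegroup info' to octal.
for mode in (16877, 16895):
    print(mode, oct(mode))   # 16877 0o40755, 16895 0o40777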
Regards,
Amarnath
Hi All,

Edit is working fine from both the CLI and the UI, and changes made in one are reflected in the other:
[root@ceph-amk-bz-up-cjm2lg-node7 ~]# ceph fs subvolumegroup info cephfs subvol_group_1
{
    "atime": "2024-03-26 17:56:10",
    "bytes_pcent": "0.00",
    "bytes_quota": 10737418240,
    "bytes_used": 0,
    "created_at": "2024-03-26 17:56:10",
    "ctime": "2024-03-26 17:56:26",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16895,
    "mon_addrs": [
        "10.0.209.58:6789",
        "10.0.208.105:6789",
        "10.0.211.250:6789"
    ],
    "mtime": "2024-03-26 17:56:10",
    "uid": 0
}
[root@ceph-amk-bz-up-cjm2lg-node7 ~]# ceph fs subvolumegroup info cephfs subvol_group_1
{
    "atime": "2024-03-26 17:56:10",
    "bytes_pcent": "0.00",
    "bytes_quota": 10737418240,
    "bytes_used": 0,
    "created_at": "2024-03-26 17:56:10",
    "ctime": "2024-03-26 17:57:29",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.209.58:6789",
        "10.0.208.105:6789",
        "10.0.211.250:6789"
    ],
    "mtime": "2024-03-26 17:56:10",
    "uid": 0
}
[root@ceph-amk-bz-up-cjm2lg-node7 ~]# ceph fs subvolumegroup info cephfs subvol_group_1
{
    "atime": "2024-03-26 17:56:10",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2024-03-26 17:56:10",
    "ctime": "2024-03-26 17:57:41",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.209.58:6789",
        "10.0.208.105:6789",
        "10.0.211.250:6789"
    ],
    "mtime": "2024-03-26 17:56:10",
    "uid": 0
}
[root@ceph-amk-bz-up-cjm2lg-node7 ~]#
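For verification runs like the one above, a small helper that diffs two subvolumegroup info snapshots makes it easy to see exactly which fields an edit changed (here ctime plus mode, or bytes_quota/bytes_pcent). This is a hypothetical QA convenience script, not part of the product:

import json
import subprocess

def subvolumegroup_info(volume, group):
    """Return 'ceph fs subvolumegroup info' output as a dict."""
    out = subprocess.check_output(
        ['ceph', 'fs', 'subvolumegroup', 'info', volume, group])
    return json.loads(out)

def diff_info(before, after):
    """Report fields whose values differ between two info snapshots."""
    return {key: (before[key], after[key])
            for key in before if before.get(key) != after.get(key)}

# Assumed workflow: capture info, edit the group in the Dashboard UI,
# capture info again, then print what changed.
# before = subvolumegroup_info('cephfs', 'subvol_group_1')
# ... edit the group via the Dashboard UI ...
# after = subvolumegroup_info('cephfs', 'subvol_group_1')
# print(diff_info(before, after))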
Verified on Version:
[root@ceph-amk-bz-up-cjm2lg-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 18.2.1-86.el9cp (f725a46cdec1d9ec3a962666bb7c671d1a8b3893) reef (stable)": 3
    },
    "mgr": {
        "ceph version 18.2.1-86.el9cp (f725a46cdec1d9ec3a962666bb7c671d1a8b3893) reef (stable)": 2
    },
    "osd": {
        "ceph version 18.2.1-86.el9cp (f725a46cdec1d9ec3a962666bb7c671d1a8b3893) reef (stable)": 12
    },
    "mds": {
        "ceph version 18.2.1-86.el9cp (f725a46cdec1d9ec3a962666bb7c671d1a8b3893) reef (stable)": 3
    },
    "overall": {
        "ceph version 18.2.1-86.el9cp (f725a46cdec1d9ec3a962666bb7c671d1a8b3893) reef (stable)": 20
    }
}
[root@ceph-amk-bz-up-cjm2lg-node7 ~]#
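The same kind of check can be scripted: parse the ceph versions output and confirm every daemon reports a build at or above the fix version (ceph-18.2.1-79.el9cp per the Fixed In Version field). A hypothetical sketch:

import json
import subprocess

FIXED_BUILD = '18.2.1-79.el9cp'   # from the 'Fixed In Version' field

# 'ceph versions' prints JSON mapping daemon type -> {version string: count}.
versions = json.loads(subprocess.check_output(['ceph', 'versions']))

for daemon, builds in versions.items():
    for build, count in builds.items():
        print(daemon, count, build)
        # The verified cluster reports 18.2.1-86.el9cp, which is newer than
        # the fix build, so the fix is present on all daemons.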
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.
Created attachment 2001057 [details]
Edit Subvolume Group

Description of problem:
-----------------------
Editing of a Subvolume Group is not working - we are unable to edit any of the fields in a subvolume group apart from the subvolume group size.

Attaching the screenshot for reference - all the fields are greyed out and do not allow editing.

Version: 18.2.0-128.el9cp reef (stable)