Bug 2251192 - [CephFS Vol Management] - Edit Subvolume Group is not working.
Summary: [CephFS Vol Management] - Edit Subvolume Group is not working.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Dashboard
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 7.1
Assignee: Ivo Almeida
QA Contact: Amarnath
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2267614 2298578 2298579
 
Reported: 2023-11-23 11:35 UTC by Hemanth Kumar
Modified: 2024-11-16 04:25 UTC
CC List: 11 users

Fixed In Version: ceph-18.2.1-79.el9cp
Doc Type: Bug Fix
Doc Text:
.Unset subvolume size is no longer set as 'infinite'
Previously, the unset subvolume size was set to 'infinite', resulting in the failure of the update. With this fix, the code that sets the size to 'infinite' is removed and the update works as expected.
Clone Of:
Environment:
Last Closed: 2024-06-13 14:23:32 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph pull 55642 0 None open mgr/dashboard: fix subvolume group edit 2024-02-21 07:27:09 UTC
Red Hat Issue Tracker RHCEPH-7952 0 None None None 2023-11-23 11:36:03 UTC
Red Hat Issue Tracker RHCSDASH-1194 0 None None None 2023-11-23 11:36:13 UTC
Red Hat Product Errata RHSA-2024:3925 0 None None None 2024-06-13 14:23:38 UTC

Description Hemanth Kumar 2023-11-23 11:35:31 UTC
Created attachment 2001057 [details]
Edit Subvolume Group

Description of problem:
-----------------------
Editing a subvolume group is not working.

We are unable to edit any of the fields in a subvolume group apart from the subvolume group size.

Attaching a screenshot for reference: all the fields are greyed out and cannot be edited.


Version: 18.2.0-128.el9cp reef (stable)

Comment 10 Amarnath 2024-03-19 04:58:20 UTC
Hi Nizamudeen,

When we try to edit a subvolume group, changing only the permissions, we observe the following error:

"Failed to execute unknown task
Failed to create subvolume group upgrade_svg_0: invalid size specified: 'infinite'
19/3/24 10:23 AM"

Detailed Screenshots: https://docs.google.com/document/d/1wtSkM4LiUELa0SIzsLS0zrhrZxQLobOB8u9YoLfoO2c/edit
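
For context, this matches the fix described in the Doc Text above: a minimal sketch, assuming a hypothetical helper that builds the arguments for the 'ceph fs subvolumegroup create' call (which also updates an existing group). The point of the fix is to omit the size argument when no quota is set, instead of filling it with the display string 'infinite', which the mgr rejects:

    # Hypothetical helper for illustration, not the actual mgr/dashboard code.
    def build_update_args(vol_name, group_name, size_bytes=None, mode=None):
        args = {'vol_name': vol_name, 'group_name': group_name}
        # Fix: leave 'size' out when no quota is set; previously the string
        # 'infinite' was passed through, triggering the error above.
        if size_bytes is not None:
            args['size'] = int(size_bytes)
        if mode is not None:
            args['mode'] = mode
        return args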

Regards,
Amarnath

Comment 12 Amarnath 2024-03-21 05:18:35 UTC
Hi Nizam,

It works fine in the CLI:

[ceph: root@ceph-upgarde-5-7-zg6iut-node1-installer /]# ceph fs subvolumegroup ls cephfs
[
    {
        "name": "upgrade_svg_0"
    },
    {
        "name": "upgrade_svg_1"
    }
]
[ceph: root@ceph-upgarde-5-7-zg6iut-node1-installer /]# ceph fs subvolumegroup info cephfs upgrade_svg_0
{
    "atime": "2024-03-16 13:04:39",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 1258291776,
    "created_at": "2024-03-16 13:04:39",
    "ctime": "2024-03-16 13:05:25",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.206.241:6789",
        "10.0.207.26:6789",
        "10.0.205.69:6789"
    ],
    "mtime": "2024-03-16 13:05:25",
    "uid": 0
}
[ceph: root@ceph-upgarde-5-7-zg6iut-node1-installer /]# ceph fs subvolumegroup create cephfs upgrade_svg_0 --mode 777
[ceph: root@ceph-upgarde-5-7-zg6iut-node1-installer /]# ceph fs subvolumegroup info cephfs upgrade_svg_0
{
    "atime": "2024-03-16 13:04:39",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 1258291776,
    "created_at": "2024-03-16 13:04:39",
    "ctime": "2024-03-21 05:16:21",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16895,
    "mon_addrs": [
        "10.0.206.241:6789",
        "10.0.207.26:6789",
        "10.0.205.69:6789"
    ],
    "mtime": "2024-03-16 13:05:25",
    "uid": 0
}
[ceph: root@ceph-upgarde-5-7-zg6iut-node1-installer /]# 
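
The mode field in the info output is the raw st_mode as a decimal integer; decoding it confirms that re-running create with --mode 777 updated the existing group (a quick check in Python):

    >>> oct(16877), oct(16895)   # before and after --mode 777
    ('0o40755', '0o40777')       # directory 0755 -> directory 0777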


Regards,
Amarnath

Comment 14 Amarnath 2024-03-26 18:01:17 UTC
Hi All,

Editing works fine from both the CLI and the UI, and changes made in one are reflected in the other.



[root@ceph-amk-bz-up-cjm2lg-node7 ~]# ceph fs subvolumegroup info cephfs subvol_group_1
{
    "atime": "2024-03-26 17:56:10",
    "bytes_pcent": "0.00",
    "bytes_quota": 10737418240,
    "bytes_used": 0,
    "created_at": "2024-03-26 17:56:10",
    "ctime": "2024-03-26 17:56:26",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16895,
    "mon_addrs": [
        "10.0.209.58:6789",
        "10.0.208.105:6789",
        "10.0.211.250:6789"
    ],
    "mtime": "2024-03-26 17:56:10",
    "uid": 0
}
[root@ceph-amk-bz-up-cjm2lg-node7 ~]# ceph fs subvolumegroup info cephfs subvol_group_1
{
    "atime": "2024-03-26 17:56:10",
    "bytes_pcent": "0.00",
    "bytes_quota": 10737418240,
    "bytes_used": 0,
    "created_at": "2024-03-26 17:56:10",
    "ctime": "2024-03-26 17:57:29",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.209.58:6789",
        "10.0.208.105:6789",
        "10.0.211.250:6789"
    ],
    "mtime": "2024-03-26 17:56:10",
    "uid": 0
}
[root@ceph-amk-bz-up-cjm2lg-node7 ~]# ceph fs subvolumegroup info cephfs subvol_group_1
{
    "atime": "2024-03-26 17:56:10",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2024-03-26 17:56:10",
    "ctime": "2024-03-26 17:57:41",
    "data_pool": "cephfs.cephfs.data",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.209.58:6789",
        "10.0.208.105:6789",
        "10.0.211.250:6789"
    ],
    "mtime": "2024-03-26 17:56:10",
    "uid": 0
}
[root@ceph-amk-bz-up-cjm2lg-node7 ~]# 
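
The three outputs above show the same group after successive edits: first with a 10 GiB quota and mode 0o40777, then with the mode changed to 0o40755, and finally with the quota removed (bytes_quota back to 'infinite') without the update failing. A quick check in Python that the quota value is exactly 10 GiB:

    >>> 10 * 1024**3
    10737418240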

Verified on Version: 
[root@ceph-amk-bz-up-cjm2lg-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 18.2.1-86.el9cp (f725a46cdec1d9ec3a962666bb7c671d1a8b3893) reef (stable)": 3
    },
    "mgr": {
        "ceph version 18.2.1-86.el9cp (f725a46cdec1d9ec3a962666bb7c671d1a8b3893) reef (stable)": 2
    },
    "osd": {
        "ceph version 18.2.1-86.el9cp (f725a46cdec1d9ec3a962666bb7c671d1a8b3893) reef (stable)": 12
    },
    "mds": {
        "ceph version 18.2.1-86.el9cp (f725a46cdec1d9ec3a962666bb7c671d1a8b3893) reef (stable)": 3
    },
    "overall": {
        "ceph version 18.2.1-86.el9cp (f725a46cdec1d9ec3a962666bb7c671d1a8b3893) reef (stable)": 20
    }
}
[root@ceph-amk-bz-up-cjm2lg-node7 ~]#

Comment 16 errata-xmlrpc 2024-06-13 14:23:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

Comment 17 Red Hat Bugzilla 2024-11-16 04:25:14 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.

