Bug 2028416 - [CephFS] File Quota attributes not getting inherited to the cloned volume
Summary: [CephFS] File Quota attributes not getting inherited to the cloned volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.1
Assignee: Kotresh HR
QA Contact: Amarnath
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-12-02 11:08 UTC by Amarnath
Modified: 2022-04-04 10:23 UTC (History)
CC: 6 users

Fixed In Version: ceph-16.2.7-69.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-04 10:22:56 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 54121 0 None None None 2022-02-03 07:07:16 UTC
Red Hat Issue Tracker RHCEPH-2481 0 None None None 2021-12-02 11:16:05 UTC
Red Hat Product Errata RHSA-2022:1174 0 None None None 2022-04-04 10:23:24 UTC

Description Amarnath 2021-12-02 11:08:31 UTC
Description of problem:

File quota attributes are not inherited by the cloned subvolume.

Version-Release number of selected component (if applicable):
ceph version 16.2.0-146.el8cp (56f5e9cfe88a08b6899327eca5166ca1c4a392aa) pacific (stable)

How reproducible:


Steps to Reproduce:
1. Create a subvolume and mount it on a directory using a kernel mount.
2. Use setfattr to set both the file and byte quotas on the mounted directory.
3. Create a snapshot of the subvolume.
4. Create a clone of the subvolume from the snapshot.
5. Mount the cloned volume using ceph-fuse.
6. Check the quota attributes of the mounted directory.


Actual results:
The byte quota is inherited, whereas the file quota is not.

Expected results:
Both quotas should be inherited.
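The expected behavior can be modeled as a small sketch: when a clone is created from a snapshot, every ceph.quota.* extended attribute on the source root should be carried over to the clone. This is an illustrative model only (copy_quota_xattrs is a hypothetical helper), not the actual mgr/volumes implementation.

```python
# Illustrative model of the expected inheritance: cloning should copy
# every ceph.quota.* attribute from the snapshot source to the clone.
# copy_quota_xattrs is a hypothetical helper, not actual Ceph code.

def copy_quota_xattrs(source_xattrs: dict) -> dict:
    """Return the quota xattrs a clone should inherit from its source."""
    return {k: v for k, v in source_xattrs.items()
            if k.startswith("ceph.quota.")}

source = {
    "ceph.quota.max_files": "99",
    "ceph.quota.max_bytes": "9999",
    "ceph.dir.layout.pool": "cephfs_data",  # non-quota attr, not copied here
}

clone = copy_quota_xattrs(source)
print(clone)  # both quota attrs present, matching the fixed behavior
```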

Additional info:

commands executed:
[root@ceph-amk4-4nfp05-node7 ~]# ceph fs subvolume create cephfs subvol_clone_attr_vol --size 5368706371 --group_name subvolgroup_clone_attr_vol_1
[root@ceph-amk4-4nfp05-node7 ~]# ceph fs subvolume getpath cephfs subvol_clone_attr_vol subvolgroup_clone_attr_vol_1
/volumes/subvolgroup_clone_attr_vol_1/subvol_clone_attr_vol/4f8e849f-c82c-4aac-8ecc-8269f719d68b
[root@ceph-amk4-4nfp05-node7 ~]# mkdir /mnt/cephfs_kernel/
[root@ceph-amk4-4nfp05-node7 ~]# mount -t ceph 10.0.208.79,10.0.209.15,10.0.211.154:/volumes/subvolgroup_clone_attr_vol_1/subvol_clone_attr_vol/4f8e849f-c82c-4aac-8ecc-8269f719d68b /mnt/cephfs_kernel/ -o name=ceph-amk4-4nfp05-node7,secretfile=/etc/ceph/ceph-amk4-4nfp05-node7.secret

[root@ceph-amk4-4nfp05-node7 ~]# setfattr -n ceph.quota.max_files -v 99 /mnt/cephfs_kernel
[root@ceph-amk4-4nfp05-node7 ~]# setfattr -n ceph.quota.max_bytes -v 9999 /mnt/cephfs_kernel
[root@ceph-amk4-4nfp05-node7 ~]# ceph fs subvolume snapshot create cephfs subvol_clone_attr_vol snap_1 --group_name subvolgroup_clone_attr_vol_1
[root@ceph-amk4-4nfp05-node7 ~]#  ceph fs subvolume snapshot clone cephfs subvol_clone_attr_vol snap_1 clone_attr_vol_1 --group_name subvolgroup_clone_attr_vol_1
[root@ceph-amk4-4nfp05-node7 ~]# ceph fs clone status cephfs clone_attr_vol_1
{
  "status": {
    "state": "complete"
  }
}
[root@ceph-amk4-4nfp05-node7 ~]# ceph fs subvolume getpath cephfs clone_attr_vol_1
/volumes/_nogroup/clone_attr_vol_1/66565653-1e50-4fdf-85ae-417ff479f5df
[root@ceph-amk4-4nfp05-node7 ~]# mkdir /mnt/fuse_mount
[root@ceph-amk4-4nfp05-node7 ~]# ceph-fuse -n client.ceph-amk4-4nfp05-node7 /mnt/fuse_mount/ -r /volumes/_nogroup/clone_attr_vol_1/66565653-1e50-4fdf-85ae-417ff479f5df
2021-12-02T05:55:52.903-0500 7f05fc87b200 -1 init, newargv = 0x55a185b6da50 newargc=15
ceph-fuse[21919]: starting ceph client
ceph-fuse[21919]: starting fuse
[root@ceph-amk4-4nfp05-node7 ~]# getfattr -n ceph.quota.max_files /mnt/fuse_mount/
getfattr: Removing leading '/' from absolute path names
# file: mnt/fuse_mount/
ceph.quota.max_files="0"

[root@ceph-amk4-4nfp05-node7 ~]# getfattr -n ceph.quota.max_files /mnt/cephfs_kernel
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_kernel
ceph.quota.max_files="99"

[root@ceph-amk4-4nfp05-node7 ~]# getfattr -n ceph.quota.max_bytes /mnt/cephfs_kernel
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_kernel
ceph.quota.max_bytes="9999"

[root@ceph-amk4-4nfp05-node7 ~]# getfattr -n ceph.quota.max_bytes /mnt/fuse_mount/
getfattr: Removing leading '/' from absolute path names
# file: mnt/fuse_mount/
ceph.quota.max_bytes="9999"

[root@ceph-amk4-4nfp05-node7 ~]# ceph --version
ceph version 16.2.0-146.el8cp (56f5e9cfe88a08b6899327eca5166ca1c4a392aa) pacific (stable)
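The mismatch in the transcript above can also be checked programmatically. A minimal sketch, assuming the getfattr output has already been parsed into dicts (missing_quotas is a hypothetical helper, not part of any Ceph tooling):

```python
# Sketch of a quota-inheritance check, comparing parsed xattr values of
# the source subvolume and its clone. The dicts below mirror the
# getfattr output in this report.

def missing_quotas(source: dict, clone: dict) -> list:
    """Return quota attrs set on the source but absent or zero on the clone."""
    return [k for k, v in source.items()
            if k.startswith("ceph.quota.") and clone.get(k, "0") in ("0", None)]

source = {"ceph.quota.max_files": "99", "ceph.quota.max_bytes": "9999"}
clone = {"ceph.quota.max_files": "0", "ceph.quota.max_bytes": "9999"}

print(missing_quotas(source, clone))  # only the file quota failed to propagate
```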

Comment 7 Amarnath 2022-02-23 12:06:44 UTC
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# ceph fs subvolumegroup create cephfs subvolgroup_clone_attr_vol_1
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# ceph fs subvolume create cephfs subvol_clone_attr_vol --size 5368706371 --group_name subvolgroup_clone_attr_vol_1
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# ceph fs subvolume getpath cephfs subvol_clone_attr_vol subvolgroup_clone_attr_vol_1
/volumes/subvolgroup_clone_attr_vol_1/subvol_clone_attr_vol/69d1e56a-434d-4397-b402-3339ce7b10c4
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# mkdir /mnt/cephfs_kernel/

[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# mount -t ceph 10.0.211.90,10.0.208.78,10.0.210.66:/volumes/subvolgroup_clone_attr_vol_1/subvol_clone_attr_vol/69d1e56a-434d-4397-b402-3339ce7b10c4 /mnt/cephfs_kernel/ -o name=ceph-upgrade-5-0-zcrq6x-node7,secretfile=/etc/ceph/ceph-upgrade-5-0-zcrq6x-node7.secret
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# setfattr -n ceph.quota.max_files -v 99 /mnt/cephfs_kernel
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# setfattr -n ceph.quota.max_bytes -v 9999 /mnt/cephfs_kernel
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# ceph fs subvolume snapshot create cephfs subvol_clone_attr_vol snap_1 --group_name subvolgroup_clone_attr_vol_1
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_clone_attr_vol snap_1 clone_attr_vol_1 --group_name subvolgroup_clone_attr_vol_1
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# ceph fs clone status cephfs clone_attr_vol_1
{
  "status": {
    "state": "pending",
    "source": {
      "volume": "cephfs",
      "subvolume": "subvol_clone_attr_vol",
      "snapshot": "snap_1",
      "group": "subvolgroup_clone_attr_vol_1"
    }
  }
}

[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# ceph fs clone status cephfs clone_attr_vol_1
{
  "status": {
    "state": "complete"
  }
}
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# ceph fs clone status cephfs clone_attr_vol_1
{
  "status": {
    "state": "complete"
  }
}
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# ceph fs subvolume getpath cephfs clone_attr_vol_1
/volumes/_nogroup/clone_attr_vol_1/342d3ff3-815a-4423-be2d-2f2b3a06904a
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# mkdir /mnt/fuse_mount
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# ceph-fuse -n client.ceph-upgrade-5-0-zcrq6x-node7 /mnt/fuse_mount/ -r /volumes/_nogroup/clone_attr_vol_1/342d3ff3-815a-4423-be2d-2f2b3a06904a
2022-02-23T06:56:44.114-0500 7f8dd757a200 -1 init, newargv = 0x564335bce970 newargc=15
ceph-fuse[7349]: starting ceph client
ceph-fuse[7349]: starting fuse
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# getfattr -n ceph.quota.max_files /mnt/fuse_mount/
getfattr: Removing leading '/' from absolute path names
# file: mnt/fuse_mount/
ceph.quota.max_files="99"

[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# getfattr -n ceph.quota.max_files /mnt/cephfs_kernel
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_kernel
ceph.quota.max_files="99"

[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# getfattr -n ceph.quota.max_bytes /mnt/cephfs_kernel
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_kernel
ceph.quota.max_bytes="9999"

[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# getfattr -n ceph.quota.max_bytes /mnt/fuse_mount/
getfattr: Removing leading '/' from absolute path names
# file: mnt/fuse_mount/
ceph.quota.max_bytes="9999"

[root@ceph-upgrade-5-0-zcrq6x-node7 ~]# ceph --version
ceph version 16.2.7-69.el8cp (3eaf40c02886a02f9b172579ac6048bad587b63b) pacific (stable)
[root@ceph-upgrade-5-0-zcrq6x-node7 ~]#

Comment 9 errata-xmlrpc 2022-04-04 10:22:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174

