+++ This bug was initially created as a clone of Bug #2052927 +++

Description of problem:
The uid/gid of a subvolume cloned from a snapshot is incorrect if a quota is set on the source snapshot. This is a regression caused by the fix for https://bugzilla.redhat.com/show_bug.cgi?id=2039276

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce (see the command sketch after this comment):
1. Create a subvolume whose uid differs from its gid and set a quota on it.
2. Create a snapshot of the above subvolume.
3. Create a clone from the above snapshot.
4. Verify that the uid of the clone is incorrect.

Actual results:
The uid/gid of the clone doesn't match the source snapshot.

Expected results:
The uid/gid of the clone should match the source snapshot.

Additional info:

--- Additional comment from RHEL Program Management on 2022-02-10 09:58:10 UTC ---

Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.
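For reference, the reproduction steps above map to roughly the following commands. This is a minimal sketch, not the exact reporter's procedure: the filesystem/subvolume/snapshot names (cephfs, sv_1, sp_1, c_1) and the /mnt/cephfs mount point are illustrative, and the quota is set through the ceph.quota.max_bytes xattr on a client mount, as in the verification transcript further below.

# Step 1: subvolume whose uid differs from its gid, plus a quota on its path
ceph fs subvolume create cephfs sv_1 --uid 20 --gid 21
setfattr -n ceph.quota.max_bytes -v 1000 /mnt/cephfs/volumes/_nogroup/sv_1/
# Steps 2-3: snapshot the subvolume, then clone from the snapshot
ceph fs subvolume snapshot create cephfs sv_1 sp_1
ceph fs subvolume snapshot clone cephfs sv_1 sp_1 c_1
# Step 4: with the bug present, the uid/gid here do not match the source
ceph fs subvolume info cephfs c_1 | grep -E '"(uid|gid)"'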
https://github.com/ceph/ceph/pull/45205 is in v16.2.8.
Tested on the below version; the uid and gid of the cloned volume remain intact:

[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolumegroup create cephfs svg_1 --uid 10 --gid 11
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume create cephfs sv_1 --group_name svg_1 --uid 20 --gid 21
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume info cephfs sv_1 --group_name svg_1
{
    "atime": "2022-05-30 12:40:00",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-05-30 12:40:00",
    "ctime": "2022-05-30 12:40:00",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 21,
    "mode": 16877,
    "mon_addrs": [
        "10.0.209.159:6789",
        "10.0.210.94:6789",
        "10.0.210.59:6789"
    ],
    "mtime": "2022-05-30 12:40:00",
    "path": "/volumes/svg_1/sv_1/aff6f285-0f75-4fad-8de1-813e390f7a01",
    "pool_namespace": "",
    "state": "complete",
    "type": "subvolume",
    "uid": 20
}
[root@ceph-amk-bz-1-gtdt09-node7 ~]# mkdir /mnt/cephfs_fuse_root
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph-fuse /mnt/cephfs_fuse_root/
ceph-fuse[10173]: starting ceph client
2022-05-30T08:41:42.473-0400 7fedc9ff4380 -1 init, newargv = 0x56030103b280 newargc=15
ceph-fuse[10173]: starting fuse
[root@ceph-amk-bz-1-gtdt09-node7 ~]# setfattr -n ceph.quota.max_bytes -v 1000 /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
[root@ceph-amk-bz-1-gtdt09-node7 ~]# setfattr -n ceph.quota.max_files -v 1000 /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
[root@ceph-amk-bz-1-gtdt09-node7 ~]# getfattr -n ceph.quota.max_files /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
ceph.quota.max_files="1000"
[root@ceph-amk-bz-1-gtdt09-node7 ~]# getfattr -n ceph.quota.max_bytes /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
ceph.quota.max_bytes="1000"
[root@ceph-amk-bz-1-gtdt09-node7 ~]# vi /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/test_file.txt
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume snapshot create cephfs sv_1 sp_1 --group_name svg_1
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume snapshot clone cephfs sv_1 sp_1 c_1 --group_name svg_1
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume info cephfs c_1
{
    "atime": "2022-05-30 12:40:00",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-05-30 12:43:22",
    "ctime": "2022-05-30 12:43:23",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 21,
    "mode": 16877,
    "mon_addrs": [
        "10.0.209.159:6789",
        "10.0.210.94:6789",
        "10.0.210.59:6789"
    ],
    "mtime": "2022-05-30 12:40:00",
    "path": "/volumes/_nogroup/c_1/76a8c74e-1b97-4b48-a8a4-e3eca2de5e93",
    "pool_namespace": "",
    "state": "complete",
    "type": "clone",
    "uid": 20
}
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph version
ceph version 16.2.8-19.el8cp (6a4efed655707266ca3963489591c8a2c4df6949) pacific (stable)
[root@ceph-amk-bz-1-gtdt09-node7 ~]#
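For a scripted check, the uid/gid comparison above can be automated roughly as follows. This is a sketch, not part of the original verification; it assumes jq is installed and reuses the subvolume/clone names from the transcript. ceph fs subvolume info emits JSON, so the uid and gid fields can be extracted directly.

# Fetch the JSON metadata for source subvolume and clone
src=$(ceph fs subvolume info cephfs sv_1 --group_name svg_1)
clone=$(ceph fs subvolume info cephfs c_1)
# Compare uid and gid; a mismatch indicates the regression is present
[ "$(echo "$src" | jq .uid)" = "$(echo "$clone" | jq .uid)" ] && \
[ "$(echo "$src" | jq .gid)" = "$(echo "$clone" | jq .gid)" ] && \
echo "uid/gid match" || echo "uid/gid MISMATCH"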
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5997