Bug 2052936 - CephFS: mgr/volumes: the subvolume snapshot clone's uid/gid is incorrect
Summary: CephFS: mgr/volumes: the subvolume snapshot clone's uid/gid is incorrect
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 5.2
Assignee: Kotresh HR
QA Contact: Amarnath
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On: 2052927
Blocks: 2102272
 
Reported: 2022-02-10 10:04 UTC by Kotresh HR
Modified: 2022-08-09 17:38 UTC
CC: 8 users

Fixed In Version: ceph-16.2.8-2.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of: 2052927
Environment:
Last Closed: 2022-08-09 17:37:27 UTC
Embargoed:




Links
  Ceph Project Bug Tracker 54066 (2022-02-10 10:04:41 UTC)
  Red Hat Issue Tracker RHCEPH-3139 (2022-02-10 10:07:19 UTC)
  Red Hat Product Errata RHSA-2022:5997 (2022-08-09 17:38:03 UTC)

Description Kotresh HR 2022-02-10 10:04:42 UTC
+++ This bug was initially created as a clone of Bug #2052927 +++

Description of problem:
The uid/gid of a subvolume cloned from a snapshot is incorrect if a quota is set on the source snapshot. This is a regression introduced by the fix for https://bugzilla.redhat.com/show_bug.cgi?id=2039276.
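
On a mounted client, the triggering condition shows up as a quota xattr on the source subvolume directory (the path below is an example; the actual path can be read from ceph fs subvolume getpath, and the same check appears in the verification transcript in comment 8):

# a source subvolume with ceph.quota.max_bytes set triggers the bug
getfattr -n ceph.quota.max_bytes /mnt/cephfs/volumes/_nogroup/sv_1/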

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a subvolume whose uid differs from its gid and set a quota on it.
2. Create a snapshot of the above subvolume.
3. Create a clone from the above snapshot.
4. Observe that the uid/gid of the clone is incorrect (see the command sketch below).
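
A minimal CLI reproduction of the steps above, assuming a filesystem named cephfs and a client mount at /mnt/cephfs (the names and the mount point are placeholders; the subvolume commands match the verification transcript in comment 8):

# step 1: subvolume whose uid differs from its gid, plus a quota
ceph fs subvolume create cephfs sv_1 --uid 20 --gid 21
setfattr -n ceph.quota.max_bytes -v 1000 /mnt/cephfs/volumes/_nogroup/sv_1/
# step 2: snapshot the subvolume
ceph fs subvolume snapshot create cephfs sv_1 sp_1
# step 3: clone the snapshot
ceph fs subvolume snapshot clone cephfs sv_1 sp_1 c_1
# step 4: on an affected build, the uid/gid reported here differ from the source
ceph fs subvolume info cephfs c_1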

Actual results:
The uid/gid of the clone does not match that of the source snapshot.

Expected results:
The uid/gid of the clone should match that of the source snapshot.
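
A quick check of the expected behaviour, using the names from the reproduction sketch above (jq is an assumption here, used only to pick the fields out of the JSON that ceph fs subvolume info prints):

# both commands should report the same uid/gid (20/21 in the repro above)
ceph fs subvolume info cephfs sv_1 | jq '{uid, gid}'
ceph fs subvolume info cephfs c_1 | jq '{uid, gid}'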

Additional info:


Comment 1 RHEL Program Management 2022-02-10 10:04:49 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 4 Ken Dreyer (Red Hat) 2022-05-24 23:08:06 UTC
https://github.com/ceph/ceph/pull/45205 is in v16.2.8.

Comment 8 Amarnath 2022-05-30 12:44:59 UTC
Tested on the version below, and I see the gid and uid intact on the cloned volume:

[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolumegroup create cephfs svg_1 --uid 10 --gid 11
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume create cephfs sv_1 --group_name svg_1 --uid 20 --gid 21
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume info cephfs sv_1 --group_name svg_1
{
    "atime": "2022-05-30 12:40:00",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-05-30 12:40:00",
    "ctime": "2022-05-30 12:40:00",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 21,
    "mode": 16877,
    "mon_addrs": [
        "10.0.209.159:6789",
        "10.0.210.94:6789",
        "10.0.210.59:6789"
    ],
    "mtime": "2022-05-30 12:40:00",
    "path": "/volumes/svg_1/sv_1/aff6f285-0f75-4fad-8de1-813e390f7a01",
    "pool_namespace": "",
    "state": "complete",
    "type": "subvolume",
    "uid": 20
}
[root@ceph-amk-bz-1-gtdt09-node7 ~]# mkdir /mnt/cephfs_fuse_root
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph-fuse /mnt/cephfs_fuse_root/
ceph-fuse[10173]: starting ceph client
2022-05-30T08:41:42.473-0400 7fedc9ff4380 -1 init, newargv = 0x56030103b280 newargc=15
ceph-fuse[10173]: starting fuse
[root@ceph-amk-bz-1-gtdt09-node7 ~]# setfattr -n ceph.quota.max_bytes -v 1000 /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
[root@ceph-amk-bz-1-gtdt09-node7 ~]# setfattr -n ceph.quota.max_files -v 1000 /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
[root@ceph-amk-bz-1-gtdt09-node7 ~]# getfattr -n ceph.quota.max_files /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
ceph.quota.max_files="1000"

[root@ceph-amk-bz-1-gtdt09-node7 ~]# getfattr -n ceph.quota.max_bytes /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
ceph.quota.max_bytes="1000"

[root@ceph-amk-bz-1-gtdt09-node7 ~]# vi /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/test_file.txt
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume snapshot create cephfs sv_1 sp_1 --group_name svg_1
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume snapshot clone cephfs sv_1 sp_1 c_1 --group_name svg_1
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume info cephfs c_1
{
    "atime": "2022-05-30 12:40:00",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-05-30 12:43:22",
    "ctime": "2022-05-30 12:43:23",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 21,
    "mode": 16877,
    "mon_addrs": [
        "10.0.209.159:6789",
        "10.0.210.94:6789",
        "10.0.210.59:6789"
    ],
    "mtime": "2022-05-30 12:40:00",
    "path": "/volumes/_nogroup/c_1/76a8c74e-1b97-4b48-a8a4-e3eca2de5e93",
    "pool_namespace": "",
    "state": "complete",
    "type": "clone",
    "uid": 20
}
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph version
ceph version 16.2.8-19.el8cp (6a4efed655707266ca3963489591c8a2c4df6949) pacific (stable)
[root@ceph-amk-bz-1-gtdt09-node7 ~]#

Comment 12 errata-xmlrpc 2022-08-09 17:37:27 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997

