Bug 2052927 - CephFS: mgr/volumes: the subvolume snapshot clone's uid/gid is incorrect
Summary: CephFS: mgr/volumes: the subvolume snapshot clone's uid/gid is incorrect
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 5.1
Assignee: Kotresh HR
QA Contact: Amarnath
URL:
Whiteboard:
Depends On:
Blocks: 2052936
 
Reported: 2022-02-10 09:58 UTC by Kotresh HR
Modified: 2022-04-04 10:24 UTC
CC List: 5 users

Fixed In Version: ceph-16.2.7-59.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2052936
Environment:
Last Closed: 2022-04-04 10:24:10 UTC
Embargoed:




Links
Ceph Project Bug Tracker 54066 (last updated 2022-02-10 09:58:02 UTC)
Red Hat Issue Tracker RHCEPH-3138 (last updated 2022-02-10 10:05:33 UTC)
Red Hat Product Errata RHSA-2022:1174 (last updated 2022-04-04 10:24:30 UTC)

Description Kotresh HR 2022-02-10 09:58:03 UTC
Description of problem:
The uid/gid of a subvolume cloned from a snapshot is incorrect if a quota is set on the source subvolume. This is a regression caused by the fix for https://bugzilla.redhat.com/show_bug.cgi?id=2039276.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a subvolume with a uid/gid different from that of its subvolume group, and set a quota on it.
2. Create a snapshot of the above subvolume.
3. Create a clone from the above snapshot.
4. Check the uid/gid of the clone; it is incorrect (see the command sketch after these steps).
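
For reference, a minimal command sketch of these steps, modeled on the session in comment 7 (the volume name cephfs, the group/subvolume/snapshot/clone names, the uid/gid values, the quota size, and the fuse mount point are illustrative assumptions):

# Step 1: create a group and a subvolume whose uid/gid differ from the group's
ceph fs subvolumegroup create cephfs svg_1 --uid 10 --gid 11
ceph fs subvolume create cephfs sv_1 --group_name svg_1 --uid 20 --gid 21

# Step 1 (quota): mount the filesystem and set a quota on the subvolume path
mkdir -p /mnt/cephfs_fuse_root
ceph-fuse /mnt/cephfs_fuse_root/
setfattr -n ceph.quota.max_bytes -v 1000 /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/

# Steps 2 and 3: snapshot the subvolume, then clone the snapshot
ceph fs subvolume snapshot create cephfs sv_1 sp_1 --group_name svg_1
ceph fs subvolume snapshot clone cephfs sv_1 sp_1 c_1 --group_name svg_1

# Step 4: compare the uid/gid fields of the two outputs; on affected builds the
# clone's uid/gid do not match the source subvolume's values (20/21 here)
ceph fs subvolume info cephfs sv_1 --group_name svg_1
ceph fs subvolume info cephfs c_1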

Actual results:
The uid/gid of the clone doesn't match the source snapshot

Expected results:
The uid/gid of the clone should match that of the source snapshot

Additional info:

Comment 1 RHEL Program Management 2022-02-10 09:58:10 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 7 Amarnath 2022-02-17 18:05:26 UTC
Tested on the below version, and I see the gid and uid intact on the cloned subvolume
 
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph version
ceph version 16.2.7-62.el8cp (02084e5d310344421d265e453c02a7a16a9e6e36) pacific (stable)
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolumegroup create cephfs svg_1 --uid 10 --gid 11
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolume create cephfs sv_1 --group_name svg_1 --uid 20 --gid 21
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolume info cephfs sv_1 --group_name svg_1
{
    "atime": "2022-02-17 17:58:41",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-02-17 17:58:41",
    "ctime": "2022-02-17 17:58:42",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 21,
    "mode": 16877,
    "mon_addrs": [
        "10.0.211.244:6789",
        "10.0.209.246:6789",
        "10.0.211.86:6789"
    ],
    "mtime": "2022-02-17 17:58:41",
    "path": "/volumes/svg_1/sv_1/81743acd-022f-4a26-8b17-73be4f5d1dfe",
    "pool_namespace": "",
    "state": "complete",
    "type": "subvolume",
    "uid": 20
}
[root@ceph-bz-mds-3l0f2m-node8 ~]# mkdir /mnt/cephfs_fuse_root
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph-fuse /mnt/cephfs_fuse_root/
2022-02-17T12:59:27.105-0500 7f00fe4f2200 -1 init, newargv = 0x56260e67c070 newargc=15
ceph-fuse[3299]: starting ceph client
ceph-fuse[3299]: starting fuse
[root@ceph-bz-mds-3l0f2m-node8 ~]# setfattr -n ceph.quota.max_bytes -v 1000 /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
[root@ceph-bz-mds-3l0f2m-node8 ~]# setfattr -n ceph.quota.max_files -v 1000 /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
[root@ceph-bz-mds-3l0f2m-node8 ~]# getfattr -n ceph.quota.max_files /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
ceph.quota.max_files="1000"

[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolume snapshot create cephfs sv_1 sp_1 --group_name svg_1
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolume snapshot clone cephfs sv_1 sp_1 c_1 --group_name svg_1
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolume info cephfs c_1
{
    "atime": "2022-02-17 17:58:41",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-02-17 18:00:17",
    "ctime": "2022-02-17 18:00:17",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 21,
    "mode": 16877,
    "mon_addrs": [
        "10.0.211.244:6789",
        "10.0.209.246:6789",
        "10.0.211.86:6789"
    ],
    "mtime": "2022-02-17 17:58:41",
    "path": "/volumes/_nogroup/c_1/d65a0397-312f-4b73-817a-cd28bdf5aa3d",
    "pool_namespace": "",
    "state": "complete",
    "type": "clone",
    "uid": 20
}
[root@ceph-bz-mds-3l0f2m-node8 ~]#
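
A quick programmatic check of the same result (a sketch assuming jq is installed on the node; the subvolume and clone names follow the session above):

ceph fs subvolume info cephfs sv_1 --group_name svg_1 | jq '{uid, gid}'
ceph fs subvolume info cephfs c_1 | jq '{uid, gid}'
# On a fixed build (ceph-16.2.7-59.el8cp or later) both commands should report
# uid 20 and gid 21; on affected builds the clone's values differ.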

Comment 9 errata-xmlrpc 2022-04-04 10:24:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174

