Bug 2052936
| Summary: | CephFS: mgr/volumes: the subvolume snapshot clone's uid/gid is incorrect | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Kotresh HR <khiremat> |
| Component: | CephFS | Assignee: | Kotresh HR <khiremat> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | high | Docs Contact: | Akash Raj <akraj> |
| Priority: | medium | | |
| Version: | 5.1 | CC: | akraj, ceph-eng-bugs, hyelloji, kdreyer, tserlin, vereddy, vshankar, ymane |
| Target Milestone: | --- | Keywords: | Rebase |
| Target Release: | 5.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.8-2.el8cp | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 2052927 | Environment: | |
| Last Closed: | 2022-08-09 17:37:27 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 2052927 | | |
| Bug Blocks: | 2102272 | | |
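The summary describes the defect: a subvolume snapshot clone did not carry over the uid/gid of the source subvolume. A minimal sketch of the check, using hypothetical names (src_sv, snap0, clone0, not taken from this report); on an affected build the final command would report default ownership rather than the source's 20/21:

```
# Hypothetical names; this mirrors the verification steps later in this report.
ceph fs subvolume create cephfs src_sv --uid 20 --gid 21
ceph fs subvolume snapshot create cephfs src_sv snap0
ceph fs subvolume snapshot clone cephfs src_sv snap0 clone0
# Expected with the fix (ceph-16.2.8-2.el8cp and later): uid 20, gid 21.
ceph fs subvolume info cephfs clone0 | grep -E '"(uid|gid)"'
```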
Description

Kotresh HR, 2022-02-10 10:04:42 UTC
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

https://github.com/ceph/ceph/pull/45205 is in v16.2.8.

Tested on the version below; the uid and gid remain intact on the cloned volume:

```
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolumegroup create cephfs svg_1 --uid 10 --gid 11
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume create cephfs sv_1 --group_name svg_1 --uid 20 --gid 21
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume info cephfs sv_1 --group_name svg_1
{
    "atime": "2022-05-30 12:40:00",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-05-30 12:40:00",
    "ctime": "2022-05-30 12:40:00",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 21,
    "mode": 16877,
    "mon_addrs": [
        "10.0.209.159:6789",
        "10.0.210.94:6789",
        "10.0.210.59:6789"
    ],
    "mtime": "2022-05-30 12:40:00",
    "path": "/volumes/svg_1/sv_1/aff6f285-0f75-4fad-8de1-813e390f7a01",
    "pool_namespace": "",
    "state": "complete",
    "type": "subvolume",
    "uid": 20
}
[root@ceph-amk-bz-1-gtdt09-node7 ~]# mkdir /mnt/cephfs_fuse_root
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph-fuse /mnt/cephfs_fuse_root/
ceph-fuse[10173]: starting ceph client
2022-05-30T08:41:42.473-0400 7fedc9ff4380 -1 init, newargv = 0x56030103b280 newargc=15
ceph-fuse[10173]: starting fuse
[root@ceph-amk-bz-1-gtdt09-node7 ~]# setfattr -n ceph.quota.max_bytes -v 1000 /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
[root@ceph-amk-bz-1-gtdt09-node7 ~]# setfattr -n ceph.quota.max_files -v 1000 /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
[root@ceph-amk-bz-1-gtdt09-node7 ~]# getfattr -n ceph.quota.max_files /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
ceph.quota.max_files="1000"
[root@ceph-amk-bz-1-gtdt09-node7 ~]# getfattr -n ceph.quota.max_bytes /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
ceph.quota.max_bytes="1000"
[root@ceph-amk-bz-1-gtdt09-node7 ~]# vi /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/test_file.txt
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume snapshot create cephfs sv_1 sp_1 --group_name svg_1
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume snapshot clone cephfs sv_1 sp_1 c_1 --group_name svg_1
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph fs subvolume info cephfs c_1
{
    "atime": "2022-05-30 12:40:00",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-05-30 12:43:22",
    "ctime": "2022-05-30 12:43:23",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 21,
    "mode": 16877,
    "mon_addrs": [
        "10.0.209.159:6789",
        "10.0.210.94:6789",
        "10.0.210.59:6789"
    ],
    "mtime": "2022-05-30 12:40:00",
    "path": "/volumes/_nogroup/c_1/76a8c74e-1b97-4b48-a8a4-e3eca2de5e93",
    "pool_namespace": "",
    "state": "complete",
    "type": "clone",
    "uid": 20
}
[root@ceph-amk-bz-1-gtdt09-node7 ~]# ceph version
ceph version 16.2.8-19.el8cp (6a4efed655707266ca3963489591c8a2c4df6949) pacific (stable)
[root@ceph-amk-bz-1-gtdt09-node7 ~]#
```
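The manual comparison above can be scripted for regression runs. A minimal sketch, assuming `jq` is installed and reusing the names from the transcript (the snapshot and clone are created here, so they must not already exist); it is illustrative only, not part of any official test suite:

```bash
#!/usr/bin/env bash
# Sketch: check that a subvolume snapshot clone inherits the source's uid/gid.
# Assumes jq is installed and the source subvolume sv_1 exists in group svg_1.
set -euo pipefail

VOL=cephfs; GROUP=svg_1; SRC=sv_1; SNAP=sp_1; CLONE=c_1

ceph fs subvolume snapshot create "$VOL" "$SRC" "$SNAP" --group_name "$GROUP"
ceph fs subvolume snapshot clone "$VOL" "$SRC" "$SNAP" "$CLONE" --group_name "$GROUP"

# Poll until the clone reaches state "complete".
until [ "$(ceph fs clone status "$VOL" "$CLONE" | jq -r '.status.state')" = complete ]; do
    sleep 1
done

src=$(ceph fs subvolume info "$VOL" "$SRC" --group_name "$GROUP" | jq -c '[.uid, .gid]')
cln=$(ceph fs subvolume info "$VOL" "$CLONE" | jq -c '[.uid, .gid]')

if [ "$src" = "$cln" ]; then
    echo "PASS: clone uid/gid $cln match source"
else
    echo "FAIL: source $src vs clone $cln" >&2
    exit 1
fi
```

Polling `ceph fs subvolume info` for `"state": "complete"`, as the transcript does, would also work; `ceph fs clone status` is simply the dedicated command for clone progress.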
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997