Bug 2052927
| Summary: | CephFS: mgr/volumes: the subvolume snapshot clone's uid/gid is incorrect | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Kotresh HR <khiremat> |
| Component: | CephFS | Assignee: | Kotresh HR <khiremat> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 5.1 | CC: | ceph-eng-bugs, ceph-qe-bugs, tserlin, vereddy, vshankar |
| Target Milestone: | --- | | |
| Target Release: | 5.1 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.7-59.el8cp | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 2052936 (view as bug list) | Environment: | |
| Last Closed: | 2022-04-04 10:24:10 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2052936 | | |
Description (Kotresh HR, 2022-02-10 09:58:03 UTC)
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Tested on the version below; the uid and gid are intact on the cloned subvolume:

```
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph version
ceph version 16.2.7-62.el8cp (02084e5d310344421d265e453c02a7a16a9e6e36) pacific (stable)
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolumegroup create cephfs svg_1 --uid 10 --gid 11
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolume create cephfs sv_1 --group_name svg_1 --uid 20 --gid 21
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolume info cephfs sv_1 --group_name svg_1
{
    "atime": "2022-02-17 17:58:41",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-02-17 17:58:41",
    "ctime": "2022-02-17 17:58:42",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 21,
    "mode": 16877,
    "mon_addrs": [
        "10.0.211.244:6789",
        "10.0.209.246:6789",
        "10.0.211.86:6789"
    ],
    "mtime": "2022-02-17 17:58:41",
    "path": "/volumes/svg_1/sv_1/81743acd-022f-4a26-8b17-73be4f5d1dfe",
    "pool_namespace": "",
    "state": "complete",
    "type": "subvolume",
    "uid": 20
}
[root@ceph-bz-mds-3l0f2m-node8 ~]# mkdir /mnt/cephfs_fuse_root
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph-fuse /mnt/cephfs_fuse_root/
2022-02-17T12:59:27.105-0500 7f00fe4f2200 -1 init, newargv = 0x56260e67c070 newargc=15
ceph-fuse[3299]: starting ceph client
ceph-fuse[3299]: starting fuse
[root@ceph-bz-mds-3l0f2m-node8 ~]# setfattr -n ceph.quota.max_bytes -v 1000 /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
[root@ceph-bz-mds-3l0f2m-node8 ~]# setfattr -n ceph.quota.max_files -v 1000 /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
[root@ceph-bz-mds-3l0f2m-node8 ~]# getfattr -n ceph.quota.max_files /mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_fuse_root/volumes/svg_1/sv_1/
ceph.quota.max_files="1000"
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolume snapshot create cephfs sv_1 sp_1 --group_name svg_1
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolume snapshot clone cephfs sv_1 sp_1 c_1 --group_name svg_1
[root@ceph-bz-mds-3l0f2m-node8 ~]# ceph fs subvolume info cephfs c_1
{
    "atime": "2022-02-17 17:58:41",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-02-17 18:00:17",
    "ctime": "2022-02-17 18:00:17",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 21,
    "mode": 16877,
    "mon_addrs": [
        "10.0.211.244:6789",
        "10.0.209.246:6789",
        "10.0.211.86:6789"
    ],
    "mtime": "2022-02-17 17:58:41",
    "path": "/volumes/_nogroup/c_1/d65a0397-312f-4b73-817a-cd28bdf5aa3d",
    "pool_namespace": "",
    "state": "complete",
    "type": "clone",
    "uid": 20
}
[root@ceph-bz-mds-3l0f2m-node8 ~]#
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174
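For regression checks, the uid/gid comparison from the verification above can be scripted. The sketch below is a minimal, illustrative bash helper and not part of the original report: it assumes the same volume, group, subvolume, and clone names used in that comment (cephfs, svg_1, sv_1, c_1) and that jq is available on the admin node to parse the JSON emitted by `ceph fs subvolume info`.

```bash
#!/usr/bin/env bash
# Hypothetical check: compare the ownership of a source subvolume and its clone.
# Assumes an existing clone c_1 of subvolume sv_1 (group svg_1) in volume cephfs,
# as created in the verification steps above, and that jq is installed.
set -euo pipefail

VOL=cephfs
GROUP=svg_1
SRC=sv_1
CLONE=c_1

# Ownership of the source subvolume (uid/gid fields of "subvolume info").
src_uid=$(ceph fs subvolume info "$VOL" "$SRC" --group_name "$GROUP" | jq -r '.uid')
src_gid=$(ceph fs subvolume info "$VOL" "$SRC" --group_name "$GROUP" | jq -r '.gid')

# Ownership of the clone (created without --group_name, so it lives in _nogroup).
clone_uid=$(ceph fs subvolume info "$VOL" "$CLONE" | jq -r '.uid')
clone_gid=$(ceph fs subvolume info "$VOL" "$CLONE" | jq -r '.gid')

if [[ "$src_uid" == "$clone_uid" && "$src_gid" == "$clone_gid" ]]; then
    echo "OK: clone $CLONE kept uid=$clone_uid gid=$clone_gid"
else
    echo "FAIL: source uid/gid=$src_uid/$src_gid, clone uid/gid=$clone_uid/$clone_gid" >&2
    exit 1
fi
```

With the fixed build (ceph-16.2.7-59.el8cp or later), this should print the OK line for the walkthrough above, reporting uid 20 and gid 21 on the clone.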