Bug 2185713
| Summary: | client: clear the suid/sgid in fallocate path | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Xiubo Li <xiubli> |
| Component: | CephFS | Assignee: | Xiubo Li <xiubli> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 5.3 | CC: | ceph-eng-bugs, cephqe-warriors, hyelloji, tserlin, vdas, vereddy, vshankar |
| Target Milestone: | --- | Flags: | hyelloji: needinfo- hyelloji: needinfo- hyelloji: needinfo- |
| Target Release: | 5.3z3 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.10-171.el8cp | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-05-23 00:19:10 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Xiubo Li
2023-04-11 04:33:16 UTC
Verified on 5.3 builds.
The suid/sgid permission bits are now correctly cleared when fallocate is called by a non-superuser:
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# chmod a+rws /mnt/cephfs/file
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# ll /mnt/cephfs/file
-rwSrwSrw-. 1 root root 323 May 8 05:48 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# ls -lrt /mnt/cephfs/file
-rwSrwSrw-. 1 root root 323 May 8 05:48 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# stat /mnt/cephfs/file
File: /mnt/cephfs/file
Size: 323 Blocks: 1 IO Block: 4194304 regular file
Device: 33h/51d Inode: 1099511678697 Links: 1
Access: (6666/-rwSrwSrw-) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-05-08 05:45:27.096597943 -0400
Modify: 2023-05-08 05:48:06.966618041 -0400
Change: 2023-05-08 05:49:52.935652402 -0400
Birth: -
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# su cephuser -c 'fallocate -p -o 200K -l 500K /mnt/cephfs/file'
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]#
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]#
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# ls -lrt /mnt/cephfs/file
-rw-rw-rw-. 1 root root 323 May 8 05:50 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# ll /mnt/cephfs/file
-rw-rw-rw-. 1 root root 323 May 8 05:50 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# ceph versions
{
"mon": {
"ceph version 16.2.10-171.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)": 3
},
"mgr": {
"ceph version 16.2.10-171.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)": 2
},
"osd": {
"ceph version 16.2.10-171.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)": 12
},
"mds": {
"ceph version 16.2.10-171.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)": 3
},
"overall": {
"ceph version 16.2.10-171.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)": 20
}
}
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]#
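The mode-bit check performed above can be condensed into a small self-contained script. This is only a sketch of how the setuid/setgid bits are set and inspected; it uses a local temporary file rather than a CephFS mount, so it demonstrates the inspection technique, not the CephFS fallocate behavior itself.

```python
import os
import stat
import tempfile

# Create a scratch file and give it the same mode as in the transcript
# (6666: setuid + setgid + rw for user/group/other). Local file, not CephFS.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o6666)

mode = os.stat(path).st_mode
# Both special bits should be present, matching "-rwSrwSrw-" in ls output
# (capital S because the corresponding execute bits are not set).
assert mode & stat.S_ISUID
assert mode & stat.S_ISGID
print(oct(stat.S_IMODE(mode)))  # -> 0o6666

os.unlink(path)
```

On the fixed 5.3 client, running the same inspection after an unprivileged `fallocate` on the CephFS file shows the special bits gone (mode 0666), as in the transcript above.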
On 6.1 builds we are still observing the older behavior (the suid/sgid bits are not cleared).
@xiubo, will this be ported to 6.1 as well?
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# ll /mnt/cephfs/file
-rwSrwSrw-. 1 root root 112 May 8 04:50 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# ls -lrt /mnt/cephfs/file
-rwSrwSrw-. 1 root root 112 May 8 04:50 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# su cephuser -c 'fallocate -p -o 200K -l 500K /mnt/cephfs/file'
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# ll /mnt/cephfs/file
-rwSrwSrw-. 1 root root 112 May 8 04:52 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# su cephuser -c 'fallocate -p -o 200K -l 500K /mnt/cephfs/file'
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# echo $?
0
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# ceph versions
{
"mon": {
"ceph version 17.2.6-42.el9cp (40cb9a099610ba64629eb9f09ab6dc0f4c1af757) quincy (stable)": 3
},
"mgr": {
"ceph version 17.2.6-42.el9cp (40cb9a099610ba64629eb9f09ab6dc0f4c1af757) quincy (stable)": 2
},
"osd": {
"ceph version 17.2.6-42.el9cp (40cb9a099610ba64629eb9f09ab6dc0f4c1af757) quincy (stable)": 12
},
"mds": {
"ceph version 17.2.6-42.el9cp (40cb9a099610ba64629eb9f09ab6dc0f4c1af757) quincy (stable)": 3
},
"overall": {
"ceph version 17.2.6-42.el9cp (40cb9a099610ba64629eb9f09ab6dc0f4c1af757) quincy (stable)": 20
}
}
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]#
Detailed steps:
https://docs.google.com/document/d/1PhfKXnjgyo3z1ni8z-4MYxrC3ODgBF86U5RCONJmyjc/edit?pli=1#
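The expected clearing behavior being verified can be sketched as a pure function on the mode bits. This is an illustrative model of the observed transcript (mode 6666 becoming 0666 after an unprivileged fallocate on the fixed 5.3 client), not the actual ceph client code; the function name is made up.

```python
import stat

def mode_after_unprivileged_fallocate(mode: int) -> int:
    """Model of the verified behavior: when an unprivileged caller
    modifies file contents (write/fallocate), the setuid and setgid
    bits are dropped, as in the transcript where 6666 became 0666."""
    return mode & ~(stat.S_ISUID | stat.S_ISGID)

# 6666 (-rwSrwSrw-) -> 0666 (-rw-rw-rw-), matching the verified 5.3 run
assert mode_after_unprivileged_fallocate(0o6666) == 0o0666
# A mode without the special bits is left unchanged
assert mode_after_unprivileged_fallocate(0o644) == 0o644
```

The unfixed 6.1 client shown above corresponds to this function being a no-op: the mode stays 6666 after fallocate.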
(In reply to Amarnath from comment #6)
> Verified on 5.3 Builds
> [quoted verification output trimmed; see comment #6 above]

Sorry for the late reply. This LGTM. Thanks.
- Xiubo

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3259