Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read-only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking management.

Bug 2185713

Summary: client: clear the suid/sgid in fallocate path
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: CephFS
Version: 5.3
Target Milestone: ---
Target Release: 5.3z3
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: Xiubo Li <xiubli>
Assignee: Xiubo Li <xiubli>
QA Contact: Amarnath <amk>
CC: ceph-eng-bugs, cephqe-warriors, hyelloji, tserlin, vdas, vereddy, vshankar
Flags: hyelloji: needinfo-
Fixed In Version: ceph-16.2.10-171.el8cp
Doc Type: If docs needed, set a value
Last Closed: 2023-05-23 00:19:10 UTC

Description Xiubo Li 2023-04-11 04:33:16 UTC
This bug was initially created as a copy of Bug #2185710

I am copying this bug because: 



POSIX does not require clearing the suid/sgid bits in the fallocate
code path, but this is the default behaviour for most filesystems
and for the VFS layer. The same applies to the write code path,
which already supports it.

Fixes: https://tracker.ceph.com/issues/58680
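The expected mode transition can be sketched as a small helper. This is not the actual client/MDS code, just a model of the bit manipulation the fix performs, matching the 6666 -> 0666 transition shown in the verification transcript below (the Linux VFS has an extra nuance for sgid without group-execute; this sketch only models the behaviour observed here):

```python
import stat

def mode_after_unprivileged_fallocate(mode: int) -> int:
    """Mode bits after an unprivileged caller fallocates/writes a file.

    Illustrative sketch only: both the setuid and setgid bits are
    dropped for regular files, mirroring the verified behaviour
    (chmod a+rws, i.e. 6666, becomes 0666 after fallocate).
    """
    if not stat.S_ISREG(mode):
        return mode  # only regular files are affected
    return mode & ~(stat.S_ISUID | stat.S_ISGID)
```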

Comment 6 Amarnath 2023-05-08 09:57:17 UTC
Verified on 5.3 builds.

The suid/sgid permission bits are cleared when fallocate is called by a non-superuser:

[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# chmod a+rws /mnt/cephfs/file
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# ll /mnt/cephfs/file
-rwSrwSrw-. 1 root root 323 May  8 05:48 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# ls -lrt /mnt/cephfs/file 
-rwSrwSrw-. 1 root root 323 May  8 05:48 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# stat /mnt/cephfs/file
  File: /mnt/cephfs/file
  Size: 323       	Blocks: 1          IO Block: 4194304 regular file
Device: 33h/51d	Inode: 1099511678697  Links: 1
Access: (6666/-rwSrwSrw-)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-05-08 05:45:27.096597943 -0400
Modify: 2023-05-08 05:48:06.966618041 -0400
Change: 2023-05-08 05:49:52.935652402 -0400
 Birth: -
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# su cephuser -c 'fallocate -p -o 200K -l 500K /mnt/cephfs/file'
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# 
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# 
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# ls -lrt /mnt/cephfs/file 
-rw-rw-rw-. 1 root root 323 May  8 05:50 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# ll /mnt/cephfs/file
-rw-rw-rw-. 1 root root 323 May  8 05:50 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 16.2.10-171.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.10-171.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.10-171.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)": 12
    },
    "mds": {
        "ceph version 16.2.10-171.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)": 3
    },
    "overall": {
        "ceph version 16.2.10-171.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)": 20
    }
}
[root@ceph-amk-upgrade-1-m6kh3a-node7 ~]# 

On 6.1 builds we are still observing the older behaviour.

@xiubo, will this be ported to 6.1 as well?

[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# ll /mnt/cephfs/file 
-rwSrwSrw-. 1 root root 112 May  8 04:50 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# ls -lrt /mnt/cephfs/file 
-rwSrwSrw-. 1 root root 112 May  8 04:50 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# su cephuser -c 'fallocate -p -o 200K -l 500K /mnt/cephfs/file'
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# ll /mnt/cephfs/file 
-rwSrwSrw-. 1 root root 112 May  8 04:52 /mnt/cephfs/file
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# su cephuser -c 'fallocate -p -o 200K -l 500K /mnt/cephfs/file'
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# echo $?
0
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 17.2.6-42.el9cp (40cb9a099610ba64629eb9f09ab6dc0f4c1af757) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6-42.el9cp (40cb9a099610ba64629eb9f09ab6dc0f4c1af757) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.6-42.el9cp (40cb9a099610ba64629eb9f09ab6dc0f4c1af757) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.6-42.el9cp (40cb9a099610ba64629eb9f09ab6dc0f4c1af757) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.6-42.el9cp (40cb9a099610ba64629eb9f09ab6dc0f4c1af757) quincy (stable)": 20
    }
}
[root@ceph-amk-upgrade-1-mdg6ad-node7 ~]# 

Detailed Steps : 
https://docs.google.com/document/d/1PhfKXnjgyo3z1ni8z-4MYxrC3ODgBF86U5RCONJmyjc/edit?pli=1#

Comment 8 Xiubo Li 2023-05-09 00:34:32 UTC
(In reply to Amarnath from comment #6)
> Verified on 5.3 Builds
> [...]
> on 6.1 builds we are still observing older behavior.
> 
> @xiubo will this be ported to 6.1 as well?
> [...]

Sorry for the late reply. This LGTM.

Thanks
- Xiubo

Comment 10 errata-xmlrpc 2023-05-23 00:19:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3259