Bug 1644421

Summary: CephFS creates exportable directories with 755 permissions, so containers cannot write to them when shares are provisioned through Manila
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Tom Barron <tbarron>
Component: CephFS
Assignee: Patrick Donnelly <pdonnell>
Status: CLOSED ERRATA
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: low
Priority: medium
Version: 3.0
CC: anharris, ceph-eng-bugs, ceph-qe-bugs, hnallurv, jgrosso, pdonnell, tbarron, tserlin
Target Milestone: rc
Target Release: 3.2
Hardware: All
OS: All
Fixed In Version: RHEL: ceph-12.2.8-51.el7cp; Ubuntu: ceph_12.2.8-46redhat1
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2019-01-03 19:02:13 UTC
Bug Blocks: 1643167, 1679273

Description Tom Barron 2018-10-30 19:28:34 UTC
[Note: this BZ covers the ceph_volume_client-side work required by the corresponding OpenStack BZ [1], from which the following description is taken.]

Description of problem:

Using the manila-provisioner in OpenShift creates a Manila share on OpenStack. When it is mounted in the pod:

172.20.2.21:/volumes/_nogroup/0840a9cc-8936-4b25-8a45-3ff92d7a5f55   2097152       0   2097152   0% /test-manila

The permissions are the following:

sh-4.2$ ls -ld /test-manila/
drwxr-xr-x. 1 nobody nobody 0 Oct 25 15:34 /test-manila/

The user running inside the pod is the following:
sh-4.2$ id
uid=1000180000 gid=0(root) groups=0(root),1000180000


Version-Release number of selected component (if applicable):
rhceph-3-rhel7:3-11


How reproducible:


Steps to Reproduce:
1. Create a PVC on OpenShift (a rough sketch of the request appears after these steps)
2. Wait until the PV is created
3. Add the volume to the pod in the deployment config
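For step 1, a rough sketch of the PVC request using the kubernetes Python client; the namespace, claim name, StorageClass name and size below are illustrative assumptions, not values taken from this report:

# Sketch only: the "demo" namespace, "test-manila" claim and "manila-share"
# StorageClass (backed by the manila-provisioner) are assumed names.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="test-manila"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="manila-share",
        resources=client.V1ResourceRequirements(requests={"storage": "2Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)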

Actual results:
The pod is not able to write to the NFS share because the exported directory mode is 755.

Expected results:
The pod is able to write to the share; the directory mode should be 775 or 2775.


Additional info:
Another option would be to create the directory owned by nfsnobody and run the containers following the OCP instructions for NFS:
https://docs.openshift.com/container-platform/3.11/install_config/persistent_storage/persistent_storage_nfs.html#nfs-user-ids

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1643167
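For reference, the ceph_volume_client-side ask boils down to letting the caller (here, Manila's CephFS driver) choose the mode of the directories created for a share. A minimal sketch of such a call, assuming create_volume() grows a mode parameter; the parameter name, auth id, conf path and size below are assumptions, not the committed API:

# Sketch only: assumes ceph_volume_client accepts a `mode` argument on
# create_volume(); auth id, conf path and size are illustrative.
from ceph_volume_client import CephFSVolumeClient, VolumePath

vc = CephFSVolumeClient(auth_id="manila",
                        conf_path="/etc/ceph/ceph.conf",
                        cluster_name="ceph")
vc.connect()
try:
    path = VolumePath(group_id=None,
                      volume_id="0840a9cc-8936-4b25-8a45-3ff92d7a5f55")
    # Ask for group-writable backing directories instead of the default 0755.
    vc.create_volume(path, size=2 * 1024**3, mode=0o775)
finally:
    vc.disconnect()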

Comment 21 Tom Barron 2018-12-14 17:39:08 UTC
I have verified this bug as follows:

1) made a script that (A) creates shares, share-groups, shares in share groups, and snapshots of shares and of share groups via manila, and (B) mounts the backing CephFS file system and displays the permissions of the backing files (a rough sketch of this check appears after these steps).

2) ran the script with unmodified packages and verified that permissions are 755.

3) installed python-cephfs-12.2.8-51.el7cp.x86_64.rpm from the compose referenced above.

4) ran the script again without any changes to manila proper.
4.1 the script succeeded in creating and deleting the shares, share-groups, shares in share groups, and snapshots of shares and share groups.  So installing this rpm by itself causes no harm.
4.2 permissions are still 755, as one would expect.

5) installed the manila-side changes from https://review.openstack.org/#/c/614332.  This is an install from git since it is a breaking change for manila until the ceph-side change is released.

6) ran the script again with no manila configuration changes.  Results were the same as 4.1 and 4.2, proving backwards-compatible behavior.

7) Set the new ``cephfs_volume_mode`` config option to 775 and ran the script.
7.1 the script succeeded in creating all the expected objects.
7.2 permissions within the CephFS filesystem were set to 775 instead of 755, as desired.
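For completeness, the permission check mentioned in step 1 (B) amounts to stat'ing the directories on the backing CephFS tree; a minimal sketch, assuming that tree is mounted at /mnt/cephfs and that 775 is the expected mode (both are assumptions, not details from this comment):

# Sketch of the check in step 1 (B): walk the backing volume tree and print
# each directory's mode. The mount point and expected mode are assumptions.
import os
import stat

BACKING_ROOT = "/mnt/cephfs/volumes"
EXPECTED_MODE = 0o775   # the value given to cephfs_volume_mode

for dirpath, dirnames, _ in os.walk(BACKING_ROOT):
    for name in dirnames:
        full = os.path.join(dirpath, name)
        mode = stat.S_IMODE(os.stat(full).st_mode)
        flag = "ok" if mode == EXPECTED_MODE else "unexpected"
        print("%o %-10s %s" % (mode, flag, full))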

Comment 23 errata-xmlrpc 2019-01-03 19:02:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020