Description of problem:
"Permission denied" when accessing a directory mounted with an Azure File volume.

Version-Release number of selected component (if applicable):
4.2.0-0.okd-2019-09-23-144416

How reproducible:
Always

Steps to Reproduce:
1. oc new-app redis
2. oc set volumes dc/redis --add --claim-name=file-pvc --claim-class=azure-file-key --claim-size=1G --mount-path=/data
3. Inside the pod:
id
uid=1000560000(1000560000) gid=0(root) groups=0(root),1000560000
sh-4.2$ cd /data
sh-4.2$ ls
ls: cannot open directory .: Permission denied
4. Check the mount info on the node:
mount | grep pvc-e5afc682-de78-11e9-9116-000d3a3fbd4d
//chuff09237p5xf5zf25.file.core.windows.net/chuff0923-7p5xf-dynami-pvc-e5afc682-de78-11e9-9116-000d3a3fbd4d on /var/lib/kubelet/pods/ec171ec5-de78-11e9-94e3-000d3a965d51/volumes/kubernetes.io~azure-file/pvc-e5afc682-de78-11e9-9116-000d3a3fbd4d type cifs (rw,relatime,vers=3.0,cache=strict,username=chuff09237p5xf5zf25,domain=,uid=0,noforceuid,gid=1000560000,forcegid,addr=13.67.155.28,file_mode=0777,dir_mode=0777,soft,persistenthandles,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
5. Check the SELinux context of the mount point:
ls -lZd /var/lib/kubelet/pods/ec171ec5-de78-11e9-94e3-000d3a965d51/volumes/kubernetes.io~azure-file/pvc-e5afc682-de78-11e9-9116-000d3a3fbd4d
drwxrwxrwx. 2 root 1000560000 system_u:object_r:cifs_t:s0 0 Sep 24 03:10 /var/lib/kubelet/pods/ec171ec5-de78-11e9-94e3-000d3a965d51/volumes/kubernetes.io~azure-file/pvc-e5afc682-de78-11e9-9116-000d3a3fbd4d
6. Check the CIFS-related SELinux booleans:
getsebool -a | grep cifs
cobbler_use_cifs --> off
ftpd_use_cifs --> off
git_cgi_use_cifs --> off
git_system_use_cifs --> off
httpd_use_cifs --> off
ksmtuned_use_cifs --> off
mpd_use_cifs --> off
polipo_use_cifs --> off
tmpreaper_use_cifs --> off

Actual results:
"Permission denied" when accessing the mounted directory.

Expected results:
The mounted directory should be readable, writable, and executable.

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
There is no issue when using a privileged pod.
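Worth noting: the `getsebool -a | grep cifs` check in step 6 cannot surface the boolean that actually gates container access to CIFS shares, because its name does not contain "cifs". A minimal sketch for checking it directly on the node (assumes SELinux tooling is installed there):

```shell
# Query the virt_use_samba boolean by name; the grep for "cifs" above misses it.
getsebool virt_use_samba
# On affected builds this reports: virt_use_samba --> off
```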
It works when `setenforce 0` on the node
> It works when `setenforce 0` on the node

This sounds like an SELinux boolean that was not set properly. We need `virt_use_samba` set to "on" in RHCOS.
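Until a fixed build is available, a possible manual workaround sketch on the affected node (assumes root access on the node; changes made this way may not survive RHCOS updates):

```shell
# Enable the boolean immediately and persist it across reboots (-P).
setsebool -P virt_use_samba on

# Verify the change took effect.
getsebool virt_use_samba
```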
The same issue also exists on 4.2.0-0.nightly-2019-09-24-194016.
RHCOS version 42.80.20190925.2 and all subsequent builds will have the `virt_use_samba` boolean enabled.
Passed on 4.2.0-0.nightly-2019-09-26-192831

oc rsh pod3
uid=1000530000(1000530000) gid=0(root) groups=0(root),1000530000
sh-4.2$ cp hello /tmp/
sh-4.2$ ./tmp/hello
Hello OpenShift Storage
sh-4.2$ touch test
sh-4.2$ ls -lrt
total 2312
-rwxrwxrwx. 1 root 1000530000       0 Sep 27 02:23 test
-rwxrwxrwx. 1 root 1000530000 2367456 Sep 27 02:24 hello
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922