Bug 1432783

Summary: Selinux denying sanlock access to /rhev/data-center/mnt/server:_path/uuid/dom_md/ids mounted using nfs v4.2
Product: Red Hat Enterprise Linux 7
Reporter: Nir Soffer <nsoffer>
Component: selinux-policy
Assignee: Lukas Vrabec <lvrabec>
Status: CLOSED ERRATA
QA Contact: Milos Malik <mmalik>
Severity: high
Docs Contact:
Priority: unspecified
Version: 7.3
CC: bmcclain, jniederm, lvrabec, mgrepl, mmalik, plautrba, pvrabec, snagar, ssekidde, ykaul, ylavi
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-01 15:24:23 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1406398

Description Nir Soffer 2017-03-16 07:56:06 UTC
Description of problem:

When using nfs v4.2, sanlock cannot access the ids file.

Here are the AVC denials we see when using permissive mode:

# ausearch -m avc -i -ts recent
----
type=SYSCALL msg=audit(03/16/2017 09:16:06.016:1352) : arch=x86_64 syscall=open success=yes exit=11 a0=0x7fb37c001d18 a1=O_RDWR|O_DSYNC|O_DIRECT|__O_SYNC a2=0x0 a3=0x1 items=0 ppid=1 pid=7447 auid=unset uid=sanlock gid=sanlock euid=sanlock suid=sanlock fsuid=sanlock egid=sanlock sgid=sanlock fsgid=sanlock tty=(none) ses=unset comm=sanlock exe=/usr/sbin/sanlock subj=system_u:system_r:sanlock_t:s0-s0:c0.c1023 key=(null) 
type=AVC msg=audit(03/16/2017 09:16:06.016:1352) : avc:  denied  { read write open } for  pid=7447 comm=sanlock path=/rhev/data-center/mnt/dumbo.tlv.redhat.com:_voodoo_voodoo6-data-v42/12052870-b5b8-4b00-9dfe-a1edc79324cc/dom_md/ids dev="0:40" ino=37382 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:mnt_t:s0 tclass=file 
----
type=SYSCALL msg=audit(03/16/2017 09:16:06.019:1353) : arch=x86_64 syscall=fstat success=yes exit=0 a0=0xb a1=0x7fb38bdb0ab0 a2=0x7fb38bdb0ab0 a3=0x1 items=0 ppid=1 pid=7447 auid=unset uid=sanlock gid=sanlock euid=sanlock suid=sanlock fsuid=sanlock egid=sanlock sgid=sanlock fsgid=sanlock tty=(none) ses=unset comm=sanlock exe=/usr/sbin/sanlock subj=system_u:system_r:sanlock_t:s0-s0:c0.c1023 key=(null) 
type=AVC msg=audit(03/16/2017 09:16:06.019:1353) : avc:  denied  { getattr } for  pid=7447 comm=sanlock path=/rhev/data-center/mnt/dumbo.tlv.redhat.com:_voodoo_voodoo6-data-v42/12052870-b5b8-4b00-9dfe-a1edc79324cc/dom_md/ids dev="0:40" ino=37382 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:mnt_t:s0 tclass=file 

# mount | grep data-v42
dumbo.tlv.redhat.com://voodoo/voodoo6-data-v42 on /rhev/data-center/mnt/dumbo.tlv.redhat.com:_voodoo_voodoo6-data-v42 type nfs4 (rw,relatime,seclabel,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.35.0.110,local_lock=none,addr=10.35.0.99)
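As an aside, denials like the two above can be turned into a local policy module with audit2allow. This is a diagnostic/temporary workaround only, not a proper policy fix:

```shell
# Collect recent AVC denials from the sanlock process and generate a
# local policy module from them.
ausearch -m avc -ts recent -c sanlock | audit2allow -M sanlock_local

# Install the generated module (temporary workaround only; the real fix
# belongs in selinux-policy).
semodule -i sanlock_local.pp
```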

Version-Release number of selected component (if applicable):
# rpm -qa | grep selinux
selinux-policy-targeted-3.13.1-102.el7_3.15.noarch
selinux-policy-3.13.1-102.el7_3.15.noarch
libselinux-utils-2.5-6.el7.x86_64
libselinux-devel-2.5-6.el7.x86_64
libselinux-2.5-6.el7.x86_64
libselinux-python-2.5-6.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Have a mount using nfs 4.2
2. Try to use sanlock on this mount
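The reproducer above can be sketched as a shell session; the server name, export path, and lockspace name below are placeholders, not values from this report:

```shell
# Mount the export with NFS 4.2 (placeholder server and path).
mount -t nfs4 -o vers=4.2 server.example.com:/export /mnt/test

# Ask sanlock to initialize a lockspace on the mount; with SELinux
# enforcing, sanlock's open() of the ids file is denied
# (avc: denied ... tcontext=...:mnt_t ...).
truncate -s 1M /mnt/test/ids
sanlock direct init -s testspace:0:/mnt/test/ids:0
```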

Actual results:
Sanlock operations fail; SELinux must be set to permissive mode as a workaround.

Expected results:
Sanlock operations should be allowed.

Additional info:

I tried to label the files on the nfs server using:

# semanage fcontext -a -t mnt_t '/home/nfs(/.*)?'
# restorecon -Rv /home/nfs
...
# ls -lZ voodoo6-data-v42/
total 0
drwxr-xr-x. 4 vdsm kvm system_u:object_r:mnt_t:s0 32 Mar 16 09:16 12052870-b5b8-4b00-9dfe-a1edc79324cc
-rwxr-xr-x. 1 vdsm kvm system_u:object_r:mnt_t:s0  0 Mar 16 09:21 __DIRECT_IO_TEST__

But sanlock still fails to access the ids file.

Based on sanlock_selinux(8), sanlock is allowed to access files labeled nfs_t,
so I relabeled the files on the NFS server:

# semanage fcontext -a -t nfs_t '/home/nfs(/.*)?'
# restorecon -Rv /home/nfs
...
# ls -lZ voodoo6-data-v42/
total 0
drwxr-xr-x. 4 vdsm kvm system_u:object_r:nfs_t:s0 32 Mar 16 09:16 12052870-b5b8-4b00-9dfe-a1edc79324cc
-rwxr-xr-x. 1 vdsm kvm system_u:object_r:nfs_t:s0  0 Mar 16 09:26 __DIRECT_IO_TEST__

So we have a way to fix this, but I'm not sure this is the correct way
to handle this.

Comment 3 Nir Soffer 2017-03-23 16:27:07 UTC
Hi Lukas, can you explain the fix to this bug?

What is the expected behavior after this fix?

Comment 4 Yaniv Lavi 2017-04-04 12:22:38 UTC
Can you propose this for 7.3.z? This is blocking the NFS 4.2 support in RHV.

Comment 6 Lukas Vrabec 2017-04-04 14:25:42 UTC
(In reply to Nir Soffer from comment #3)
> Hi Lukas, can you explain the fix to this bug?
> 

I created a boolean which can be enabled if you would like to mount your homedir via NFS.

> What is the expected behavior after this fix?
Sanlock can access homedirs.
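For reference, a boolean like the one described would be toggled with setsebool. The boolean name below is my assumption of what shipped; check `getsebool -a | grep sanlock` on the fixed policy for the actual name:

```shell
# List sanlock-related booleans to find the one added by the fix.
getsebool -a | grep sanlock

# Enable it persistently (-P survives reboot); the boolean name here
# is assumed, not taken from this report.
setsebool -P sanlock_enable_home_dirs on
```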

Comment 7 Nir Soffer 2017-04-05 09:20:14 UTC
(In reply to Lukas Vrabec from comment #6)
> I created boolean which can be enabled if you would like to mount via NFS
> your homedir.

This fix is only relevant for the use case of sharing directories under /home.
For RHEV, we need to consume anything on an NFS server which the server admin
provides, so a boolean for /home is not a general solution.

It looks like relabeling the shared directories with nfs_t works, and we can 
document this requirement.

But what about an NFS server which does not support SELinux? Do we have a way to
disable SELinux labeling on the mount, keeping the behavior similar to NFS < v4.2?
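One possible approach for servers without SELinux support is the `context=` mount option, which makes the client apply a single fixed label to every file on the mount instead of using labels from the server. This is a sketch of that idea, not something confirmed in this report:

```shell
# Force every file on this mount to appear as nfs_t to the client,
# regardless of (or in the absence of) server-side labels.
mount -t nfs4 -o vers=4.2,context="system_u:object_r:nfs_t:s0" \
    server.example.com:/export /rhev/data-center/mnt/example
```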

Comment 8 Yaniv Kaul 2017-05-30 07:15:36 UTC
*** Bug 1414798 has been marked as a duplicate of this bug. ***

Comment 10 errata-xmlrpc 2017-08-01 15:24:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1861