Bug 1456265 - SELinux blocks nfs-ganesha-lock service installed on Gluster
Summary: SELinux blocks nfs-ganesha-lock service installed on Gluster
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: ganesha-nfs
Version: 3.10
Hardware: x86_64
OS: Other
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL: http://lists.gluster.org/pipermail/gl...
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-05-28 14:22 UTC by Adam
Modified: 2018-06-20 18:30 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-20 18:30:07 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Adam 2017-05-28 14:22:45 UTC
Description of problem:
When SELinux is in enforcing mode, starting nfs-ganesha.service causes nfs-ganesha-lock.service to fail.

Version-Release number of selected component (if applicable):
CentOS 7, kernel 3.10.0-514.21.1.el7.x86_64
glusterfs*    3.10.1-1.el7 (latest available from the centos-gluster310 repo)
nfs-ganesha*  2.4.5-1.el7 (latest available from the centos-gluster310 repo)

How reproducible:
Always, after installing Gluster and Ganesha by following this thread:
http://lists.gluster.org/pipermail/gluster-users/2017-May/031256.html

Steps to Reproduce:
1. Reboot
2. systemctl start nfs-ganesha.service
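
To confirm that SELinux is the blocker, the service can be started once in
permissive mode (setenforce only changes the running mode; it does not
persist across reboots):

sudo setenforce 0
sudo systemctl start nfs-ganesha.service
sudo setenforce 1

If nfs-ganesha-lock.service comes up cleanly in permissive mode, the failure
is policy-related.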

Additional info:
AVCs and version numbers are included below. The AVCs were collected
across two reboots; in both cases I manually started
nfs-ganesha.service, and nfs-ganesha-lock.service failed to start.

---------

uname -r

3.10.0-514.21.1.el7.x86_64



sestatus -v

SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28

Process contexts:
Current context:                unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
Init context:                   system_u:system_r:init_t:s0

File contexts:
Controlling terminal:           unconfined_u:object_r:user_tty_device_t:s0
/etc/passwd                     system_u:object_r:passwd_file_t:s0
/etc/shadow                     system_u:object_r:shadow_t:s0
/bin/bash                       system_u:object_r:shell_exec_t:s0
/bin/login                      system_u:object_r:login_exec_t:s0
/bin/sh                         system_u:object_r:bin_t:s0 -> system_u:object_r:shell_exec_t:s0
/sbin/agetty                    system_u:object_r:getty_exec_t:s0
/sbin/init                      system_u:object_r:bin_t:s0 -> system_u:object_r:init_exec_t:s0
/usr/sbin/sshd                  system_u:object_r:sshd_exec_t:s0



sudo systemctl start nfs-ganesha.service

systemctl status -l nfs-ganesha-lock.service 

● nfs-ganesha-lock.service - NFS status monitor for NFSv2/3 locking.
   Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha-lock.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sun 2017-05-28 14:12:48 UTC; 9s ago
  Process: 1991 ExecStart=/usr/sbin/rpc.statd --no-notify $STATDARGS (code=exited, status=1/FAILURE)

mynode0.localdomain systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
mynode0.localdomain rpc.statd[1992]: Version 1.3.0 starting
mynode0.localdomain rpc.statd[1992]: Flags: TI-RPC
mynode0.localdomain rpc.statd[1992]: Failed to open directory sm: Permission denied
mynode0.localdomain systemd[1]: nfs-ganesha-lock.service: control process exited, code=exited status=1
mynode0.localdomain systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
mynode0.localdomain systemd[1]: Unit nfs-ganesha-lock.service entered failed state.
mynode0.localdomain systemd[1]: nfs-ganesha-lock.service failed.



sudo ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR -i

----
type=SYSCALL msg=audit(05/28/2017 14:04:32.160:25) : arch=x86_64 syscall=bind success=yes exit=0 a0=0xf a1=0x7ffc757feb60 a2=0x10 a3=0x22 items=0 ppid=1149 pid=1157 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=glusterd exe=/usr/sbin/glusterfsd subj=system_u:system_r:glusterd_t:s0 key=(null) 
type=AVC msg=audit(05/28/2017 14:04:32.160:25) : avc:  denied  { name_bind } for  pid=1157 comm=glusterd src=61000 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:ephemeral_port_t:s0 tclass=tcp_socket 
----
type=SYSCALL msg=audit(05/28/2017 14:11:16.141:26) : arch=x86_64 syscall=bind success=no exit=EACCES(Permission denied) a0=0xf a1=0x7ffffbf92620 a2=0x10 a3=0x22 items=0 ppid=1139 pid=1146 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=glusterd exe=/usr/sbin/glusterfsd subj=system_u:system_r:glusterd_t:s0 key=(null) 
type=AVC msg=audit(05/28/2017 14:11:16.141:26) : avc:  denied  { name_bind } for  pid=1146 comm=glusterd src=61000 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:ephemeral_port_t:s0 tclass=tcp_socket 
----
type=SYSCALL msg=audit(05/28/2017 14:12:48.068:75) : arch=x86_64 syscall=openat success=no exit=EACCES(Permission denied) a0=0xffffffffffffff9c a1=0x7efdc1ec3e10 a2=O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC a3=0x0 items=0 ppid=1991 pid=1992 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=rpc.statd exe=/usr/sbin/rpc.statd subj=system_u:system_r:rpcd_t:s0 key=(null) 
type=AVC msg=audit(05/28/2017 14:12:48.068:75) : avc:  denied  { read } for  pid=1992 comm=rpc.statd name=sm dev="fuse" ino=12866274077597183313 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir 
----
type=SYSCALL msg=audit(05/28/2017 14:12:48.080:76) : arch=x86_64 syscall=open success=no exit=EACCES(Permission denied) a0=0x7efdc1ec3dd0 a1=O_RDONLY a2=0x7efdc1ec3de8 a3=0x5 items=0 ppid=1991 pid=1992 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=rpc.statd exe=/usr/sbin/rpc.statd subj=system_u:system_r:rpcd_t:s0 key=(null) 
type=AVC msg=audit(05/28/2017 14:12:48.080:76) : avc:  denied  { read } for  pid=1992 comm=rpc.statd name=state dev="fuse" ino=12362789396445498341 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file 
----
type=SYSCALL msg=audit(05/28/2017 14:17:37.177:26) : arch=x86_64 syscall=bind success=no exit=EACCES(Permission denied) a0=0xf a1=0x7ffdfa768c70 a2=0x10 a3=0x22 items=0 ppid=1155 pid=1162 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=glusterd exe=/usr/sbin/glusterfsd subj=system_u:system_r:glusterd_t:s0 key=(null) 
type=AVC msg=audit(05/28/2017 14:17:37.177:26) : avc:  denied  { name_bind } for  pid=1162 comm=glusterd src=61000 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:ephemeral_port_t:s0 tclass=tcp_socket 
----
type=SYSCALL msg=audit(05/28/2017 14:17:46.401:56) : arch=x86_64 syscall=kill success=no exit=EACCES(Permission denied) a0=0x560 a1=SIGKILL a2=0x7fd684000078 a3=0x0 items=0 ppid=1 pid=1167 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=glusterd exe=/usr/sbin/glusterfsd subj=system_u:system_r:glusterd_t:s0 key=(null) 
type=AVC msg=audit(05/28/2017 14:17:46.401:56) : avc:  denied  { sigkill } for  pid=1167 comm=glusterd scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:cluster_t:s0 tclass=process 
----
type=SYSCALL msg=audit(05/28/2017 14:17:45.400:55) : arch=x86_64 syscall=kill success=no exit=EACCES(Permission denied) a0=0x560 a1=SIGTERM a2=0x7fd684000038 a3=0x99 items=0 ppid=1 pid=1167 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=glusterd exe=/usr/sbin/glusterfsd subj=system_u:system_r:glusterd_t:s0 key=(null) 
type=AVC msg=audit(05/28/2017 14:17:45.400:55) : avc:  denied  { signal } for  pid=1167 comm=glusterd scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:cluster_t:s0 tclass=process 
----
type=SYSCALL msg=audit(05/28/2017 14:18:56.024:67) : arch=x86_64 syscall=openat success=no exit=EACCES(Permission denied) a0=0xffffffffffffff9c a1=0x7ff662e9be10 a2=O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC a3=0x0 items=0 ppid=1949 pid=1950 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=rpc.statd exe=/usr/sbin/rpc.statd subj=system_u:system_r:rpcd_t:s0 key=(null) 
type=AVC msg=audit(05/28/2017 14:18:56.024:67) : avc:  denied  { read } for  pid=1950 comm=rpc.statd name=sm dev="fuse" ino=12866274077597183313 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir 
----
type=SYSCALL msg=audit(05/28/2017 14:18:56.034:68) : arch=x86_64 syscall=open success=no exit=EACCES(Permission denied) a0=0x7ff662e9bdd0 a1=O_RDONLY a2=0x7ff662e9bde8 a3=0x5 items=0 ppid=1949 pid=1950 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=rpc.statd exe=/usr/sbin/rpc.statd subj=system_u:system_r:rpcd_t:s0 key=(null) 
type=AVC msg=audit(05/28/2017 14:18:56.034:68) : avc:  denied  { read } for  pid=1950 comm=rpc.statd name=state dev="fuse" ino=12362789396445498341 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
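
If no updated selinux-policy or boolean covers these denials, a local policy
module generated from the audit log is a common stopgap (the module name
ganesha_local is arbitrary; audit2allow ships in the policycoreutils-python
package on el7; review the generated ganesha_local.te rules before loading):

sudo ausearch -m AVC -ts recent | audit2allow -M ganesha_local
sudo semodule -i ganesha_local.pp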

Comment 1 Soumya Koduri 2017-05-31 10:47:18 UTC
+Lukas from the SELinux team.

@Lukas,
Kindly review the above AVCs and let us know whether they are already handled in any recent selinux-policy versions. Thanks!

Comment 2 Lukas Vrabec 2017-05-31 10:57:28 UTC
Hi,

You'll need to turn the following SELinux boolean on:

# semanage boolean -m --on rpcd_use_fusefs

This should fix your issue.
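
For reference, the equivalent setsebool invocation, followed by a check of
the result (boolean name as above):

# setsebool -P rpcd_use_fusefs on
# getsebool rpcd_use_fusefs
rpcd_use_fusefs --> on

After that, restarting nfs-ganesha.service should bring
nfs-ganesha-lock.service up as well.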

Comment 3 Shyamsundar 2018-06-20 18:30:07 UTC
This bug was reported against a version of Gluster that is no longer
maintained (it has reached EOL). See https://www.gluster.org/release-schedule/
for the versions currently maintained.

As a result, this bug is being closed.

If the bug persists on a maintained version of Gluster or against the mainline
Gluster repository, please reopen it and set the Version field appropriately.

