Bug 1323947 - [SELinux]: AVC's denying permissions related to statd, observed in nfs-ganesha environment -RHEL7
Summary: [SELinux]: AVC's denying permissions related to statd, observed in nfs-ganesha environment -RHEL7
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Lukas Vrabec
QA Contact: Milos Malik
Docs Contact: Marie Hornickova
URL:
Whiteboard:
Depends On:
Blocks: 1323740 1332577 1333875
 
Reported: 2016-04-05 07:09 UTC by Shashank Raj
Modified: 2016-11-08 03:53 UTC
CC List: 19 users

Fixed In Version: selinux-policy-3.13.1-70.el7
Doc Type: Bug Fix
Doc Text:
Due to missing rules in the Gluster SELinux policy, the nfs-ganesha service failed to connect to the rpc.statd daemon after a node reboot in the situation where the nfs-ganesha server was installed on four nodes. The underlying code has been fixed, and nfs-ganesha no longer fails in the described scenario.
Clone Of: 1323740
: 1332577 1333875 (view as bug list)
Environment:
Last Closed: 2016-11-04 02:46:47 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:2283 0 normal SHIPPED_LIVE selinux-policy bug fix and enhancement update 2016-11-03 13:36:25 UTC

Description Shashank Raj 2016-04-05 07:09:26 UTC
Created attachment 1143678 [details]
audit logs

+++ This bug was initially created as a clone of Bug #1323740 +++

Description of problem:
nfs-ganesha.service status shows "failed to connect to statd" after node reboot

Version-Release number of selected component (if applicable):
glusterfs-3.7.9-1

How reproducible:
twice

Steps to Reproduce:
1. Configure nfs-ganesha on a 4-node cluster and make sure all the services and ports are added properly in the firewall, and that rpcinfo shows the relevant output, as below:

[root@dhcp37-180 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad


[root@dhcp37-158 bin]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad


[root@dhcp37-127 bin]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad

[root@dhcp37-174 bin]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad

[root@dhcp37-180 ~]# firewall-cmd --list-ports
662/udp 662/tcp 4501/udp 20048/udp 32000/udp 32000/tcp 20048/tcp 4501/tcp

[root@dhcp37-158 statd]# firewall-cmd --list-ports
662/udp 662/tcp 4501/udp 20048/udp 32000/udp 32000/tcp 20048/tcp 4501/tcp

[root@dhcp37-127 statd]# firewall-cmd --list-ports
662/udp 662/tcp 4501/udp 20048/udp 32000/udp 32000/tcp 20048/tcp 4501/tcp

[root@dhcp37-174 statd]# firewall-cmd --list-ports
662/udp 662/tcp 4501/udp 20048/udp 32000/udp 32000/tcp 20048/tcp 4501/tcp
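
The firewall check in step 1 can be mechanized. A minimal sketch (the required port set is taken from the `firewall-cmd --list-ports` outputs above; `check_ports` is a made-up helper name, not part of the product):

```shell
# Required ports for statd, rquotad, mountd and nlockmgr, per the
# `firewall-cmd --list-ports` outputs collected from the four nodes.
required="662/udp 662/tcp 4501/udp 4501/tcp 20048/udp 20048/tcp 32000/udp 32000/tcp"

# check_ports: verify that the given port list (one `--list-ports` line)
# contains every required port; report the first one missing.
check_ports() {
  for p in $required; do
    case " $1 " in
      *" $p "*) ;;
      *) echo "missing: $p"; return 1 ;;
    esac
  done
  echo "all required ports open"
}

check_ports "662/udp 662/tcp 4501/udp 20048/udp 32000/udp 32000/tcp 20048/tcp 4501/tcp"
# → all required ports open
```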


2. Now power off 3 of the nodes in the cluster and start them after 5 minutes.
3. Make sure that once the nodes come up, the shared storage is mounted properly on all the nodes.
4. Start the pcsd, pacemaker and nfs-ganesha services on all 3 nodes.
5. Observe that on 2 nodes, the nfs-ganesha service status shows "failed to connect to statd":

[root@dhcp37-174 ~]# service nfs-ganesha status -l
Redirecting to /bin/systemctl status  -l nfs-ganesha.service
● nfs-ganesha.service - NFS-Ganesha file server
   Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2016-04-04 08:57:29 IST; 12min ago
     Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki
  Process: 3068 ExecStartPost=/bin/bash -c prlimit --pid $MAINPID --nofile=$NOFILE:$NOFILE (code=exited, status=0/SUCCESS)
  Process: 3066 ExecStart=/bin/bash -c ${NUMACTL} ${NUMAOPTS} /usr/bin/ganesha.nfsd ${OPTIONS} ${EPOCH} (code=exited, status=0/SUCCESS)
 Main PID: 3067 (ganesha.nfsd)
   CGroup: /system.slice/nfs-ganesha.service
           └─3067 /usr/bin/ganesha.nfsd

Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [reaper] nfs_in_grace :STATE :EVENT :NFS Server Now IN GRACE
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nsm_connect :NLM :CRIT :failed to connect to statd
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nsm_unmonitor_all :NLM :CRIT :Can not unmonitor all clnt_create returned NULL
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_start :NFS STARTUP :EVENT :------------------------------------------
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_start :NFS STARTUP :EVENT :------------------------------------------
Apr 04 08:58:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [reaper] nfs_in_grace :STATE :EVENT :NFS Server Now NOT IN GRACE



[root@dhcp37-127 nfs-ganesha]# service nfs-ganesha status -l
Redirecting to /bin/systemctl status  -l nfs-ganesha.service
● nfs-ganesha.service - NFS-Ganesha file server
   Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2016-04-04 08:55:44 IST; 14min ago
     Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki
  Process: 3033 ExecStartPost=/bin/bash -c prlimit --pid $MAINPID --nofile=$NOFILE:$NOFILE (code=exited, status=0/SUCCESS)
  Process: 3031 ExecStart=/bin/bash -c ${NUMACTL} ${NUMAOPTS} /usr/bin/ganesha.nfsd ${OPTIONS} ${EPOCH} (code=exited, status=0/SUCCESS)
 Main PID: 3032 (ganesha.nfsd)
   CGroup: /system.slice/nfs-ganesha.service
           └─3032 /usr/bin/ganesha.nfsd

Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [reaper] nfs_in_grace :STATE :EVENT :NFS Server Now IN GRACE
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nsm_connect :NLM :CRIT :failed to connect to statd
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nsm_unmonitor_all :NLM :CRIT :Can not unmonitor all clnt_create returned NULL
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Apr 04 08:56:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [reaper] nfs_in_grace :STATE :EVENT :NFS Server Now NOT IN GRACE


6. rpcinfo on 2 nodes has missing entries for status, while on the 3rd node it is proper:

[root@dhcp37-158 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad


[root@dhcp37-127 nfs-ganesha]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad


[root@dhcp37-174 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad
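
The missing `status` registrations in step 6 can be spotted mechanically. A minimal sketch (assuming the `rpcinfo -p` output format shown above; `check_statd` is a made-up helper name):

```shell
# check_statd: read `rpcinfo -p` output on stdin and report whether the
# status service (rpc.statd, RPC program 100024) is registered.
check_statd() {
  if awk '$5 == "status" { found = 1 } END { exit !found }'; then
    echo "status registered"
  else
    echo "status MISSING"
  fi
}

# Example: an abbreviated table from a node where statd failed to start.
check_statd <<'EOF'
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100003    3   udp   2049  nfs
EOF
# → status MISSING
```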

7. The below messages are observed in /var/log/messages:


on 1st node:

Apr  4 08:55:44 dhcp37-127 rpc.statd[3029]: Version 1.3.0 starting
Apr  4 08:55:44 dhcp37-127 rpc.statd[3029]: Flags: TI-RPC
Apr  4 08:55:44 dhcp37-127 rpc.statd[3029]: Failed to open directory sm: Permission denied
Apr  4 08:55:44 dhcp37-127 rpc.statd[3029]: Failed to open /var/lib/nfs/statd/state: Permission denied
Apr  4 08:55:44 dhcp37-127 systemd: nfs-ganesha-lock.service: control process exited, code=exited status=1
Apr  4 08:55:44 dhcp37-127 systemd: Failed to start NFS status monitor for NFSv2/3 locking..
Apr  4 08:55:44 dhcp37-127 systemd: Unit nfs-ganesha-lock.service entered failed state.
Apr  4 08:55:44 dhcp37-127 systemd: nfs-ganesha-lock.service failed.
Apr  4 08:55:44 dhcp37-127 systemd: Starting NFS-Ganesha file server...


Apr  4 08:55:44 dhcp37-127 nfs-ganesha[3032]: [reaper] nfs_in_grace :STATE :EVENT :NFS Server Now IN GRACE
Apr  4 08:55:44 dhcp37-127 nfs-ganesha[3032]: [main] nsm_connect :NLM :CRIT :failed to connect to statd
Apr  4 08:55:44 dhcp37-127 nfs-ganesha[3032]: [main] nsm_unmonitor_all :NLM :CRIT :Can not unmonitor all clnt_create returned NULL

on 2nd node:



Apr  4 08:57:29 dhcp37-174 rpc.statd[3064]: Failed to open directory sm: Permission denied
Apr  4 08:57:29 dhcp37-174 rpc.statd[3064]: Failed to open /var/lib/nfs/statd/state: Permission denied
Apr  4 08:57:29 dhcp37-174 systemd: nfs-ganesha-lock.service: control process exited, code=exited status=1
Apr  4 08:57:29 dhcp37-174 systemd: Failed to start NFS status monitor for NFSv2/3 locking..
Apr  4 08:57:29 dhcp37-174 systemd: Unit nfs-ganesha-lock.service entered failed state.
Apr  4 08:57:29 dhcp37-174 systemd: nfs-ganesha-lock.service failed.
Apr  4 08:57:29 dhcp37-174 systemd: Starting NFS-Ganesha file server...

Apr  4 08:57:29 dhcp37-174 nfs-ganesha[3067]: [main] nsm_connect :NLM :CRIT :failed to connect to statd
Apr  4 08:57:29 dhcp37-174 nfs-ganesha[3067]: [main] nsm_unmonitor_all :NLM :CRIT :Can not unmonitor all clnt_create returned NULL


Actual results:

nfs-ganesha.service status shows "failed to connect to statd" after node reboot

Expected results:

There should not be any failures after the node comes up and after the restart of the nfs-ganesha, pcs and pacemaker services.


Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-04-04 11:02:57 EDT ---

This bug is automatically being proposed for the current z-stream release of Red Hat Gluster Storage 3 by setting the release flag 'rhgs-3.1.z' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Shashank Raj on 2016-04-04 11:10:04 EDT ---

sosreports are placed under http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1323740

--- Additional comment from Soumya Koduri on 2016-04-04 11:21:31 EDT ---

I see the below AVCs on one of the machines where rpc.statd hasn't started.

type=AVC msg=audit(1459740344.745:419): avc:  denied  { read } for  pid=3029 comm="rpc.statd" name="nfs" dev="dm-0" ino=34567184 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1459740344.745:419): arch=c000003e syscall=257 success=no exit=-13 a0=ffffffffffffff9c a1=7effa7434790 a2=90800 a3=0 items=0 ppid=3028 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)
type=AVC msg=audit(1459740344.745:420): avc:  denied  { read } for  pid=3029 comm="rpc.statd" name="nfs" dev="dm-0" ino=34567184 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1459740344.745:420): arch=c000003e syscall=2 success=no exit=-13 a0=7effa7434750 a1=0 a2=7effa7434768 a3=5 items=0 ppid=3028 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)


I am not sure why these AVCs are not seen on the other machines. Could you check with SELinux disabled?

--- Additional comment from Shashank Raj on 2016-04-05 03:02:27 EDT ---

Correct, Soumya. After running the same test with SELinux disabled, I didn't observe the issue; no statd-related failures are seen in the ganesha.service status. However, I can see the below AVCs in audit.log:


type=AVC msg=audit(1459799848.045:869): avc:  denied  { read } for  pid=1565 comm="rpc.statd" name="nfs" dev="dm-0" ino=35254482 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file

type=AVC msg=audit(1459799848.045:869): avc:  denied  { read } for  pid=1565 comm="rpc.statd" name="sm" dev="fuse" ino=9851517453928257202 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=SYSCALL msg=audit(1459799848.045:869): arch=c000003e syscall=257 success=yes exit=7 a0=ffffffffffffff9c a1=7f92e6f96790 a2=90800 a3=0 items=0 ppid=1564 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)

type=AVC msg=audit(1459799848.060:870): avc:  denied  { read } for  pid=1565 comm="rpc.statd" name="state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1459799848.060:870): avc:  denied  { open } for  pid=1565 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp37-180.lab.eng.blr.redhat.com/nfs/statd/state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1459799848.060:870): avc:  denied  { read } for  pid=1565 comm="rpc.statd" name="state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1459799848.060:870): avc:  denied  { open } for  pid=1565 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp37-180.lab.eng.blr.redhat.com/nfs/statd/state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=SYSCALL msg=audit(1459799848.060:870): arch=c000003e syscall=2 success=yes exit=7 a0=7f92e6f96750 a1=0 a2=7f92e6f96768 a3=5 items=0 ppid=1564 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)

type=AVC msg=audit(1459799848.065:871): avc:  denied  { write } for  pid=1565 comm="rpc.statd" name="statd" dev="fuse" ino=9574569421130904447 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=AVC msg=audit(1459799848.065:871): avc:  denied  { add_name } for  pid=1565 comm="rpc.statd" name="state.new" scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=AVC msg=audit(1459799848.065:871): avc:  denied  { create } for  pid=1565 comm="rpc.statd" name="state.new" scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1459799848.065:871): avc:  denied  { write } for  pid=1565 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp37-180.lab.eng.blr.redhat.com/nfs/statd/state.new" dev="fuse" ino=12901113835499053102 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=SYSCALL msg=audit(1459799848.065:871): arch=c000003e syscall=2 success=yes exit=7 a0=7f92e6f96780 a1=101241 a2=1a4 a3=18 items=0 ppid=1564 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)

type=AVC msg=audit(1459799848.079:872): avc:  denied  { remove_name } for  pid=1565 comm="rpc.statd" name="state.new" dev="fuse" ino=12901113835499053102 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=AVC msg=audit(1459799848.079:872): avc:  denied  { rename } for  pid=1565 comm="rpc.statd" name="state.new" dev="fuse" ino=12901113835499053102 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1459799848.079:872): avc:  denied  { unlink } for  pid=1565 comm="rpc.statd" name="state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
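
Dumps like the one above are easier to compare across nodes once reduced to unique (source type, target type, class, permission) tuples, which is roughly what `audit2allow` starts from. A rough sketch (assuming the standard AVC record layout shown above; `summarize_avcs` is a made-up name):

```shell
# summarize_avcs: read audit AVC records on stdin and print one line per
# unique source-type / target-type / class / permission combination.
summarize_avcs() {
  grep '^type=AVC' \
    | sed -n 's/.*denied  { \([a-z_ ]*\) }.*scontext=[^:]*:[^:]*:\([a-z0-9_]*\):[^ ]* tcontext=[^:]*:[^:]*:\([a-z0-9_]*\):[^ ]* tclass=\([a-z_]*\).*/\2 \3 \4 { \1 }/p' \
    | sort -u
}

# Example with one of the records captured above:
summarize_avcs <<'EOF'
type=AVC msg=audit(1459740344.745:419): avc:  denied  { read } for  pid=3029 comm="rpc.statd" name="nfs" dev="dm-0" ino=34567184 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file
EOF
# → rpcd_t var_lib_t lnk_file { read }
```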

Comment 6 Shashank Raj 2016-04-15 14:02:46 UTC
[root@dhcp37-180 /]# mount | grep /var/lib
[root@dhcp37-180 /]# ls -RZ /var/lib/nfs
lrwxrwxrwx. root root system_u:object_r:var_lib_t:s0   /var/lib/nfs -> /var/run/gluster/shared_storage/nfs-ganesha/dhcp37-180.lab.eng.blr.redhat.com/nfs

Comment 9 Shashank Raj 2016-04-28 17:22:02 UTC
As mentioned in comment 4, there are still some AVCs seen in the audit logs which prevent statd from starting in enforcing mode:

type=AVC msg=audit(1461741415.944:384): avc:  denied  { read } for  pid=1911 comm="sm-notify" name="nfs" dev="dm-0" ino=36742185 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file

type=AVC msg=audit(1461616003.027:1760): avc:  denied  { read } for  pid=18230 comm="rpc.statd" name="nfs" dev="dm-0" ino=34738912 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file

So we need an updated local policy which fixes the above issues as well.

Let me know if any other information is needed.
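
For reference, the denial records above translate mechanically into a local policy module of roughly this shape. This is only an illustration of what `audit2allow` would emit for these AVCs; the actual fix that shipped in selinux-policy-3.13.1-70.el7 (and the rpcd boolean discussed below) may be structured quite differently:

```shell
# Write out a hypothetical local policy module covering the denials seen in
# this bug: rpcd_t reading the var_lib_t symlink /var/lib/nfs, and statd
# state files living on the FUSE-mounted (fusefs_t) shared volume.
cat > ganesha_statd_local.te <<'EOF'
module ganesha_statd_local 1.0;

require {
    type rpcd_t;
    type var_lib_t;
    type fusefs_t;
    class lnk_file read;
    class dir { read write add_name remove_name };
    class file { read write open create rename unlink };
}

allow rpcd_t var_lib_t:lnk_file read;
allow rpcd_t fusefs_t:dir { read write add_name remove_name };
allow rpcd_t fusefs_t:file { read write open create rename unlink };
EOF

# Compiling and loading it would look like this (not run here, since it
# needs the checkmodule/semodule toolchain and root):
#   checkmodule -M -m -o ganesha_statd_local.mod ganesha_statd_local.te
#   semodule_package -o ganesha_statd_local.pp -m ganesha_statd_local.mod
#   semodule -i ganesha_statd_local.pp
```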

Comment 10 Lukas Vrabec 2016-04-29 09:12:50 UTC
Hi Shashank,

AVCs involving the fusefs_t filesystem will be fixed by a new boolean for rpcd.

I would like to ask: how is the link file "/var/lib/nfs" created? We need to ensure that the link file is created with the proper SELinux label.

Comment 11 Shashank Raj 2016-04-29 10:08:28 UTC
Once we have the shared volume mounted on /var/run/gluster/shared_storage (which is a FUSE mount), then as part of the ganesha configuration (gluster nfs-ganesha enable) we create directories under nfs-ganesha on the shared_storage location and link /var/lib/nfs to them, as below:

/var/lib/nfs -> /var/run/gluster/shared_storage/nfs-ganesha/dhcp37-180.lab.eng.blr.redhat.com/nfs

/var/lib/nfs -> /var/run/gluster/shared_storage/nfs-ganesha/dhcp37-158.lab.eng.blr.redhat.com/nfs

/var/lib/nfs -> /var/run/gluster/shared_storage/nfs-ganesha/dhcp37-174.lab.eng.blr.redhat.com/nfs

Comment 12 Lukas Vrabec 2016-04-29 10:15:27 UTC
Understood. Which process creates the link? I need to create an SELinux transition rule.

Comment 13 Shashank Raj 2016-04-29 10:35:24 UTC
During gluster nfs-ganesha enable, we call the script "/usr/libexec/ganesha/ganesha-ha.sh" to configure it.

Comment 14 Lukas Vrabec 2016-05-02 08:10:34 UTC
I sent scratch builds to Shashank Raj. 

Please attach the AVCs after testing.

Thank you.

Comment 15 Shashank Raj 2016-05-02 09:46:07 UTC
You asked me to update the libselinux and policycoreutils packages before updating the selinux policies, but there are other dependent packages as well, which stops me from upgrading. See below:

--> Finished Dependency Resolution
Error: Package: libselinux-2.5-3.el7.x86_64 (/libselinux-2.5-3.el7.x86_64)
           Requires: libsepol(x86-64) >= 2.5
           Installed: libsepol-2.1.9-3.el7.x86_64 (@anaconda/7.2)
               libsepol(x86-64) = 2.1.9-3.el7

Error: Package: policycoreutils-2.5-2.el7.x86_64 (/policycoreutils-2.5-2.el7.x86_64)
           Requires: libsepol >= 2.5
           Installed: libsepol-2.1.9-3.el7.x86_64 (@anaconda/7.2)
               libsepol = 2.1.9-3.el7

I downloaded the package libsepol-2.5-2.1.el7.x86_64.rpm from (https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=489564) and then tried installing it, but again ran into the below dependency issue:

Error: libselinux conflicts with systemd-219-19.el7_2.8.x86_64

Error: Package: policycoreutils-python-2.5-2.el7.x86_64 (/policycoreutils-python-2.5-2.el7.x86_64)
           Requires: libsemanage-python >= 2.5
           Installed: libsemanage-python-2.1.10-18.el7.x86_64 (@rhel-7-server-rpms)
               libsemanage-python = 2.1.10-18.el7
           Available: libsemanage-python-2.1.10-16.el7.x86_64 (rhel-7-server-rpms)
               libsemanage-python = 2.1.10-16.el7

After downloading the libsemanage packages from (https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=489872) and trying to upgrade, I ran into the below issue:

Error: libsemanage conflicts with selinux-policy-targeted-3.13.1-60.el7_2.3.noarch

So, because of so many dependency issues, I could not start the verification.

Can you please provide the necessary links/packages which I can use to test the fix?

Comment 16 Shashank Raj 2016-05-02 13:12:19 UTC
After upgrading to the selinux build:

[root@dhcp43-188 ~]# rpm -qa|grep selinux
selinux-policy-targeted-3.13.1-69.el7.1.noarch
selinux-policy-devel-3.13.1-69.el7.1.noarch
selinux-policy-3.13.1-69.el7.1.noarch

The below AVCs are seen in audit.log:

type=AVC msg=audit(1462213059.347:67507): avc:  denied  { read } for  pid=25805 comm="rpc.statd" name="nfs" dev="dm-0" ino=33768381 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file

type=AVC msg=audit(1462213059.347:67507): avc:  denied  { read } for  pid=25805 comm="rpc.statd" name="sm" dev="fuse" ino=12705707374506975531 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=AVC msg=audit(1462213059.358:67508): avc:  denied  { read } for  pid=25805 comm="rpc.statd" name="state" dev="fuse" ino=12079666029056761640 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1462213059.358:67508): avc:  denied  { open } for  pid=25805 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp42-83.lab.eng.blr.redhat.com/nfs/statd/state" dev="fuse" ino=12079666029056761640 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1462213059.166:67402): avc:  denied  { open } for  pid=32468 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp42-115.lab.eng.blr.redhat.com/nfs/statd/state" dev="fuse" ino=12458638062750435886 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1462213059.759:67450): avc:  denied  { open } for  pid=22099 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp43-133.lab.eng.blr.redhat.com/nfs/statd/state" dev="fuse" ino=11945482723556650030 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1462213059.646:62799): avc:  denied  { open } for  pid=23423 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp43-188.lab.eng.blr.redhat.com/nfs/statd/state" dev="fuse" ino=11315594939681067586 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1462213059.361:67509): avc:  denied  { write } for  pid=25805 comm="rpc.statd" name="statd" dev="fuse" ino=11774365275056367482 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=AVC msg=audit(1462213059.361:67509): avc:  denied  { add_name } for  pid=25805 comm="rpc.statd" name="state.new" scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=AVC msg=audit(1462213059.361:67509): avc:  denied  { create } for  pid=25805 comm="rpc.statd" name="state.new" scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1462213059.361:67509): avc:  denied  { write } for  pid=25805 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp42-83.lab.eng.blr.redhat.com/nfs/statd/state.new" dev="fuse" ino=10157941417756713419 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1462213059.168:67403): avc:  denied  { write } for  pid=32468 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp42-115.lab.eng.blr.redhat.com/nfs/statd/state.new" dev="fuse" ino=13750575941116322667 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1462213059.761:67451): avc:  denied  { write } for  pid=22099 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp43-133.lab.eng.blr.redhat.com/nfs/statd/state.new" dev="fuse" ino=12534429915005777352 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1462213059.648:62800): avc:  denied  { write } for  pid=23423 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp43-188.lab.eng.blr.redhat.com/nfs/statd/state.new" dev="fuse" ino=10074523560772388662 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1462213059.430:67510): avc:  denied  { remove_name } for  pid=25805 comm="rpc.statd" name="state.new" dev="fuse" ino=10157941417756713419 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=AVC msg=audit(1462213059.430:67510): avc:  denied  { rename } for  pid=25805 comm="rpc.statd" name="state.new" dev="fuse" ino=10157941417756713419 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1462213059.430:67510): avc:  denied  { unlink } for  pid=25805 comm="rpc.statd" name="state" dev="fuse" ino=12079666029056761640 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
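Every denial above has the same source/target pair (rpcd_t acting on fusefs_t), so the missing policy rules can be scoped by listing the distinct denied permission/class pairs. A small triage sketch (not the official fix; the sample records are abbreviated copies of the AVCs from this report):

```shell
# Write a few abbreviated AVC records, then extract "permission tclass" pairs.
cat <<'EOF' > /tmp/statd-avc.log
type=AVC msg=audit(1462213059.646:62799): avc:  denied  { open } for  pid=23423 comm="rpc.statd" scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1462213059.361:67509): avc:  denied  { write } for  pid=25805 comm="rpc.statd" scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir
type=AVC msg=audit(1462213059.361:67509): avc:  denied  { create } for  pid=25805 comm="rpc.statd" scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
EOF
# Prints one "permission tclass" pair per line, e.g. "open file".
sed -n 's/.*denied  { \([a-z_]*\) }.*tclass=\([a-z_]*\)$/\1 \2/p' /tmp/statd-avc.log | sort -u
# On an affected machine, the live denials could instead be fed to
# audit2allow to build an interim local module until the fixed
# selinux-policy lands, e.g.:
#   ausearch -m avc -c rpc.statd | audit2allow -M statdlocal && semodule -i statdlocal.pp
```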

Comment 22 James Christensen 2016-05-17 19:50:42 UTC
Fresh install today; it looks like the same issue here.

-- Unit rpc-statd.service has begun starting up.
May 17 14:32:22 nfs1 rpc.statd[17967]: Version 1.3.0 starting
May 17 14:32:22 nfs1 rpc.statd[17967]: Flags: TI-RPC
May 17 14:32:22 nfs1 rpc.statd[17967]: Failed to open directory sm: Permission denied
May 17 14:32:22 nfs1 rpc.statd[17967]: Failed to open /var/lib/nfs/statd/state: Permission denied
May 17 14:32:22 nfs1 systemd[1]: rpc-statd.service: control process exited, code=exited status=1
May 17 14:32:22 nfs1 systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
-- Subject: Unit rpc-statd.service has failed

type=SERVICE_START msg=audit(1463513555.587:433): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpc-statd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
type=AVC msg=audit(1463513901.613:434): avc:  denied  { read } for  pid=19924 comm="rpc.statd" name="nfs" dev="dm-0" ino=34264089 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1463513901.613:434): arch=c000003e syscall=257 success=no exit=-13 a0=ffffffffffffff9c a1=7fc977d5fc70 a2=90800 a3=0 items=0 ppid=19923 pid=19924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)
type=AVC msg=audit(1463513901.613:435): avc:  denied  { read } for  pid=19924 comm="rpc.statd" name="nfs" dev="dm-0" ino=34264089 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1463513901.613:435): arch=c000003e syscall=2 success=no exit=-13 a0=7fc977d5e910 a1=0 a2=7fc977d5e928 a3=5 items=0 ppid=19923 pid=19924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)
type=SERVICE_START msg=audit(1463513901.616:436): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpc-statd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
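The journal's "Permission denied" lines pair with the SYSCALL records carrying exit=-13, i.e. -EACCES. A quick sketch decoding that field from a trimmed copy of one record above:

```shell
# Decode the errno from a SYSCALL audit record (trimmed copy of record 434).
rec='type=SYSCALL msg=audit(1463513901.613:434): syscall=257 success=no exit=-13 comm="rpc.statd"'
errno=${rec##*exit=-}   # drop everything up to and including "exit=-"
errno=${errno%% *}      # keep only the number
echo "errno $errno"     # errno 13 is EACCES ("Permission denied") on Linux
```

Note the target here is a lnk_file named "nfs" labelled var_lib_t, which suggests (unconfirmed) that a /var/lib/nfs symlink lost its expected label; `restorecon -Rv /var/lib/nfs` would be a reasonable first check on an affected box.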

Comment 23 Milos Malik 2016-05-18 14:08:44 UTC
Which selinux-policy version did you use?

# rpm -qa selinux-policy\*

Comment 24 James Christensen 2016-05-23 19:19:52 UTC
selinux-policy-3.13.1-60.el7_2.3.noarch
selinux-policy-targeted-3.13.1-60.el7_2.3.noarch

Comment 25 Milos Malik 2016-05-24 07:06:23 UTC
The fix is not present in version 3.13.1-60.el7_2.3 of selinux-policy packages, but it is present in version 3.13.1-60.el7_2.4 and above, which will be released soon.
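A quick way to check whether an installed selinux-policy already carries the fix is to compare the installed version-release against the first fixed one (3.13.1-60.el7_2.4). A sketch using GNU `sort -V` (the `installed` value below is hard-coded from comment 24; on a real system it would come from `rpm -q --qf '%{VERSION}-%{RELEASE}\n' selinux-policy`):

```shell
installed=3.13.1-60.el7_2.3   # hypothetical: substitute the local rpm query result
fixed=3.13.1-60.el7_2.4
# Version-sort the two strings; if the installed one sorts first and differs,
# it predates the fix.
lowest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "older than the fix: update selinux-policy"
fi
```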

Comment 27 errata-xmlrpc 2016-11-04 02:46:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2283.html
