Bug 1323740 - [SELinux]nfs-ganesha.service status shows "failed to connect to statd" after node reboot
Summary: [SELinux]nfs-ganesha.service status shows "failed to connect to statd" after ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Kaleb KEITHLEY
QA Contact: Shashank Raj
Docs Contact: Marie Hornickova
URL:
Whiteboard:
Duplicates: 1332577
Depends On: 1323947 1332577 1333875
Blocks: 1311817
 
Reported: 2016-04-04 15:02 UTC by Shashank Raj
Modified: 2016-11-08 03:53 UTC
CC List: 12 users

Fixed In Version: nfs-ganesha-2.3.1-5, selinux-policy-3.13.1-60.el7_2.4
Doc Type: Bug Fix
Doc Text:
Due to missing rules in the Gluster SELinux policy, the nfs-ganesha service failed to connect to the rpc.statd daemon after a node reboot in the situation where the nfs-ganesha server was installed on four nodes. The underlying code has been fixed, and nfs-ganesha no longer fails in the described scenario.
Clone Of:
Clones: 1323947
Environment:
Last Closed: 2016-06-23 05:35:06 UTC




Links
System ID                               Priority  Status        Summary                                                                          Last Updated
Red Hat Bugzilla 1334760                high      CLOSED        nfs-ganesha-2.3.1-5.el7rhgs having dependency on selinux-policy >= 0:3.13.1-70  2021-02-22 00:41:40 UTC
Red Hat Product Errata RHEA-2016:1247   normal    SHIPPED_LIVE  nfs-ganesha update for Red Hat Gluster Storage 3.1 update 3                      2016-06-23 09:12:43 UTC

Internal Links: 1334760

Description Shashank Raj 2016-04-04 15:02:52 UTC
Description of problem:
nfs-ganesha.service status shows "failed to connect to statd" after node reboot

Version-Release number of selected component (if applicable):
glusterfs-3.7.9-1

How reproducible:
twice

Steps to Reproduce:
1. Configure nfs-ganesha on a 4-node cluster and make sure all the services and ports are added properly in the firewall, and that rpcinfo shows the relevant output as below:

[root@dhcp37-180 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad


[root@dhcp37-158 bin]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad


[root@dhcp37-127 bin]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad

[root@dhcp37-174 bin]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad

[root@dhcp37-180 ~]# firewall-cmd --list-ports
662/udp 662/tcp 4501/udp 20048/udp 32000/udp 32000/tcp 20048/tcp 4501/tcp

[root@dhcp37-158 statd]# firewall-cmd --list-ports
662/udp 662/tcp 4501/udp 20048/udp 32000/udp 32000/tcp 20048/tcp 4501/tcp

[root@dhcp37-127 statd]# firewall-cmd --list-ports
662/udp 662/tcp 4501/udp 20048/udp 32000/udp 32000/tcp 20048/tcp 4501/tcp

[root@dhcp37-174 statd]# firewall-cmd --list-ports
662/udp 662/tcp 4501/udp 20048/udp 32000/udp 32000/tcp 20048/tcp 4501/tcp


2. Now power off 3 of the nodes in the cluster and start them again after 5 minutes.
3. Make sure that once the nodes come up, the shared storage is mounted properly on all the nodes.
4. Start the pcsd, pacemaker and nfs-ganesha services on all 3 nodes.
5. Observe that on 2 of the nodes, the nfs-ganesha service status shows "failed to connect to statd"

[root@dhcp37-174 ~]# service nfs-ganesha status -l
Redirecting to /bin/systemctl status  -l nfs-ganesha.service
● nfs-ganesha.service - NFS-Ganesha file server
   Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2016-04-04 08:57:29 IST; 12min ago
     Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki
  Process: 3068 ExecStartPost=/bin/bash -c prlimit --pid $MAINPID --nofile=$NOFILE:$NOFILE (code=exited, status=0/SUCCESS)
  Process: 3066 ExecStart=/bin/bash -c ${NUMACTL} ${NUMAOPTS} /usr/bin/ganesha.nfsd ${OPTIONS} ${EPOCH} (code=exited, status=0/SUCCESS)
 Main PID: 3067 (ganesha.nfsd)
   CGroup: /system.slice/nfs-ganesha.service
           └─3067 /usr/bin/ganesha.nfsd

Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [reaper] nfs_in_grace :STATE :EVENT :NFS Server Now IN GRACE
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nsm_connect :NLM :CRIT :failed to connect to statd
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nsm_unmonitor_all :NLM :CRIT :Can not unmonitor all clnt_create returned NULL
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_start :NFS STARTUP :EVENT :------------------------------------------
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Apr 04 08:57:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [main] nfs_start :NFS STARTUP :EVENT :------------------------------------------
Apr 04 08:58:29 dhcp37-174.lab.eng.blr.redhat.com nfs-ganesha[3067]: [reaper] nfs_in_grace :STATE :EVENT :NFS Server Now NOT IN GRACE



[root@dhcp37-127 nfs-ganesha]# service nfs-ganesha status -l
Redirecting to /bin/systemctl status  -l nfs-ganesha.service
● nfs-ganesha.service - NFS-Ganesha file server
   Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2016-04-04 08:55:44 IST; 14min ago
     Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki
  Process: 3033 ExecStartPost=/bin/bash -c prlimit --pid $MAINPID --nofile=$NOFILE:$NOFILE (code=exited, status=0/SUCCESS)
  Process: 3031 ExecStart=/bin/bash -c ${NUMACTL} ${NUMAOPTS} /usr/bin/ganesha.nfsd ${OPTIONS} ${EPOCH} (code=exited, status=0/SUCCESS)
 Main PID: 3032 (ganesha.nfsd)
   CGroup: /system.slice/nfs-ganesha.service
           └─3032 /usr/bin/ganesha.nfsd

Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [reaper] nfs_in_grace :STATE :EVENT :NFS Server Now IN GRACE
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nsm_connect :NLM :CRIT :failed to connect to statd
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nsm_unmonitor_all :NLM :CRIT :Can not unmonitor all clnt_create returned NULL
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Apr 04 08:55:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Apr 04 08:56:44 dhcp37-127.lab.eng.blr.redhat.com nfs-ganesha[3032]: [reaper] nfs_in_grace :STATE :EVENT :NFS Server Now NOT IN GRACE


6. rpcinfo on 2 of the nodes has missing entries for status, while on the 3rd node the output is correct.

[root@dhcp37-158 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100024    1   tcp    662  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad


[root@dhcp37-127 nfs-ganesha]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad


[root@dhcp37-174 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad

7. The below messages are observed in /var/log/messages:


on 1st node:

Apr  4 08:55:44 dhcp37-127 rpc.statd[3029]: Version 1.3.0 starting
Apr  4 08:55:44 dhcp37-127 rpc.statd[3029]: Flags: TI-RPC
Apr  4 08:55:44 dhcp37-127 rpc.statd[3029]: Failed to open directory sm: Permission denied
Apr  4 08:55:44 dhcp37-127 rpc.statd[3029]: Failed to open /var/lib/nfs/statd/state: Permission denied
Apr  4 08:55:44 dhcp37-127 systemd: nfs-ganesha-lock.service: control process exited, code=exited status=1
Apr  4 08:55:44 dhcp37-127 systemd: Failed to start NFS status monitor for NFSv2/3 locking..
Apr  4 08:55:44 dhcp37-127 systemd: Unit nfs-ganesha-lock.service entered failed state.
Apr  4 08:55:44 dhcp37-127 systemd: nfs-ganesha-lock.service failed.
Apr  4 08:55:44 dhcp37-127 systemd: Starting NFS-Ganesha file server...


Apr  4 08:55:44 dhcp37-127 nfs-ganesha[3032]: [reaper] nfs_in_grace :STATE :EVENT :NFS Server Now IN GRACE
Apr  4 08:55:44 dhcp37-127 nfs-ganesha[3032]: [main] nsm_connect :NLM :CRIT :failed to connect to statd
Apr  4 08:55:44 dhcp37-127 nfs-ganesha[3032]: [main] nsm_unmonitor_all :NLM :CRIT :Can not unmonitor all clnt_create returned NULL

on 2nd node:



Apr  4 08:57:29 dhcp37-174 rpc.statd[3064]: Failed to open directory sm: Permission denied
Apr  4 08:57:29 dhcp37-174 rpc.statd[3064]: Failed to open /var/lib/nfs/statd/state: Permission denied
Apr  4 08:57:29 dhcp37-174 systemd: nfs-ganesha-lock.service: control process exited, code=exited status=1
Apr  4 08:57:29 dhcp37-174 systemd: Failed to start NFS status monitor for NFSv2/3 locking..
Apr  4 08:57:29 dhcp37-174 systemd: Unit nfs-ganesha-lock.service entered failed state.
Apr  4 08:57:29 dhcp37-174 systemd: nfs-ganesha-lock.service failed.
Apr  4 08:57:29 dhcp37-174 systemd: Starting NFS-Ganesha file server...

Apr  4 08:57:29 dhcp37-174 nfs-ganesha[3067]: [main] nsm_connect :NLM :CRIT :failed to connect to statd
Apr  4 08:57:29 dhcp37-174 nfs-ganesha[3067]: [main] nsm_unmonitor_all :NLM :CRIT :Can not unmonitor all clnt_create returned NULL


Actual results:

nfs-ganesha.service status shows "failed to connect to statd" after node reboot

Expected results:

There should not be any failures after the nodes come up and after the restart of the nfs-ganesha, pcs and pacemaker services.


Additional info:

Comment 2 Shashank Raj 2016-04-04 15:10:04 UTC
sosreports are placed under http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1323740

Comment 3 Soumya Koduri 2016-04-04 15:21:31 UTC
I see the below AVCs on one of the machines where rpc.statd hasn't started.

type=AVC msg=audit(1459740344.745:419): avc:  denied  { read } for  pid=3029 comm="rpc.statd" name="nfs" dev="dm-0" ino=34567184 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1459740344.745:419): arch=c000003e syscall=257 success=no exit=-13 a0=ffffffffffffff9c a1=7effa7434790 a2=90800 a3=0 items=0 ppid=3028 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)
type=AVC msg=audit(1459740344.745:420): avc:  denied  { read } for  pid=3029 comm="rpc.statd" name="nfs" dev="dm-0" ino=34567184 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1459740344.745:420): arch=c000003e syscall=2 success=no exit=-13 a0=7effa7434750 a1=0 a2=7effa7434768 a3=5 items=0 ppid=3028 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)


Not sure why these AVCs are not seen on the other machines. Could you check with SELinux disabled?
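
(A quick way to rule SELinux in or out without disabling it entirely is to switch the node to permissive mode temporarily and restart the affected services; a minimal sketch using the unit names seen elsewhere in this report, not a command sequence from the original test:)

# getenforce
Enforcing
# setenforce 0     # permissive: denials are still logged in audit.log but not enforced
# getenforce
Permissive
# systemctl restart nfs-ganesha-lock nfs-ganesha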

Comment 4 Shashank Raj 2016-04-05 07:02:27 UTC
Correct, Soumya. After running the same test with SELinux disabled, I didn't observe the issue. No statd-related failures are seen in the nfs-ganesha.service status. However, I can see the below AVCs in audit.log:


type=AVC msg=audit(1459799848.045:869): avc:  denied  { read } for  pid=1565 comm="rpc.statd" name="nfs" dev="dm-0" ino=35254482 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file

type=AVC msg=audit(1459799848.045:869): avc:  denied  { read } for  pid=1565 comm="rpc.statd" name="sm" dev="fuse" ino=9851517453928257202 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=SYSCALL msg=audit(1459799848.045:869): arch=c000003e syscall=257 success=yes exit=7 a0=ffffffffffffff9c a1=7f92e6f96790 a2=90800 a3=0 items=0 ppid=1564 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)

type=AVC msg=audit(1459799848.060:870): avc:  denied  { read } for  pid=1565 comm="rpc.statd" name="state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1459799848.060:870): avc:  denied  { open } for  pid=1565 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp37-180.lab.eng.blr.redhat.com/nfs/statd/state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1459799848.060:870): avc:  denied  { read } for  pid=1565 comm="rpc.statd" name="state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1459799848.060:870): avc:  denied  { open } for  pid=1565 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp37-180.lab.eng.blr.redhat.com/nfs/statd/state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=SYSCALL msg=audit(1459799848.060:870): arch=c000003e syscall=2 success=yes exit=7 a0=7f92e6f96750 a1=0 a2=7f92e6f96768 a3=5 items=0 ppid=1564 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)

type=AVC msg=audit(1459799848.065:871): avc:  denied  { write } for  pid=1565 comm="rpc.statd" name="statd" dev="fuse" ino=9574569421130904447 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=AVC msg=audit(1459799848.065:871): avc:  denied  { add_name } for  pid=1565 comm="rpc.statd" name="state.new" scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=AVC msg=audit(1459799848.065:871): avc:  denied  { create } for  pid=1565 comm="rpc.statd" name="state.new" scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1459799848.065:871): avc:  denied  { write } for  pid=1565 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp37-180.lab.eng.blr.redhat.com/nfs/statd/state.new" dev="fuse" ino=12901113835499053102 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=SYSCALL msg=audit(1459799848.065:871): arch=c000003e syscall=2 success=yes exit=7 a0=7f92e6f96780 a1=101241 a2=1a4 a3=18 items=0 ppid=1564 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)

type=AVC msg=audit(1459799848.079:872): avc:  denied  { remove_name } for  pid=1565 comm="rpc.statd" name="state.new" dev="fuse" ino=12901113835499053102 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir

type=AVC msg=audit(1459799848.079:872): avc:  denied  { rename } for  pid=1565 comm="rpc.statd" name="state.new" dev="fuse" ino=12901113835499053102 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file

type=AVC msg=audit(1459799848.079:872): avc:  denied  { unlink } for  pid=1565 comm="rpc.statd" name="state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
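
(As an aside, denials like the ones above can be collected into a local policy module with audit2allow; a rough sketch with an arbitrary module name, meant only as a temporary workaround until the shipped policy carries the proper rules:)

# ausearch -m avc -c rpc.statd --raw | audit2allow -M rpc_statd_fuse
# semodule -i rpc_statd_fuse.pp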

Comment 5 Niels de Vos 2016-04-05 13:59:56 UTC
There is an issue where rpc.statd (running as rpcd_t) cannot access files located on a FUSE mount (fusefs_t). FUSE does not support setting SELinux contexts yet :-/

Maybe we can mount the shared-storage volume with a different context and that should allow rpc.statd to access the contents? Something like this might do:

  mount -t glusterfs -o context=unconfined_u:unconfined_r:unconfined_t ...

Shashank, could you try that out? Restart the nfs-ganesha-lock service and ganesha to see if it makes a difference. If not, maybe the SELinux experts can suggest a more suitable context for mounting the shared storage volume.

Comment 6 Shashank Raj 2016-04-07 12:21:04 UTC
Niels,

Tried your suggestion but it fails with an invalid argument error.

[root@dhcp37-180 ~]# mount -t glusterfs -o context=unconfined_u:unconfined_r:unconfined_t localhost:/gluster_shared_storage /var/run/gluster/shared_storage
/usr/bin/fusermount-glusterfs: mount failed: Invalid argument
Mount failed. Please check the log file for more details.

Shashank

Comment 7 Niels de Vos 2016-04-07 15:48:32 UTC
(In reply to Shashank Raj from comment #6)
> Niels,
> 
> Tried with your suggestion but it fails with invalid argument.
> 
> [root@dhcp37-180 ~]# mount -t glusterfs -o
> context=unconfined_u:unconfined_r:unconfined_t
> localhost:/gluster_shared_storage /var/run/gluster/shared_storage
> /usr/bin/fusermount-glusterfs: mount failed: Invalid argument
> Mount failed. Please check the log file for more details.

Please check with one of the SELinux experts (Prasanth?) how the context mount option should be used (it is common to all filesystems).
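
(For reference, the context= mount option generally expects a fully specified context of the form user:role:type:level with the object role; a hedged sketch of what such a mount line could look like, with fusefs_t used purely as an example type:)

# mount -t glusterfs \
    -o context="system_u:object_r:fusefs_t:s0" \
    localhost:/gluster_shared_storage /var/run/gluster/shared_storage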

Comment 9 Shashank Raj 2016-04-13 16:15:17 UTC
Updated dependent selinux bug (https://bugzilla.redhat.com/show_bug.cgi?id=1323947) with the details after trying the workaround.

Comment 12 Shashank Raj 2016-05-05 09:17:36 UTC
Hi Kaleb,

Below are the comments from the SELinux team on making this work with nfs-ganesha. Can we take a look at it and do the needful?

Lukas Vrabec 2016-05-03 10:11:22 EDT

Hi, 

To make nfs-ganesha work with SELinux, the following command needs to be added to the post-install phase:

$ semanage boolean -m --on rpcd_use_fusefs

This boolean is part of selinux-policy-3.13.1-70.el7, so that package needs to be required by the nfs-ganesha RPM package.
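
(For illustration, the boolean can also be checked and set persistently with the standard SELinux tools; a sketch, assuming a policy version that ships rpcd_use_fusefs is installed:)

# getsebool rpcd_use_fusefs
rpcd_use_fusefs --> off
# setsebool -P rpcd_use_fusefs on
# getsebool rpcd_use_fusefs
rpcd_use_fusefs --> on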

Comment 13 Shashank Raj 2016-05-05 09:21:08 UTC
*** Bug 1332577 has been marked as a duplicate of this bug. ***

Comment 14 Kaleb KEITHLEY 2016-05-05 12:46:24 UTC
I see 3.13.1-70.el7 in brewroot. Is there an ETA for when it will be in RHEL 7 or available for rhpkg builds?

Thanks.

Comment 15 Kaleb KEITHLEY 2016-05-05 12:50:01 UTC
Waiting for selinux-policy-3.13.1-70 to become available before I can do a build.

Comment 20 Shashank Raj 2016-05-11 08:56:58 UTC
Verified this bug with the selinux-policy-3.13.1-60.el7_2.4.noarch and nfs-ganesha-2.3.1-6.el7rhgs.x86_64 builds, and the issue is resolved.

Verified with different scenarios as below:

>> setting up nfs-ganesha environment
>> rebooting nodes in cluster
>> manually restarting nfs-ganesha and nfs-ganesha-lock service multiple time

In every case, the nfs-ganesha and nfs-ganesha-lock services do not go into a failed state and no AVC denials are seen in audit.log.

Based on the above observation, marking this bug as Verified.
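
(A minimal sketch of the kind of checks behind this verification, assuming standard tooling; not actual output from the test runs:)

# getenforce
Enforcing
# systemctl is-active nfs-ganesha nfs-ganesha-lock
active
active
# ausearch -m avc -ts boot -c rpc.statd
<no matches>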

Comment 22 errata-xmlrpc 2016-06-23 05:35:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1247

