Bug 1323740
Summary: [SELinux] nfs-ganesha.service status shows "failed to connect to statd" after node reboot

| Field | Value |
|---|---|
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | nfs-ganesha |
| Version | rhgs-3.1 |
| Target Release | RHGS 3.1.3 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | unspecified |
| Keywords | SELinux, ZStream |
| Reporter | Shashank Raj <sraj> |
| Assignee | Kaleb KEITHLEY <kkeithle> |
| QA Contact | Shashank Raj <sraj> |
| Docs Contact | Marie Hornickova <mdolezel> |
| CC | asrivast, jthottan, kkeithle, lvrabec, mdolezel, ndevos, nlevinki, pprakash, rhinduja, sashinde, skoduri, sraj |
| Fixed In Version | nfs-ganesha-2.3.1-5, selinux-policy-3.13.1-60.el7_2.4 |
| Doc Type | Bug Fix |
| Clones | 1323947 (view as bug list) |
| Type | Bug |
| Last Closed | 2016-06-23 05:35:06 UTC |
| Bug Depends On | 1323947, 1332577, 1333875 |
| Bug Blocks | 1311817 |

Doc Text: Due to missing rules in the Gluster SELinux policy, the nfs-ganesha service failed to connect to the rpc.statd daemon after a node reboot in the situation where the nfs-ganesha server was installed on four nodes. The underlying code has been fixed, and nfs-ganesha no longer fails in the described scenario.
Description (Shashank Raj, 2016-04-04 15:02:52 UTC)
sosreports are placed under http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1323740

I see the AVCs below on one of the machines where rpc.statd hasn't started:

```
type=AVC msg=audit(1459740344.745:419): avc: denied { read } for pid=3029 comm="rpc.statd" name="nfs" dev="dm-0" ino=34567184 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1459740344.745:419): arch=c000003e syscall=257 success=no exit=-13 a0=ffffffffffffff9c a1=7effa7434790 a2=90800 a3=0 items=0 ppid=3028 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)
type=AVC msg=audit(1459740344.745:420): avc: denied { read } for pid=3029 comm="rpc.statd" name="nfs" dev="dm-0" ino=34567184 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1459740344.745:420): arch=c000003e syscall=2 success=no exit=-13 a0=7effa7434750 a1=0 a2=7effa7434768 a3=5 items=0 ppid=3028 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)
```

Not sure why these AVCs are not seen on the other machines.

(Soumya Koduri) Could you check with SELinux disabled?

(Shashank Raj) Correct Soumya, after running the same test with SELinux disabled, I didn't observe the issue. No statd-related failures are seen in the ganesha.service status.
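For readers less familiar with AVC records: the interesting fields are `scontext` (the SELinux domain of the acting process), `tcontext` (the label of the object being accessed), and `tclass` (the object class). A minimal sketch of pulling those fields out of the first denial above, which on a live system would more typically be done with `ausearch -m avc -c rpc.statd` or `audit2why`:

```shell
# One AVC record copied verbatim from the report above.
avc='type=AVC msg=audit(1459740344.745:419): avc: denied { read } for pid=3029 comm="rpc.statd" name="nfs" dev="dm-0" ino=34567184 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file'

# Print just the source context, target context, and object class.
echo "$avc" | awk '{
  for (i = 1; i <= NF; i++) {
    if ($i ~ /^(scontext|tcontext|tclass)=/) print $i
  }
}'
# prints:
# scontext=system_u:system_r:rpcd_t:s0
# tcontext=system_u:object_r:var_lib_t:s0
# tclass=lnk_file
```

Read together, this says: the rpcd_t domain (rpc.statd) was denied read access to a symlink labeled var_lib_t, which matches the missing-policy-rule diagnosis later in the thread.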
However, I can see the AVCs below in audit.log:

```
type=AVC msg=audit(1459799848.045:869): avc: denied { read } for pid=1565 comm="rpc.statd" name="nfs" dev="dm-0" ino=35254482 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file
type=AVC msg=audit(1459799848.045:869): avc: denied { read } for pid=1565 comm="rpc.statd" name="sm" dev="fuse" ino=9851517453928257202 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir
type=SYSCALL msg=audit(1459799848.045:869): arch=c000003e syscall=257 success=yes exit=7 a0=ffffffffffffff9c a1=7f92e6f96790 a2=90800 a3=0 items=0 ppid=1564 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)
type=AVC msg=audit(1459799848.060:870): avc: denied { read } for pid=1565 comm="rpc.statd" name="state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1459799848.060:870): avc: denied { open } for pid=1565 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp37-180.lab.eng.blr.redhat.com/nfs/statd/state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1459799848.060:870): avc: denied { read } for pid=1565 comm="rpc.statd" name="state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1459799848.060:870): avc: denied { open } for pid=1565 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp37-180.lab.eng.blr.redhat.com/nfs/statd/state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=SYSCALL msg=audit(1459799848.060:870): arch=c000003e syscall=2 success=yes exit=7 a0=7f92e6f96750 a1=0 a2=7f92e6f96768 a3=5 items=0 ppid=1564 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)
type=AVC msg=audit(1459799848.065:871): avc: denied { write } for pid=1565 comm="rpc.statd" name="statd" dev="fuse" ino=9574569421130904447 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir
type=AVC msg=audit(1459799848.065:871): avc: denied { add_name } for pid=1565 comm="rpc.statd" name="state.new" scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir
type=AVC msg=audit(1459799848.065:871): avc: denied { create } for pid=1565 comm="rpc.statd" name="state.new" scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1459799848.065:871): avc: denied { write } for pid=1565 comm="rpc.statd" path="/run/gluster/shared_storage/nfs-ganesha/dhcp37-180.lab.eng.blr.redhat.com/nfs/statd/state.new" dev="fuse" ino=12901113835499053102 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=SYSCALL msg=audit(1459799848.065:871): arch=c000003e syscall=2 success=yes exit=7 a0=7f92e6f96780 a1=101241 a2=1a4 a3=18 items=0 ppid=1564 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc.statd" exe="/usr/sbin/rpc.statd" subj=system_u:system_r:rpcd_t:s0 key=(null)
type=AVC msg=audit(1459799848.079:872): avc: denied { remove_name } for pid=1565 comm="rpc.statd" name="state.new" dev="fuse" ino=12901113835499053102 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=dir
type=AVC msg=audit(1459799848.079:872): avc: denied { rename } for pid=1565 comm="rpc.statd" name="state.new" dev="fuse" ino=12901113835499053102 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1459799848.079:872): avc: denied { unlink } for pid=1565 comm="rpc.statd" name="state" dev="fuse" ino=13562855280619438618 scontext=system_u:system_r:rpcd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
```

(Niels de Vos) There is an issue here: rpc.statd (running as rpcd_t) cannot access files located on a FUSE mount (fusefs_t). FUSE does not support setting SELinux contexts yet :-/ Maybe we can mount the shared-storage volume with a different context, and that should allow rpc.statd to access the contents. Something like this might do:

```
mount -t glusterfs -o context=unconfined_u:unconfined_r:unconfined_t ...
```

Shashank, could you try that out? Restart the nfs-ganesha-lock service and ganesha to see if it makes a difference. If not, maybe the SELinux experts can suggest a more suitable context for mounting the shared storage volume.

(Shashank Raj) Niels, I tried your suggestion, but it fails with an invalid argument:

```
[root@dhcp37-180 ~]# mount -t glusterfs -o context=unconfined_u:unconfined_r:unconfined_t localhost:/gluster_shared_storage /var/run/gluster/shared_storage
/usr/bin/fusermount-glusterfs: mount failed: Invalid argument
Mount failed. Please check the log file for more details.
```

(Niels de Vos, in reply to Shashank Raj from comment #6) Please check with one of the SELinux experts (Prasanth?) how the context mount option should be used (it is common to all filesystems).

(Shashank Raj) Updated the dependent SELinux bug (https://bugzilla.redhat.com/show_bug.cgi?id=1323947) with the details after trying the workaround.

Hi Kaleb, below are the comments from the SELinux team on making this work with nfs-ganesha.
Can we take a look into it and do the needful?

(Lukas Vrabec, 2016-05-03 10:11:22 EDT) Hi, to make nfs-ganesha work with SELinux, this command needs to be added to the post-install phase:

```
$ semanage boolean -m --on rpcd_use_fusefs
```

This boolean is part of selinux-policy-3.13.1-70.el7, so that package needs to be required by the nfs-ganesha RPM package.

*** Bug 1332577 has been marked as a duplicate of this bug. ***

I see 3.13.1-70.el7 in brewroot. Is there an ETA for RHEL 7, or will it be available for rhpkg builds? Thanks.

Waiting for selinux-policy-3.13.1-70 to become available before I can do a build.

Verified this bug with the selinux-policy-3.13.1-60.el7_2.4.noarch and nfs-ganesha-2.3.1-6.el7rhgs.x86_64 builds, and the issue is resolved.
Verified with different scenarios as below:
>> setting up nfs-ganesha environment
>> rebooting nodes in cluster
>> manually restarting the nfs-ganesha and nfs-ganesha-lock services multiple times
In all cases, the nfs-ganesha and nfs-ganesha-lock services do not go into a failed state, and no AVC denials are seen in audit.log.
Based on the above observation, marking this bug as Verified.
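For reference, the packaging change Lukas Vrabec describes above (requiring the newer selinux-policy build and enabling the `rpcd_use_fusefs` boolean at install time) could be sketched in an RPM spec file roughly as follows. This is a hypothetical snippet for illustration, not the actual shipped nfs-ganesha spec:

```spec
# Hypothetical sketch: pull in the policy build that defines the boolean.
Requires: selinux-policy >= 3.13.1-70

%post
# Enable the boolean persistently; tolerate systems without semanage.
if [ -x /usr/sbin/semanage ]; then
    semanage boolean -m --on rpcd_use_fusefs || :
fi
```

On an already-installed system the same effect can be had interactively with `setsebool -P rpcd_use_fusefs on`, verified via `getsebool rpcd_use_fusefs`.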
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1247
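A footnote on the failed `context=` mount attempt in the comments: one possible contributor to the "Invalid argument" error (an assumption here, not confirmed in the bug) is that an SELinux file context on a targeted/MCS-enabled system has four colon-separated fields, `user:role:type:level`, and the context passed to mount omitted the level (`:s0`). Another possibility is that the glusterfs mount helper of that era simply did not pass the option through. A quick field-count sanity check:

```shell
# Count the colon-separated fields in an SELinux context string.
fields() { echo "$1" | awk -F: '{print NF}'; }

fields "unconfined_u:unconfined_r:unconfined_t"   # prints 3 (level missing)
fields "system_u:object_r:fusefs_t:s0"            # prints 4 (complete context)
```

Either way, the boolean-based fix (`rpcd_use_fusefs`) made the context workaround unnecessary.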