Bug 1244272
Summary: | [SELinux] nfs-ganesha: AVC denied for nfs-ganesha.service, ganesha cluster setup fails in Rhel7 | |
---|---|---|---
Product: | Red Hat Enterprise Linux 7 | Reporter: | Prasanth <pprakash>
Component: | selinux-policy | Assignee: | Miroslav Grepl <mgrepl>
Status: | CLOSED ERRATA | QA Contact: | Milos Malik <mmalik>
Severity: | urgent | Docs Contact: |
Priority: | urgent | |
Version: | 7.1 | CC: | akhakhar, jherrman, jkurik, kkeithle, lvrabec, mgrepl, mmalik, ndevos, nlevinki, plautrba, pprakash, pvrabec, rcyriac, rhs-bugs, saujain, skoduri, ssekidde, vagarwal
Target Milestone: | rc | Keywords: | ZStream
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | selinux-policy-3.13.1-35.el7 | Doc Type: | Bug Fix
Doc Text: | Attempting to set up Gluster storage on an NFS-Ganesha cluster previously failed due to an Access Vector Cache (AVC) denial error. The responsible SELinux policy has been adjusted to allow handling of volumes mounted by NFS-Ganesha, and the described failure no longer occurs. | |
Story Points: | --- | |
Clone Of: | 1242487 | |
: | 1248653 (view as bug list) | Environment: |
Last Closed: | 2015-11-19 10:41:24 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1202842, 1212796, 1242487, 1248653 | |
Attachments: | audit.log (attachment 1055281) | |
Description

Prasanth 2015-07-17 16:18:38 UTC
commit 285a53012c81e74c5b86480e90649165566b7f7f
Author: Miroslav Grepl <mgrepl>
Date:   Mon Jul 20 14:08:03 2015 +0200

    Allow glusterd to manage nfsd and rpcd services.

Seeing AVC errors with selinux-policy-3.13.1-34.el7:

1. type=AVC msg=audit(07/23/2015 05:07:56.705:3038) : avc: denied { getattr } for pid=17162 comm=find path=/var/lib/libvirt/qemu/capabilities.monitor.sock dev="dm-1" ino=1179980 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=sock_file

2. type=AVC msg=audit(07/23/2015 05:08:01.303:3054) : avc: denied { connectto } for pid=17229 comm=crm_mon path=cib_ro scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:cluster_t:s0 tclass=unix_stream_socket

3. type=AVC msg=audit(07/23/2015 05:08:03.890:3065) : avc: denied { connectto } for pid=17250 comm=cibadmin path=cib_rw scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:cluster_t:s0 tclass=unix_stream_socket

The same AVCs appear with selinux-policy-3.13.1-23.el7_1.12.noarch, which is the latest selinux-policy rpm on the RHEL 7.1 RHS ISO.

[root@nfs1 ~]# ausearch -m avc -m user_avc -m selinux_err -i -ts today | audit2allow

#============= fprintd_t ==============
allow fprintd_t cluster_t:dbus send_msg;

#============= glusterd_t ==============

#!!!! This avc can be allowed using the boolean 'daemons_enable_cluster_mode'
allow glusterd_t cluster_t:unix_stream_socket connectto;
allow glusterd_t qemu_var_run_t:sock_file getattr;

Created attachment 1055281 [details]
audit.log
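As a side note on the glusterd_t-to-cluster_t connectto denial: the audit2allow output above points at an existing boolean rather than a custom rule. A minimal sketch of trying that interim route (assuming the boolean is present in the installed policy) could look like this:

    # Check the current state of the boolean named in the audit2allow output
    getsebool daemons_enable_cluster_mode

    # Turn it on persistently (-P stores the value so it survives reboots)
    setsebool -P daemons_enable_cluster_mode on

    # Re-check the audit log for further glusterd_t connectto denials
    ausearch -m avc -ts recent | grep 'connectto'

Note this would only cover the connectto denial; audit2allow does not offer a boolean for the qemu_var_run_t sock_file getattr denial.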
1. type=AVC msg=audit(07/23/2015 05:07:56.705:3038) : avc: denied { getattr } for pid=17162 comm=find path=/var/lib/libvirt/qemu/capabilities.monitor.sock dev="dm-1" ino=1179980 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=sock_file

Is this expected behaviour? I mean, looking for /var/lib/libvirt/qemu/capabilities.monitor.sock?

Do we know the parent process of the find command? Offhand I'd say it's not expected. I can't imagine why anything in pacemaker would be running a find like that.

(In reply to Miroslav Grepl from comment #4)
> 1. type=AVC msg=audit(07/23/2015 05:07:56.705:3038) : avc: denied { getattr } for pid=17162 comm=find path=/var/lib/libvirt/qemu/capabilities.monitor.sock dev="dm-1" ino=1179980 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=sock_file
>
> Is this expected behaviour? I mean, looking for /var/lib/libvirt/qemu/capabilities.monitor.sock?

Kaleb has already posted a response in comment #5, and we are also not sure whether it is expected or not.

Going forward, we updated to the selinux-policy* 3.13.1-34.el7 packages, which includes updating:

1. selinux-policy
2. selinux-policy-targeted
3. selinux-policy-devel

Then we applied the workaround as provided earlier, i.e.:

cat bz1242487.te

policy_module(bz1242487, 1.0)

require {
        type glusterd_t;
        type cluster_t;
        type nfsd_unit_file_t;
        class unix_stream_socket { connectto };
}

allow glusterd_t cluster_t : unix_stream_socket { connectto };

Next we tried to enable nfs-ganesha and export a glusterfs volume. After that, the nfs-ganesha service is up and running and the volume is exported.

Request you to please provide the selinux-policy rpm for RHGS-3.1 with the workaround in place, and to let us know whether this will be a new rpm version or the same version with the workaround included.

PS: All these latest tests were done on VMs after a fresh ISO installation.
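For reference, a local module such as the bz1242487.te quoted above is typically compiled and loaded with the selinux-policy-devel tooling; a minimal sketch, assuming the .te file sits in the current directory:

    # Build bz1242487.pp from bz1242487.te using the devel Makefile
    make -f /usr/share/selinux/devel/Makefile bz1242487.pp

    # Install the module into the active policy store (persists across reboots)
    semodule -i bz1242487.pp

    # Verify the module is loaded
    semodule -l | grep bz1242487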
Also, during this exercise we are seeing a few more AVC denials, although as of now they are not hampering the nfs-ganesha service. They are as follows:

# ausearch -m avc -ts recent
----
time->Fri Jul 24 01:11:17 2015
type=SYSCALL msg=audit(1437680477.807:13451): arch=c000003e syscall=262 success=yes exit=0 a0=8 a1=14adb78 a2=14adae8 a3=100 items=0 ppid=7921 pid=8037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="find" exe="/usr/bin/find" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1437680477.807:13451): avc: denied { getattr } for pid=8037 comm="find" path="/var/lib/libvirt/qemu/capabilities.monitor.sock" dev="dm-1" ino=1179980 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=sock_file
----
time->Fri Jul 24 01:11:17 2015
type=SYSCALL msg=audit(1437680477.811:13452): arch=c000003e syscall=257 success=yes exit=6 a0=5 a1=14afea8 a2=30900 a3=0 items=0 ppid=7921 pid=8037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="find" exe="/usr/bin/find" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1437680477.811:13452): avc: denied { read } for pid=8037 comm="find" name="sepolgen" dev="dm-1" ino=67573415 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=dir
----
time->Fri Jul 24 01:11:45 2015
type=SYSCALL msg=audit(1437680505.495:13456): arch=c000003e syscall=2 success=yes exit=4 a0=236f0f0 a1=441 a2=1b6 a3=7ffeafa43640 items=0 ppid=9220 pid=9221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mailx" exe="/usr/bin/mailx" subj=system_u:system_r:sendmail_t:s0-s0:c0.c1023 key=(null)
type=AVC msg=audit(1437680505.495:13456): avc: denied { create } for pid=9221 comm="mailx" name="dead.letter" scontext=system_u:system_r:sendmail_t:s0-s0:c0.c1023 tcontext=system_u:object_r:abrt_var_cache_t:s0 tclass=file
type=AVC msg=audit(1437680505.495:13456): avc: denied { add_name } for pid=9221 comm="mailx" name="dead.letter" scontext=system_u:system_r:sendmail_t:s0-s0:c0.c1023 tcontext=system_u:object_r:abrt_var_cache_t:s0 tclass=dir
type=AVC msg=audit(1437680505.495:13456): avc: denied { write } for pid=9221 comm="mailx" name="Python-2015-07-24-01:11:12-7769" dev="dm-1" ino=202279365 scontext=system_u:system_r:sendmail_t:s0-s0:c0.c1023 tcontext=system_u:object_r:abrt_var_cache_t:s0 tclass=dir

The following denial:

type=AVC msg=audit(1437680505.495:13456): avc: denied { create } for pid=9221 comm="mailx" name="dead.letter" scontext=system_u:system_r:sendmail_t:s0-s0:c0.c1023 tcontext=system_u:object_r:abrt_var_cache_t:s0 tclass=file

is not related and is another bug.

As for:

type=AVC msg=audit(1437680477.811:13452): avc: denied { read } for pid=8037 comm="find" name="sepolgen" dev="dm-1" ino=67573415 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=dir

it looks like there is a find searching for something?

It wants to read random directories. Using a little helper script to replace the actual find command:

#!/bin/sh
#
# Tracing the find binary, why would it crawl random dirs?
#
pstree -a -A -s $$ > /var/tmp/find.trace.$(date +%s)
exec /usr/bin/find.orig "$@"

This script is installed as /usr/bin/find, and the original binary has been renamed to /usr/bin/find.orig.
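For completeness, putting such a wrapper in place would look roughly like this (a sketch; find-wrapper.sh is a hypothetical file holding the script above, and the exact steps used here may have differed):

    # Keep the real binary around under a new name
    mv /usr/bin/find /usr/bin/find.orig

    # Install the tracing wrapper shown above in place of /usr/bin/find
    install -m 0755 find-wrapper.sh /usr/bin/find

    # Each find invocation now leaves a trace file to inspect later
    ls /var/tmp/find.trace.*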
When running "gluster nfs-ganesha enable":

# cat find.trace.1437699451
systemd --switched-root --system --deserialize 24
  `-glusterd -p /var/run/glusterd.pid
      `-sh /usr/libexec/ganesha/ganesha-ha.sh setup /etc/ganesha
          `-pcs /usr/sbin/pcs cluster setup --name G1437608257.36 nfs1 nfs2
              `-find /usr/bin/find /var/lib -name cib.* -exec rm -f {} ;
                  `-pstree -a -A -s 20942

When running "gluster nfs-ganesha disable":

# cat find.trace.1437699337
systemd --switched-root --system --deserialize 24
  `-glusterd -p /var/run/glusterd.pid
      `-sh /usr/libexec/ganesha/ganesha-ha.sh teardown /etc/ganesha
          `-pcs /usr/sbin/pcs cluster destroy
              `-find /usr/bin/find /var/lib -name cib.* -exec rm -f {} ;
                  `-pstree -a -A -s 20705

This shows that the "pcs" command runs "find".

# which pcs
/usr/sbin/pcs
# file /usr/sbin/pcs
/usr/sbin/pcs: symbolic link to `/usr/lib/python2.7/site-packages/pcs/pcs.py'
# ls -Z /usr/lib/python2.7/site-packages/pcs/pcs.py
-rwxr-xr-x. root root system_u:object_r:lib_t:s0 /usr/lib/python2.7/...

Gluster does need to run the "pcs" command in order to configure (or destroy) the pacemaker cluster. Does this help, or are more details required?

Yes, it's enough.

Using selinux-policy-3.13.1-35.el7 on the RHGS 3.1 ISO for RHEL 7, I am not seeing these AVCs now.

Following the exact steps mentioned in comment 17, with the selinux-policy-3.13.1-23.el7_1.13 build on the RHGS-3.1 RHEL-7 ISO, I no longer see these AVCs and was able to set up the ganesha cluster successfully.

Apeksha, thanks for testing it and confirming the results.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2300.html
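For anyone re-verifying this on an updated node, a quick check along the following lines should show the fixed policy build installed and no fresh glusterd_t denials after the cluster setup (a sketch; the expected package version is the one listed under Fixed In Version):

    # Confirm the policy packages are at or above the fixed build
    rpm -q selinux-policy selinux-policy-targeted

    # After running the ganesha cluster setup, look for any new glusterd_t denials
    ausearch -m avc -m user_avc -m selinux_err -ts recent | grep glusterd_t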