Bug 1220999
| Field | Value |
|---|---|
| Summary | [SELinux] [nfs-ganesha]: Volume export fails when SELinux is in Enforcing mode - RHEL-6.7 |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | nfs-ganesha |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | high |
| Version | rhgs-3.1 |
| Target Release | RHGS 3.1.0 |
| Hardware | x86_64 |
| OS | Linux |
| Type | Bug |
| Fixed In Version | selinux-policy-3.7.19-279.el6 |
| Doc Type | Bug Fix |
| Reporter | Saurabh <saujain> |
| Assignee | Meghana <mmadhusu> |
| QA Contact | Saurabh <saujain> |
| CC | akhakhar, annair, ansubram, mgrepl, mmadhusu, mmalik, mzywusko, nlevinki, pprakash, rcyriac, saujain, skoduri, vagarwal |
| Cloned To | 1222845 (view as bug list) |
| Last Closed | 2015-07-29 04:42:45 UTC |
| Bug Blocks | 1202842, 1212796, 1222845, 1242476 |
| Attachments | audit.log (attachment 1036572) |
Description (Saurabh, 2015-05-13 05:34:41 UTC)
team-nfs,

I installed the latest selinux packages and executed the test case as per BZ 1220999 (downstream). I find that it still fails:

```
[root@nfs9 ~]# showmount -e localhost
Export list for localhost:
```

The exported volume should have been listed here, but it is missing.

```
[root@nfs9 ~]# rpm -qa | grep selinux-policy
selinux-policy-targeted-3.7.19-274.el6.noarch
selinux-policy-3.7.19-274.el6.noarch
```

AVC denials from /var/log/audit/audit.log:

```
type=AVC msg=audit(1433782919.727:864): avc: denied { execute } for pid=3897 comm="env" name="nfs-ganesha" dev=dm-0 ino=660392 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:initrc_exec_t:s0 tclass=file
type=AVC msg=audit(1433782919.727:864): avc: denied { execute_no_trans } for pid=3897 comm="env" path="/etc/rc.d/init.d/nfs-ganesha" dev=dm-0 ino=660392 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:initrc_exec_t:s0 tclass=file
type=USER_AVC msg=audit(1433782920.072:865): user pid=1594 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc: denied { acquire_svc } for service=org.ganesha.nfsd spid=3905 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 tclass=dbus exe="/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'
type=AVC msg=audit(1433782920.097:866): avc: denied { name_bind } for pid=3905 comm="ganesha.nfsd" src=4501 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:port_t:s0 tclass=udp_socket
type=AVC msg=audit(1433782920.097:867): avc: denied { write } for pid=3905 comm="ganesha.nfsd" name="rpcbind.sock" dev=dm-0 ino=1177667 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:rpcbind_var_run_t:s0 tclass=sock_file
```

Volume configuration:

```
Volume Name: vol0
Type: Distributed-Replicate
Volume ID: a60b2517-0024-48cc-a73a-833a1e41c7cb
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.47.127:/rhs/brick1/d1r1
Brick2: 10.70.47.130:/rhs/brick1/d1r2
Brick3: 10.70.47.131:/rhs/brick1/d2r1
Brick4: 10.70.47.133:/rhs/brick1/d2r2
Brick5: 10.70.47.127:/rhs/brick1/d3r1
Brick6: 10.70.47.130:/rhs/brick1/d3r2
Brick7: 10.70.47.131:/rhs/brick1/d4r1
Brick8: 10.70.47.133:/rhs/brick1/d4r2
Brick9: 10.70.47.127:/rhs/brick1/d5r1
Brick10: 10.70.47.130:/rhs/brick1/d5r2
Brick11: 10.70.47.131:/rhs/brick1/d6r1
Brick12: 10.70.47.133:/rhs/brick1/d6r2
Options Reconfigured:
ganesha.enable: on
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
nfs-ganesha: enable
```

Created attachment 1036572 [details]: audit.log
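The raw AVC records above are dense; the key fields (denied permission, source and target context, object class) can be pulled out with a short grep/sed pipeline. This is a minimal sketch, not part of the original report: `sample.log` is a stand-in for /var/log/audit/audit.log, seeded with one of the denials quoted above.

```shell
#!/bin/sh
# Summarize AVC denials as: permission: source context -> target context (class).
# sample.log is a stand-in for /var/log/audit/audit.log.
cat > sample.log <<'EOF'
type=AVC msg=audit(1433782920.097:867): avc: denied { write } for pid=3905 comm="ganesha.nfsd" name="rpcbind.sock" dev=dm-0 ino=1177667 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:rpcbind_var_run_t:s0 tclass=sock_file
EOF

grep 'avc:.*denied' sample.log \
  | sed -E 's/.*denied \{ ([^}]*) \}.*scontext=([^ ]*) tcontext=([^ ]*) tclass=([a-z_]*).*/\1: \2 -> \3 (\4)/' \
  > avc_summary.txt

cat avc_summary.txt
# -> write: system_u:system_r:glusterd_t:s0 -> system_u:object_r:rpcbind_var_run_t:s0 (sock_file)
```

In practice, `audit2allow` (from the policycoreutils tools) can generate candidate allow rules directly from such records, which is the usual starting point for a local policy module.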
Milos, could you please check and confirm whether the above AVCs are actually fixed in "selinux-policy-3.7.19-269.el6", as mentioned in Bug 1222845, or whether we have more to fix?

Based on the AVCs attached today, the gluster daemon tries to start the nfs-ganesha service. The latest selinux-policy (-275.el6) for RHEL-6.7 does not contain any policy for nfs-ganesha, so the ganesha.nfsd process runs under the same context as the gluster daemon. This is not correct. We should backport the ganesha policy to RHEL-6.7.

Well, showmount is now able to display the exported volume:

```
nfs9
Export list for localhost:
/vol0 (everyone)
-----
nfs10
Export list for localhost:
/vol0 (everyone)
-----
nfs11
Export list for localhost:
/vol0 (everyone)
-----
nfs12
Export list for localhost:
/vol0 (everyone)
-----
```

However, the unexport still fails. I am not sure whether this is an SELinux issue; I will first talk to the NFS developers.

```
[root@nfs9 ~]# gluster volume set vol0 ganesha.enable off
volume set: failed: Dynamic export addition/deletion failed. Please see log file for details
[root@nfs9 ~]# for i in `seq 9 12`; do echo nfs$i; ssh nfs$i "showmount -e localhost"; echo "-----"; done
nfs9
rpc mount export: RPC: Unable to receive; errno = Connection refused
-----
nfs10
Export list for localhost:
/vol0 (everyone)
-----
nfs11
Export list for localhost:
/vol0 (everyone)
-----
nfs12
Export list for localhost:
/vol0 (everyone)
-----
```

Hello Milos, I have updated https://bugzilla.redhat.com/show_bug.cgi?id=1229667#c8; the timeout issue has happened again. Thanks, Saurabh

I see the same:

```
# semodule -i mypolicy.pp
libsepol.expand_terule_helper: conflicting TE rule for (glusterd_t, prelink_exec_t:process): old was prelink_t, new is prelink_mask_t
libsepol.expand_module: Error during expand
libsemanage.semanage_expand_sandbox: Expand module failed
semodule: Failed!
#
```

Most likely a bug in selinux-policy macros.
Can you help us, Mirek?

Milos, how does the local policy file look?

```
# cat mypolicy.te
policy_module(mypolicy, 1.0)

require {
        type glusterd_t;
        type initrc_exec_t;
        type initrc_t;
        type cluster_t;
        type system_dbusd_t;
        class dbus { acquire_svc send_msg };
}

allow glusterd_t initrc_t : dbus { send_msg };
allow glusterd_t cluster_t : dbus { send_msg };
allow glusterd_t system_dbusd_t : dbus { acquire_svc };

init_domtrans_script(glusterd_t)
init_initrc_domain(glusterd_t)
init_read_script_state(glusterd_t)
init_rw_script_tmp_files(glusterd_t)
init_manage_script_status_files(glusterd_t)
```

I apologize; the local policy also needs to contain:

```
optional_policy(`
        prelink_transition_domain_attribute(cluster_t)
')
```

Milos, based on comment 19, I am not sure how to put the policy into mypolicy.te. Request you to please update me. Thanks, Saurabh

```
# cat mypolicy.te
policy_module(mypolicy, 1.0)

require {
        type glusterd_t;
        type initrc_exec_t;
        type initrc_t;
        type cluster_t;
        type system_dbusd_t;
        class dbus { acquire_svc send_msg };
}

allow glusterd_t initrc_t : dbus { send_msg };
allow glusterd_t cluster_t : dbus { send_msg };
allow glusterd_t system_dbusd_t : dbus { acquire_svc };

optional_policy(`
        prelink_transition_domain_attribute(glusterd_t)
')

init_domtrans_script(glusterd_t)
init_initrc_domain(glusterd_t)
init_read_script_state(glusterd_t)
init_rw_script_tmp_files(glusterd_t)
init_manage_script_status_files(glusterd_t)
```

Hi Milos, it has worked for me after installing the module.

1. I was able to bring up the nfs-ganesha cluster using the CLI:

```
[root@nfs5 ~]# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue? (y/n) y
ganesha enable : success
```

2. I was able to dismantle the nfs-ganesha cluster using the CLI:

```
[root@nfs5 ~]# gluster nfs-ganesha disable
ganesha enable : success
```

Now, my question is: are you going to provide the same policy as part of a selinux-policy build for RHEL 6.7 and RHEL 7.1?
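For anyone repeating the steps above, the usual workflow is: save the policy source as mypolicy.te, compile it with the selinux-policy-devel Makefile, and load the resulting module with semodule. A sketch of those steps follows (the build and install commands need root and the selinux-policy-devel package, so they are left commented out here; the .te content is the version from this bug):

```shell
#!/bin/sh
# Write the local policy module discussed in this bug to mypolicy.te.
cat > mypolicy.te <<'EOF'
policy_module(mypolicy, 1.0)

require {
        type glusterd_t;
        type initrc_exec_t;
        type initrc_t;
        type cluster_t;
        type system_dbusd_t;
        class dbus { acquire_svc send_msg };
}

allow glusterd_t initrc_t : dbus { send_msg };
allow glusterd_t cluster_t : dbus { send_msg };
allow glusterd_t system_dbusd_t : dbus { acquire_svc };

optional_policy(`
        prelink_transition_domain_attribute(glusterd_t)
')

init_domtrans_script(glusterd_t)
init_initrc_domain(glusterd_t)
init_read_script_state(glusterd_t)
init_rw_script_tmp_files(glusterd_t)
init_manage_script_status_files(glusterd_t)
EOF

# Build and install (requires root and selinux-policy-devel; not run here):
# make -f /usr/share/selinux/devel/Makefile mypolicy.pp
# semodule -i mypolicy.pp
```

Unloading later is done with `semodule -r mypolicy`.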
I have tested only on RHEL 6.7, so let me know if changes are required for RHEL 7.1.

All gluster-related fixes from RHEL-6.7 will soon be backported to RHEL-7.2. The important ones will be backported to RHEL-7.1.z too.

Milos, I don't find the devel directory on the RHEL 7.1 machine; please let me know how to proceed with setting the policy.

```
[root@vm01 ~]# ls /usr/share/selinux/
packages/  targeted/
[root@vm01 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.1 (Maipo)
```

Thanks, Apeksha

Milos, on RHEL 7.1 I am seeing these AVC logs in audit.log:

```
type=SERVICE_START msg=audit(1434397291.045:1831): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg=' comm="nfs-ganesha" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
type=SERVICE_STOP msg=audit(1434397500.331:1832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg=' comm="nfs-ganesha" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
type=USER_AVC msg=audit(1434397530.988:1833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc: denied { start } for auid=-1 uid=0 gid=0 path="/etc/rc.d/init.d/nfs-ganesha" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:initrc_exec_t:s0 tclass=service exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
```

Please update me about the workaround for 7.1; I am facing these issues during setup of nfs-ganesha.

Answer for comment#24:

```
# yum -y install selinux-policy-devel
```

After updating the package:

```
[root@nfs11 ~]# rpm -qa | grep selinux-policy
selinux-policy-3.7.19-278.el6.noarch
selinux-policy-targeted-3.7.19-278.el6.noarch
```

I am still getting AVC logs while trying to export a volume.
```
type=USER_AVC msg=audit(1435088127.711:13201): user pid=1488 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc: denied { send_msg } for msgtype=signal interface=org.ganesha.nfsd.exportmgr member=AddExport dest=org.ganesha.nfsd spid=31857 tpid=22032 scontext=unconfined_u:system_r:glusterd_t:s0 tcontext=unconfined_u:system_r:initrc_t:s0 tclass=dbus exe="/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'
```

Milos, can you confirm whether the selinux-policy has been updated with the workaround mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1220999#c21?

Volume export is now working.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
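The header lists selinux-policy-3.7.19-279.el6 as the Fixed In Version, while comments above show -274 and -278 installed. A quick way to check whether an installed build already contains the fix is a version comparison; `version_ge` below is a hypothetical helper (not from this bug) that uses `sort -V`, which approximates RPM version-release ordering well enough for these strings:

```shell
#!/bin/sh
# version_ge A B: true if version A >= version B in version sort order.
version_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}

fixed="3.7.19-279.el6"
installed="3.7.19-278.el6"   # e.g. from: rpm -q --qf '%{VERSION}-%{RELEASE}\n' selinux-policy

if version_ge "$installed" "$fixed"; then
    echo "selinux-policy $installed contains the nfs-ganesha fix"
else
    echo "selinux-policy $installed predates the fix ($fixed)"
fi
# -> selinux-policy 3.7.19-278.el6 predates the fix (3.7.19-279.el6)
```

For authoritative comparisons, `rpmdev-vercmp` from rpmdevtools implements the exact RPM algorithm.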