Note: dwalsh and ccaulfie have already been working on a selinux policy for use in Fedora. This bug is to track backporting that policy into RHEL5.
Fixed in selinux-policy-2.4.6-260.el5
When in RHN?
Not until RHEL 5.5. A preview is available on http://people.redhat.com/dwalsh/SELinux/RHEL5/noarch — please test and tell us what blows up on you.
*** Bug 503141 has been marked as a duplicate of this bug. ***
~~ Attention Customers and Partners - RHEL 5.5 Beta is now available on RHN ~~

RHEL 5.5 Beta has been released! There should be a fix present in this release that addresses your request. Please test and report back results here by March 3rd, 2010 (2010-03-03) or sooner.

Upon successful verification of this request, post your results and update the Verified field in Bugzilla with the appropriate value. If you encounter any issues while testing, please describe them and set this bug into NEED_INFO. If you encounter new defects or have additional patches to request for inclusion, please clone this bug for each request and escalate through your support representative.
Created attachment 390340 [details]
audit.log from morph-01

I hit a few AVC messages related to ricci during testing of selinux-policy-2.4.6-271.el5. Attached is the raw log; the output of audit2allow is below:

[nstraz@try 1.selinux_check]$ cat morph-0*/audit.msgs.log | audit2allow
allow ricci_modclusterd_t cluster_port_t:tcp_socket name_connect;
allow ricci_modclusterd_t hi_reserved_port_t:tcp_socket name_bind;
allow ricci_modclusterd_t inetd_child_t:unix_stream_socket connectto;
allow ricci_modclusterd_t inetd_t:unix_stream_socket connectto;
allow ricci_modclusterd_t var_run_t:sock_file write;
What is listening at /var/run/cman_client? What is running under inetd?
(In reply to comment #29)
> What is listening at /var/run/cman_client?

[root@morph-03 kernel]# netstat -xpa | grep cman_client
unix  2      [ ACC ]     STREAM     LISTENING     1499540 21322/aisexec       /var/run/cman_client
[root@morph-03 kernel]# ps -ZC aisexec
LABEL                                            PID TTY          TIME CMD
system_u:system_r:inetd_t:SystemLow-SystemHigh 21322 ?        00:16:47 aisexec

Hrm, that shouldn't be running under inetd_t. It was probably started from qarshd, which is started by xinetd. Perhaps our qarshd policy isn't loading correctly.
Looks like I didn't get the qarshd policy installed or loaded correctly. Jaroslav straightened me out and I'm back in business. Sorry about the distraction.
(In reply to comment #34)
> Looks like I didn't get the qarshd policy installed or loaded correctly.
> Jaroslav straightened me out and I'm back in business. Sorry about the
> distraction.

Ok.
type=AVC msg=audit(1265993391.961:1114): avc: denied { getattr } for pid=14072 comm="fence_apc" name="/" dev=devpts ino=1 scontext=root:system_r:fenced_t:s0 tcontext=system_u:object_r:devpts_t:s0 tclass=filesystem

This happens during a fence event via an APC power switch.
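For reference, a denial like this maps to a one-line policy rule. The sketch below only illustrates the kind of access a fix would grant — the module name is hypothetical and this is not the literal change that went into the shipped policy:

```
# Hypothetical local module; "fenced_local" is an illustrative name.
policy_module(fenced_local, 1.0)

gen_require(`
	type fenced_t;
	type devpts_t;
')

# Let the fence agent stat the devpts filesystem (the { getattr } on
# tclass=filesystem seen in the AVC above).  In refpolicy this is normally
# granted through the term_getattr_pty_fs() interface rather than a raw
# allow rule.
allow fenced_t devpts_t:filesystem getattr;
```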
Fixed in selinux-policy-2.4.6-274.el5
I still receive AVC denials when using conga to configure services with the latest selinux policy enabled. I'll attach the logs from my three-node cluster.
Created attachment 394411 [details] audit.log from node 1 as mentioned in Comment #38
Created attachment 394412 [details] audit.log from node 2 as mentioned in Comment #38
Created attachment 394413 [details] audit.log from node 3 as mentioned in Comment #38
Brian, did you test it with the selinux-policy-2.4.6-274.el5 and selinux-policy-targeted-2.4.6-274.el5 packages? I have also sent you a mail with a local policy which contains fixes for these errors. Could you test it?
I hit this while running qdisk tests with selinux-policy-2.4.6-274.el5.

type=AVC msg=audit(1266518805.733:814): avc: denied { sys_boot } for pid=10895 comm="qdiskd" capability=22 scontext=system_u:system_r:qdiskd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:qdiskd_t:s0-s0:c0.c1023 tclass=capability

I'm not sure what qdiskd is trying to do here.
sys_boot means:

/* Allow use of reboot() */
#define CAP_SYS_BOOT         22

So qdiskd_t is trying to reboot the machine.
Googling for qdiskd and reboot shows:

reboot="1"
    If set to 0 (off), qdiskd will *not* reboot after a negative transition as a result of a change in score (see section 2.2). The default for this value is 1 (on).

So it looks like qdiskd needs this capability. It is the first domain I have ever seen that needs this capability directly. Miroslav, can you add it?
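For illustration, the capability grant requested here would look roughly like the fragment below in a refpolicy-style local module. The module name is hypothetical; the real fix landed in the -275 package:

```
# Hypothetical local module; "qdiskd_local" is an illustrative name.
policy_module(qdiskd_local, 1.0)

gen_require(`
	type qdiskd_t;
')

# CAP_SYS_BOOT (capability 22): qdiskd calls reboot(2) after a negative
# score transition when reboot="1" (the default).
allow qdiskd_t self:capability sys_boot;
```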
Brian, thank you for your testing. All fixes have been added to selinux-policy-2.4.6-275.el5, together with the sys_boot capability.
I found this while running against selinux-policy-2.4.6-275.el5.

audit.log:

type=AVC msg=audit(1266617758.201:2192): avc: denied { read write } for pid=9746 comm="gfs_controld" name="dlm_plock" dev=tmpfs ino=66038 scontext=system_u:system_r:gfs_controld_t:s0-s0:c0.c1023 tcontext=system_u:object_r:device_t:s0 tclass=chr_file

audit2allow output:

#============= gfs_controld_t ==============
allow gfs_controld_t device_t:chr_file { read write getattr };
Here are a few more log entries from fence_apc accessing /proc/meminfo.

type=AVC msg=audit(1266618620.473:2892): avc: denied { read } for pid=9813 comm="fence_apc" name="meminfo" dev=proc ino=4026531842 scontext=system_u:system_r:fenced_t:s0-s0:c0.c1023 tcontext=system_u:object_r:proc_t:s0 tclass=file
type=SYSCALL msg=audit(1266618620.473:2892): arch=c000003e syscall=2 success=yes exit=8 a0=3859b20da0 a1=0 a2=1b6 a3=0 items=0 ppid=9490 pid=9813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fence_apc" exe="/usr/bin/python" subj=system_u:system_r:fenced_t:s0-s0:c0.c1023 key=(null)
type=AVC msg=audit(1266618620.473:2893): avc: denied { getattr } for pid=9813 comm="fence_apc" path="/proc/meminfo" dev=proc ino=4026531842 scontext=system_u:system_r:fenced_t:s0-s0:c0.c1023 tcontext=system_u:object_r:proc_t:s0 tclass=file
type=SYSCALL msg=audit(1266618620.473:2893): arch=c000003e syscall=5 success=yes exit=0 a0=8 a1=7fff62714aa0 a2=7fff62714aa0 a3=0 items=0 ppid=9490 pid=9813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fence_apc" exe="/usr/bin/python" subj=system_u:system_r:fenced_t:s0-s0:c0.c1023 key=(null)

audit2allow output:

#============= fenced_t ==============
allow fenced_t proc_t:file { read getattr };
What is /dev/dlm_plock? Is this the correct path? We do not have a file context defined for it, so it is labeled device_t, which no confined domain has access to. fenced reading proc seems fine.
We do have:

/dev/misc/dlm.* -c system_u:object_r:dlm_control_device_t:s0

Does dlm_plock exist in the /dev/misc directory?
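For comparison, in refpolicy source form such file-context entries would look roughly like the sketch below. This is a hedged illustration — the shipped .fc file may use different patterns, and the lock_dlm_plock line is my guess based on the restorecon output later in this bug:

```
# Hypothetical .fc fragment in refpolicy gen_context() form.
/dev/misc/dlm.*			-c	gen_context(system_u:object_r:dlm_control_device_t,s0)
# Presumed needed as well, since /dev/misc/dlm.* does not match this name:
/dev/misc/lock_dlm_plock	-c	gen_context(system_u:object_r:dlm_control_device_t,s0)
```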
Dave, Can you answer Dan's questions in comments #49 and #50?
(In reply to comment #47)
> I found this while running against selinux-policy-2.4.6-275.el5
>
> audit.log:
>
> type=AVC msg=audit(1266617758.201:2192): avc: denied { read write } for
> pid=9746 comm="gfs_controld" name="dlm_plock" dev=tmpfs ino=66038
> scontext=system_u:system_r:gfs_controld_t:s0-s0:c0.c1023
> tcontext=system_u:object_r:device_t:s0 tclass=chr_file

Nate, could you check the labeling in the /dev/misc directory?

ls -lZ /dev/misc/
[root@morph-04 ~]# ls -lZ /dev/misc
crw------- root root system_u:object_r:device_t       dlm_clvmd
crw-rw-rw- root root system_u:object_r:device_t       dlm-control
crw-rw---- root root system_u:object_r:device_t       dlm_plock
crw-rw---- root root system_u:object_r:device_t       lock_dlm_plock
(In reply to comment #53)
> [root@morph-04 ~]# ls -lZ /dev/misc
> crw------- root root system_u:object_r:device_t       dlm_clvmd
> crw-rw-rw- root root system_u:object_r:device_t       dlm-control
> crw-rw---- root root system_u:object_r:device_t       dlm_plock
> crw-rw---- root root system_u:object_r:device_t       lock_dlm_plock

Ok, could you try these steps:

# restorecon -Rv /dev/misc
# service cman stop
# modprobe -r lock_dlm dlm
# service cman start

and then check the labels.
The labels do come up wrong after the cman init script runs.

[root@west-07 qarsh-selinux-1.25]# ls -lZ /dev/misc
crw------- root root system_u:object_r:device_t       dlm_clvmd
crw-rw-rw- root root system_u:object_r:device_t       dlm-control
crw-rw---- root root system_u:object_r:device_t       dlm_plock
crw-rw---- root root system_u:object_r:device_t       lock_dlm_plock
You have new mail in /var/spool/mail/root
[root@west-07 qarsh-selinux-1.25]# restorecon -Rv /dev/misc
restorecon reset /dev/misc/dlm_clvmd context system_u:object_r:device_t:s0->system_u:object_r:dlm_control_device_t:s0
restorecon reset /dev/misc/dlm_plock context system_u:object_r:device_t:s0->system_u:object_r:dlm_control_device_t:s0
restorecon reset /dev/misc/dlm-control context system_u:object_r:device_t:s0->system_u:object_r:dlm_control_device_t:s0
restorecon reset /dev/misc/lock_dlm_plock context system_u:object_r:device_t:s0->system_u:object_r:dlm_control_device_t:s0
[root@west-07 qarsh-selinux-1.25]# ls -lZ /dev/misc
crw------- root root system_u:object_r:dlm_control_device_t dlm_clvmd
crw-rw-rw- root root system_u:object_r:dlm_control_device_t dlm-control
crw-rw---- root root system_u:object_r:dlm_control_device_t dlm_plock
crw-rw---- root root system_u:object_r:dlm_control_device_t lock_dlm_plock
[root@west-07 qarsh-selinux-1.25]# service clvmd stop
Stopping clvm:                                             [  OK  ]
[root@west-07 qarsh-selinux-1.25]# service cman stop
Stopping cluster:
   Stopping fencing... done
   Stopping cman... done
   Stopping ccsd... done
   Unmounting configfs... done
                                                           [  OK  ]
[root@west-07 qarsh-selinux-1.25]# modprobe -r lock_dlm
[root@west-07 qarsh-selinux-1.25]# modprobe -r gfs
[root@west-07 qarsh-selinux-1.25]# modprobe -r dlm
[root@west-07 qarsh-selinux-1.25]# service cman start
Starting cluster:
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]
[root@west-07 qarsh-selinux-1.25]# ls -lZ /dev/misc
crw-rw-rw- root root system_u:object_r:device_t       dlm-control
crw-rw---- root root system_u:object_r:device_t       dlm_plock
crw-rw---- root root system_u:object_r:device_t       lock_dlm_plock
That's strange. I am also trying these steps and I get the proper label on it.

# ls -lZ /dev/misc/
crw-rw-rw- root root system_u:object_r:dlm_control_device_t dlm-control
crw-rw---- root root system_u:object_r:dlm_control_device_t dlm_plock
crw-rw---- root root system_u:object_r:dlm_control_device_t lock_dlm_plock
I don't see anything that uses /dev/dlm_plock, only /dev/misc/dlm_plock.
(In reply to comment #56)
> That's strange. I am also trying to do these steps and I get the proper label
> on it.

After installing a new selinux-policy package, is a restart required to get it loaded correctly, or is there some other procedure to get it loaded?
Dave, ok. Nate, the policy package will load the policy on update. There is no need to reboot or reload it.
(In reply to comment #60)
> Nate, the policy package will load the policy on update. There is no need to
> reboot or reload it.

Is there a way to verify that the new policy is in effect?
Well, you can execute load_policy. But unless you saw an error on update, it should be running. You could also reinstall the selinux-policy-targeted package:

yum reinstall selinux-policy-targeted
Will the policy version shown by 'sestatus' increment with the new policy loaded?
No, that just shows the policy kernel version. We rely on rpm for the policy version.
Created attachment 395733 [details]
Audit logs from morph during rgmanager test

Looks like rgmanager is going to be a big minefield when it comes to SELinux integration. Here are the audit logs from trying to do HA-NFS with rgmanager.

[nstraz@try t]$ cat audit-morph*.log | audit2allow
allow rgmanager_t file_t:dir { create setattr };
allow rgmanager_t fixed_disk_device_t:blk_file { ioctl read };
allow rgmanager_t kernel_t:process sigkill;
allow rgmanager_t mnt_t:dir { add_name create write };
allow rgmanager_t self:capability { dac_override sys_resource };
allow rgmanager_t var_lib_nfs_t:file { relabelfrom relabelto };
allow rpcd_t rgmanager_tmp_t:dir { add_name create read remove_name write };
allow rpcd_t rgmanager_tmp_t:file { read rename unlink write };
Nate, what resource scripts are you testing?
We're using ip, fs, clusterfs, nfsexport, and nfsclient.
I think we should add the access we have found and then make rgmanager an unconfined domain.
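For readers unfamiliar with the term, "unconfined domain" in refpolicy usually means calling the unconfined_domain() interface for the type instead of enumerating each permission. A hedged sketch of what that could look like (illustrative only; the actual change in -277 may differ):

```
gen_require(`
	type rgmanager_t;
')

# Grant rgmanager_t broad access via the unconfined_domain() interface,
# so resource agents can do arbitrary admin work without per-permission
# policy churn.
optional_policy(`
	unconfined_domain(rgmanager_t)
')
```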
Yes, I agree. Fixed in selinux-policy-2.4.6-277.el5
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you. http://rhn.redhat.com/errata/RHBA-2010-0182.html