Description of problem:
In bug 514579 we added a new "vhostmd" daemon to RHEL-5.4.x/5.5. When started, this daemon ends up running under the 'initrc_t' context and triggers AVCs.

After doing 'service vhostmd start' on a Xen host, I get the following AVCs:

type=AVC msg=audit(1259851534.535:55): avc: denied { read write } for pid=6425 comm="virsh" path="/dev/shm/vhostmd0" dev=tmpfs ino=20533 scontext=root:system_r:xm_t:s0 tcontext=root:object_r:tmpfs_t:s0 tclass=file
type=AVC msg=audit(1259851534.539:56): avc: denied { create } for pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851534.539:57): avc: denied { bind } for pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851534.539:58): avc: denied { getattr } for pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851534.539:59): avc: denied { write } for pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851534.539:59): avc: denied { nlmsg_read } for pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851534.539:60): avc: denied { read } for pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:61): avc: denied { create } for pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:62): avc: denied { bind } for pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:63): avc: denied { getattr } for pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:64): avc: denied { write } for pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:64): avc: denied { nlmsg_read } for pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:65): avc: denied { read } for pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket

The vhostmd daemon ends up running as this:

root:system_r:initrc_t vhostmd 6422 0.0 0.1 43868 2244 ? S 09:45 0:00 /usr/sbin/vhostmd --user vhostmd --connect xen:///
root:system_r:initrc_t vhostmd 6856 0.0 0.0 77840 1712 ? S 09:46 0:00 \_ /usr/bin/perl /usr/share/vhostmd/scripts/pagerate.pl

In addition, because it is running unprivileged, it spawns the 'libvirt_proxy' setuid daemon, which also ends up running under initrc_t:

root:system_r:initrc_t root 6466 0.0 0.0 63296 980 ? S 09:45 0:00 /usr/libexec/libvirt_proxy

We need to at least write policy to confine the vhostmd daemon. The vhostmd daemon works by periodically spawning various external commands; in particular it seems to spawn 'virsh' frequently, which in turn is what causes the libvirt_proxy setuid process to be spawned. So we may need to decide on additional confined domains for the latter.

Version-Release number of selected component (if applicable):
selinux-policy-2.4.6-261.el5
vhostmd-0.4-0.12.gite9db007b.el5_3

How reproducible:
Always

Steps to Reproduce:
1. Get a RHEL-5.4 Xen host
2. Install the 'vhostmd' RPM
3. In /etc/vhostmd, get rid of the default config and put the .xen config in its place
4. Start the daemon: 'service vhostmd start'

Actual results:
AVCs logged, and vhostmd running under initrc_t

Expected results:
No AVCs, and vhostmd running in a confined domain

Additional info:
This daemon/package was only just added to Fedora, so I'm not sure if there's policy for this in Fedora yet or not.
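For reference, denials like the ones above are what audit2allow would turn into a local module while a proper confined domain is being written. A sketch of the allow rules it would emit for these messages is below — illustrative only; the type and class names come straight from the AVCs, the module name is made up, and the real fix is a dedicated vhostmd domain rather than loosening xm_t:

```
# vhostmdlocal.te — sketch derived from the AVC messages above (illustrative only)
module vhostmdlocal 1.0;

require {
        type xm_t;
        type tmpfs_t;
        class file { read write };
        class netlink_route_socket { create bind getattr read write nlmsg_read };
}

# /dev/shm/vhostmd0 is a tmpfs-backed file that virsh (xm_t) reads and writes
allow xm_t tmpfs_t:file { read write };

# virsh opens a netlink route socket while running in xm_t
allow xm_t self:netlink_route_socket { create bind getattr read write nlmsg_read };
```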
Miroslav, can you write a policy for F12/Rawhide that we can get tested and maybe backported to RHEL-5?
We use the same configuration in Rawhide and RHEL 5.x, so you can have a look at the commands that vhostmd runs in the configuration file (look for <action>...</action>):

http://cvs.fedoraproject.org/viewvc/devel/vhostmd/vhostmd.conf?view=markup

e.g. it will run this command every 60 seconds:

virsh -r CONNECT version | grep API | gawk -F': ' '{print $2}'

The magic word "CONNECT" is replaced by some connection URI, or possibly by nothing. This depends on how the system administrator has adjusted the file /etc/sysconfig/vhostmd. The default file is here:

http://cvs.fedoraproject.org/viewvc/devel/vhostmd/vhostmd.sysconfig?view=markup

As noted above, because the virsh command isn't running as root (it runs as the special user:group vhostmd:vhostmd), it will probably spawn some external program in the Xen case, or connect to libvirtd in the KVM case.
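The CONNECT substitution described above can be sketched in a few lines of shell — illustrative only; the real substitution happens inside vhostmd, and the exact command it builds from the <action> may differ:

```shell
# Sketch of the CONNECT placeholder substitution (assumed behaviour):
# the word CONNECT in the <action> command is replaced by the URI
# configured in /etc/vhostmd (via /etc/sysconfig/vhostmd), or dropped
# entirely if no URI is configured.
action='virsh -r CONNECT version'
VHOSTMD_URI='xen:///'    # assumed example value for illustration

if [ -n "$VHOSTMD_URI" ]; then
    # substitute the URI for the CONNECT placeholder
    cmd=$(printf '%s\n' "$action" | sed "s|CONNECT|$VHOSTMD_URI|")
else
    # no URI configured: remove the placeholder and its trailing space
    cmd=$(printf '%s\n' "$action" | sed "s|CONNECT ||")
fi

echo "$cmd"    # prints: virsh -r xen:/// version
```

The -r flag keeps virsh read-only, which fits the metrics-gathering use case described here.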
<SNIP>
S 09:45 0:00 /usr/sbin/vhostmd --user vhostmd --connect xen:///
<END SNIP>

I am not the brightest bulb on the tree, but please affirm that you are making this policy for KVM, yes? The business driver is SAP certification for RHEV, which we do not have, and which is allowing Novell to compete against us in this workload. Any SELinux policy testing for Xen should be secondary to KVM because there is no need: SAP certified Xen already and does not require vhostmd. Keep in mind that vhostmd is the solution to SAP's change in certification requirements after Xen was certified.
> I am not the brightest bulb on the tree but please affirm that you are making > this policy for KVM yes? The policy is for vhostmd, and it should work regardless of what hypervisor is in use, because it uses libvirt.
Yeah Dan's right I think. vhostmd itself connects via libvirt to get the list of domains, and then all the metrics gathering happens by running external commands (eg. virsh). There shouldn't be any direct access to Xen or KVM. You can change the libvirt connection URL by editing /etc/sysconfig/vhostmd (the VHOSTMD_URI setting).
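For example, switching the daemon from the local Xen driver to KVM would be a one-line edit along these lines — the values shown are assumed examples; only the VHOSTMD_URI setting name is confirmed above:

```
# /etc/sysconfig/vhostmd (example values, illustrative only)
# KVM host: talk to libvirtd over the qemu system URI
VHOSTMD_URI=qemu:///system
# A Xen host would instead use:
# VHOSTMD_URI=xen:///
```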
Miroslav, is the policy done now?
Richard, yes, the policy is done.
Fixed in selinux-policy-2.4.6-266.el5.noarch
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2010-0182.html