Bug 543941 - The new vhostmd daemon runs under 'initrc_t' context & triggers AVCs
Summary: The new vhostmd daemon runs under 'initrc_t' context & triggers AVCs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: selinux-policy
Version: 5.4
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 5.5
Assignee: Miroslav Grepl
QA Contact: BaseOS QE Security Team
URL:
Whiteboard:
Depends On: 514579
Blocks: 514577
 
Reported: 2009-12-03 14:51 UTC by Daniel Berrangé
Modified: 2013-11-12 22:43 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-03-30 07:49:28 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System: Red Hat Product Errata
ID: RHBA-2010:0182
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: selinux-policy bug fix update
Last Updated: 2010-03-29 12:19:53 UTC

Description Daniel Berrangé 2009-12-03 14:51:02 UTC
Description of problem:

In bug 514579 we added a new "vhostmd" daemon to RHEL-5.4.x/5.5. When started, this daemon ends up running under the 'initrc_t' context and triggers AVCs.

After doing 'service vhostmd start' on a Xen host, I get the following AVCs:

type=AVC msg=audit(1259851534.535:55): avc:  denied  { read write } for  pid=6425 comm="virsh" path="/dev/shm/vhostmd0" dev=tmpfs ino=20533 scontext=root:system_r:xm_t:s0 tcontext=root:object_r:tmpfs_t:s0 tclass=file
type=AVC msg=audit(1259851534.539:56): avc:  denied  { create } for  pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851534.539:57): avc:  denied  { bind } for  pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851534.539:58): avc:  denied  { getattr } for  pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851534.539:59): avc:  denied  { write } for  pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851534.539:59): avc:  denied  { nlmsg_read } for  pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851534.539:60): avc:  denied  { read } for  pid=6425 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:61): avc:  denied  { create } for  pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:62): avc:  denied  { bind } for  pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:63): avc:  denied  { getattr } for  pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:64): avc:  denied  { write } for  pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:64): avc:  denied  { nlmsg_read } for  pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
type=AVC msg=audit(1259851535.979:65): avc:  denied  { read } for  pid=6475 comm="virsh" scontext=root:system_r:xm_t:s0 tcontext=root:system_r:xm_t:s0 tclass=netlink_route_socket
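
For reference, these denials can be pulled back out of the audit log and, purely as a local stopgap while proper policy is written, turned into a temporary allow module. The module name below is arbitrary, and this does not fix the underlying problem of the daemon running in the wrong domain:

# Review the recent denials (comm name per the log above)
ausearch -m avc -ts recent -c virsh

# Local stopgap only: generate and load an allow module from them
ausearch -m avc -ts recent | audit2allow -M vhostmdlocal
semodule -i vhostmdlocal.pp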


The vhostmd daemon ends up running like this:

root:system_r:initrc_t          vhostmd   6422  0.0  0.1  43868  2244 ?        S    09:45   0:00 /usr/sbin/vhostmd --user vhostmd --connect xen:///
root:system_r:initrc_t          vhostmd   6856  0.0  0.0  77840  1712 ?        S    09:46   0:00  \_ /usr/bin/perl /usr/share/vhostmd/scripts/pagerate.pl


In addition, because it is running unprivileged, it spawns the 'libvirt_proxy' setuid daemon, which also ends up running under initrc_t:


root:system_r:initrc_t          root      6466  0.0  0.0  63296   980 ?        S    09:45   0:00 /usr/libexec/libvirt_proxy
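
One way to confirm the domains of all three processes at once:

# Show the SELinux context of vhostmd, its helper script and libvirt_proxy
ps -eZ | egrep 'vhostmd|pagerate|libvirt_proxy'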


We need to at least write policy to confine the vhostmd daemon.

The vhostmd daemon works by spawning various external commands periodically; in particular it seems to spawn 'virsh' frequently, which in turn is what causes the libvirt_proxy setuid process to be spawned.

So we may need to decide on additional confined domains for the latter.
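
For illustration only, here is roughly what a confining module could start from, built with the selinux-policy devel Makefile. The vhostmd_t/vhostmd_exec_t type names and the interface choices are my assumptions for the sketch, not whatever policy eventually ships:

# vhostmd.te -- minimal illustrative starting point, NOT the shipped policy
cat > vhostmd.te <<'EOF'
policy_module(vhostmd, 0.0.1)

# New domain for the daemon plus a type for its executable,
# so init transitions it out of initrc_t
type vhostmd_t;
type vhostmd_exec_t;
init_daemon_domain(vhostmd_t, vhostmd_exec_t)

# The metrics disk lives on tmpfs (/dev/shm/vhostmd0 per the AVCs above),
# so give it its own file type rather than plain tmpfs_t
type vhostmd_tmpfs_t;
files_tmpfs_file(vhostmd_tmpfs_t)
EOF

# vhostmd.fc -- label the binary so the domain transition actually happens
cat > vhostmd.fc <<'EOF'
/usr/sbin/vhostmd    --    gen_context(system_u:object_r:vhostmd_exec_t,s0)
EOF

# Build, load, relabel, restart (needs selinux-policy-devel installed)
make -f /usr/share/selinux/devel/Makefile vhostmd.pp
semodule -i vhostmd.pp
restorecon -v /usr/sbin/vhostmd
service vhostmd restart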

Version-Release number of selected component (if applicable):
selinux-policy-2.4.6-261.el5
vhostmd-0.4-0.12.gite9db007b.el5_3


How reproducible:
Always

Steps to Reproduce:
1. Get a RHEL-5.4 Xen host
2. Install the 'vhostmd' RPM
3. In /etc/vhostmd get rid of the default config, and put the .xen config in its place
4. Start the daemon   'service vhostmd start'
  
Actual results:
AVCs logged, and running under initrc_t

Expected results:
No AVCs, and vhostmd is running in a confined domain

Additional info:

This daemon/package was only just added to Fedora, so I'm not sure if there's policy for this in Fedora yet or not
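
One way to check whether an installed policy already defines anything for vhostmd (assuming the setools command-line utilities are available) would be:

# Any loaded module or type mentioning vhostmd?
semodule -l | grep -i vhostmd
seinfo -t | grep -i vhostmd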

Comment 1 Daniel Walsh 2009-12-03 15:30:04 UTC
Miroslav, can you write a policy for F12/Rawhide that we can get tested and maybe backport to RHEL 5?

Comment 2 Richard W.M. Jones 2009-12-07 10:23:08 UTC
We use the same configuration in Rawhide and RHEL 5.x,
so you can have a look at the commands that vhostmd runs
in the configuration file:

http://cvs.fedoraproject.org/viewvc/devel/vhostmd/vhostmd.conf?view=markup

(look for <action>...</action>)

e.g. it will run this command every 60 seconds:

  virsh -r CONNECT version | grep API | gawk -F': ' '{print $2}'

The magic word "CONNECT" is replaced by some connection URI, or
possibly by nothing.  This depends on how the system administrator
has adjusted the file /etc/sysconfig/vhostmd.  The default file
is here:

http://cvs.fedoraproject.org/viewvc/devel/vhostmd/vhostmd.sysconfig?view=markup

As noted above, because the virsh command isn't running as root
(it runs as special user:group vhostmd:vhostmd), it will probably
spawn some external program in the Xen case, or connect to libvirtd
in the KVM case.
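
If it helps with testing, the same code path can presumably be exercised by hand by running one of those actions directly as the vhostmd user. This is a hypothetical invocation; the xen:/// URI matches the original report, and the daemon normally pipes the output through grep/gawk:

# Run one of vhostmd's periodic actions as the unprivileged vhostmd user;
# on Xen this should spawn the setuid libvirt_proxy, as described above
su -s /bin/sh vhostmd -c 'virsh -r -c xen:/// version'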

Comment 3 Michael Waite 2009-12-08 20:33:38 UTC
<SNIP>
S    09:45   0:00 /usr/sbin/vhostmd --user vhostmd --connect xen:///
<END SNIP>

I am not the brightest bulb on the tree, but please confirm that you are making this policy for KVM, yes?

The business driver is SAP certification for RHEV, which we do not have; this gap is allowing Novell to compete against us for this workload.

Any SELinux policy testing for Xen should be secondary to KVM because there is no need: SAP already certified Xen, and that certification does not require vhostmd. Keep in mind that vhostmd is the solution to SAP's change in certification requirements after Xen was certified.

Comment 4 Daniel Berrangé 2009-12-09 09:15:45 UTC
> I am not the brightest bulb on the tree but please affirm that you are making
> this policy for KVM yes?

The policy is for vhostmd, and it should work regardless of what hypervisor is in use, because it uses libvirt.

Comment 5 Richard W.M. Jones 2009-12-09 10:02:15 UTC
Yeah Dan's right I think.  vhostmd itself connects via libvirt
to get the list of domains, and then all the metrics gathering
happens by running external commands (eg. virsh).  There shouldn't
be any direct access to Xen or KVM.

You can change the libvirt connection URI by editing /etc/sysconfig/vhostmd (the VHOSTMD_URI setting).
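
For example, on the Xen host from the original report /etc/sysconfig/vhostmd would carry something along these lines (the qemu URI is just an illustration of the KVM case):

# /etc/sysconfig/vhostmd
VHOSTMD_URI=xen:///
# on a KVM host this would instead point at libvirtd, e.g.:
#VHOSTMD_URI=qemu:///system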

Comment 6 Richard W.M. Jones 2009-12-10 15:29:57 UTC
Miroslav, is the policy done now?

Comment 7 Miroslav Grepl 2009-12-10 18:53:20 UTC
Richard,

yes, the policy is done.

Comment 9 Miroslav Grepl 2009-12-11 11:23:43 UTC
Fixed in selinux-policy-2.4.6-266.el5.noarch
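
A quick sanity check after updating might look like the following; the exact confined domain name is whatever the new policy defines (vhostmd_t is only my guess), the point is simply that it must no longer be initrc_t and no new denials should appear:

# After updating selinux-policy and restarting the daemon:
service vhostmd restart
ps -eZ | grep vhostmd          # domain should no longer be initrc_t
ausearch -m avc -ts recent     # no new vhostmd/virsh denials expected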

Comment 13 errata-xmlrpc 2010-03-30 07:49:28 UTC
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2010-0182.html

