Bug 861900 - SELinux prevents /usr/sbin/fence_virtd (virsh_t) from create access on the file /var/run/fence_virtd.pid (var_run_t)
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Miroslav Grepl
QA Contact: Milos Malik
Depends On:
Blocks:
 
Reported: 2012-10-01 05:45 EDT by Milos Malik
Modified: 2014-06-17 22:15 EDT (History)
3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-06-13 06:31:14 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
AVCs caught in enforcing and permissive mode (46.92 KB, text/plain)
2012-11-05 09:46 EST, Milos Malik
no flags


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 675323 None None None Never

Description Milos Malik 2012-10-01 05:45:53 EDT
Description of problem:


Version-Release number of selected component (if applicable):
fence-virt-0.3.0-6.el7.x86_64
fence-virtd-0.3.0-6.el7.x86_64
fence-virtd-libvirt-0.3.0-6.el7.x86_64
fence-virtd-multicast-0.3.0-6.el7.x86_64
fence-virtd-serial-0.3.0-6.el7.x86_64
selinux-policy-3.11.1-26.el7.noarch
selinux-policy-devel-3.11.1-26.el7.noarch
selinux-policy-doc-3.11.1-26.el7.noarch
selinux-policy-minimum-3.11.1-26.el7.noarch
selinux-policy-targeted-3.11.1-26.el7.noarch

How reproducible:
always

Steps to Reproduce:
# fence_virtd -c
# dd if=/dev/urandom bs=512 count=1 of=/etc/cluster/fence_xvm.key
# service fence_virtd start
Redirecting to /bin/systemctl start  fence_virtd.service
# service fence_virtd status
Redirecting to /bin/systemctl status  fence_virtd.service
fence_virtd.service - Fence-Virt system host daemon
	  Loaded: loaded (/usr/lib/systemd/system/fence_virtd.service; disabled)
	  Active: inactive (dead) since Mon, 01 Oct 2012 11:41:46 +0000; 3s ago
	 Process: 9882 ExecStart=/usr/sbin/fence_virtd $FENCE_VIRTD_ARGS (code=exited, status=0/SUCCESS)
	Main PID: 9361 (code=killed, signal=KILL)
	  CGroup: name=systemd:/system/fence_virtd.service
# 

  
Actual results in enforcing mode:
----
type=PATH msg=audit(10/01/2012 11:41:46.977:739) : item=0 name=/var/run/fence_virtd.pid inode=1188 dev=00:11 mode=dir,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:var_run_t:s0 
type=CWD msg=audit(10/01/2012 11:41:46.977:739) :  cwd=/ 
type=SYSCALL msg=audit(10/01/2012 11:41:46.977:739) : arch=x86_64 syscall=open success=no exit=-13(Permission denied) a0=0x60b280 a1=O_WRONLY|O_CREAT|O_TRUNC a2=0x1b6 a3=0x238 items=1 ppid=9882 pid=9883 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=fence_virtd exe=/usr/sbin/fence_virtd subj=system_u:system_r:virsh_t:s0 key=(null) 
type=AVC msg=audit(10/01/2012 11:41:46.977:739) : avc:  denied  { create } for  pid=9883 comm=fence_virtd name=fence_virtd.pid scontext=system_u:system_r:virsh_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file 
----

Expected results:
 * no AVCs
Comment 1 Milos Malik 2012-10-01 05:52:06 EDT
The "service fence_virtd start" command causes the following AVCs in permissive mode:
----
type=SYSCALL msg=audit(10/01/2012 11:47:03.622:751) : arch=x86_64 syscall=fstat success=yes exit=0 a0=0x3 a1=0x7ffffef39050 a2=0x7ffffef39050 a3=0x408291 items=0 ppid=1 pid=9986 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=fence_virtd exe=/usr/sbin/fence_virtd subj=system_u:system_r:virsh_t:s0 key=(null) 
type=AVC msg=audit(10/01/2012 11:47:03.622:751) : avc:  denied  { getattr } for  pid=9986 comm=fence_virtd path=/run/fence_virtd.pid dev="tmpfs" ino=42146 scontext=system_u:system_r:virsh_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file 
----
type=PATH msg=audit(10/01/2012 11:47:03.621:750) : item=1 name=/var/run/fence_virtd.pid inode=42146 dev=00:11 mode=file,644 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:var_run_t:s0 
type=PATH msg=audit(10/01/2012 11:47:03.621:750) : item=0 name=/var/run/ inode=1188 dev=00:11 mode=dir,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:var_run_t:s0 
type=CWD msg=audit(10/01/2012 11:47:03.621:750) :  cwd=/ 
type=SYSCALL msg=audit(10/01/2012 11:47:03.621:750) : arch=x86_64 syscall=open success=yes exit=3 a0=0x60b280 a1=O_WRONLY|O_CREAT|O_TRUNC a2=0x1b6 a3=0x238 items=2 ppid=9985 pid=9986 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=fence_virtd exe=/usr/sbin/fence_virtd subj=system_u:system_r:virsh_t:s0 key=(null) 
type=AVC msg=audit(10/01/2012 11:47:03.621:750) : avc:  denied  { write open } for  pid=9986 comm=fence_virtd path=/run/fence_virtd.pid dev="tmpfs" ino=42146 scontext=system_u:system_r:virsh_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file 
type=AVC msg=audit(10/01/2012 11:47:03.621:750) : avc:  denied  { create } for  pid=9986 comm=fence_virtd name=fence_virtd.pid scontext=system_u:system_r:virsh_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file 
----
type=PATH msg=audit(10/01/2012 11:47:03.639:753) : item=0 name=/etc/cluster/fence_xvm.key inode=393905 dev=08:04 mode=file,644 ouid=root ogid=root rdev=00:00 obj=unconfined_u:object_r:cluster_conf_t:s0 
type=CWD msg=audit(10/01/2012 11:47:03.639:753) :  cwd=/ 
type=SYSCALL msg=audit(10/01/2012 11:47:03.639:753) : arch=x86_64 syscall=open success=yes exit=8 a0=0x1fcbb20 a1=O_RDONLY a2=0x1000 a3=0x6 items=1 ppid=1 pid=9986 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=fence_virtd exe=/usr/sbin/fence_virtd subj=system_u:system_r:virsh_t:s0 key=(null) 
type=AVC msg=audit(10/01/2012 11:47:03.639:753) : avc:  denied  { open } for  pid=9986 comm=fence_virtd path=/etc/cluster/fence_xvm.key dev="sda4" ino=393905 scontext=system_u:system_r:virsh_t:s0 tcontext=unconfined_u:object_r:cluster_conf_t:s0 tclass=file 
type=AVC msg=audit(10/01/2012 11:47:03.639:753) : avc:  denied  { read } for  pid=9986 comm=fence_virtd name=fence_xvm.key dev="sda4" ino=393905 scontext=system_u:system_r:virsh_t:s0 tcontext=unconfined_u:object_r:cluster_conf_t:s0 tclass=file 
----
type=SOCKADDR msg=audit(10/01/2012 11:47:03.639:754) : saddr=inet host:0.0.0.0 serv:1229 
type=SYSCALL msg=audit(10/01/2012 11:47:03.639:754) : arch=x86_64 syscall=bind success=yes exit=0 a0=0x8 a1=0x7ffffef398c0 a2=0x10 a3=0x1 items=0 ppid=1 pid=9986 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=fence_virtd exe=/usr/sbin/fence_virtd subj=system_u:system_r:virsh_t:s0 key=(null) 
type=AVC msg=audit(10/01/2012 11:47:03.639:754) : avc:  denied  { node_bind } for  pid=9986 comm=fence_virtd src=1229 scontext=system_u:system_r:virsh_t:s0 tcontext=system_u:object_r:node_t:s0 tclass=udp_socket 
type=AVC msg=audit(10/01/2012 11:47:03.639:754) : avc:  denied  { name_bind } for  pid=9986 comm=fence_virtd src=1229 scontext=system_u:system_r:virsh_t:s0 tcontext=system_u:object_r:zented_port_t:s0 tclass=udp_socket 
----

The "fence_xvm -o list" command causes the following AVC in permissive mode:
----
type=SOCKADDR msg=audit(10/01/2012 11:48:09.299:755) : saddr=inet host:127.0.0.1 serv:1229 
type=SYSCALL msg=audit(10/01/2012 11:48:09.299:755) : arch=x86_64 syscall=connect success=no exit=-115(Operation now in progress) a0=0x9 a1=0x7ffffef39260 a2=0x10 a3=0x7ffffef39104 items=0 ppid=1 pid=9986 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=fence_virtd exe=/usr/sbin/fence_virtd subj=system_u:system_r:virsh_t:s0 key=(null) 
type=AVC msg=audit(10/01/2012 11:48:09.299:755) : avc:  denied  { name_connect } for  pid=9986 comm=fence_virtd dest=1229 scontext=system_u:system_r:virsh_t:s0 tcontext=system_u:object_r:zented_port_t:s0 tclass=tcp_socket 
----
Comment 2 Daniel Walsh 2012-10-01 06:15:14 EDT
Fixed in selinux-policy-3.11.1-29.el7
Comment 4 Daniel Walsh 2012-10-30 14:17:52 EDT
In the previous bug fence_virtd was running as virsh_t, and now it is running as fenced_t? Are we debugging two different things here? In the first case was it being run from init, and now it is being launched by fenced?
Comment 5 Milos Malik 2012-10-31 03:57:06 EDT
I don't know why the file was labelled differently when I reported the bug. I can think of two possibilities:
 * default label of that file changed in selinux-policy
 * I changed the file label manually for testing purposes and didn't run restorecon afterwards

# rpm -qa selinux-policy\*
selinux-policy-mls-3.11.1-44.el7.noarch
selinux-policy-doc-3.11.1-44.el7.noarch
selinux-policy-3.11.1-44.el7.noarch
selinux-policy-targeted-3.11.1-44.el7.noarch
selinux-policy-minimum-3.11.1-44.el7.noarch
selinux-policy-devel-3.11.1-44.el7.noarch
# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28
# matchpathcon `which fence_virtd`
/usr/sbin/fence_virtd	system_u:object_r:fenced_exec_t:s0
#

However, the file is labelled fenced_exec_t according to the latest selinux-policy. Please ignore comment#0 and comment#1.
Comment 6 Daniel Walsh 2012-10-31 07:02:03 EDT
No, Miroslav informed me the policy changed. My mistake.

Fixed in selinux-policy-3.11.1-49.el7
Comment 7 Milos Malik 2012-11-05 09:43:07 EST
# rpm -qa selinux-policy\*
selinux-policy-devel-3.11.1-49.el7.noarch
selinux-policy-doc-3.11.1-49.el7.noarch
selinux-policy-mls-3.11.1-49.el7.noarch
selinux-policy-minimum-3.11.1-49.el7.noarch
selinux-policy-targeted-3.11.1-49.el7.noarch
selinux-policy-3.11.1-49.el7.noarch
# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28
# service fence_virtd start
Redirecting to /bin/systemctl start  fence_virtd.service
# ausearch -m avc -ts recent -i | audit2allow

#============= fenced_t ==============
allow fenced_t virt_etc_t:dir search;
allow fenced_t virt_etc_t:file { read open };
allow fenced_t virt_var_run_t:dir search;
allow fenced_t virt_var_run_t:sock_file write;
allow fenced_t virtd_exec_t:file getattr;
allow fenced_t virtd_t:unix_stream_socket connectto;
#
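For context, audit2allow derives each of the rules above from the scontext, tcontext, tclass, and denied-permission fields of an AVC record. A minimal, hypothetical Python sketch of that translation (simplified for illustration; this is not the real audit2allow implementation, and the function name is made up):

```python
import re

# Simplified, illustrative version of the translation audit2allow performs:
# extract the denied permissions, source type, target type, and object class
# from one AVC record and emit the corresponding allow rule.
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{ (?P<perms>[^}]+)\} for .*"
    r"scontext=\w+:\w+:(?P<stype>\w+):\S+\s+"
    r"tcontext=\w+:\w+:(?P<ttype>\w+):\S+\s+"
    r"tclass=(?P<tclass>\w+)"
)

def avc_to_allow(line: str) -> str:
    m = AVC_RE.search(line)
    if m is None:
        raise ValueError("not an AVC denial record")
    perms = m.group("perms").split()
    # Multiple permissions are grouped in braces, a single one is bare.
    perm_str = perms[0] if len(perms) == 1 else "{ %s }" % " ".join(perms)
    return "allow %s %s:%s %s;" % (
        m.group("stype"), m.group("ttype"), m.group("tclass"), perm_str
    )

avc = (
    "type=AVC msg=audit(10/01/2012 11:41:46.977:739) : avc:  denied  "
    "{ create } for  pid=9883 comm=fence_virtd name=fence_virtd.pid "
    "scontext=system_u:system_r:virsh_t:s0 "
    "tcontext=system_u:object_r:var_run_t:s0 tclass=file"
)
print(avc_to_allow(avc))  # → allow virsh_t var_run_t:file create;
```

The real tool additionally merges rules across records, handles type attributes and booleans, and can wrap the result into a loadable policy module (audit2allow -M).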
Comment 8 Milos Malik 2012-11-05 09:46:44 EST
Created attachment 638674 [details]
AVCs caught in enforcing and permissive mode
Comment 10 Daniel Walsh 2012-11-12 11:38:39 EST
I am thinking we need

optional_policy(`
	virt_domtrans(fenced_t)
	virt_read_config(fenced_t)
	virt_read_pid_files(fenced_t)
	virt_stream_connect(fenced_t)
')

Milos, does fence_virtd restart libvirt?
Comment 11 Daniel Walsh 2012-11-12 11:39:33 EST
Fixed in selinux-policy-3.11.1-53.el7
Comment 17 Miroslav Grepl 2014-01-20 05:19:15 EST
#!!!! This avc is allowed in the current policy
allow fenced_t zented_port_t:udp_socket name_bind;

This is going to be fixed in the next builds.
Comment 18 Ludek Smid 2014-06-13 06:31:14 EDT
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.
