Description of problem:
When deploying the overcloud with the ML2/OVS backend, guest instances with a vhostuser interface fail to spawn due to SELinux policy.

SELinux audit:

[root@compute-0 ~]# sealert -l 56512430-6c70-4223-a639-f56b1524ee65
SELinux is preventing /usr/sbin/ovs-vswitchd from 'read, write' accesses on the unix_stream_socket unix_stream_socket.

*****  Plugin catchall (100. confidence) suggests  **************************

If you believe that ovs-vswitchd should be allowed read write access on the unix_stream_socket unix_stream_socket by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# ausearch -c 'vhost-events' --raw | audit2allow -M my-vhostevents
# semodule -X 300 -i my-vhostevents.pp

Additional Information:
Source Context                system_u:system_r:openvswitch_t:s0
Target Context                system_u:system_r:spc_t:s0
Target Objects                unix_stream_socket [ unix_stream_socket ]
Source                        vhost-events
Source Path                   /usr/sbin/ovs-vswitchd
Port                          <Unknown>
Host                          compute-0
Source RPM Packages           openvswitch2.11-2.11.0-0.20190129gitd3a10db.el8fdb.x86_64
Target RPM Packages
Policy RPM                    selinux-policy-3.14.1-61.el8.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Permissive
Host Name                     compute-0
Platform                      Linux compute-0 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 x86_64
Alert Count                   1
First Seen                    2019-05-08 13:38:45 UTC
Last Seen                     2019-05-08 13:38:45 UTC
Local ID                      56512430-6c70-4223-a639-f56b1524ee65

Raw Audit Messages:
type=AVC msg=audit(1557322725.873:12375): avc: denied { read write } for pid=8786 comm="vhost-events" path="socket:[16378370]" dev="sockfs" ino=16378370 scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:spc_t:s0 tclass=unix_stream_socket permissive=1

type=SYSCALL msg=audit(1557322725.873:12375): arch=x86_64 syscall=recvmsg success=yes exit=ENOMEM a0=6a a1=7f7692ffc6c0 a2=0 a3=7f7692ffc6a0 items=0 ppid=1 pid=8786 auid=4294967295 uid=987 gid=42477 euid=987 suid=987 fsuid=987 egid=42477 sgid=42477 fsgid=42477 tty=(none) ses=4294967295 comm=vhost-events exe=/usr/sbin/ovs-vswitchd subj=system_u:system_r:openvswitch_t:s0 key=(null)

Hash: vhost-events,openvswitch_t,spc_t,unix_stream_socket,read,write

When setting SELinux to Permissive, or when creating a policy module from the audit above, we are able to boot the instance successfully.

Version-Release number of selected component (if applicable):
compose: RHOS_TRUNK-15.0-RHEL-8-20190423.n.1

rpm -qa | grep openvswitch
rhosp-openvswitch-2.11-0.1.el8ost.noarch
network-scripts-openvswitch2.11-2.11.0-0.20190129gitd3a10db.el8fdb.x86_64
openvswitch-selinux-extra-policy-1.0-10.el8fdb.noarch
openvswitch2.11-2.11.0-0.20190129gitd3a10db.el8fdb.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Deploy the overcloud with the ML2/OVS backend
2. Spawn a guest instance with a vhostuser interface

Actual results:
Unable to spawn the guest

Expected results:
The guest spawns successfully

Additional info:
Will provide sosreports in comments
I executed the commands below on the compute node with SELinux in Enforcing mode, and I was able to create the VMs successfully:

ausearch -c 'vhost-events' --raw | audit2allow -M my-vhostevents
semodule -X 300 -i my-vhostevents.pp
Curiously, audit2allow notes that this AVC is covered by a dontaudit rule in the current policy:

#!!!! This avc has a dontaudit rule in the current policy
allow openvswitch_t spc_t:unix_stream_socket { read write };
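For reference, the local policy module that audit2allow -M my-vhostevents generates from this AVC would look roughly like the following my-vhostevents.te (a sketch; the exact require block and module version audit2allow emits may differ slightly):

module my-vhostevents 1.0;

require {
        type openvswitch_t;
        type spc_t;
        class unix_stream_socket { read write };
}

#============= openvswitch_t ==============
allow openvswitch_t spc_t:unix_stream_socket { read write };

Installing it with semodule -X 300 -i my-vhostevents.pp puts the allow rule at priority 300, so it overrides the shipped policy without modifying it; semodule -X 300 -r my-vhostevents removes it again once a fixed policy package is available.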
OVS doesn't require container-selinux, but spc_t is defined there. However, openstack-selinux does require container-selinux.
Yes, openvswitch-selinux-extra-policy-1.0-10 requires container-selinux
We are eliminating the dependency on container-selinux because it breaks some layered products.
No problem. Since it's specific to the OpenStack container configuration, I placed it here for now: https://github.com/redhat-openstack/openstack-selinux/pull/32
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:2811