Bug 1547250

Summary: Got lots of OVS daemon ERRs while starting an OVS-dpdk guest
Product: Red Hat Enterprise Linux 7 Reporter: Jean-Tsung Hsiao <jhsiao>
Component: libvirt Assignee: Martin Kletzander <mkletzan>
Status: CLOSED ERRATA QA Contact: Yanqiu Zhang <yanqzhan>
Severity: high Docs Contact:
Priority: high    
Version: 7.5 CC: aconole, atragler, berrange, chhu, ctrautma, dyuan, eskultet, fbaudin, fjin, fleitner, itbrown, jdenemar, jherrman, jhsiao, jraju, jsuchane, juzhang, knoel, ktraynor, kzhang, lmen, maxime.coquelin, mkletzan, mtessun, pezhang, rcain, skramaja, tredaelli, virt-maint, xuzhang, yafu, yalzhang
Target Milestone: rc Keywords: ZStream
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: libvirt-4.3.0-1.el7 Doc Type: Bug Fix
Doc Text:
Previously, the virtlogd service logged redundant AVC denial errors when a guest virtual machine was started. With this update, the virtlogd service no longer attempts to send shutdown inhibition calls to systemd, which prevents the described errors from occurring.
Story Points: ---
Clone Of:
: 1561711 1573268 (view as bug list) Environment:
Last Closed: 2018-10-30 09:52:39 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1573268    

Description Jean-Tsung Hsiao 2018-02-20 20:46:02 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Jean-Tsung Hsiao 2018-02-20 20:58:03 UTC
(In reply to Jean-Tsung Hsiao from comment #0)
> Description of problem:

Got lots of OVS daemon ERRs while starting an OVS-dpdk guest


> 
> Version-Release number of selected component (if applicable):

[root@netqe5 ~]# rpm -q openvswitch
openvswitch-2.9.0-1.el7fdp.x86_64

[root@netqe5 ~]# rpm -qa | grep qemu
libvirt-daemon-driver-qemu-3.9.0-13.el7.x86_64
qemu-img-rhev-2.9.0-16.el7_4.14.x86_64
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
qemu-kvm-common-rhev-2.9.0-16.el7_4.14.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.14.x86_64

Same issue happened with latest qemu-kvm-rhev-2.10.0-20.

Issue gone if using qemu-kvm-rhev-2.6.0-28.

> 
> 
> How reproducible:
Reproducible
> 
> 
> Steps to Reproduce:
> 1.
Use openvswitch-2.9.0-1
> 2. 
Use qemu-kvm-rhev-2.9.0-16
> 3. 
Config OVS-dpdk with vhostusers
Start guest
> 
> Actual results:
Hang; lots of ovs-vswitchd ERRs
> 
> 
> Expected results:
Should be successful

> 
> 
> Additional info:

Comment 3 Aaron Conole 2018-02-20 21:17:34 UTC
15:32 <jhsiao> 018-02-20T20:30:53.193Z|217777|dpdk|ERR|VHOST_CONFIG:
               truncted msg
15:32 <jhsiao> 2018-02-20T20:30:53.193Z|217778|dpdk|ERR|VHOST_CONFIG:
               vhost read message failed
15:32 <jhsiao> 2018-02-20T20:30:53.193Z|217779|dpdk|INFO|VHOST_CONFIG:
               new vhost user connection is 77
15:32 <jhsiao> 2018-02-20T20:30:53.193Z|217780|dpdk|INFO|VHOST_CONFIG:
               new device, handle is 0
15:32 <jhsiao> 2018-02-20T20:30:53.193Z|217781|dpdk|INFO|VHOST_CONFIG:
               read message VHOST_USER_GET_FEATURES
15:32 <jhsiao> 2018-02-20T20:30:53.193Z|217782|dpdk|INFO|VHOST_CONFIG:
               read message VHOST_USER_GET_PROTOCOL_FEATURES
15:32 <jhsiao> 2018-02-20T20:30:53.194Z|217783|dpdk|INFO|VHOST_CONFIG:
               read message VHOST_USER_SET_PROTOCOL_FEATURES
15:32 <jhsiao> 2018-02-20T20:30:53.194Z|217784|dpdk|INFO|VHOST_CONFIG:
               read message VHOST_USER_GET_QUEUE_NUM
15:32 <jhsiao> 2018-02-20T20:30:53.194Z|217785|dpdk|ERR|VHOST_CONFIG:
               truncted msg
15:32 <jhsiao> 2018-02-20T20:30:53.194Z|217786|dpdk|ERR|VHOST_CONFIG:
               vhost read message failed
15:39 <jhsiao> qemu-kvm-rhev-2.6.0-28 seems to Ok


I think this could be a qemu problem.
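
For reference, these VHOST_CONFIG messages come from the ovs-vswitchd log; on a default install they can be followed live with something like the following (log path assumed to be the default one):

# tail -f /var/log/openvswitch/ovs-vswitchd.log | grep VHOST_CONFIG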

Comment 4 Flavio Leitner 2018-02-20 21:38:49 UTC
Jean,

Could you check whether it is qemu or OVS that is having issues?
Try 2.9 fdp with previous known good qemu.
Try 2.7 fdp with latest qemu.

Comment 5 Jean-Tsung Hsiao 2018-02-20 22:17:41 UTC
(In reply to Flavio Leitner from comment #4)
> Jean,
> 
> Could you check whether qemu or OVS that is having issues?
> Try 2.9 fdp with previous known good qemu.
> Try 2.7 fdp with latest qemu.

Hi Flavio,

Take a look at netqe19 now. It has a pre-release 2.9.0 fdP and the latest 2.10.0-20 qemu. Please see the details attached below.

The guest in server mode has been running fine.

So, I would think this is a 2.9.0-1 fdP-specific issue.

Thanks!

Jean


[root@netqe19 ~]# rpm -q openvswitch
openvswitch-2.9.0-0.4.20180124git26cdc33.el7fdp.x86_64

[root@netqe19 ~]# rpm -qa | grep qemu
qemu-kvm-common-rhev-2.10.0-20.el7.x86_64
qemu-img-rhev-2.10.0-20.el7.x86_64
qemu-kvm-rhev-2.10.0-20.el7.x86_64
libvirt-daemon-driver-qemu-3.9.0-13.el7.x86_64
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch

[root@netqe19 ~]# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list --all
 Id    Name                           State
----------------------------------------------------
 5     vhost-server                   running

virsh # 

[root@netqe19 ~]# vs
844bf661-2cea-43c3-a26a-b3e1a4975e14
    Bridge "ovsbr0"
        Port "vhost1"
            Interface "vhost1"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhost1"}
        Port "vhost0"
            Interface "vhost0"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhost0"}
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
    ovs_version: "2.9.0"
[root@netqe19 ~]#
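
For reference, a dpdkvhostuserclient port like the ones listed above is typically created with something along these lines (bridge, port name and socket path taken from the listing; not necessarily the exact command used on this host):

# ovs-vsctl add-port ovsbr0 vhost0 -- set Interface vhost0 \
      type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhost0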

Comment 6 Jean-Tsung Hsiao 2018-02-20 22:44:43 UTC
(In reply to Flavio Leitner from comment #4)
> Jean,
> 
> Could you check whether qemu or OVS that is having issues?
> Try 2.9 fdp with previous known good qemu.
> Try 2.7 fdp with latest qemu.

Hi Flavio and Aaron,
2.7 fdP works well with qemu-kvm-rhev-2.9.0-16. Please see attached below.
So, I believe this is an OVS 2.9.0-1 fdP issue.
What do you think?
Thanks!
Jean


[root@netqe5 ~]# vs
cc4985b0-5f26-4dce-a971-48005e9856bf
    Bridge "ovsbr0"
        Port "vhost1"
            Interface "vhost1"
                type: dpdkvhostuser
        Port "dpdk-10"
            Interface "dpdk-10"
                type: dpdk
                options: {dpdk-devargs="0000:81:00.0", n_rxq="1"}
        Port "dpdk-11"
            Interface "dpdk-11"
                type: dpdk
                options: {dpdk-devargs="0000:81:00.1", n_rxq="1"}
        Port "vhost0"
            Interface "vhost0"
                type: dpdkvhostuser
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
    ovs_version: "2.7.3"
[root@netqe5 ~]# rpm -q openvswitch
openvswitch-2.7.3-3.git20180112.el7fdp.x86_64
[root@netqe5 ~]# rpm -qa | grep qemu
libvirt-daemon-driver-qemu-3.9.0-13.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.14.x86_64
qemu-kvm-common-rhev-2.9.0-16.el7_4.14.x86_64
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
qemu-img-rhev-2.9.0-16.el7_4.14.x86_64
[root@netqe5 ~]#

Comment 7 Jean-Tsung Hsiao 2018-02-21 15:14:49 UTC
Hi Flavio and Eelco,

This is a new finding: on netqe19 the guest can be started successfully in SERVER mode using 2.9.0-3 fdP and qemu-kvm-rhev-2.10.0-20.

Will run the guest in CLIENT mode next to see what happens.

Thanks!

Jean
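
To double-check which mode a given guest interface uses, the domain XML can be inspected, e.g. (guest name taken from comment 5; shown only as a sketch, not part of the original steps):

# virsh dumpxml vhost-server | grep -A 4 "interface type='vhostuser'"

In the vhostuser <source> element, mode='server' means QEMU creates and listens on the socket (pairing with an OVS dpdkvhostuserclient port), while mode='client' means QEMU connects to a socket created by OVS (dpdkvhostuser).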

Comment 8 Jean-Tsung Hsiao 2018-02-21 16:00:25 UTC
SELinux could be the issue here.

On netqe19 the guest was run in CLIENT mode with 2.9.0-1 fdP and qemu-kvm-rhev-2.10.0-20. With SELinux=Permissive, there was no such issue.

But with SELinux=Enforcing, the issue happened --- lots of "truncted msg" ERRs seen in ovs-vswitchd.log.

See below for a USER_AVC.

[root@netqe19 ~]# tail -f /var/log/audit/audit.log | grep AVC
type=USER_AVC msg=audit(1519227919.365:2627): pid=1104 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  denied  { send_msg } for msgtype=method_call interface=org.freedesktop.login1.Manager member=Inhibit dest=org.freedesktop.login1 spid=2650 tpid=1095 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:systemd_logind_t:s0 tclass=dbus  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'


2018-02-21T15:54:30.709Z|1446065|dpdk|ERR|VHOST_CONFIG: truncted msg
2018-02-21T15:54:30.709Z|1446066|dpdk|ERR|VHOST_CONFIG: vhost read message failed
2018-02-21T15:54:30.709Z|1446067|dpdk|INFO|VHOST_CONFIG: new vhost user connection is 62
2018-02-21T15:54:30.709Z|1446068|dpdk|INFO|VHOST_CONFIG: new device, handle is 0
2018-02-21T15:54:30.709Z|1446069|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
2018-02-21T15:54:30.709Z|1446070|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
2018-02-21T15:54:30.709Z|1446071|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
2018-02-21T15:54:30.709Z|1446072|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
2018-02-21T15:54:30.709Z|1446073|dpdk|ERR|VHOST_CONFIG: truncted msg
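
As an aside, recent AVC events can also be pulled straight from the audit log with ausearch (assuming auditd is running), which is sometimes easier than tailing audit.log:

# ausearch -m AVC,USER_AVC -ts recent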

Comment 9 Kevin Traynor 2018-02-21 18:11:48 UTC
Based on earlier comments, I ran with the reported versions and things worked OK in my setup. There were probably some differences in setup, but it's not worth exploring now that SELinux looks to be the culprit.

[root@wsfd-netdev69 ~]# rpm -qa | grep openvswitch
openvswitch-2.9.0-1.el7fdp.x86_64
[root@wsfd-netdev69 ~]# rpm -qa | grep qemu
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
qemu-kvm-rhev-2.9.0-16.el7_4.14.x86_64
qemu-kvm-common-rhev-2.9.0-16.el7_4.14.x86_64
qemu-kvm-tools-rhev-2.9.0-16.el7_4.14.x86_64
qemu-img-rhev-2.9.0-16.el7_4.14.x86_64
libvirt-daemon-driver-qemu-3.2.0-14.el7.x86_64

Comment 10 Jean-Tsung Hsiao 2018-02-21 18:34:36 UTC
(In reply to Kevin Traynor from comment #9)
> Based on earlier comments, I ran with reported versions and things working
> ok in my setup. Probably there were some differences in setup but not worth
> exploring now that selinux looks to be the culprit.
Not sure yet as I forgot to install openstack-selinux and container-selinux on netqe19.

Hi Kevin,
The one that failed is with qemu-kvm-rhev-2.10.0-20. So, can you try this qemu?
In the meantime, I'll try qemu-img-rhev-2.9.0-16.
Thanks!
Jean

> 
> [root@wsfd-netdev69 ~]# rpm -qa | grep openvswitch
> openvswitch-2.9.0-1.el7fdp.x86_64
> [root@wsfd-netdev69 ~]# rpm -qa | grep qemu
> ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
> qemu-kvm-rhev-2.9.0-16.el7_4.14.x86_64
> qemu-kvm-common-rhev-2.9.0-16.el7_4.14.x86_64
> qemu-kvm-tools-rhev-2.9.0-16.el7_4.14.x86_64
> qemu-img-rhev-2.9.0-16.el7_4.14.x86_64
> libvirt-daemon-driver-qemu-3.2.0-14.el7.x86_64

Comment 11 Jean-Tsung Hsiao 2018-02-21 18:49:44 UTC
(In reply to Jean-Tsung Hsiao from comment #10)
> (In reply to Kevin Traynor from comment #9)
> > Based on earlier comments, I ran with reported versions and things working
> > ok in my setup. Probably there were some differences in setup but not worth
> > exploring now that selinux looks to be the culprit.
> Not sure yet as I forgot to install openstack-selinux and container-selinux
> on netqe19.
> 
> Hi Kevin,
> The one failed is with qemu-kvm-rhev-2.10.0-20. So, can you try this qemu ?
> In the meantime, I'll try qemu-img-rhev-2.9.0-16.
> Thanks!
> Jean
> 
> > 
> > [root@wsfd-netdev69 ~]# rpm -qa | grep openvswitch
> > openvswitch-2.9.0-1.el7fdp.x86_64
> > [root@wsfd-netdev69 ~]# rpm -qa | grep qemu
> > ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
> > qemu-kvm-rhev-2.9.0-16.el7_4.14.x86_64
> > qemu-kvm-common-rhev-2.9.0-16.el7_4.14.x86_64
> > qemu-kvm-tools-rhev-2.9.0-16.el7_4.14.x86_64
> > qemu-img-rhev-2.9.0-16.el7_4.14.x86_64
> > libvirt-daemon-driver-qemu-3.2.0-14.el7.x86_64

Hi Kevin,

The test failed with qemu-img-rhev-2.9.0-16 as well on my test-bed.

Are you sure SELinux=Enforcing?


[root@netqe5 jhsiao]# rpm -q openvswitch
openvswitch-2.9.0-2.el7fdp.x86_64
[root@netqe5 jhsiao]# rpm -qa | grep qemu
libvirt-daemon-driver-qemu-3.9.0-13.el7.x86_64
qemu-img-rhev-2.9.0-16.el7_4.14.x86_64
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
qemu-kvm-rhev-2.9.0-16.el7_4.14.x86_64
qemu-kvm-common-rhev-2.9.0-16.el7_4.14.x86_64
018-02-21T18:47:34.320Z|315441|dpdk|ERR|VHOST_CONFIG: truncted msg
2018-02-21T18:47:34.320Z|315442|dpdk|ERR|VHOST_CONFIG: vhost read message failed
2018-02-21T18:47:34.320Z|315443|dpdk|INFO|VHOST_CONFIG: new vhost user connection is 77
2018-02-21T18:47:34.320Z|315444|dpdk|INFO|VHOST_CONFIG: new device, handle is 0
2018-02-21T18:47:34.320Z|315445|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
2018-02-21T18:47:34.320Z|315446|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
2018-02-21T18:47:34.320Z|315447|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
2018-02-21T18:47:34.320Z|315448|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
2018-02-21T18:47:34.320Z|315449|dpdk|ERR|VHOST_CONFIG: truncted msg
2018-02-21T18:47:34.320Z|315450|dpdk|ERR|VHOST_CONFIG: vhost read message failed

Comment 12 Jean-Tsung Hsiao 2018-02-21 19:32:44 UTC
The test with the ovs-2.9.0-0.4 & qemu 2.9.0-16 combination also failed.

But with the ovs-2.7.3-2 & qemu 2.10.0-20 combination, the test passed with "truncted msg" ERRs. NOTE: Still see USER_AVC, denied { send_msg }.

Comment 13 Maxime Coquelin 2018-02-22 18:37:09 UTC
DPDK RFC posted upstream:
http://dpdk.org/ml/archives/dev/2018-February/091353.html

Comment 14 Maxime Coquelin 2018-02-22 18:38:08 UTC
Sorry, I updated the wrong Bz.

Comment 15 Kevin Traynor 2018-03-02 16:38:31 UTC
Hi Jean, Aaron says he thinks your config is OK now and things are working. Can you confirm?

Comment 16 Jean-Tsung Hsiao 2018-03-04 20:17:27 UTC
(In reply to Kevin Traynor from comment #15)
> hi Jean, Aaron says he thinks your config is ok now and things are working.
> Can you confirm?

Sorry! I am not sure about that. In the IRC log with him I can't find this bug mentioned. He's probably talking about Bug 1544948.

Comment 17 Jean-Tsung Hsiao 2018-03-04 21:15:07 UTC
Hi Aaron and Kevin,

The key question here is that we get the same USER_AVC, denied { send_msg }, with different qemu-kvm versions --- qemu-kvm-rhev-2.10.0-20 versus qemu-kvm-rhev-2.6.0-28. See the records attached below.

Here is the fact: with 2.10.0-20 this issue happened, but not with 2.6.0-28.

Here is the big question: why does OVS behave so differently?

Thanks!

Jean

 
*** qemu-kvm-rhev/2.10.0 ***
type=USER_AVC msg=audit(1520194971.539:1604): pid=1393 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  denied  { send_msg } for msgtype=method_call interface=org.freedesktop.login1.Manager member=Inhibit dest=org.freedesktop.login1 spid=3123 tpid=1390 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:systemd_logind_t:s0 tclass=dbus  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'

*** Get the same USER_AVC with qemu-kvm-rhev-2.6.0-28 *** 
type=USER_AVC  msg=audit(1520195408.164:1668): pid=1393 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  denied  { send_msg } for msgtype=method_call interface=org.freedesktop.login1.Manager member=Inhibit dest=org.freedesktop.login1 spid=3123 tpid=1390 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:systemd_logind_t:s0 tclass=dbus  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'

Comment 18 Aaron Conole 2018-03-05 14:03:19 UTC
OvS isn't involved in this AVC at all:

scontext=system_u:system_r:virtlogd_t
tcontext=system_u:system_r:systemd_logind_t

PLUS this is a logind denial over dbus (something that OvS doesn't do at all).  This is quite clearly a QEMU policy issue.

Also, this looks to be the same AVC both for 2.10 and 2.6.0 from qemu-kvm-rhev.  Did I misunderstand something?
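
One rough way to check whether the loaded policy allows that specific access (the same sesearch approach used later in comment 38), using the source/target pair from the AVC:

# sesearch -A -s virtlogd_t -t systemd_logind_t -c dbus

If no "dbus send_msg" allow rule comes back for that pair, the denial is expected under Enforcing.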

Comment 19 Jean-Tsung Hsiao 2018-03-05 14:33:56 UTC
(In reply to Aaron Conole from comment #18)
> OvS isn't involved in this AVC at all:
> 
> scontext=system_u:system_r:virtlogd_t
> tcontext=system_u:system_r:systemd_logind_t
> 
> PLUS this is a logind denial over dbus (something that OvS doesn't do at
> all).  This is quite clearly a QEMU policy issue.
> 
> Also, this looks to be the same AVC both for 2.10 and 2.6.0 from
> qemu-kvm-rhev.  Did I misunderstand something?

Yes, looks to be the same AVC both for 2.10 and 2.6.0.

Let me reassign this to Karen's libvirt group.

Hi Karen,
Please assign someone to take a look into this issue.
Thanks!
Jean

Comment 20 Daniel Berrangé 2018-03-06 16:17:02 UTC
The virNetDaemon class that's used by virtlogd (and libvirtd) calls virNetDaemonCallInhibit() when it wants to prevent shutdown of the login session. This invokes the Inhibit method on logind over DBus, which is why this AVC is triggered.

virtlogd inhibits shutdown whenever it has a log file for a running guest open, though. So the AVC being reported here is a gap in the policy.

That said, I think we could reasonably argue that virtlogd should not try to inhibit shutdown itself. libvirtd can already inhibit shutdown when QEMU is running, if required, so virtlogd is really not adding value in this respect.

So I'd suggest we can probably just remove the inhibit logic from src/logging/log_handler.c
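
For illustration only, the logind method that virtlogd ends up calling can be exercised by hand roughly like this (the arguments here are placeholders, not what virtlogd actually sends):

# busctl call org.freedesktop.login1 /org/freedesktop/login1 \
      org.freedesktop.login1.Manager Inhibit ssss shutdown virtlogd testing delay

Making the equivalent call from a virtlogd_t context is what triggers the send_msg denial shown above.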

Comment 21 Saravanan KR 2018-03-23 10:23:49 UTC
Facing this issue with the OSP13 puddle; disabling SELinux works fine. A sosreport is available in the related BZ#1549938 - https://bugzilla.redhat.com/attachment.cgi?id=1412031

Comment 22 Karen Noel 2018-03-28 19:12:24 UTC
Jarda, is a fix required in libvirt? See comment #20.

I think the osp team would like a z-stream fix. Setting LP=OpenStack and rhel-7.5.z. Thanks.

Comment 23 Jaroslav Suchanek 2018-03-29 08:51:41 UTC
The issue from comment 20 should certainly be fixed. I am not sure what the relation to the original bug description and comment 11 is, though.

As for the z-stream request, preferably the OSP team should ask for it. Maybe Martin Tessun?

Comment 24 Yanqiu Zhang 2018-03-30 06:09:10 UTC
The "avc denied" can be reproduced by rhel7.5 libvirt:
Pkg version:
# rpm -q libvirt qemu-kvm-rhev
libvirt-3.9.0-14.el7_5.2.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.1.x86_64

Steps:
# getenforce
Enforcing

# virsh start V
Domain V started

# tail -f /var/log/audit/audit.log|grep AVC
type=USER_AVC msg=audit(1522389704.412:30336): pid=830 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  denied  { send_msg } for msgtype=method_call interface=org.freedesktop.login1.Manager member=Inhibit dest=org.freedesktop.login1 spid=13121 tpid=812 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:systemd_logind_t:s0 tclass=dbus  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'


# getenforce
Permissive

# virsh start V
Domain V started

# tail -f /var/log/audit/audit.log|grep AVC
type=AVC msg=audit(1522389656.074:30299): avc:  denied  { write } for  pid=13121 comm="virtlogd" path="/run/systemd/inhibit/1857.ref" dev="tmpfs" ino=18948025 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:systemd_logind_inhibit_var_run_t:s0 tclass=fifo_file


Additional info:
No such avc info on rhel7.4
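
The switch between the two runs above is a non-persistent mode change done with setenforce (the same commands comment 44 uses later during verification):

# setenforce 1
# getenforce
Enforcing
# setenforce 0
# getenforce
Permissive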

Comment 25 Jean-Tsung Hsiao 2018-04-02 14:12:59 UTC
For OVS-2.9.0-15 fdP testing we need to set Selinux=Permissive due to this issue.

Comment 26 Martin Tessun 2018-04-20 18:29:19 UTC
Frank, I believe we need a hotfix here for 7.5.z as well? Could you comment, so we can decide on the z-stream?

Thanks!
Martin

Comment 30 Martin Kletzander 2018-04-25 13:59:19 UTC
Potential fix posted upstream:

https://www.redhat.com/archives/libvir-list/2018-April/msg02381.html

Comment 31 Martin Kletzander 2018-04-26 10:33:40 UTC
v2 posted upstream:

https://www.redhat.com/archives/libvir-list/2018-April/msg02497.html

Comment 35 Martin Kletzander 2018-05-01 13:57:20 UTC
Fixed upstream by commit v4.3.0-rc1-1-gf94e5b215720:

commit f94e5b215720c91c60219f1694783a603f0b619c
Author: Martin Kletzander <mkletzan>
Date:   Thu Apr 26 12:17:03 2018 +0200

    logging: Don't inhibit shutdown in system daemon

Comment 36 Jean-Tsung Hsiao 2018-05-10 18:40:23 UTC
Was able to "virsh start <guest>" successfully using openstack-selinux-0.8.14-5.el7ost with SELinux=Enforcing.

[root@netqe5 ~]# getenforce
Enforcing

Related selinux packages:
[root@netqe5 ~]# rpm -qa | grep selinux
openstack-selinux-0.8.14-5.el7ost.noarch
selinux-policy-targeted-3.13.1-193.el7.noarch
libselinux-python-2.5-12.el7.x86_64
selinux-policy-3.13.1-193.el7.noarch
container-selinux-2.57-1.el7.noarch
libselinux-2.5-12.el7.x86_64
libselinux-utils-2.5-12.el7.x86_64

Qemu packages:
[root@netqe5 ~]# rpm -qa | grep qemu
qemu-img-rhev-2.10.0-21.el7.x86_64
libvirt-daemon-driver-qemu-3.9.0-14.el7_5.4.x86_64
qemu-kvm-rhev-2.10.0-21.el7.x86_64
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
qemu-kvm-common-rhev-2.10.0-21.el7.x86_64

OVS and kernel:
[root@netqe5 ~]# rpm -q openvswitch
openvswitch-2.9.0-19.el7fdp.x86_64

[root@netqe5 ~]# uname -a
Linux netqe5.knqe.lab.eng.bos.redhat.com 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

Comment 38 Martin Kletzander 2018-05-11 11:52:37 UTC
(In reply to Jean-Tsung Hsiao from comment #36)
So it was changed in the selinux-policy package.

You can see for yourself that the rule appeared in the output of:

  sesearch -A -s virtlogd_t -t system_dbusd_t

I just checked on an older machine before and after an update; before the update the output was:

  # sesearch -A -s virtlogd_t -t system_dbusd_t
  Found 4 semantic av rules:
     allow daemon initrc_transition_domain : fifo_file { ioctl read write getattr lock append } ; 
     allow domain domain : key { search link } ; 
     allow daemon initrc_transition_domain : fd use ; 
     allow domain domain : fd use ; 

and after the update (policy version 3.13.1-166):

  # sesearch -A -s virtlogd_t -t system_dbusd_t
  Found 6 semantic av rules:
     allow virtlogd_t system_dbusd_t : unix_stream_socket connectto ; 
     allow domain domain : key { search link } ; 
     allow daemon initrc_transition_domain : fd use ; 
     allow virtlogd_t system_dbusd_t : dbus send_msg ; 
     allow daemon initrc_transition_domain : fifo_file { ioctl read write getattr lock append } ; 
     allow domain domain : fd use ;

Where you can clearly see that the line:

  allow virtlogd_t system_dbusd_t : dbus send_msg ; 

is what made it "work".

Having said that, it doesn't need to be there and the fix I posted upstream is probably better in the long run anyway.
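
For completeness: before the libvirt fix landed, a local policy module generated from the denial would have had the same effect as that rule. This is a hypothetical sketch only, with a made-up module name, and not the shipped openstack-selinux or selinux-policy change:

# ausearch -m USER_AVC -ts recent | audit2allow -M local_virtlogd
# semodule -i local_virtlogd.pp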

Comment 39 Daniel Berrangé 2018-05-11 11:59:22 UTC
The SElinux policy was unfortunately changed as a result of this https://bugzilla.redhat.com/show_bug.cgi?id=1481109

Comment 41 Jean-Tsung Hsiao 2018-05-11 13:05:53 UTC
(In reply to Martin Kletzander from comment #38)
> (In reply to Jean-Tsung Hsiao from comment #36)
> So it was changed in selinux-policy package.

Note that I was using the following openstack-selinux fix:
https://bugzilla.redhat.com/show_bug.cgi?id=1561728#c5

Not sure if you have a different fix inside libvirt.

Comment 44 Yanqiu Zhang 2018-05-29 15:46:43 UTC
Verified with libvirt on rhel7.6 with the following pkgs:
libvirt-4.3.0-1.el7.x86_64
qemu-kvm-rhev-2.12.0-2.el7.x86_64

No "avc: denied" info when guest start whether selinux enforcing or permissive. Only get "avc:  received setenforce notice" info when setenforce or guest first start after setenforce.

Steps:
1.# setenforce 1
# tail -f  /var/log/audit/audit.log|grep AVC
type=USER_AVC msg=audit(1527607279.544:755): pid=820 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  received setenforce notice (enforcing=1)  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'

2.# virsh start V
# tail -f  /var/log/audit/audit.log|grep AVC
type=USER_AVC msg=audit(1527607299.452:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received setenforce notice (enforcing=1)  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'

3.Do "# virsh start V" for several times again:
No new avc info in audit.log.

4.# setenforce 0
# tail -f  /var/log/audit/audit.log|grep AVC
type=USER_AVC msg=audit(1527608062.259:829): pid=820 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  received setenforce notice (enforcing=0)  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'

5.# virsh start V
# tail -f  /var/log/audit/audit.log|grep AVC
type=USER_AVC msg=audit(1527608078.410:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received setenforce notice (enforcing=0)  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'

6.Do "# virsh start V" for several times again:
No new avc info in audit.log.
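
An additional, optional way to confirm the behaviour change: after starting guests on the fixed build, the list of logind inhibitors should no longer contain an entry registered by virtlogd (this check is an illustration, not part of the original verification; libvirtd may still register its own inhibitor while a guest is running):

# systemd-inhibit --list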

Comment 46 Erik Skultety 2018-06-29 09:47:23 UTC
*** Bug 1510287 has been marked as a duplicate of this bug. ***

Comment 48 errata-xmlrpc 2018-10-30 09:52:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:3113