Bug 1717394

Summary: RFE: add cgroups v2 BPF devices support
Product: Red Hat Enterprise Linux 8
Component: libvirt
Version: 8.0
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Reporter: Pavel Hrdina <phrdina>
Assignee: Pavel Hrdina <phrdina>
QA Contact: yisun
CC: ailan, berrange, dyuan, hpopal, jdenemar, jsuchane, jwboyer, kanderso, knoel, lhuang, lmen, mrichter, mtessun, phrdina, rbalakri, wchadwic, xuzhang, yafu, yalzhang, yisun
Keywords: FutureFeature, Triaged
Target Milestone: rc
Target Release: 8.0
Hardware: Unspecified
OS: Unspecified
Type: Feature Request
Doc Type: Enhancement
Fixed In Version: libvirt-6.0.0-17.el8
Clone Of: 1689297
Clones: 1717396
Bug Depends On: 1656432, 1689297
Last Closed: 2020-11-04 02:53:02 UTC

Description Pavel Hrdina 2019-06-05 12:07:56 UTC
Description of problem:

In cgroups v2, the devices controller was dropped in favor of eBPF programs.
We need to implement support for eBPF cgroup programs in order to be able
to filter access to devices.

This is not a critical feature, as we already create a namespace for the QEMU
process to isolate it from the host.

https://www.kernel.org/doc/Documentation/cgroup-v2.txt
https://www.kernel.org/doc/Documentation/networking/filter.txt
https://cilium.readthedocs.io/en/v1.3/bpf/
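
As background for the verification below: on a cgroup v2 host there is no
devices controller, so device filtering for a machine scope shows up as a BPF
program attached to the scope's cgroup. A minimal, purely illustrative sketch
of how to spot-check that from userspace follows; it is not part of libvirt or
of this bug, and it assumes bpftool is installed and that the machine scopes
live under /sys/fs/cgroup/machine.slice.

# Sketch only: confirm the host uses cgroup v2 and list BPF programs attached
# under the machine slice; device filtering appears as a "device" attach type.
# Assumes bpftool is available; the cgroup path is an assumption as well.
import os
import subprocess
import sys

CGROUP = "/sys/fs/cgroup/machine.slice"  # adjust to the scope under test

def main() -> int:
    # cgroup.controllers exists only on a cgroup v2 (unified) mount
    if not os.path.exists("/sys/fs/cgroup/cgroup.controllers"):
        print("host is not using cgroup v2 at /sys/fs/cgroup")
        return 1
    out = subprocess.run(["bpftool", "cgroup", "tree", CGROUP],
                         capture_output=True, text=True, check=True).stdout
    print(out)
    return 0 if "device" in out else 2

if __name__ == "__main__":
    sys.exit(main())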

Comment 7 yisun 2020-06-05 10:31:44 UTC
Verified
[root@lenovo-sr630-13 ~]# rpm -qa | grep ^libvirt-6
libvirt-6.4.0-1.module+el8.3.0+6881+88468c00.x86_64

[root@lenovo-sr630-13 ~]# setenforce 0

1. Check the default libvirt setting
[root@lenovo-sr630-13 ~]# cat /etc/libvirt/qemu.conf | grep cgroup_device_acl -A10
#cgroup_device_acl = [
#    "/dev/null", "/dev/full", "/dev/zero",
#    "/dev/random", "/dev/urandom",
#    "/dev/ptmx", "/dev/kvm"
#]

[root@lenovo-sr630-13 ~]# python get_bpf_map.py /sys/fs/cgroup/machine.slice/machine-qemu\\x2d3\\x2davocado\\x2dvt\\x2dvm1.scope/
c 136:* rw
c 1:9 rw		urandom
c 1:8 rw		random
c 5:2 rw		ptmx
c 1:7 rw		full
c 10:232 rw		kvm
c 1:3 rw		null
c 1:5 rw		zero
<===== all devices from the ACL are present in the BPF map
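
The get_bpf_map.py helper itself is not attached to this bug, so its internals
are not shown here. The entries it prints can, however, be cross-checked
against the ACL directly: each path in cgroup_device_acl corresponds to one
"c MAJOR:MINOR" or "b MAJOR:MINOR" allow-list entry. A rough, hypothetical
cross-check (not the reporter's script) could look like the sketch below; note
that libvirt also allows devices beyond the explicit ACL, such as the
"c 136:* rw" pty entry above, which this sketch does not cover.

# Hypothetical cross-check, not the reporter's get_bpf_map.py:
# print the allow-list entry expected for each device in the default ACL.
import os
import stat

acl = ["/dev/null", "/dev/full", "/dev/zero",
       "/dev/random", "/dev/urandom",
       "/dev/ptmx", "/dev/kvm"]

for path in acl:
    st = os.stat(path)
    kind = "b" if stat.S_ISBLK(st.st_mode) else "c"           # block vs. character device
    major, minor = os.major(st.st_rdev), os.minor(st.st_rdev)
    print(f"{kind} {major}:{minor} rw\t\t{os.path.basename(path)}")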


2. Remove /dev/zero from qemu.conf
[root@lenovo-sr630-13 ~]# cat /etc/libvirt/qemu.conf | grep cgroup_device_acl -A10
cgroup_device_acl = [
    "/dev/null", "/dev/full",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm"
]

[root@lenovo-sr630-13 ~]# virsh destroy avocado-vt-vm1
Domain avocado-vt-vm1 destroyed

[root@lenovo-sr630-13 ~]# virsh start avocado-vt-vm1
Domain avocado-vt-vm1 started

[root@lenovo-sr630-13 ~]# python get_bpf_map.py /sys/fs/cgroup/machine.slice/machine-qemu\\x2d4\\x2davocado\\x2dvt\\x2dvm1.scope/
c 1:9 rw		urandom
c 1:8 rw		random
c 5:2 rw		ptmx
c 1:7 rw		full
c 10:232 rw		kvm
c 136:* rw
c 1:3 rw		null
<==== zero gone as expected

3. Test hot plug and unplug of a new device on the vm
Using a local block device
[root@lenovo-sr630-13 ~]# ll /dev/sdb
brw-rw----. 1 root disk 8, 16 Jun  5 06:26 /dev/sdb

[root@lenovo-sr630-13 ~]# python get_bpf_map.py /sys/fs/cgroup/machine.slice/machine-qemu\\x2d4\\x2davocado\\x2dvt\\x2dvm1.scope/
c 1:9 rw		urandom
c 1:8 rw		random
c 5:2 rw		ptmx
c 1:7 rw		full
c 10:232 rw		kvm
c 136:* rw
c 1:3 rw		null

[root@lenovo-sr630-13 ~]# virsh attach-disk avocado-vt-vm1 /dev/sdb vdb
Disk attached successfully

[root@lenovo-sr630-13 ~]# python get_bpf_map.py /sys/fs/cgroup/machine.slice/machine-qemu\\x2d4\\x2davocado\\x2dvt\\x2dvm1.scope/
c 1:9 rw		urandom
c 1:8 rw		random
c 5:2 rw		ptmx
c 1:7 rw		full
c 10:232 rw		kvm
c 136:* rw
b 8:16 rw		sdb
<======= added as expected
c 1:3 rw		null

[root@lenovo-sr630-13 ~]# virsh detach-disk avocado-vt-vm1 vdb
Disk detached successfully

[root@lenovo-sr630-13 ~]# python get_bpf_map.py /sys/fs/cgroup/machine.slice/machine-qemu\\x2d4\\x2davocado\\x2dvt\\x2dvm1.scope/
c 1:9 rw		urandom
c 1:8 rw		random
c 5:2 rw		ptmx
c 1:7 rw		full
c 10:232 rw		kvm
c 136:* rw
None 8:16
<======= stale entry left after detach; still hits bz1810356
c 1:3 rw		null
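
The step 3 check can also be scripted. The sketch below is hypothetical
automation of the commands above: it assumes the reporter's get_bpf_map.py
helper (not included in this report) sits in the current directory, reuses the
domain, disk, and scope path from the transcript, and only asserts the attach
side, since detaching still leaves the stale entry tracked by bz1810356.

# Hypothetical automation of step 3; get_bpf_map.py is the reporter's helper
# and is not part of this report. Domain, disk, and scope path are taken from
# the transcript above (the numeric index in the scope name changes per run).
import os
import subprocess

DOMAIN = "avocado-vt-vm1"
DISK = "/dev/sdb"
SCOPE = "/sys/fs/cgroup/machine.slice/machine-qemu\\x2d4\\x2davocado\\x2dvt\\x2dvm1.scope/"

st = os.stat(DISK)
entry = f"b {os.major(st.st_rdev)}:{os.minor(st.st_rdev)}"    # e.g. "b 8:16"

subprocess.run(["virsh", "attach-disk", DOMAIN, DISK, "vdb"], check=True)
dump = subprocess.run(["python", "get_bpf_map.py", SCOPE],
                      capture_output=True, text=True, check=True).stdout
assert entry in dump, f"{entry} missing from BPF map after attach"

subprocess.run(["virsh", "detach-disk", DOMAIN, "vdb"], check=True)
# No assertion on removal: the dump above still shows a leftover "None 8:16"
# entry after detach (bz1810356).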

Comment 10 errata-xmlrpc 2020-11-04 02:53:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: virt:rhel and virt-devel:rhel
security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:4676