Bug 1833321
Summary: | [cgroup_v2] failed to count cgroup BPF map items: No such file or directory | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | Jing Qi <jinqi> |
Component: | libvirt | Assignee: | Pavel Hrdina <phrdina> |
Status: | CLOSED ERRATA | QA Contact: | yisun |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 8.2 | CC: | dyuan, jdenemar, jsuchane, lmen, phrdina, virt-maint, xuzhang, yalzhang, yisun |
Target Milestone: | rc | ||
Target Release: | 8.3 | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | libvirt-6.6.0-4.el8 | Doc Type: | If docs needed, set a value |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2020-11-17 17:48:34 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Jing Qi
2020-05-08 12:31:01 UTC
Part of the libvirt.log:

2020-05-08 12:38:03.034+0000: 553687: info : virObjectRef:386 : OBJECT_REF: obj=0x7fe6741bf420
2020-05-08 12:38:03.034+0000: 553687: info : virObjectUnref:348 : OBJECT_UNREF: obj=0x7fe660003040
2020-05-08 12:38:03.034+0000: 553687: info : virEventPollUpdateHandle:147 : EVENT_POLL_UPDATE_HANDLE: watch=11 events=13
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollInterruptLocked:723 : Skip interrupt, 1 140629459041280
2020-05-08 12:38:03.034+0000: 553687: info : virObjectUnref:348 : OBJECT_UNREF: obj=0x7fe660003040
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollDispatchHandles:487 : i=11 w=12
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollCleanupTimeouts:520 : Cleanup 2
2020-05-08 12:38:03.034+0000: 553687: info : virEventPollCleanupTimeouts:533 : EVENT_POLL_PURGE_TIMEOUT: timer=4
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollCleanupHandles:569 : Cleanup 12
2020-05-08 12:38:03.034+0000: 553687: debug : virEventRunDefaultImpl:350 : running default event implementation
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollCleanupTimeouts:520 : Cleanup 1
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollCleanupHandles:569 : Cleanup 12
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=0 w=1, f=8 e=1 d=0
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=1 w=2, f=10 e=1 d=0
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=2 w=3, f=5 e=1 d=0
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=3 w=4, f=3 e=1 d=0
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=4 w=5, f=4 e=1 d=0
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=5 w=6, f=13 e=1 d=0
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=6 w=7, f=14 e=1 d=0
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=7 w=8, f=18 e=0 d=0
2020-05-08 12:38:03.034+0000: 553786: debug : virThreadJobSetWorker:75 : Thread 553786 is running worker qemuProcessEventHandler
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=8 w=9, f=18 e=1 d=0
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=9 w=10, f=28 e=1 d=0
2020-05-08 12:38:03.034+0000: 553786: debug : qemuProcessEventHandler:4866 : vm=0x7fe6741bf420, event=2
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=10 w=11, f=30 e=25 d=0
2020-05-08 12:38:03.034+0000: 553786: info : virObjectRef:386 : OBJECT_REF: obj=0x7fe674145560
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollMakePollFDs:396 : Prepare n=11 w=12, f=22 e=25 d=0
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollCalculateTimeout:333 : Calculate expiry of 1 timers
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollCalculateTimeout:341 : Got a timeout scheduled for 1588941601559
2020-05-08 12:38:03.034+0000: 553786: debug : processDeviceDeletedEvent:4283 : Removing device hostdev0 from domain 0x7fe6741bf420 avocado-vt-vm1
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollCalculateTimeout:354 : Schedule timeout then=1588941601559 now=1588941483034
2020-05-08 12:38:03.034+0000: 553786: info : virObjectRef:386 : OBJECT_REF: obj=0x7fe674145560
2020-05-08 12:38:03.034+0000: 553687: debug : virEventPollCalculateTimeout:364 : Timeout at 1588941601559 due in 118525 ms
2020-05-08 12:38:03.034+0000: 553786: debug : qemuDomainObjBeginJobInternal:9798 : Starting job: job=modify agentJob=none asyncJob=none (vm=0x7fe6741bf420 name=avocado-vt-vm1, current job=none agentJob=none async=none)
2020-05-08 12:38:03.034+0000: 553687: info : virEventPollRunOnce:635 : EVENT_POLL_RUN: nhandles=11 timeout=118525
2020-05-08 12:38:03.034+0000: 553786: debug : qemuDomainObjBeginJobInternal:9847 : Started job: modify (async=none vm=0x7fe6741bf420 name=avocado-vt-vm1)
2020-05-08 12:38:03.034+0000: 553786: info : virObjectUnref:348 : OBJECT_UNREF: obj=0x7fe674145560
2020-05-08 12:38:03.034+0000: 553786: debug : qemuDomainRemoveHostDevice:4426 : Removing host device hostdev0 from domain 0x7fe6741bf420 avocado-vt-vm1
2020-05-08 12:38:03.035+0000: 553786: debug : virFileClose:110 : Closed fd 34
2020-05-08 12:38:03.035+0000: 553786: debug : virFileClose:110 : Closed fd 34
2020-05-08 12:38:03.035+0000: 553786: debug : virPCIDeviceNew:1418 : 8086 10ed 0000:05:10.1: initialized
2020-05-08 12:38:03.035+0000: 553786: debug : virPCIDeviceFree:1449 : 8086 10ed 0000:05:10.1: freeing
2020-05-08 12:38:03.035+0000: 553786: debug : qemuTeardownHostdevCgroup:479 : Cgroup deny /dev/vfio/58
2020-05-08 12:38:03.035+0000: 553786: error : virCgroupV2DevicesDetectProg:423 : failed to count cgroup BPF map items: No such file or directory
2020-05-08 12:38:03.035+0000: 553786: debug : virFileClose:110 : Closed fd 34
2020-05-08 12:38:03.035+0000: 553786: warning : qemuDomainRemoveHostDevice:4480 : Failed to remove host device cgroup ACL
2020-05-08 12:38:03.035+0000: 553786: debug : virFileClose:110 : Closed fd 34
2020-05-08 12:38:03.035+0000: 553786: debug : virFileClose:110 : Closed fd 34

Pavel, is there something libvirt should fix, or is it likely a kernel problem? Thanks.

I'll have to investigate whether it's a libvirt or kernel issue.

So the issue is in libvirt, in the code that loads the BPF map of a running QEMU process after the daemon was restarted. I'll post a patch upstream and back-port it to downstream.

Upstream patch posted:
https://www.redhat.com/archives/libvir-list/2020-August/msg00429.html

Upstream commit:

commit 7e574d1a079bd13aeeedb7024cc45f85b1843fcc
Author: Pavel Hrdina <phrdina>
Date:   Tue Aug 11 11:07:06 2020 +0200

    vircgroupv2devices: fix counting entries in BPF map

Tested with: libvirt-6.6.0-4.module+el8.3.0+7883+3d717aa8.x86_64
Result: PASS
1. Prepare a SCSI device on the host:

[root@dell-per740xd-11 ~]# lsscsi
[0:2:0:0]    disk    DELL     PERC H730P Adp   4.30  /dev/sda
[17:0:0:0]   disk    LIO-ORG  device.logical-  4.0   /dev/sdb

2. Prepare a device XML to be attached:

[root@dell-per740xd-11 ~]# cat scsi.xml
<hostdev mode='subsystem' type='scsi'>
  <source>
    <adapter name='scsi_host17'/>
    <address bus='0' target='0' unit='0'/>
  </source>
</hostdev>

3. Start the VM:

[root@dell-per740xd-11 ~]# virsh start vm1
Domain vm1 started

4. Restart libvirtd:

[root@dell-per740xd-11 ~]# systemctl restart libvirtd

5. Attach the device:

[root@dell-per740xd-11 ~]# virsh attach-device vm1 scsi.xml
Device attached successfully

6. Check that the device was actually attached inside the VM:

localhost login: root
Password:
Last login: Thu Sep 3 16:47:54 on ttyS0
[root@localhost ~]# lsscsi
[root@localhost ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
...
vdb    252:16   0  100M  0 disk

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (virt:8.3 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5137