Bug 1809620

| Field | Value |
|---|---|
| Summary | VM loaded to 100% CPU is shown in engine with 0% CPU |
| Product | [oVirt] ovirt-engine |
| Component | BLL.Virt |
| Version | 4.4.0 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED CURRENTRELEASE |
| Severity | medium |
| Priority | unspecified |
| Reporter | Polina <pagranat> |
| Assignee | Milan Zamazal <mzamazal> |
| QA Contact | Polina <pagranat> |
| CC | bugs, michal.skrivanek, rbarry |
| Target Milestone | ovirt-4.4.1 |
| Flags | pm-rhel: ovirt-4.4+ |
| Fixed In Version | rhv-4.4.1-3 |
| Doc Type | If docs needed, set a value |
| Last Closed | 2020-07-08 08:27:28 UTC |
| Type | Bug |
| oVirt Team | Virt |
| Bug Depends On | 1808940 |
Description

Polina, 2020-03-03 14:35:54 UTC

Created attachment 1667210 [details]
virsh-client VM getStats vmID=""

How long after the guest is booted?

I wait about 3 minutes until the VM has its IP address, then start loading the CPU.

I can't reproduce the bug. What are your libvirt and QEMU versions? And could you please provide the Vdsm debug log?

Created attachment 1669132 [details]
logs
```
[root@lynx01 qemu]# rpm -qa | grep libvirt
libvirt-admin-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-config-nwfilter-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-storage-rbd-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-bash-completion-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-lock-sanlock-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-libs-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-storage-gluster-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-nodedev-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-network-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-storage-disk-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-storage-logical-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-storage-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-client-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-storage-core-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-config-network-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-storage-iscsi-direct-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-storage-scsi-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-secret-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-interface-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-kvm-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-storage-iscsi-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-qemu-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
python3-libvirt-6.0.0-1.module+el8.2.0+5453+31b2b136.x86_64
libvirt-daemon-driver-nwfilter-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
libvirt-daemon-driver-storage-mpath-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
```

```
[root@lynx01 qemu]# rpm -qa | grep qemu
qemu-kvm-block-curl-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
qemu-kvm-core-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
qemu-kvm-block-iscsi-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
qemu-img-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
qemu-kvm-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
qemu-kvm-block-rbd-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
ipxe-roms-qemu-20181214-5.git133f4c47.el8.noarch
qemu-kvm-common-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
qemu-kvm-block-ssh-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
libvirt-daemon-driver-qemu-6.0.0-7.module+el8.2.0+5869+c23fe68b.x86_64
qemu-kvm-block-gluster-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
```
The relevant timestamp in engine.log:

```
2020-03-11 01:42:54,229+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-89) [] VM 'f1e9524e-53a3-4c75-ba74-0bdc508f2c38'(golden_env_mixed_virtio_1_0) moved from 'PoweringUp' --> 'Up'
```
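For background on why a stats problem surfaces as 0% rather than as an error: the engine derives the CPU percentage from the delta of the cumulative CPU time reported between two monitoring samples, so a counter that cannot be read (and thus never advances) yields a delta of zero. A minimal illustrative sketch, not Vdsm's actual code; the function name and sample numbers are hypothetical:

```python
def cpu_percent(prev_ns: int, cur_ns: int, interval_s: float, ncpus: int) -> float:
    """Percent CPU load over a sampling interval, from cumulative
    CPU-time counters (nanoseconds), normalized by vCPU count."""
    delta_ns = max(cur_ns - prev_ns, 0)
    return min(100.0, delta_ns / (interval_s * 1e9 * ncpus) * 100.0)

# A VM fully loading 2 vCPUs over a 15 s window: 30 s of CPU time consumed.
print(cpu_percent(prev_ns=0, cur_ns=30_000_000_000, interval_s=15, ncpus=2))  # 100.0

# Stats unreadable -> the counter appears stuck -> delta 0 -> reported 0%.
print(cpu_percent(prev_ns=30_000_000_000, cur_ns=30_000_000_000, interval_s=15, ncpus=2))  # 0.0
```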
We've found out with Polina that it's apparently this platform bug: https://bugzilla.redhat.com/1808940, which is fixed in systemd-239-28. Because of that bug, libvirt couldn't access some CPU stats in cgroups. After upgrading to the given systemd version (and to current versions of everything else), CPU usage is shown as expected.

Fixed in systemd-239-28.el8; systemd-239-29.el8 is now available in RHEL 8.2.

Verified on ovirt-engine-4.4.1.2-0.10.el8ev.noarch, libvirt-6.0.0-24.module+el8.2.1+6997+c666f621.x86_64, qemu-kvm-core-4.2.0-25.module+el8.2.1+6985+9fd9d514.x86_64.

This bugzilla is included in the oVirt 4.4.1 release, published on July 8th 2020. Since the problem described in this bug report should be resolved in the oVirt 4.4.1 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
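A quick way to confirm whether a host carries the systemd fix is to compare the installed version-release against 239-28. A small sketch under stated assumptions: the helper name and the simplified parsing below are illustrative, not a real tool (a production check should use RPM's own version comparison):

```python
def has_cgroup_fix(systemd_vr: str, fixed=(239, 28)) -> bool:
    """Return True if an RPM version-release string like '239-28.el8'
    is at least the systemd build carrying the cgroup-access fix."""
    version, _, release = systemd_vr.partition("-")
    release_num = int(release.split(".")[0])  # keep only the leading integer
    return (int(version), release_num) >= fixed

# Input could come from: rpm -q --qf '%{VERSION}-%{RELEASE}\n' systemd
print(has_cgroup_fix("239-27.el8"))  # False: still affected by BZ 1808940
print(has_cgroup_fix("239-29.el8"))  # True: fix included
```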