Bug 1720213 - Couldn't boot L2 guest on L1 when L1 is booted with -cpu $model_name,vmx
Summary: Couldn't boot L2 guest on L1 when L1 is booted with -cpu $model_name,vmx
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Paolo Bonzini
QA Contact: Qinghua Cheng
URL:
Whiteboard:
Depends On: 1559846 1791648 1794843
Blocks: 1771318
 
Reported: 2019-06-13 12:09 UTC by Li Xiaohui
Modified: 2021-01-08 16:53 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-08 16:53:03 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Li Xiaohui 2019-06-13 12:09:04 UTC
Description of problem:
QEMU reports "Could not access KVM kernel module" and "qemu-kvm: failed to initialize KVM" when booting the L2 guest. This only occurs when the L1 guest is booted with -cpu $model_name,vmx.


Version-Release number of selected component (if applicable):
host info:
(1) CPU info:
[root@dhcp-12-148 qemu-sh]# virsh capabilities | grep model
      <model>IvyBridge-IBRS</model>
    <secmodel>
      <model>selinux</model>
    </secmodel>
    <secmodel>
      <model>dac</model>
    </secmodel>
(2) Version info:
kernel-4.18.0-103.el8.x86_64 & qemu-img-4.0.0-4.module+el8.1.0+3356+cda7f1ee.x86_64

guest info:
kernel-4.18.0-100.el8.x86_64 & qemu-img-4.0.0-4.module+el8.1.0+3356+cda7f1ee.x86_64


How reproducible:
5/5


Steps to Reproduce:
1. Enable nested virtualization on the L0 host and confirm it is enabled:
[root@dhcp-12-148 ~]# cat /sys/module/kvm_intel/parameters/nested
Y
2. Boot the L1 guest on L0 with the following command:
/usr/libexec/qemu-kvm \
-enable-kvm \
-machine q35  \
-m 4096 \
-smp 2 \
-cpu "Nehalem",vmx \
-device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
-device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
-device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
-blockdev node-name=back_image,driver=file,cache.direct=on,cache.no-flush=off,filename=/mnt/nfs/rhel8-1-0-blk.qcow2,aio=threads \
-blockdev node-name=drive-virtio-disk0,driver=qcow2,cache.direct=on,cache.no-flush=off,file=back_image \
-device virtio-blk-pci,drive=drive-virtio-disk0,id=disk0,bus=pcie.0-root-port-2 \
-netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown,queues=4 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1b:21:7a:76:1c,bus=pcie.0-root-port-3,vectors=10,mq=on \
-vnc :0 \
-device VGA \
-monitor stdio \
-qmp tcp:0:1234,server,nowait
3. In the L1 guest, install qemu-kvm and configure the bridge network. (A sanity check for VMX visibility in L1 is sketched after step 4.)
4. Boot the L2 guest in L1 with the following command:
/usr/libexec/qemu-kvm \
-enable-kvm \
-machine q35  \
-m 4096 \
-smp 2 \
-cpu "Nehalem",vmx \
-device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
-device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
-device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
-blockdev node-name=back_image,driver=file,cache.direct=on,cache.no-flush=off,filename=rhel8-1-0-blk.qcow2,aio=threads \
-blockdev node-name=drive-virtio-disk0,driver=qcow2,cache.direct=on,cache.no-flush=off,file=back_image \
-device virtio-blk-pci,drive=drive-virtio-disk0,id=disk0,bus=pcie.0-root-port-2 \
-netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown,queues=4 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1b:21:7a:76:e1,bus=pcie.0-root-port-3,vectors=10,mq=on \
-vnc :0 \
-device VGA \
-monitor stdio \
-qmp tcp:0:1234,server,nowait
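
As noted in step 3, it is worth confirming that the vmx flag actually reached L1 before attempting step 4. A minimal check using standard Linux/KVM interfaces (a hedged sketch; these commands are not part of the original report):

# Inside the L1 guest:
grep -wo vmx /proc/cpuinfo | sort -u   # should print "vmx" if the flag was passed through
# On the L0 host, if step 1 had shown "N", nested support could be enabled persistently with:
# echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
# modprobe -r kvm_intel && modprobe kvm_intel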


Actual results:
When executing step 4, QEMU fails to start and reports:
Could not access KVM kernel module: No such file or directory
qemu-kvm: failed to initialize KVM: No such file or directory

Also, the kvm_intel module failed to load in L1; lsmod shows only kvm and irqbypass:
[root@dhcp-12-206 work]# lsmod | grep kvm
kvm                   749568  0
irqbypass              16384  1 kvm
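
To pin down why kvm_intel is absent, it can be loaded by hand and the kernel log inspected (a sketch; the exact dmesg wording depends on the kernel, so no output is reproduced here):

# Inside the L1 guest:
modprobe kvm_intel                  # expected to fail if the vCPU lacks usable VMX
dmesg | grep -iE 'kvm|vmx' | tail   # shows why kvm_intel refused to load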


Expected results:
The L2 guest should start successfully in L1.


Additional info:
1. When L1 is booted with "-cpu host", the L2 guest starts successfully in L1 and the kvm modules load without error.
2. On the latest RHEL 7.7 host with the same RHEL 8.1 guest, the L2 guest also started successfully in L1.
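
One way to compare the two cases is to expand the CPU model over the QMP socket that the step 2 command already opens on port 1234 (a sketch; it assumes the query-cpu-model-expansion QMP command, which qemu-kvm 4.0 provides for x86):

# On the L0 host, query what "Nehalem" plus vmx expands to:
( echo '{"execute": "qmp_capabilities"}'; sleep 1; \
  echo '{"execute": "query-cpu-model-expansion", "arguments": {"type": "full", "model": {"name": "Nehalem", "props": {"vmx": true}}}}'; sleep 1 ) \
  | nc localhost 1234

The full expansion lists every property of the resulting model; once VMX features are limited per CPU model (bug 1559846), they would show up here as well.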

Comment 2 Paolo Bonzini 2019-06-21 12:51:10 UTC
Since it is limited to named CPU models, I think we can delay it to 8.2 and fix it together with bug 1559846 ("Nested KVM: limit VMX features according to CPU models") and bug 1689270 (https://bugzilla.redhat.com/show_bug.cgi?id=1689270). Since it works with "-cpu host", it should not be a kernel issue.

Comment 3 Paolo Bonzini 2019-06-21 12:55:32 UTC
Also, since we are not limiting VMX features in named CPU models, live migration would _not_ work across hosts with different CPUs. Therefore, now that live migration is going to be supported for nested virtualization, it's arguably better if "-cpu foo,vmx" is broken.
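
One way to see which VMX controls L1 actually receives (and therefore what live migration would have to keep identical across hosts) is to dump the VMX capability MSRs inside L1 (a sketch; it assumes the msr-tools package, which the report does not mention):

# Inside the L1 guest:
modprobe msr    # exposes /dev/cpu/*/msr
rdmsr 0x480     # IA32_VMX_BASIC
rdmsr 0x48b     # IA32_VMX_PROCBASED_CTLS2
# With unfiltered nesting these reflect the L0 host's controls, not the named model's.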

Comment 4 Rick Barry 2019-06-21 16:10:40 UTC
(In reply to Paolo Bonzini from comment #2)
> Since it is limited to named CPU models, I think we can delay it to 8.2
> and fix it together with bug 1559846 ("Nested KVM: limit VMX features
> according to CPU models") and bug 1689270
> (https://bugzilla.redhat.com/show_bug.cgi?id=1689270). Since it works
> with "-cpu host", it should not be a kernel issue.

Thanks. I've moved this to 8.2 and added dependencies on 1559846/1689270.

Comment 5 Ademar Reis 2020-02-05 22:59:19 UTC
QEMU has recently been split into sub-components, and as a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review the sub-component and change it if necessary the next time you review this BZ. Thanks.

Comment 9 Qinghua Cheng 2020-03-05 08:54:30 UTC
Verified on RHEL 8.2:

Kernel: 4.18.0-180.el8.x86_64
qemu-kvm: qemu-kvm-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64

The bug no longer reproduces.

Comment 11 Jeff Nelson 2021-01-08 16:53:03 UTC
Closing this TestOnly BZ as CLOSED CURRENTRELEASE. Please reopen if the issue is not resolved.

