Bug 1669102 - can't start VM qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
Summary: can't start VM qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
Keywords:
Status: CLOSED DUPLICATE of bug 1644693
Alias: None
Product: vdsm
Classification: oVirt
Component: Core
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.3.0
Target Release: ---
Assignee: Ryan Barry
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-24 10:58 UTC by Sandro Bonazzola
Modified: 2019-01-24 12:18 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-24 12:18:29 UTC
oVirt Team: Virt
Embargoed:
rule-engine: ovirt-4.3+
rule-engine: blocker+



Description Sandro Bonazzola 2019-01-24 10:58:31 UTC
Installed the latest 4.3.0 pre-release: hosted engine on 2 nodes with NFS storage.
The hosted engine is up and running, but creating a new VM and launching it fails, with the VDSM log showing:

2019-01-24 11:52:36,803+0100 ERROR (vm/efe51494) [virt.vm] (vmId='efe51494-6f18-4864-b9d4-3e120b43d566') The vm start process failed (vm:937)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2845, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1110, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: internal error: qemu unexpectedly closed the monitor: 2019-01-24T10:52:36.491043Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future

The VM XML is:
2019-01-24 11:52:35,960+0100 INFO  (vm/efe51494) [virt.vm] (vmId='efe51494-6f18-4864-b9d4-3e120b43d566') <?xml version="1.0" encoding="utf-8"?><domain type="kvm" xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <name>ReactOS</name>
    <uuid>efe51494-6f18-4864-b9d4-3e120b43d566</uuid>
    <memory>524288</memory>
    <currentMemory>524288</currentMemory>
    <maxMemory slots="16">2097152</maxMemory>
    <vcpu current="1">16</vcpu>
    <sysinfo type="smbios">
        <system>
            <entry name="manufacturer">oVirt</entry>
            <entry name="product">oVirt Node</entry>
            <entry name="version">7-6.1810.2.el7.centos</entry>
            <entry name="serial">4c4c4544-0059-4310-8035-c4c04f595831</entry>
            <entry name="uuid">efe51494-6f18-4864-b9d4-3e120b43d566</entry>
        </system>
    </sysinfo>
    <clock adjustment="0" offset="variable">
        <timer name="hypervclock" present="yes"/>
        <timer name="rtc" tickpolicy="catchup"/>
        <timer name="pit" tickpolicy="delay"/>
        <timer name="hpet" present="no"/>
    </clock>
    <features>
        <acpi/>
        <hyperv>
            <relaxed state="on"/>
            <vapic state="on"/>
            <spinlocks retries="8191" state="on"/>
            <synic state="on"/>
            <stimer state="on"/>
        </hyperv>
    </features>
    <cpu match="exact">
        <model>SandyBridge</model>
        <feature name="pcid" policy="require"/>
        <feature name="spec-ctrl" policy="require"/>
        <feature name="ssbd" policy="require"/>
        <topology cores="1" sockets="16" threads="1"/>
        <numa>
            <cell cpus="0" id="0" memory="524288"/>
        </numa>
    </cpu>
    <cputune/>
    <devices>
        <input bus="ps2" type="mouse"/>
        <channel type="unix">
            <target name="ovirt-guest-agent.0" type="virtio"/>
            <source mode="bind" path="/var/lib/libvirt/qemu/channels/efe51494-6f18-4864-b9d4-3e120b43d566.ovirt-guest-agent.0"/>
        </channel>
        <channel type="unix">
            <target name="org.qemu.guest_agent.0" type="virtio"/>
            <source mode="bind" path="/var/lib/libvirt/qemu/channels/efe51494-6f18-4864-b9d4-3e120b43d566.org.qemu.guest_agent.0"/>
        </channel>

        <controller index="0" model="piix3-uhci" type="usb"/>
        <video>
            <model heads="1" ram="65536" type="qxl" vgamem="16384" vram="8192"/>
            <alias name="ua-d567d27d-0faf-406f-b7f4-4495f27b4659"/>
        </video>
        <rng model="virtio">
            <backend model="random">/dev/urandom</backend>
            <alias name="ua-f1da1bc8-6e2a-4c04-aa49-62a0d04d9a05"/>
        </rng>
        <graphics autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
            <channel mode="secure" name="main"/>
            <channel mode="secure" name="inputs"/>
            <channel mode="secure" name="cursor"/>
            <channel mode="secure" name="playback"/>
            <channel mode="secure" name="record"/>
            <channel mode="secure" name="display"/>
            <channel mode="secure" name="smartcard"/>
            <channel mode="secure" name="usbredir"/>
            <listen network="vdsm-ovirtmgmt" type="network"/>
        </graphics>
        <memballoon model="none"/>
        <channel type="spicevmc">
            <target name="com.redhat.spice.0" type="virtio"/>
        </channel>
        <disk device="cdrom" snapshot="no" type="file">
            <driver error_policy="report" name="qemu" type="raw"/>
            <source file="/rhev/data-center/043eb642-1fb0-11e9-a500-00163e3e843d/3f3ab345-0539-44df-9bc5-402435ed819f/images/163f110e-dd65-46e5-a84d-ef8687fc4a0e/52891de2-4282-4df9-a414-80350166d1ad" startupPoli
cy="optional">
                <seclabel model="dac" relabel="no" type="none"/>
            </source>
            <target bus="ide" dev="hdc"/>
            <readonly/>
            <alias name="ua-99878028-dd1a-4324-8b7e-3d51560ede51"/>
            <boot order="1"/>
        </disk>
        <disk device="disk" snapshot="no" type="file">
            <target bus="virtio" dev="vda"/>
            <source file="/rhev/data-center/mnt/minidell.home:_home_data/3f3ab345-0539-44df-9bc5-402435ed819f/images/9570d1a5-9587-4425-940a-6d9afa6136ea/a53ec4a6-b8ce-42a1-96d3-cdad72090d13">
                <seclabel model="dac" relabel="no" type="none"/>
            </source>
            <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
            <alias name="ua-9570d1a5-9587-4425-940a-6d9afa6136ea"/>
            <boot order="2"/>
            <serial>9570d1a5-9587-4425-940a-6d9afa6136ea</serial>
        </disk>
        <interface type="bridge">
            <model type="virtio"/>
            <link state="up"/>
            <source bridge="ovirtmgmt"/>
            <alias name="ua-89225df7-c084-46bc-bddd-124fcf20b452"/>
            <boot order="3"/>
            <mac address="56:6f:ce:93:00:00"/>
            <mtu size="1500"/>
            <filterref filter="vdsm-no-mac-spoofing"/>
            <bandwidth/>
        </interface>
    </devices>
    <pm>
        <suspend-to-disk enabled="no"/>
        <suspend-to-mem enabled="no"/>
    </pm>
    <os>
        <type arch="x86_64" machine="pc-i440fx-rhel7.6.0">hvm</type>
        <smbios mode="sysinfo"/>
    </os>
    <metadata>
        <ns0:qos/>
        <ovirt-vm:vm>
            <ovirt-vm:minGuaranteedMemoryMb type="int">512</ovirt-vm:minGuaranteedMemoryMb>
            <ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion>
            <ovirt-vm:custom/>
            <ovirt-vm:device mac_address="56:6f:ce:93:00:00">
                <ovirt-vm:custom/>
            </ovirt-vm:device>
            <ovirt-vm:device devtype="disk" name="vda">
                <ovirt-vm:poolID>043eb642-1fb0-11e9-a500-00163e3e843d</ovirt-vm:poolID>
                <ovirt-vm:volumeID>a53ec4a6-b8ce-42a1-96d3-cdad72090d13</ovirt-vm:volumeID>
                <ovirt-vm:imageID>9570d1a5-9587-4425-940a-6d9afa6136ea</ovirt-vm:imageID>
                <ovirt-vm:domainID>3f3ab345-0539-44df-9bc5-402435ed819f</ovirt-vm:domainID>
            </ovirt-vm:device>
            <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
            <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
        </ovirt-vm:vm>
    </metadata>
</domain> (vm:2840)
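
Note the mismatch in the XML above: <vcpu current="1">16</vcpu> allows up to 16 vCPUs, but the single <numa> cell maps only CPU 0 (cpus="0"). That partial mapping is exactly what the qemu warning complains about. A minimal sketch of how qemu 2.12 surfaces it from the command line (illustration only, assuming the stock /usr/libexec/qemu-kvm path on EL7; this is not the exact invocation vdsm generated):

/usr/libexec/qemu-kvm \
    -machine pc-i440fx-rhel7.6.0 \
    -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 \
    -numa node,nodeid=0,cpus=0,mem=512 \
    -display none -no-user-config -nodefaults
# expected on this build:
# qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config,
# ability to start up with partial NUMA mappings is obsoleted and will be removed in future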

Comment 1 Sandro Bonazzola 2019-01-24 11:00:48 UTC
# rpm -qa |egrep "(vdsm|libvirt|qemu)"|sort
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
libvirt-4.5.0-10.el7_6.3.x86_64
libvirt-bash-completion-4.5.0-10.el7_6.3.x86_64
libvirt-client-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-config-network-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-config-nwfilter-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-interface-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-lxc-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-network-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-nodedev-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-nwfilter-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-qemu-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-secret-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-storage-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-storage-core-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-storage-disk-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-storage-iscsi-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-storage-logical-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-storage-mpath-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-storage-rbd-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-driver-storage-scsi-4.5.0-10.el7_6.3.x86_64
libvirt-daemon-kvm-4.5.0-10.el7_6.3.x86_64
libvirt-libs-4.5.0-10.el7_6.3.x86_64
libvirt-lock-sanlock-4.5.0-10.el7_6.3.x86_64
libvirt-python-4.5.0-1.el7.x86_64
qemu-img-ev-2.12.0-18.el7_6.1.1.x86_64
qemu-kvm-common-ev-2.12.0-18.el7_6.1.1.x86_64
qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
vdsm-4.30.8-1.el7.x86_64
vdsm-api-4.30.8-1.el7.noarch
vdsm-client-4.30.8-1.el7.noarch
vdsm-common-4.30.8-1.el7.noarch
vdsm-hook-ethtool-options-4.30.8-1.el7.noarch
vdsm-hook-fcoe-4.30.8-1.el7.noarch
vdsm-hook-openstacknet-4.30.8-1.el7.noarch
vdsm-hook-vhostmd-4.30.8-1.el7.noarch
vdsm-hook-vmfex-dev-4.30.8-1.el7.noarch
vdsm-http-4.30.8-1.el7.noarch
vdsm-jsonrpc-4.30.8-1.el7.noarch
vdsm-network-4.30.8-1.el7.x86_64
vdsm-python-4.30.8-1.el7.noarch
vdsm-yajsonrpc-4.30.8-1.el7.noarch

Comment 2 Sandro Bonazzola 2019-01-24 11:25:13 UTC
Looks like setting "Operating System" to "Red Hat Enterprise Linux 7.x x64" allows the machine to start, but setting it to "Windows XP", "Windows 7", or "Windows 10" fails with the above error.

Comment 3 Milan Zamazal 2019-01-24 11:46:29 UTC
I can reproduce the error (on a setup that is not fully up to date). The NUMA warning is harmless; the real error is this:

qemu-kvm: can't apply global SandyBridge-x86_64-cpu.hv-synic=on: Property '.hv-synic' not found
shutting down, reason=failed
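
For context: the Windows OS types enable the Hyper-V enlightenments seen in the XML above (<synic state="on"/> among them), while the RHEL type does not, which would explain comment 2. A quick way to probe whether the installed qemu build knows the property (a hedged sketch, assuming the EL7 binary path and the property spelling taken from the error message):

/usr/libexec/qemu-kvm -machine pc -cpu SandyBridge,hv-synic=on -display none -nodefaults
# on an affected build this should fail with something like:
# qemu-kvm: can't apply global SandyBridge-x86_64-cpu.hv-synic=on: Property '.hv-synic' not found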

Comment 4 Milan Zamazal 2019-01-24 12:02:09 UTC
Isn't it related to bug 1644693?

Comment 5 Sandro Bonazzola 2019-01-24 12:18:29 UTC
Closing as a duplicate of bug 1644693; this will be fixed by the next CentOS batch update.

*** This bug has been marked as a duplicate of bug 1644693 ***

