Bug 1325723 - Cannot power on a VM with a Chinese name
Summary: Cannot power on a VM with a Chinese name
Keywords:
Status: CLOSED DUPLICATE of bug 1323140
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.6.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
: ---
Assignee: Nobody
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2016-04-11 01:28 UTC by vanlos wang
Modified: 2016-04-11 08:42 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-04-11 08:42:41 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:



Description vanlos wang 2016-04-11 01:28:41 UTC
Description of problem:
Cannot power on a VM with a Chinese name.

Version-Release number of selected component (if applicable):


How reproducible:
Create a VM with a Chinese name on the engine and power it on; it fails to power on.


Steps to Reproduce:
1. Create a VM with a Chinese name on the engine and attach a disk.
2. Power it on.

Actual results:
The VM with the Chinese name fails to power on.
VDSM reports the following:
Thread-128::INFO::2016-04-09 16:23:39,161::vm::1932::virt.vm::(_run) vmId=`6722a7e7-ab5a-466f-91ed-2fe8cfc5061d`::<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm" xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
        <name>我就是要有中文</name>
        <uuid>6722a7e7-ab5a-466f-91ed-2fe8cfc5061d</uuid>
        <memory>524288</memory>
        <currentMemory>524288</currentMemory>
        <maxMemory slots="16">4294967296</maxMemory>
        <vcpu current="1">16</vcpu>
        <devices>
                <channel type="unix">
                        <target name="com.redhat.rhevm.vdsm" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/6722a7e7-ab5a-466f-91ed-2fe8cfc5061d.com.redhat.rhevm.vdsm"/>
                </channel>
                <channel type="unix">
                        <target name="org.qemu.guest_agent.0" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/6722a7e7-ab5a-466f-91ed-2fe8cfc5061d.org.qemu.guest_agent.0"/>
                </channel>
                <input bus="ps2" type="mouse"/>
                <memballoon model="virtio"/>
                <controller index="0" model="virtio-scsi" type="scsi"/>
                <controller index="0" ports="16" type="virtio-serial"/>
                <video>
                        <model heads="1" ram="65536" type="qxl" vgamem="16384" vram="32768"/>
                </video>
                <graphics autoport="yes" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice"/>
                <interface type="bridge">
                        <mac address="00:1a:4a:16:01:51"/>
                        <model type="virtio"/>
                        <source bridge="ovirtmgmt"/>
                        <filterref filter="vdsm-no-mac-spoofing"/>
                        <link state="up"/>
                        <bandwidth/>
                </interface>
                <disk device="cdrom" snapshot="no" type="file">
                        <source file="" startupPolicy="optional"/>
                        <target bus="ide" dev="hdc"/>
                        <readonly/>
                        <serial/>
                </disk>
                <disk device="disk" snapshot="no" type="file">
                        <source file="/rhev/data-center/00000001-0001-0001-0001-000000000013/0b2d6ab0-5303-4d6e-b70f-3fc268c40735/images/03a6930b-7443-4f8d-93da-4f7d6bc15dfd/4bb34638-a353-4597-a055-a4fa64174cb8"/>
                        <target bus="virtio" dev="vda"/>
                        <serial>03a6930b-7443-4f8d-93da-4f7d6bc15dfd</serial>
                        <boot order="1"/>
                        <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
                </disk>
                <channel type="spicevmc">
                        <target name="com.redhat.spice.0" type="virtio"/>
                </channel>
        </devices>
        <metadata>
                <ovirt:qos/>
        </metadata>
        <os>
                <type arch="x86_64" machine="pc-i440fx-rhel7.2.0">hvm</type>
                <smbios mode="sysinfo"/>
        </os>
        <sysinfo type="smbios">
                <system>
                        <entry name="manufacturer">Red Hat</entry>
                        <entry name="product">RHEV Hypervisor</entry>
                        <entry name="version">7.2-20160328.0.el7ev</entry>
                        <entry name="serial">DF8D4D56-6F37-B6E3-6F63-503409CB1965</entry>
                        <entry name="uuid">6722a7e7-ab5a-466f-91ed-2fe8cfc5061d</entry>
                </system>
        </sysinfo>
        <clock adjustment="0" offset="variable">
                <timer name="rtc" tickpolicy="catchup"/>
                <timer name="pit" tickpolicy="delay"/>
                <timer name="hpet" present="no"/>
        </clock>
        <features>
                <acpi/>
        </features>
        <cpu match="exact">
                <model>Broadwell</model>
                <topology cores="1" sockets="16" threads="1"/>
                <numa>
                        <cell cpus="0" memory="524288"/>
                </numa>
        </cpu>
</domain>

mailbox.SPMMonitor::DEBUG::2016-04-09 16:23:39,166::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00758017 s, 135 MB/s\n'; <rc> = 0
Thread-128::ERROR::2016-04-09 16:23:39,640::vm::759::virt.vm::(_startUnderlyingVm) vmId=`6722a7e7-ab5a-466f-91ed-2fe8cfc5061d`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 703, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1941, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: Invalid machine name
Thread-128::INFO::2016-04-09 16:23:39,691::vm::1330::virt.vm::(setDownStatus) vmId=`6722a7e7-ab5a-466f-91ed-2fe8cfc5061d`::Changed state to Down: Invalid machine name (code=1)
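The traceback shows libvirt's virDomainCreateXML() rejecting the domain with "Invalid machine name", i.e. the non-ASCII VM name fails libvirt's name validation on this libvirt build. As a rough illustration only (this is not libvirt's actual code, and the function name is hypothetical), a strict ASCII-only check of the kind that would reject the name above looks like this:

```python
def is_valid_machine_name(name):
    """Illustrative sketch of a strict ASCII-only domain-name check.

    NOT libvirt's real implementation; it only demonstrates why a
    validator limited to printable ASCII rejects a Chinese VM name
    with an error like "Invalid machine name".
    """
    if not name:
        return False
    # Accept only printable ASCII characters (space through tilde).
    return all(' ' <= ch <= '~' for ch in name)


# The Chinese VM name from the log above fails this check:
print(is_valid_machine_name("我就是要有中文"))  # False
print(is_valid_machine_name("rhel7-vm-01"))     # True
```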


Expected results:
The VM powers on.

Additional info:

Comment 1 vanlos wang 2016-04-11 01:31:13 UTC
The hypervisor version is rhevh-7.2-20160328.0.el7ev.

Comment 2 Moran Goldboim 2016-04-11 08:32:00 UTC
Could this be a problem similar to what we have in bug 1260131?

Comment 3 Michal Skrivanek 2016-04-11 08:37:21 UTC
Is https://rhn.redhat.com/errata/RHBA-2016-0555.html applied?
Please confirm the libvirt version.

Comment 4 Michal Skrivanek 2016-04-11 08:41:11 UTC
Sorry, I missed that this is RHEV-H.
It ships libvirt-1.2.17-13.el7_2.3.x86_64; the required version is libvirt-1.2.17-13.el7_2.4.x86_64.rpm.

That should be included in rhev-hypervisor7-7.2-20160406.0

Either wait for that build to be released (as 3.6.5), or test on a RHEL host with the erratum from comment #3 applied.
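Comment 4 turns on an RPM version-release comparison: the installed libvirt (release 13.el7_2.3) is one z-stream release behind the required 13.el7_2.4. As an illustration only (this is a simplification of rpm's real EVR comparison algorithm, and the function name is hypothetical), the two strings can be compared segment by segment:

```python
def has_required_libvirt(installed, required):
    """Sketch: compare two RPM version-release strings segment by segment.

    A simplification of rpm's real EVR comparison, for illustration
    only: split on '.', '-', and '_', compare numeric segments as
    integers and the rest as strings.
    """
    def segments(vr):
        out = []
        for part in vr.replace('-', '.').replace('_', '.').split('.'):
            out.append(int(part) if part.isdigit() else part)
        return out

    return segments(installed) >= segments(required)


# The RHEV-H build from comment 4 is one release short of the fix:
print(has_required_libvirt("1.2.17-13.el7_2.3", "1.2.17-13.el7_2.4"))  # False
print(has_required_libvirt("1.2.17-13.el7_2.4", "1.2.17-13.el7_2.4"))  # True
```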

Comment 5 Michal Skrivanek 2016-04-11 08:42:41 UTC
Closing, since we already have a bug tracking this.

*** This bug has been marked as a duplicate of bug 1323140 ***

