Bug 1479872

Summary: [vhost hook] vm failed to start if network name in custom property is invalid
Product: [oVirt] vdsm
Component: General
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: low
Version: 4.19.26
Target Milestone: ovirt-4.2.0
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Reporter: Israel Pinto <ipinto>
Assignee: Edward Haas <edwardh>
QA Contact: Israel Pinto <ipinto>
CC: bugs, danken, ipinto, michal.skrivanek, mzamazal
Flags: rule-engine: ovirt-4.2+
oVirt Team: Network
Type: Bug
Last Closed: 2017-12-20 11:35:33 UTC
Attachments: vdsm_log

Description Israel Pinto 2017-08-09 15:34:09 UTC
Created attachment 1311295 [details]
vdsm_log

Description of problem:
The VM fails to start when the vhost custom property specifies an invalid (nonexistent) network name.


Version-Release number of selected component (if applicable):
Engine: 4.1.5.1-0.1.el7
OS Version: RHEL - 7.4 - 18.el7
Kernel Version: 3.10.0 - 693.el7.x86_64
KVM Version: 2.9.0 - 16.el7_4.3
LIBVIRT Version: libvirt-3.2.0-14.el7
VDSM Version: vdsm-4.19.26-1.el7ev

How reproducible:
All the time

Steps to Reproduce:
1. Install the hook rpm on all hosts: vdsm-hook-vhostmd.noarch
2. Create VM with disk and NIC
3. Set VM custom properties:
   vhost = host_mixed_1_:true
   sap_agent = true 

Actual results:
The VM fails to start with a serialization exception (TypeError: cannot serialize False).


Additional info:
From the vdsm log:
2017-08-09 09:44:06,939+0300 INFO  (jsonrpc/0) [vdsm.api] START __init__(cif=<clientIF.clientIF object at 0x2691d10>, params={'acpiEnable': 'true', 'emulatedMachine': 'pc-i440fx-rhel7.3.0', 'vmId': 'a9192441-ffdd-4929-9868-bee869f3d1d2', 'memGuaranteedSize': 1024, 'transparentHugePages': 'true', 'timeOffset': '0', 'cpuType': 'Conroe', 'smp': '1', 'guestNumaNodes': [{'nodeIndex': 0, 'cpus': '0', 'memory': '1024'}], 'custom': {'device_b10c28b1-835f-4c59-9b9c-4458b1cab8bddevice_46661b35-7e9f-4a0a-9fa4-c15bca8ff025': "VmDevice:{id='VmDeviceId:{deviceId='46661b35-7e9f-4a0a-9fa4-c15bca8ff025', vmId='a9192441-ffdd-4929-9868-bee869f3d1d2'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}", 'device_b10c28b1-835f-4c59-9b9c-4458b1cab8bd': "VmDevice:{id='VmDeviceId:{deviceId='b10c28b1-835f-4c59-9b9c-4458b1cab8bd', vmId='a9192441-ffdd-4929-9868-bee869f3d1d2'}', device='disk', type='DISK', bootOrder='0', specParams='[]', address='{slot=0x07, bus=0x00, domain=0x0000, type=pci, function=0x0}', managed='false', plugged='true', readOnly='false', deviceAlias='virtio-disk701', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}", 'device_b10c28b1-835f-4c59-9b9c-4458b1cab8bddevice_46661b35-7e9f-4a0a-9fa4-c15bca8ff025device_85f6dc46-6521-47f5-a774-bbe5cd3c1d30device_478d0835-76d7-4a6c-b2ee-1a1feab32966device_da9f9616-4233-4b3c-bfa7-4d54e9e6a515': "VmDevice:{id='VmDeviceId:{deviceId='da9f9616-4233-4b3c-bfa7-4d54e9e6a515', vmId='a9192441-ffdd-4929-9868-bee869f3d1d2'}', device='spicevmc', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=3}', managed='false', plugged='true', readOnly='false', deviceAlias='channel2', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}", 'vhost': 'host_mixed_1_:true', 'device_b10c28b1-835f-4c59-9b9c-4458b1cab8bddevice_46661b35-7e9f-4a0a-9fa4-c15bca8ff025device_85f6dc46-6521-47f5-a774-bbe5cd3c1d30': "VmDevice:{id='VmDeviceId:{deviceId='85f6dc46-6521-47f5-a774-bbe5cd3c1d30', vmId='a9192441-ffdd-4929-9868-bee869f3d1d2'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}", 'device_b10c28b1-835f-4c59-9b9c-4458b1cab8bddevice_46661b35-7e9f-4a0a-9fa4-c15bca8ff025device_85f6dc46-6521-47f5-a774-bbe5cd3c1d30device_478d0835-76d7-4a6c-b2ee-1a1feab32966': "VmDevice:{id='VmDeviceId:{deviceId='478d0835-76d7-4a6c-b2ee-1a1feab32966', vmId='a9192441-ffdd-4929-9868-bee869f3d1d2'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}", 'sap_agent': 'true'}, 'vmType': 'kvm', 'memSize': 1024, 'smpCoresPerSocket': '1', 'vmName': '1', 'nice': '0', 'maxMemSize': 4096, 'bootMenuEnable': 'false', 'smpThreadsPerCore': '1', 'smartcardEnable': 'false', 'maxMemSlots': 16, 'kvmEnable': 'true', 'displayNetwork': 'ovirtmgmt', 'devices': [{'device': 'qxl', 'specParams': {'vram': '8192', 'vgamem': '16384', 'heads': '1', 
'ram': '65536'}, 'type': 'video', 'deviceId': 'b54e0888-e1af-43dc-9d2c-105915d90e86', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x02'}}, {'device': 'spice', 'specParams': {'fileTransferEnable': 'true', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,ssmartcard,susbredir', 'copyPasteEnable': 'true'}, 'type': 'graphics', 'deviceId': 'ff40557d-24c6-4078-a46a-4fd0ea4253f9'}, {'index': '2', 'iface': 'ide', 'specParams': {'path': u''}, 'readonly': 'true', 'deviceId': '84f90f60-5762-4a3f-a86b-7d3e17ed1ab8', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': u'', 'type': 'disk'}, {'index': 0, 'domainID': '04cf8130-d771-4821-9984-49340e91a7b3', 'bootOrder': '1', 'format': 'raw', 'shared': 'false', 'type': 'disk', 'poolID': '59d68f09-a2c0-473f-af97-b1aaddf40754', 'volumeID': 'cfb6b06c-e392-4b2f-89df-43ca7cbc6a23', 'imageID': '78caf513-4ce5-4eeb-a7fe-bcfbab69ae12', 'specParams': {}, 'readonly': 'false', 'iface': 'scsi', 'deviceId': '78caf513-4ce5-4eeb-a7fe-bcfbab69ae12', 'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'disk', 'discard': False, 'propagateErrors': 'off', 'optional': 'false'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:16:10:64', 'linkActive': 'true', 'network': 'ovirtmgmt', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {'inbound': {}, 'outbound': {}}, 'deviceId': 'bf1cab0e-de29-422a-a7c6-cf26b02d26b9', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x03'}, 'device': 'bridge', 'type': 'interface'}, {'index': '0', 'specParams': {}, 'deviceId': '027fd828-46d9-4d88-a9bd-619442525deb', 'address': {'function': '0x2', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x01'}, 'device': 'usb', 'model': 'piix3-uhci', 'type': 'controller'}, {'device': 'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId': '692212ce-02cc-4f74-9bf1-a756ce9c41d8', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x06'}}, {'index': '0', 'specParams': {}, 'deviceId': '694221a1-3f99-4d2c-9e1c-aeae033211d1', 'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device': 'virtio-serial', 'specParams': {}, 'type': 'controller', 'deviceId': '90196a19-0241-4c8f-ba09-806c2d311571', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x05'}}, {'device': 'virtio', 'specParams': {'source': 'urandom'}, 'model': 'virtio', 'type': 'rng', 'deviceId': '796e3dcc-6d29-41eb-aa89-04d2ce165127'}], 'maxVCpus': '16', 'display': 'qxl'}, recover=False) from=::ffff:10.35.161.176,32930, flow_id=a5339d89-e0c9-4bbb-ab82-16805dc1ba6a (api:46)
2017-08-09 09:44:06,940+0300 INFO  (jsonrpc/0) [vdsm.api] FINISH __init__ return=None from=::ffff:10.35.161.176,32930, flow_id=a5339d89-e0c9-4bbb-ab82-16805dc1ba6a (api:52)
2017-08-09 09:44:06,942+0300 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call VM.create succeeded in 0.00 seconds (__init__:539)
2017-08-09 09:44:06,943+0300 INFO  (vm/a9192441) [virt.vm] (vmId='a9192441-ffdd-4929-9868-bee869f3d1d2') VM wrapper has started (vm:1930)
2017-08-09 09:44:06,944+0300 INFO  (vm/a9192441) [vdsm.api] START getVolumeSize(sdUUID='04cf8130-d771-4821-9984-49340e91a7b3', spUUID='59d68f09-a2c0-473f-af97-b1aaddf40754', imgUUID='78caf513-4ce5-4eeb-a7fe-bcfbab69ae12', volUUID='cfb6b06c-e392-4b2f-89df-43ca7cbc6a23', options=None) from=internal, task_id=1206c680-674e-4c0e-b167-d9c97bb63fc8 (api:46)
2017-08-09 09:44:06,946+0300 INFO  (vm/a9192441) [vdsm.api] FINISH getVolumeSize return={'truesize': '0', 'apparentsize': '1073741824'} from=internal, task_id=1206c680-674e-4c0e-b167-d9c97bb63fc8 (api:52)
2017-08-09 09:44:06,946+0300 INFO  (vm/a9192441) [vds] prepared volume path:  (clientIF:380)
2017-08-09 09:44:06,946+0300 INFO  (vm/a9192441) [vdsm.api] START prepareImage(sdUUID='04cf8130-d771-4821-9984-49340e91a7b3', spUUID='59d68f09-a2c0-473f-af97-b1aaddf40754', imgUUID='78caf513-4ce5-4eeb-a7fe-bcfbab69ae12', leafUUID='cfb6b06c-e392-4b2f-89df-43ca7cbc6a23', allowIllegal=False) from=internal, task_id=22b56f1d-9232-4fa8-9136-869ce2708557 (api:46)
2017-08-09 09:44:06,967+0300 INFO  (vm/a9192441) [vdsm.api] FINISH prepareImage return={'info': {'path': u'/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_41_ENV__4_nfs__0/04cf8130-d771-4821-9984-49340e91a7b3/images/78caf513-4ce5-4eeb-a7fe-bcfbab69ae12/cfb6b06c-e392-4b2f-89df-43ca7cbc6a23', 'volType': 'path'}, 'path': u'/rhev/data-center/59d68f09-a2c0-473f-af97-b1aaddf40754/04cf8130-d771-4821-9984-49340e91a7b3/images/78caf513-4ce5-4eeb-a7fe-bcfbab69ae12/cfb6b06c-e392-4b2f-89df-43ca7cbc6a23', 'imgVolumesInfo': [{'domainID': '04cf8130-d771-4821-9984-49340e91a7b3', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_41_ENV__4_nfs__0/04cf8130-d771-4821-9984-49340e91a7b3/images/78caf513-4ce5-4eeb-a7fe-bcfbab69ae12/cfb6b06c-e392-4b2f-89df-43ca7cbc6a23', 'volumeID': u'cfb6b06c-e392-4b2f-89df-43ca7cbc6a23', 'leasePath': u'/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_41_ENV__4_nfs__0/04cf8130-d771-4821-9984-49340e91a7b3/images/78caf513-4ce5-4eeb-a7fe-bcfbab69ae12/cfb6b06c-e392-4b2f-89df-43ca7cbc6a23.lease', 'imageID': '78caf513-4ce5-4eeb-a7fe-bcfbab69ae12'}]} from=internal, task_id=22b56f1d-9232-4fa8-9136-869ce2708557 (api:52)
2017-08-09 09:44:06,968+0300 INFO  (vm/a9192441) [vds] prepared volume path: /rhev/data-center/59d68f09-a2c0-473f-af97-b1aaddf40754/04cf8130-d771-4821-9984-49340e91a7b3/images/78caf513-4ce5-4eeb-a7fe-bcfbab69ae12/cfb6b06c-e392-4b2f-89df-43ca7cbc6a23 (clientIF:380)
2017-08-09 09:44:06,978+0300 ERROR (vm/a9192441) [virt.vm] (vmId='a9192441-ffdd-4929-9868-bee869f3d1d2') The vm start process failed (vm:631)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 562, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 2013, in _run
    domxml = hooks.before_vm_start(self._buildDomainXML(),
  File "/usr/share/vdsm/virt/vm.py", line 1692, in _buildDomainXML
    self._appendDevices(domxml)
  File "/usr/share/vdsm/virt/vm.py", line 1639, in _appendDevices
    deviceXML = vmxml.format_xml(dev.getXML())
  File "/usr/share/vdsm/virt/vmxml.py", line 73, in format_xml
    return etree.tostring(element, encoding='UTF-8')
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1126, in tostring
    ElementTree(element).write(file, encoding, method=method)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 820, in write
    serialize(write, self._root, encoding, qnames, namespaces)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 939, in _serialize_xml
    _serialize_xml(write, e, encoding, qnames, None)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 932, in _serialize_xml
    v = _escape_attrib(v, encoding)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1092, in _escape_attrib
    _raise_serialization_error(text)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1052, in _raise_serialization_error
    "cannot serialize %r (type %s)" % (text, type(text).__name__)
TypeError: cannot serialize False (type bool)
2017-08-09 09:44:06,980+0300 INFO  (vm/a9192441) [virt.vm] (vmId='a9192441-ffdd-4929-9868-bee869f3d1d2') Changed state to Down: cannot serialize False (type bool) (code=1) (vm:1221)
2017-08-09 09:44:06,980+0300 INFO  (vm/a9192441) [virt.vm] (vmId='a9192441-ffdd-4929-9868-bee869f3d1d2') Stopping connection (guestagent:430)

Comment 1 Milan Zamazal 2017-08-09 16:46:32 UTC
The problem originates in virt.vmdevices.network.Interface._customize:

  vhosts = self._getVHostSettings()
  if vhosts:
      self.driver['name'] = vhosts.get(self.network, False)

If the network name specified in the `vhost' custom property in the Engine UI doesn't match the device's network (self.network), the dict lookup returns False, and False is assigned as the driver name. That boolean is then stored in the device DOM structure as an attribute value, and exporting the structure to XML crashes because the attribute value is a boolean rather than a string.
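
For reference, the crash can be reproduced outside vdsm with a few lines of standalone Python (a minimal sketch; the element and attribute names below are illustrative, not the actual vdsm device tree):

  # Standalone illustration of the failure mode: ElementTree refuses to
  # serialize an attribute whose value is a bool instead of a string.
  import xml.etree.ElementTree as etree

  iface = etree.Element('interface', type='bridge')
  driver = etree.SubElement(iface, 'driver')
  # Mimic the bad assignment: vhosts.get(self.network, False) returned False.
  driver.set('name', False)

  try:
      etree.tostring(iface, encoding='UTF-8')
  except TypeError as exc:
      print(exc)  # -> cannot serialize False (type bool)

Any non-string attribute value fails the same way, so guarding the assignment (or omitting the driver element when the lookup misses) avoids the crash.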

Comment 2 Dan Kenigsberg 2017-08-13 12:49:13 UTC
I believe that this bug is as old as http://gerrit.ovirt.org/788

Israel, why did you check this now?

Instead of fixing it, I wonder if we can drop the vdsm-side code and trust libvirt to choose the driver properly.
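
For context, the vdsm-side code being discussed parses the vhost custom property (e.g. "host_mixed_1_:true") into a per-network driver choice. Below is a minimal standalone sketch of that idea, assuming true selects the vhost backend and false the qemu backend; the helper is hypothetical and not the actual vdsm implementation:

  # Hypothetical parser for a value such as "ovirtmgmt:true,other_net:false".
  VHOST_MAP = {'true': 'vhost', 'false': 'qemu'}

  def parse_vhost_property(value):
      settings = {}
      for entry in value.split(','):
          try:
              network, flag = entry.split(':', 1)
              settings[network.strip()] = VHOST_MAP[flag.strip().lower()]
          except (ValueError, KeyError):
              continue  # skip malformed entries instead of propagating them
      return settings

  vhosts = parse_vhost_property('host_mixed_1_:true')
  # Guarded lookup: set a driver name only when the VM's network appears in
  # the property; otherwise emit no driver element at all.
  driver_name = vhosts.get('ovirtmgmt')
  print(driver_name)  # -> None for a non-matching network name

Dropping this logic entirely, as suggested above, would mean emitting no driver name and relying on libvirt's own default driver choice for virtio interfaces.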

Comment 4 Israel Pinto 2017-08-21 07:12:52 UTC
Verified with:
Software version: 4.2.0-0.0.master.20170820180837.git59243e9.el7.centos

Steps:
1. Install the hook rpm on all hosts: vdsm-hook-vhostmd.noarch
2. Create VM with disk and NIC
3. Set VM custom properties:
   case 1:
   vhost = host_mixed_1_:true
   sap_agent = true 
   case 2:
   vhost = host_mixed_1_:false
   sap_agent = true 
   case 3:
   vhost = ovirtmgmt:true
4. Run VM in each case

Results:
PASS

Comment 5 Sandro Bonazzola 2017-12-20 11:35:33 UTC
This bug is included in the oVirt 4.2.0 release, published on Dec 20th 2017.

Since the problem described in this bug report should be
resolved in the oVirt 4.2.0 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.