Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1047360

Summary: Failure to start VM again if the VM failed to start before - VM is in Down status in VDSM
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Version: 3.3.0
Hardware: x86_64
OS: Linux
Status: CLOSED DUPLICATE
Severity: high
Priority: urgent
Whiteboard: virt
Type: Bug
Doc Type: Bug Fix
Reporter: Meni Yakove <myakove>
Assignee: Dan Kenigsberg <danken>
CC: acathrow, bazulay, hateya, iheim, lpeer, ofrenkel, Rhev-m-bugs, yeylon
Last Closed: 2014-01-02 09:12:30 UTC

Attachments: logs

Description Meni Yakove 2013-12-30 16:34:45 UTC
Created attachment 843474 [details]
logs

Description of problem:
If a VM fails to start (so far I have only seen this when cloud-init is involved), VDSM still holds the VM:
vdsClient -s 0 list table
15e1be9a-3432-4bbd-acd1-2bbc6bc60000      0  CLOUD-INIT-VM        Down 

Starting the VM again then fails with the error:
Desktop already exist.

The error that caused the VM to fail to start in the first place:
exitMessage = internal error process exited while connecting to monitor: qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-1,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-1' for drive
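
For context, libvirt builds QEMU drive IDs from the IDE address, so two devices that land on the same controller/bus/unit collide on a single ID. A minimal sketch of that naming scheme, assuming libvirt's "drive-ide<controller>-<bus>-<unit>" convention (illustrative only, not VDSM or libvirt source):

# Illustrative sketch of the assumed libvirt IDE drive-ID scheme.
def qemu_drive_id(controller, bus, unit):
    return 'drive-ide%s-%s-%s' % (controller, bus, unit)

# Both cdroms in this bug resolved to the same IDE address, hence the
# "Duplicate ID 'drive-ide0-1-1' for drive" error from qemu-kvm:
payload_cdrom = {'controller': 0, 'bus': 1, 'unit': 1}
regular_cdrom = {'controller': 0, 'bus': 1, 'unit': 1}
assert qemu_drive_id(**payload_cdrom) == qemu_drive_id(**regular_cdrom) == 'drive-ide0-1-1'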

Version-Release number of selected component (if applicable):
vdsm-4.13.2-0.4.el6ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a Linux VM with cloud-init on it.
2. Check "initial init" under the "Run Once" dialog.
3. Start the VM; it should fail.
4. Start the VM again without cloud-init (not via "Run Once").

Actual results:
The VM fails to start.

Expected results:
The VM should start.

Additional info:
[root@navy-vds1 ~]# vdsClient -s 0 list

15e1be9a-3432-4bbd-acd1-2bbc6bc60000
        Status = Down
        acpiEnable = true
        emulatedMachine = rhel6.5.0
        memGuaranteedSize = 512
        timeOffset = -1
        displaySecurePort = -1
        spiceSslCipherSuite = DEFAULT
        cpuType = Conroe
        smp = 1
        custom = {}
        vmType = kvm
        memSize = 1024
        smpCoresPerSocket = 1
        vmName = CLOUD-INIT-VM
        nice = 0
        exitMessage = internal error process exited while connecting to monitor: qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-1,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-1' for drive

        pid = 0
        displayIp = 0
        displayPort = -1
        smartcardEnable = false
        spiceSecureChannels = smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard
        exitCode = 1
        nicModel = rtl8139,pv
        keyboardLayout = en-us
        kvmEnable = true
        pitReinjection = false
        transparentHugePages = true
        devices = [{'specParams': {}, 'deviceId': 'f09c336e-ee80-40dd-816e-de350ea99bab', 'address': {' bus': '0x00', 'domain': '0x0000', '  type': 'pci', '  slot': '0x04', '  function': '0x0'}, 'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device': 'qxl', 'specParams': {'vram': '32768', 'ram': '65536', 'heads': '1'}, 'type': 'video', 'deviceId': 'd07ecc88-5d5c-494e-96f4-3e0a707da340', 'address': {' bus': '0x00', 'domain': '0x0000', '  type': 'pci', '  slot': '0x02', '  function': '0x0'}}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:16:88:55', 'linkActive': 'true', 'network': 'rhevm', 'bootOrder': '2', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': '3121005e-0cf0-48f0-83bb-cd5d8a5a9e64', 'address': {' bus': '0x00', 'domain': '0x0000', '  type': 'pci', '  slot': '0x03', '  function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:16:88:65', 'linkActive': 'true', 'network': 'rhevm', 'bootOrder': '3', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'b029b9a3-32b4-471e-a71e-18fdde233f4b', 'address': {' bus': '0x00', 'domain': '0x0000', '  type': 'pci', '  slot': '0x05', '  function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'index': '3', 'iface': 'ide', 'specParams': {'vmPayload': {'volId': 'config-2', 'file': {'openstack/latest/meta_data.json': 'ewogICJsYXVuY2hfaW5kZXgiIDogIjAiLAogICJhdmFpbGFiaWxpdHlfem9uZSIgOiAibm92YSIs\nCiAgInV1aWQiIDogIjhmODJkZDY3LTkyMTAtNGU4Ni04Yzc2LTkyZGU5YTRkZGYxMyIsCiAgIm1l\ndGEiIDogewogICAgImVzc2VudGlhbCIgOiAiZmFsc2UiLAogICAgInJvbGUiIDogInNlcnZlciIs\nCiAgICAiZHNtb2RlIiA6ICJsb2NhbCIKICB9Cn0=\n', 'openstack/latest/user_data': 'I2Nsb3VkLWNvbmZpZwpvdXRwdXQ6CiAgYWxsOiAnPj4gL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRw\ndXQubG9nJwp1c2VyOiByb290CnJ1bmNtZDoKLSAnc2VkIC1pICcnL15kYXRhc291cmNlX2xpc3Q6\nIC9kJycgL2V0Yy9jbG91ZC9jbG91ZC5jZmc7IGVjaG8gJydkYXRhc291cmNlX2xpc3Q6CiAgWyJO\nb0Nsb3VkIiwgIkNvbmZpZ0RyaXZlIl0nJyA+PiAvZXRjL2Nsb3VkL2Nsb3VkLmNmZycK\n'}}}, 'readonly': 'true', 'deviceId': '754f34a3-1359-4588-a146-5baac58a4ac6', 'shared': 'false', 'device': 'cdrom', 'path': '', 'type': 'disk'}, {'index': '2', 'iface': 'ide', 'shared': 'false', 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': 'a7d49b85-9243-4d72-b787-df1bf3e33ffc', 'address': {'bus': '1', '  target': '0', '  controller': '0', '  type': 'drive', ' unit': '1'}, 'device': 'cdrom', 'path': '', 'type': 'disk'}, {'address': {' bus': '0x00', 'domain': '0x0000', '  type': 'pci', '  slot': '0x06', '  function': '0x0'}, 'reqsize': '0', 'index': 0, 'iface': 'virtio', 'apparentsize': '3221225472', 'imageID': '6e00d761-273e-4735-953d-504b931471ce', 'readonly': 'false', 'shared': 'false', 'truesize': '3221225472', 'type': 'disk', 'domainID': 'a69080fa-5e68-4b2d-9999-344f0710fc87', 'volumeInfo': {'domainID': 'a69080fa-5e68-4b2d-9999-344f0710fc87', 'volType': 'path', 'leaseOffset': 113246208, 'volumeID': '6c0a1ebe-c8b3-4884-beef-ca0692793614', 'leasePath': '/dev/a69080fa-5e68-4b2d-9999-344f0710fc87/leases', 'imageID': '6e00d761-273e-4735-953d-504b931471ce', 'path': '/rhev/data-center/mnt/blockSD/a69080fa-5e68-4b2d-9999-344f0710fc87/images/6e00d761-273e-4735-953d-504b931471ce/6c0a1ebe-c8b3-4884-beef-ca0692793614'}, 'format': 'cow', 'deviceId': '6e00d761-273e-4735-953d-504b931471ce', 'poolID': 'bc126acc-82b0-4157-bbe5-6c03660520dc', 'device': 'disk', 'path': '/rhev/data-center/mnt/blockSD/a69080fa-5e68-4b2d-9999-344f0710fc87/images/6e00d761-273e-4735-953d-504b931471ce/6c0a1ebe-c8b3-4884-beef-ca0692793614', 'propagateErrors': 'off', 
'optional': 'false', 'bootOrder': '1', 'volumeID': '6c0a1ebe-c8b3-4884-beef-ca0692793614', 'specParams': {}, 'volumeChain': [{'domainID': 'a69080fa-5e68-4b2d-9999-344f0710fc87', 'volType': 'path', 'leaseOffset': 113246208, 'volumeID': '6c0a1ebe-c8b3-4884-beef-ca0692793614', 'leasePath': '/dev/a69080fa-5e68-4b2d-9999-344f0710fc87/leases', 'imageID': '6e00d761-273e-4735-953d-504b931471ce', 'path': '/rhev/data-center/mnt/blockSD/a69080fa-5e68-4b2d-9999-344f0710fc87/images/6e00d761-273e-4735-953d-504b931471ce/6c0a1ebe-c8b3-4884-beef-ca0692793614'}]}, {'device': 'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId': '916e2571-5715-46e8-a447-9462f9d32394', 'target': 1048576}]
        clientIp = 
        display = qxl

Comment 1 Meni Yakove 2013-12-30 16:50:40 UTC
In order to run the VM again, a restart of the vdsm service is needed.
I have two hosts in the cluster, and the VM exists on both hosts in Down status.

Comment 2 Dan Kenigsberg 2014-01-01 16:50:05 UTC
In the vmCreate command, Vdsm receives two copies of the cdrom device, one with an IDE address and the other without. Vdsm should not have received both of these:

{'device': 'cdrom',
              'deviceId': '754f34a3-1359-4588-a146-5baac58a4ac6',
              'iface': 'ide',
              'index': '3',
              'path': '',
              'readonly': 'true',
              'shared': 'false',
              'specParams': {'vmPayload': {'file': {'openstack/latest
                                                    'openstack/latest
                                           'volId': 'config-2'}},
              'type': 'disk'},

and

             {'address': {'  controller': '0',
                          '  target': '0',
                          '  type': 'drive',
                          ' unit': '1',
                          'bus': '1'},
              'device': 'cdrom',
              'deviceId': 'a7d49b85-9243-4d72-b787-df1bf3e33ffc',
              'iface': 'ide',
              'index': '2',
              'path': '',
              'readonly': 'true',
              'shared': 'false',
              'specParams': {'path': ''},
              'type': 'disk'},

It's an interesting question why both cdroms received the same ide-0-1-1 tag from libvirt, but the truly urgent issue is why two cdroms were sent to begin with.
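
As a rough illustration of the mismatch described above (a hypothetical check, not VDSM source), scanning the vmCreate device list splits the cdroms by whether they carry an explicit address:

# Hypothetical helper, not VDSM code: split cdroms by presence of an address.
def split_cdroms(devices):
    cdroms = [d for d in devices if d.get('device') == 'cdrom']
    with_addr = [d for d in cdroms if 'address' in d]
    without_addr = [d for d in cdroms if 'address' not in d]
    return with_addr, without_addr

# Minimal stand-ins for the two devices quoted above:
devices = [
    {'device': 'cdrom', 'iface': 'ide', 'index': '3'},              # payload cdrom, no address
    {'device': 'cdrom', 'iface': 'ide', 'index': '2',
     'address': {'bus': '1', 'controller': '0',
                 'type': 'drive', 'unit': '1'}},                    # regular VM cdrom
]
with_addr, without_addr = split_cdroms(devices)
assert len(with_addr) == 1 and len(without_addr) == 1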

Comment 3 Omer Frenkel 2014-01-02 09:12:30 UTC
The two cdroms are sent because one is the VM's CD and the other is the CD for the vm-payload.
For some reason vdsm fails to create the payload file; this is probably a duplicate of bug 1047356.
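
For reference, the vmPayload contents in the device dump above are plain base64. A standalone sketch (not VDSM code) decodes one of them to the cloud-init file that should end up on the config-2 payload ISO:

import base64

# The vmPayload blobs are base64 with embedded "\n"; b64decode discards the
# newlines by default, so the blob can be decoded as quoted in the dump.
meta_data_b64 = (
    'ewogICJsYXVuY2hfaW5kZXgiIDogIjAiLAogICJhdmFpbGFiaWxpdHlfem9uZSIgOiAibm92YSIs\n'
    'CiAgInV1aWQiIDogIjhmODJkZDY3LTkyMTAtNGU4Ni04Yzc2LTkyZGU5YTRkZGYxMyIsCiAgIm1l\n'
    'dGEiIDogewogICAgImVzc2VudGlhbCIgOiAiZmFsc2UiLAogICAgInJvbGUiIDogInNlcnZlciIs\n'
    'CiAgICAiZHNtb2RlIiA6ICJsb2NhbCIKICB9Cn0=\n'
)
print(base64.b64decode(meta_data_b64).decode('utf-8'))
# Prints the openstack/latest/meta_data.json payload: a JSON object with
# launch_index, availability_zone, uuid and meta keys.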

When the payload is working correctly, the libvirt XML looks like this:

                <disk device="cdrom" snapshot="no" type="file">
                        <source file="/var/run/vdsm/payload/970189b9-10c2-48dc-8a87-40503550a9bf.09ca161b22a33248ff570b6530223ba2.img" startupPolicy="optional"/>
                        <target bus="ide" dev="hdd"/>
                        <readonly/>
                        <serial/>
                </disk>
                <disk device="cdrom" snapshot="no" type="file">
                        <source file="" startupPolicy="optional"/>
                        <target bus="ide" dev="hdc"/>
                        <readonly/>
                        <serial/>
                </disk>
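
Note how the two targets map to distinct IDE units here, so each disk gets a unique QEMU drive ID, unlike the failing case above. A tiny sketch under the same assumed naming conventions:

# Assumed classic IDE naming: hda-hdd -> (bus, unit) on controller 0.
IDE_TARGETS = {'hda': (0, 0), 'hdb': (0, 1), 'hdc': (1, 0), 'hdd': (1, 1)}
for dev in ('hdc', 'hdd'):
    bus, unit = IDE_TARGETS[dev]
    print('%s -> drive-ide0-%d-%d' % (dev, bus, unit))
# hdc -> drive-ide0-1-0   (the regular VM cdrom)
# hdd -> drive-ide0-1-1   (the payload cdrom)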

*** This bug has been marked as a duplicate of bug 1047356 ***