Description of problem:
While running VM3, the VM's Event log showed the following errors:
1. Failed to run VM vm3
2. VM vm3 is down. Exit message: internal error boot orders have to be contiguous and starting from 1.

Version-Release number of selected component (if applicable):
glusterfs 3.3.0rhsvirt1 built on Oct 28 2012 23:50:59 (glusterfs-3.3.0rhsvirt1-8.el6rhs.x86_64)

Steps Carried:
==============
Initial setup: distributed-replicate (2*2)
1. Created VM1 with 30G, thin-provisioned
2. Created VM2 with 30G, pre-allocated
3. Created VM3 with 30G, thin-provisioned
4. Started VM1, VM2 and VM3
5. VM3 failed to start and logged the errors above in the event messages

Actual results:
==============
VM3 failed to start, with the boot-order error above in the event messages.

Expected results:
================
VM3 should also run successfully without any error.
From vdsm.log, you can see that Engine passed 'bootOrder': '2' for the vNIC but nothing on the disk devices. As libvirt puts it: boot orders have to be contiguous and starting from 1. Thread-28518::DEBUG::2012-11-09 12:32:18,316::BindingXMLRPC::900::vds::(wrapper) return vmCreate with {'status': {'message': 'Done', 'code': 0}, 'vmList': {'status': 'WaitForLaunch', 'acpiEnable': 'true', 'emulatedMachine': 'rhel6.3.0', 'vmId': 'c6056c76-2b6b-412d-b112-66acf1082423', 'pid': '0', 'timeOffset': '19800', 'displayPort': '-1', 'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'Nehalem', 'custom': {}, 'clientIp': '', 'nicModel': 'rtl8139,pv', 'keyboardLayout': 'en-us', 'kvmEnable': 'true', 'transparentHugePages': 'true', 'devices': [{'device': 'qxl', 'specParams': {'vram': '65536'}, 'type': 'video', 'deviceId': 'd11454d2-cab5-40b8-b395-b10da69c4c6b'}, {'index': '2', 'iface': 'ide', 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': 'c18427a2-9972-47b6-b679-baa55b7f2494', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'format': 'raw', 'type': 'disk', 'volumeID': 'e41bdc30-da6d-4868-a529-7c7e0fbcac1f', 'imageID': 'ecdeddf1-c7ce-4a40-a7a1-cfbdb10d796a', 'specParams': {}, 'readonly': 'false', 'domainID': '61dd7e2a-8df1-4e72-8421-f7441c9d7665', 'deviceId': 'ecdeddf1-c7ce-4a40-a7a1-cfbdb10d796a', 'poolID': '1eaedbbb-1c0e-4028-9b0e-adac7e07d25f', 'device': 'disk', 'shared': 'false', 'propagateErrors': 'off', 'optional': 'false'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:46:22:4c', 'network': 'rhevm', 'bootOrder': '2', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'be4ba832-9e3d-4645-a6bc-5e1a937a2f1f', 'device': 'bridge', 'type': 'interface'}, {'device': 'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId': '80af14e6-2243-4aa3-a6e7-b15977a42465'}], 'smp': '1', 'vmType': 'kvm', 'memSize': 512, 'displayIp': '0', 'spiceSecureChannels': 
'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'smpCoresPerSocket': '1', 'vmName': 'v3', 'display': 'qxl', 'nice': '0'}}
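The rule libvirt enforces can be illustrated with a small check (a sketch, not vdsm or engine code; the helper name and the abbreviated `devices` dicts below are mine, trimmed from the vmCreate payload above):

```python
# Illustrative only: libvirt requires that the bootOrder values present
# across a VM's devices form the contiguous sequence 1, 2, ..., n.

def boot_orders_valid(devices):
    """Return True iff the bootOrder values across all devices
    form the contiguous sequence 1, 2, ..., n (or none are set)."""
    orders = sorted(int(d['bootOrder']) for d in devices if 'bootOrder' in d)
    return orders == list(range(1, len(orders) + 1))

# Abbreviated from the failing vmCreate call: only the vNIC carries a
# bootOrder ('2'); the disk has none, so the sequence is [2] -- invalid.
failing = [
    {'device': 'disk', 'iface': 'virtio'},    # no bootOrder sent by the engine
    {'device': 'bridge', 'bootOrder': '2'},   # vNIC
]
print(boot_orders_valid(failing))   # False: [2] does not start at 1

# Had the engine also sent bootOrder '1' for the disk, libvirt would accept it:
fixed = [
    {'device': 'disk', 'iface': 'virtio', 'bootOrder': '1'},
    {'device': 'bridge', 'bootOrder': '2'},
]
print(boot_orders_valid(fixed))     # True: [1, 2] is contiguous from 1
```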
This is a bug in RHEV-M. Please reassign accordingly.
Vijay, can you please coordinate here (as per comment #8, we need the RHEV-M guys to have a look)?
Omer, can you check if this is related to 888642? Though here the engine apparently didn't send the boot order at all.
I could not reproduce with the steps as provided. Can you please provide detailed steps for the creation of the VMs that caused this problem (VM/server, how many disks, how many NICs, order of creation, etc.)?
Hi. After importing a machine from a RHEV 3.0 environment, I faced the same problem. After changing the boot order and then returning it to its original state (hard drive first, then CD-ROM), the guest was able to be powered on, but it was unable to boot from the hard drive. Booting the guest into rescue mode indicates there is no usable hard drive on this guest, although the RHEV GUI shows that the hard drive exists.
(In reply to comment #13) > Hi After importing machine from RHEV 3.0 environment, I face the same > problem. Importing to where? What exact version? Maybe you've seen bug 888642?
Closing as I could not reproduce this issue, and other related bugs have been fixed in the meantime. Please re-open and attach new logs if still relevant.