Bug 1138753
| Summary: | Engine allows starting pool VM after its disk was deleted | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Lukas Svaty <lsvaty> |
| Component: | ovirt-engine | Assignee: | Vitor de Lima <vdelima> |
| Status: | CLOSED ERRATA | QA Contact: | Lukas Svaty <lsvaty> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.4.1-1 | CC: | aberezin, amureini, ecohen, gklein, iheim, juwu, lpeer, lsvaty, mavital, michal.skrivanek, ofrenkel, pnovotny, rbalakri, Rhev-m-bugs, yeylon |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | 3.4.3 | | |
| Hardware: | ppc | | |
| OS: | Linux | | |
| Whiteboard: | virt | | |
| Fixed In Version: | org.ovirt.engine-root-3.4.3-1 | Doc Type: | Bug Fix |
| Doc Text: | Previously, there was no way to address two virtual CD drives at the same time on the sPAPR VSCSI controller. With this update, the payload CD-ROM of ppc64 virtual machines is addressed correctly. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-10-23 12:30:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1122979 | | |
| Attachments: | | | |
I tend to believe it is not related to the removal of the disk. Does this work OK without removing the disk?

From the logs, the problem seems to be a duplicate device ID: for some reason the cloud-init CD got the treatment of an IDE device (index=3) and not SCSI (index=0, unit=1).
cloud-init device:

```python
{'index': '3', 'iface': 'scsi', 'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'specParams': {'vmPayload': {'volId': 'config-2', 'file': {'openstack/latest/meta_data.json': 'ewogICJsYXVuY2hfaW5kZXgiIDogIjAiLAogICJhdmFpbGFiaWxpdHlfem9uZSIgOiAibm92YSIs\nCiAgIm5ldHdvcmstaW50ZXJmYWNlcyIgOiAiYXV0b1xuIiwKICAibmV0d29ya19jb25maWciIDog\newogICAgImNvbnRlbnRfcGF0aCIgOiAiL2NvbnRlbnQvMDAwMCIsCiAgICAicGF0aCIgOiAiL2V0\nYy9uZXR3b3JrL2ludGVyZmFjZXMiCiAgfSwKICAidXVpZCIgOiAiZGU1MTFjMWEtNzQzZS00MjYz\nLTljMTMtMmFhNTc4OTA1YjYxIiwKICAibWV0YSIgOiB7CiAgICAiZXNzZW50aWFsIiA6ICJmYWxz\nZSIsCiAgICAicm9sZSIgOiAic2VydmVyIiwgICAgImRzbW9kZSIgOiAibG9jYWwiCiAgfQp9' if False else 'ewogICJsYXVuY2hfaW5kZXgiIDogIjAiLAogICJhdmFpbGFiaWxpdHlfem9uZSIgOiAibm92YSIs\nCiAgIm5ldHdvcmstaW50ZXJmYWNlcyIgOiAiYXV0b1xuIiwKICAibmV0d29ya19jb25maWciIDog\newogICAgImNvbnRlbnRfcGF0aCIgOiAiL2NvbnRlbnQvMDAwMCIsCiAgICAicGF0aCIgOiAiL2V0\nYy9uZXR3b3JrL2ludGVyZmFjZXMiCiAgfSwKICAidXVpZCIgOiAiZGU1MTFjMWEtNzQzZS00MjYz\nLTljMTMtMmFhNTc4OTA1YjYxIiwKICAibWV0YSIgOiB7CiAgICAiZXNzZW50aWFsIiA6ICJmYWxz\nZSIsCiAgICAicm9sZSIgOiAic2VydmVyIiwKICAgICJkc21vZGUiIDogImxvY2FsIgogIH0KfQ==\n', 'openstack/content/0000': 'YXV0bwo=\n', 'openstack/latest/user_data': 'I2Nsb3VkLWNvbmZpZwpzc2hfcHdhdXRoOiB0cnVlCmRpc2FibGVfcm9vdDogMApvdXRwdXQ6CiAg\nYWxsOiAnPj4gL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJwp1c2VyOiByb290CmNocGFz\nc3dkOgogIGV4cGlyZTogZmFsc2UKcnVuY21kOgotICdzZWQgLWkgJycvXmRhdGFzb3VyY2VfbGlz\ndDogL2QnJyAvZXRjL2Nsb3VkL2Nsb3VkLmNmZzsgZWNobyAnJ2RhdGFzb3VyY2VfbGlzdDoKICBb\nIk5vQ2xvdWQiLCAiQ29uZmlnRHJpdmUiXScnID4+IC9ldGMvY2xvdWQvY2xvdWQuY2ZnJwo=\n'}}}, 'readonly': 'true', 'deviceId': 'f9e9575d-637b-434a-a07b-ff1fe55eb2d7', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'},
```

cdrom device:

```python
{'index': '0', 'iface': 'scsi', 'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': '841481fb-2ee1-4b11-87c6-4498e7342b79', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'},
```
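Note that both devices above claim the same drive slot (iface scsi, bus 0, controller 0, target 0, unit 0), which is exactly what QEMU later rejects as a duplicate drive ID. A minimal sketch of the kind of collision check involved (the helper name and shape are hypothetical, not actual vdsm or engine code):

```python
# Hypothetical duplicate-address check; illustrates the conflict only,
# this is not vdsm/ovirt-engine code.
def find_duplicate_drive_addresses(devices):
    """Return the set of drive-address keys claimed by more than one device."""
    seen = set()
    duplicates = set()
    for dev in devices:
        addr = dev.get('address', {})
        if addr.get('type') != 'drive':
            continue
        key = (dev.get('iface'),
               addr.get('bus'), addr.get('controller'),
               addr.get('target'), addr.get('unit'))
        if key in seen:
            duplicates.add(key)
        seen.add(key)
    return duplicates

# Trimmed-down versions of the two CD-ROM devices from the log above:
payload_cd = {'iface': 'scsi',
              'deviceId': 'f9e9575d-637b-434a-a07b-ff1fe55eb2d7',
              'address': {'bus': '0', 'controller': '0', 'type': 'drive',
                          'target': '0', 'unit': '0'}}
regular_cd = {'iface': 'scsi',
              'deviceId': '841481fb-2ee1-4b11-87c6-4498e7342b79',
              'address': {'bus': '0', 'controller': '0', 'type': 'drive',
                          'target': '0', 'unit': '0'}}
print(find_duplicate_drive_addresses([payload_cd, regular_cd]))
```

With the two device dicts from the log, the check reports one colliding slot, matching the `Duplicate ID 'drive-scsi0-0-0-0' for drive` error from QEMU.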
Looks like a duplicate of bug 1138314.

I thought so too, but this looks different: here, the cloud-init CD conflicts with the regular CD-ROM, and it is not clear why the index is sent as 3, when there is specific code for ppc to send index 0 for SCSI CD-ROMs. In bug 1138314 the problem is that cloud-init is implemented with a payload, so using both at once causes a conflict.

I tried to reproduce the bug using oVirt Engine 3.4.4 (the master branch of 2014-09-16, SHA-1 e8875730cfbbc9b910b986763bfea7290b280c46) and the bug did not occur. Are there any extra steps that I am missing to make this happen again?

Vitor: the arch is ppc, maybe that is related? If not, I can try to reproduce it today.

Vitor, I see the engine is sending the cloud-init payload. This is sent for VMs that are configured with a Linux OS and have cloud-init enabled in the Edit VM dialog. It is also sent only on the first run of the VM, which is why in the VM-pool use case it is always sent (the VMs are stateless, so every run is like the first run).

OK, I reproduced the bug. This was already fixed in change #25632, but it looks like that fix did not get into the 3.4 branch. I will backport it. However, there appears to be another bug as well, since the check for missing bootable disks should fail before the engine even tries to launch the VM with QEMU.

*** Bug 1147036 has been marked as a duplicate of this bug. ***

The fix is only missing for 3.4.z.

Verified in av12.2.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2014-1712.html
Created attachment 934836 [details]: libvirtd, vdsm, engine, qemu logs

Description of problem:
After a pool VM is created, its disk is removed and a user/admin tries to start the VM. The engine should not allow this and should print an error message such as "Cannot run VM without at least one bootable disk."

Version-Release number of selected component (if applicable):
av12_ppc

How reproducible:
100%

Steps to Reproduce:
1. Create a VM
2. Create a template
3. Create a pool of 1 VM
4. Remove the disk of the pool VM
5. Start the pool VM

Actual results:

```
libvirtError: internal error: process exited while connecting to monitor: qemu-system-ppc64: -drive file=/var/run/vdsm/payload/fb81e4fd-ca1f-454f-8eea-b0f9ce4f7ed5.6248c9716e7e39d950b15e580e046dc3.img,if=none,id=drive-scsi0-0-0-0,readonly=on,format=raw,serial=: Duplicate ID 'drive-scsi0-0-0-0' for drive
```

Expected results:
Starting the VM should be disabled in the engine.

Additional info:
engine.log, vdsm.log and libvirtd.log are attached; qemu-vm.log is empty.
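The expected behavior is that the engine's run-VM validation rejects the request before QEMU is ever invoked. A rough sketch of such a pre-start check (illustrative only; the function and field names are hypothetical and do not correspond to the actual ovirt-engine validation code):

```python
# Illustrative pre-start validation; names are hypothetical,
# this is not ovirt-engine code.
def can_run_vm(disks):
    """Return (ok, message) for a run-VM request given the VM's disk list."""
    bootable = [d for d in disks if d.get('bootable')]
    if not bootable:
        return False, "Cannot run VM without at least one bootable disk."
    return True, ""

# A pool VM whose only disk was removed should be refused up front:
print(can_run_vm([]))
# A VM with a bootable disk passes the check:
print(can_run_vm([{'id': 'disk-1', 'bootable': True}]))
```

Running this check before building the QEMU command line would surface the friendly error message instead of the libvirt `Duplicate ID` failure seen above.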