Bug 857690

Summary: [RHEVM-ENGINE] VM created from template with "Bad volume specification"
Product: Red Hat Enterprise Virtualization Manager Reporter: Barak Dagan <bdagan>
Component: ovirt-engine    Assignee: Federico Simoncelli <fsimonce>
Status: CLOSED CURRENTRELEASE QA Contact: Gadi Ickowicz <gickowic>
Severity: urgent Docs Contact:
Priority: high    
Version: 3.1.0    CC: acathrow, adarazs, amureini, dyasny, ecohen, fsimonce, iheim, jrd, jvlcek, lpeer, mfojtik, michal.skrivanek, nlevinki, pstehlik, Rhev-m-bugs, tjelinek, yeylon, ykaul
Target Milestone: ---   
Target Release: 3.1.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard: virt
Fixed In Version: si21 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2012-12-04 20:06:49 UTC Type: Bug
Attachments:
vdsm log, VM id = 433f273d-44ce-4fdc-8faf-5a73292a8f72
engine log

Description Barak Dagan 2012-09-16 09:31:45 UTC
Created attachment 613385 [details]
vdsm log, VM id = 433f273d-44ce-4fdc-8faf-5a73292a8f72

Description of problem:
Creating a new VM from an existing template (created from the UI) seems to work.

[RHEVM shell (connected)]# create vm --name fire-rh2-cli --cluster-name Default --template-name fire-rh-template --memory 4294967296

id                        : 433f273d-44ce-4fdc-8faf-5a73292a8f72
name                      : fire-rh2-cli
cluster-id                : 99408929-82cf-4dc7-a532-9d998063fa95
cpu-topology-cores        : 1
cpu-topology-sockets      : 1
creation_status-state     : pending
creation_time             : 2012-09-16T10:37:03.451+03:00
display-allow_reconnect   : False
display-monitors          : 1
display-type              : spice
high_availability-enabled : False
high_availability-priority: 1
memory                    : 4294967296
memory_policy-guaranteed  : 4294967296
origin                    : ovirt
os-boot-dev               : hd
os-type                   : rhel_6x64
placement_policy-affinity : migratable
quota-id                  : 00000000-0000-0000-0000-000000000000
stateless                 : False
status-state              : image_locked
template-id               : b71a32a9-5443-4961-bfb3-67f9028142d1
type                      : server
usb-enabled               : False


But when I try to activate the VM, I get the following error (in the UI):
	
2012-Sep-16, 10:43:49 Failed to run VM fire-rh2-cli on Host puma32.
	
2012-Sep-16, 10:43:49 VM fire-rh2-cli is down. Exit message: Bad volume specification {'index': '2', 'iface': 'ide', 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': '18499135-e4c7-466b-a0d8-61d80737fee6', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}.
	
2012-Sep-16, 10:43:48 VM fire-rh2-cli was started by vdcadmin (Host: puma32).
2012-Sep-16, 10:37:11 VM fire-rh2-cli creation has been completed.
2012-Sep-16, 10:37:04 VM fire-rh2-cli creation was initiated by admin@internal.
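
A minimal illustration of the failure above (not VDSM's actual code), assuming the hypervisor side requires a top-level 'path' entry on every drive, even when it is empty:

# Hedged sketch: why a cdrom drive dict with no top-level 'path' key is
# rejected as a "bad volume specification". Assumes 'path' is mandatory.

def validate_drive(spec):
    """Reject drive specs that lack a top-level 'path' entry."""
    if spec.get('device') == 'cdrom' and 'path' not in spec:
        raise ValueError("Bad volume specification %r" % spec)
    return spec

# The failing device from the event log above: the path appears only
# inside 'specParams', so the top-level 'path' key is missing.
broken = {'index': '2', 'iface': 'ide', 'specParams': {'path': ''},
          'readonly': 'true', 'device': 'cdrom', 'type': 'disk'}

validate_drive(dict(broken, path=''))   # accepted: empty path means no media
# validate_drive(broken)                # raises ValueError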



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:


Additional info:

Comment 1 Ayal Baron 2012-09-19 08:42:19 UTC
Please attach engine log as well (always).
Please specify reproduction flow (in steps to reproduce) and how reproducible this is.

Comment 2 Barak Dagan 2012-09-19 09:43:36 UTC
Created attachment 614291 [details]
engine log

Comment 3 Barak Dagan 2012-09-19 10:51:52 UTC
(In reply to comment #1)
> Please attach engine log as well (always).
> Please specify reproduction flow (in steps to reproduce) and how
> reproducible this is.

It is 100% reproducible.
1) Create a VM template from the UI (I used a rhel6 template).
2) Create a new VM from the template, using the following CLI command:
create vm --name new_name --cluster-name Default --template-name UI_template --memory 4294967296
3) Activate the VM through the UI, CLI, etc.

Comment 4 Federico Simoncelli 2012-09-19 15:22:27 UTC
For some reason the (empty?) 'path' wasn't present in the cdrom specification. I suppose it's related to the previously failing getIsoList commands (Permission Denied, probably the usual NFSv4 nobody:nobody issue). Still investigating.

Comment 6 Michal Fojtik 2012-09-20 14:33:54 UTC
*** Bug 859062 has been marked as a duplicate of this bug. ***

Comment 7 Federico Simoncelli 2012-09-20 14:54:21 UTC
It looks like some flows set vm_static.iso_path to NULL (VM => template, for instance). I think we can try to identify those flows and modify them to use an empty string ("") instead. In general, though, I think we should live with the fact that it can be NULL (I prefer it to an empty string); this patch should fix the issue:

http://gerrit.ovirt.org/8092
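
A rough sketch of the approach described above, in Python for illustration only (the actual fix is in the engine's Java code, see the gerrit change); the function and field names are assumptions:

# Hedged sketch: tolerate a NULL (None) iso_path when building the cdrom
# device, instead of forcing every flow to store "". Names are illustrative,
# not the engine's real code.

def build_cdrom_device(iso_path):
    """Build a cdrom drive dict; None and "" both mean 'no media'."""
    return {
        'device': 'cdrom',
        'iface': 'ide',
        'readonly': 'true',
        'type': 'disk',
        'path': iso_path or '',   # None -> ''
    }

# A template-created VM whose vm_static.iso_path ended up NULL still
# produces a well-formed device with an empty path:
print(build_cdrom_device(None))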

Comment 9 Tomas Jelinek 2012-10-05 09:24:58 UTC
Another way to fix this issue would be to modify the API directly to send an empty string instead of null (the same way the GWT frontend does).

http://gerrit.ovirt.org/#/c/8376/

What do you think?
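
A rough sketch of this alternative, again in illustrative Python rather than the engine's Java: normalize null to "" once at the API boundary, before the value is persisted.

# Hedged sketch of the alternative above: coerce a missing/None iso_path
# to "" at write time, mirroring what the GWT frontend does. The function
# and parameter names are assumptions, not the engine's real API.

def normalize_iso_path(params):
    """Replace a missing/None iso_path with an empty string before saving."""
    normalized = dict(params)
    if normalized.get('iso_path') is None:
        normalized['iso_path'] = ''
    return normalized

# A create-from-template request that omits the ISO then always stores ""
# instead of NULL:
print(normalize_iso_path({'name': 'fire-rh2-cli', 'iso_path': None}))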

Comment 10 Federico Simoncelli 2012-10-05 15:37:59 UTC
(In reply to comment #9)
> What do you think?

In a situation where an empty string has a specific meaning, using NULL in the db would be the correct solution. Here it doesn't make a difference (an empty name for an iso image is invalid), except for the fact that the column default value is not an empty string but NULL.

All things considered, I prefer dealing with the fact that it could be NULL rather than trying to force an empty string in all possible scenarios.

Comment 11 Tomas Jelinek 2012-10-08 07:04:49 UTC
> All things considered, I prefer dealing with the fact that it could be NULL
> rather than trying to force an empty string in all possible scenarios.
Fair enough - abandoning change. It was just an idea and one line of code :)

Comment 12 Michal Skrivanek 2012-10-15 08:23:33 UTC
merged upstream: 6aff51e3882761ea170d029b22a2f3832dc2d96d

Comment 15 jrd 2012-10-17 18:57:05 UTC
If I'm reading this correctly, the effect of this change is to cause the existing DC api call to VDSM to work.  IOW, no changes to DC are required to track this.  Is that correct?

Comment 16 Michal Fojtik 2012-10-22 09:59:22 UTC
(In reply to comment #15)
> If I'm reading this correctly, the effect of this change is to cause the
> existing DC api call to VDSM to work.  IOW, no changes to DC are required to
> track this.  Is that correct?

Yes, DC now works fine with this patch, meaning we can 'start' the machine we created using the API.

Comment 17 Attila Darazs 2012-10-26 09:08:00 UTC
VMs created from a template via both the CLI and the UI booted up fine. Verified in SI22.1.