Description of problem:
Have an environment with two clusters (cluster_1, cluster_2). All templates are placed on cluster_1, so when I try to create a new VM from a template on cluster_2, the operation fails with the error:

Status: 400
Reason: Bad Request
Detail: [Cannot add VM. CPU Profile doesn't match provided Cluster.]

Version-Release number of selected component (if applicable):
rhevm-3.5.0-0.23.beta.el6ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Set up an environment with two clusters
2. Create a template from a VM on cluster_1
3. Create a new VM from the template on cluster_2

Actual results:
Operation fails with the error above.

Expected results:
Operation succeeds, without any errors.

Additional info:
My template:

<template href="/ovirt-engine/api/templates/366f1e3c-289f-4fc4-82d6-1872594fde82" id="366f1e3c-289f-4fc4-82d6-1872594fde82">
  <actions>
    <link href="/ovirt-engine/api/templates/366f1e3c-289f-4fc4-82d6-1872594fde82/export" rel="export"/>
  </actions>
  <name>clean_template</name>
  <link href="/ovirt-engine/api/templates/366f1e3c-289f-4fc4-82d6-1872594fde82/disks" rel="disks"/>
  <link href="/ovirt-engine/api/templates/366f1e3c-289f-4fc4-82d6-1872594fde82/nics" rel="nics"/>
  <link href="/ovirt-engine/api/templates/366f1e3c-289f-4fc4-82d6-1872594fde82/cdroms" rel="cdroms"/>
  <link href="/ovirt-engine/api/templates/366f1e3c-289f-4fc4-82d6-1872594fde82/tags" rel="tags"/>
  <link href="/ovirt-engine/api/templates/366f1e3c-289f-4fc4-82d6-1872594fde82/permissions" rel="permissions"/>
  <link href="/ovirt-engine/api/templates/366f1e3c-289f-4fc4-82d6-1872594fde82/watchdogs" rel="watchdogs"/>
  <type>server</type>
  <status>
    <state>ok</state>
  </status>
  <memory>1073741824</memory>
  <cpu>
    <topology sockets="1" cores="1"/>
    <architecture>X86_64</architecture>
  </cpu>
  <cpu_shares>0</cpu_shares>
  <bios>
    <boot_menu>
      <enabled>false</enabled>
    </boot_menu>
  </bios>
  <os type="other">
    <boot dev="hd"/>
  </os>
  <cluster href="/ovirt-engine/api/clusters/08ea0169-ae42-4e1b-bb20-839a66cd2010" id="08ea0169-ae42-4e1b-bb20-839a66cd2010"/>
  <creation_time>2014-12-09T15:01:34.047+02:00</creation_time>
  <origin>ovirt</origin>
  <high_availability>
    <enabled>false</enabled>
    <priority>1</priority>
  </high_availability>
  <display>
    <type>spice</type>
    <monitors>1</monitors>
    <single_qxl_pci>false</single_qxl_pci>
    <allow_override>true</allow_override>
    <smartcard_enabled>false</smartcard_enabled>
    <file_transfer_enabled>true</file_transfer_enabled>
    <copy_paste_enabled>true</copy_paste_enabled>
  </display>
  <stateless>false</stateless>
  <delete_protected>false</delete_protected>
  <sso>
    <methods>
      <method id="GUEST_AGENT"/>
    </methods>
  </sso>
  <timezone>Etc/GMT</timezone>
  <usb>
    <enabled>false</enabled>
  </usb>
  <migration_downtime>-1</migration_downtime>
  <cpu_profile href="/ovirt-engine/api/cpuprofiles/69d5efc1-9dc0-46f3-84a4-37df7f12d432" id="69d5efc1-9dc0-46f3-84a4-37df7f12d432"/>
</template>

The create-VM request looks like:

<vm>
  <name>clone</name>
  <cluster href="/api/clusters/67866b36-fd68-4106-8758-34cf31b0c3d4" id="67866b36-fd68-4106-8758-34cf31b0c3d4">
    <name>cl_35_amd</name>
  </cluster>
  <template id="366f1e3c-289f-4fc4-82d6-1872594fde82"/>
</vm>
This happens because the details of the VM, including the disk profile, are taken from the template, and the disk profile in the template is only valid for the template's cluster. In a previous similar discussion (see bug 1158458) it was decided that rejecting this is correct behavior and that it won't be changed, so this should probably be closed as WONTFIX. However, I leave that decision to the SLA team. As a workaround, you can create the VM initially in the same cluster as the template and then, with an additional request, change only the cluster.
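The two-request workaround above can be sketched as follows. This is a minimal illustration, not the engine's own code: it only builds the two XML bodies that would be sent to the oVirt REST API (a POST to /ovirt-engine/api/vms to create the VM in the template's cluster, then a PUT to /ovirt-engine/api/vms/<vm_id> to change only the cluster). The IDs are taken from the XML in this report; the function names and the choice of Python's xml.etree are my own.

```python
# Sketch of the workaround: first create the VM in the template's own
# cluster, then update it with a second request that changes only the
# cluster. Bodies only; actually sending them (auth, engine URL) is omitted.
import xml.etree.ElementTree as ET

TEMPLATE_ID = "366f1e3c-289f-4fc4-82d6-1872594fde82"
TEMPLATE_CLUSTER_ID = "08ea0169-ae42-4e1b-bb20-839a66cd2010"  # template's cluster
TARGET_CLUSTER_ID = "67866b36-fd68-4106-8758-34cf31b0c3d4"    # cl_35_amd

def create_vm_body(name: str, cluster_id: str, template_id: str) -> str:
    """Body for POST /ovirt-engine/api/vms, targeting the template's cluster."""
    vm = ET.Element("vm")
    ET.SubElement(vm, "name").text = name
    ET.SubElement(vm, "cluster", id=cluster_id)
    ET.SubElement(vm, "template", id=template_id)
    return ET.tostring(vm, encoding="unicode")

def change_cluster_body(cluster_id: str) -> str:
    """Body for PUT /ovirt-engine/api/vms/<vm_id>, changing only the cluster."""
    vm = ET.Element("vm")
    ET.SubElement(vm, "cluster", id=cluster_id)
    return ET.tostring(vm, encoding="unicode")

# Step 1: create "clone" in the cluster where the template's profile is valid.
step1 = create_vm_body("clone", TEMPLATE_CLUSTER_ID, TEMPLATE_ID)
# Step 2: move the new VM to the desired cluster with a second request.
step2 = change_cluster_body(TARGET_CLUSTER_ID)
```

Because the create request names the template's own cluster, the profile check that produced the 400 error in this report does not trigger; the cluster is then changed in a separate update.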
The fix for this bug has been included upstream since ovirt-engine-3.6.0_alpha1, so I'm moving it to ON_QA.
Hey Roy, I've updated the doc text. Please let me know if it is correct. Kind regards, Julie
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2016-0376.html