Bug 1361838

Summary: [Disk profile] Cannot add VM. Disk Profile YYY with id XXX is not assigned to Storage Domain ZZZ
Product: [oVirt] ovirt-engine Reporter: Israel Pinto <ipinto>
Component: BLL.Storage    Assignee: Yanir Quinn <yquinn>
Status: CLOSED CURRENTRELEASE QA Contact: Israel Pinto <ipinto>
Severity: high Docs Contact:
Priority: unspecified    
Version: 3.6.7    CC: bugs, dfediuck, eshenitz, ipinto, irosenzw, mavital, mgoldboi, nsimsolo, pzhukov, ratamir, rgolan, yquinn
Target Milestone: ovirt-4.0.4    Keywords: AutomationBlocker
Target Release: 4.0.4.4    Flags: rule-engine: ovirt-4.0.z+
rule-engine: ovirt-4.1+
rule-engine: blocker+
mgoldboi: planning_ack+
rgolan: devel_ack+
mavital: testing_ack+
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1364792 (view as bug list) Environment:
Last Closed: 2016-09-26 12:39:37 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: SLA RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1364792, 1376751    
Attachments:
Description          Flags
engine_logs          none
run_1_host_1_logs    none
run_1_host_2_logs    none
run_1_host_3_logs    none
engine.log           none

Comment 2 Israel Pinto 2016-07-31 09:03:03 UTC
Created attachment 1185997 [details]
engine_logs

Comment 3 Israel Pinto 2016-07-31 09:04:08 UTC
Created attachment 1185998 [details]
run_1_host_1_logs

Comment 4 Israel Pinto 2016-07-31 09:05:24 UTC
Created attachment 1185999 [details]
run_1_host_2_logs

Comment 5 Israel Pinto 2016-07-31 09:08:17 UTC
Created attachment 1186000 [details]
run_1_host_3_logs

Comment 6 Israel Pinto 2016-07-31 09:54:31 UTC
Description of problem:
Failed to create a VM from a template whose disks are copied to several storage domains, because the disk profile is not assigned.
As part of an automation test (VIRT) on 3.6.8 with a RHEL 7.3 host,
we see a failure where the disk profile is not assigned.
The iscsi_0 storage domain is in maintenance, and while creating a new VM from the template we get the failure:
"Cannot add VM. Disk Profile iscsi_0 with id 1d9fe23e-4188-426c-a3e8-61a0f1afdd79 is not assigned to Storage Domain nfs_2."


Version-Release number of selected component (if applicable):
RHEVM Version: 3.6.8.1-0.1.el6
Host:
OS Version:RHEL - 7.3 - 5.el7
Kernel Version:3.10.0 - 475.el7.x86_64
KVM Version:2.6.0 - 15.el7
LIBVIRT Version:libvirt-2.0.0-3.el7
VDSM Version:vdsm-4.17.33-1.el7ev
SPICE Version:0.12.4 - 18.el7

How reproducible:
All the time

Additional info:
Attaching logs

REST Request:

<vm>
    <name>memory_hotplug</name>
    <description>memory_hotplug</description>
    <os type="rhel_6x64"/>
    <cluster href="/api/clusters/33b6e0a6-e16a-4370-bb34-3a8ede7cc400" id="33b6e0a6-e16a-4370-bb34-3a8ede7cc400">
        <actions>
            <link href="/api/clusters/33b6e0a6-e16a-4370-bb34-3a8ede7cc400/resetemulatedmachine" rel="resetemulatedmachine"/>
        </actions>
        <name>golden_env_mixed_1</name>
        <description></description>
        <link href="/api/clusters/33b6e0a6-e16a-4370-bb34-3a8ede7cc400/networks" rel="networks"/>
        <link href="/api/clusters/33b6e0a6-e16a-4370-bb34-3a8ede7cc400/permissions" rel="permissions"/>
        <link href="/api/clusters/33b6e0a6-e16a-4370-bb34-3a8ede7cc400/glustervolumes" rel="glustervolumes"/>
        <link href="/api/clusters/33b6e0a6-e16a-4370-bb34-3a8ede7cc400/glusterhooks" rel="glusterhooks"/>
        <link href="/api/clusters/33b6e0a6-e16a-4370-bb34-3a8ede7cc400/affinitygroups" rel="affinitygroups"/>
        <link href="/api/clusters/33b6e0a6-e16a-4370-bb34-3a8ede7cc400/cpuprofiles" rel="cpuprofiles"/>
        <cpu id="Intel SandyBridge Family">
            <architecture>X86_64</architecture>
        </cpu>
        <data_center href="/api/datacenters/bf9ebc38-f0e3-49d0-90ea-815a04347a42" id="bf9ebc38-f0e3-49d0-90ea-815a04347a42"/>
        <memory_policy>
            <overcommit percent="200"/>
            <transparent_hugepages>
                <enabled>true</enabled>
            </transparent_hugepages>
        </memory_policy>
        <scheduling_policy href="/api/schedulingpolicies/5a2b0939-7d46-4b73-a469-e9c2c7fc6a53" id="5a2b0939-7d46-4b73-a469-e9c2c7fc6a53">
            <name>power_saving</name>
            <policy>power_saving</policy>
            <thresholds high="61" duration="240" low="21"/>
            <properties>
                <property>
                    <name>HighUtilization</name>
                    <value>61</value>
                </property>
                <property>
                    <name>CpuOverCommitDurationMinutes</name>
                    <value>4</value>
                </property>
                <property>
                    <name>LowUtilization</name>
                    <value>21</value>
                </property>
            </properties>
        </scheduling_policy>
        <version major="3" minor="6"/>
        <error_handling>
            <on_error>migrate</on_error>
        </error_handling>
        <virt_service>true</virt_service>
        <gluster_service>false</gluster_service>
        <threads_as_cores>false</threads_as_cores>
        <tunnel_migration>false</tunnel_migration>
        <trusted_service>false</trusted_service>
        <ha_reservation>false</ha_reservation>
        <optional_reason>false</optional_reason>
        <maintenance_reason_required>false</maintenance_reason_required>
        <ballooning_enabled>false</ballooning_enabled>
        <ksm>
            <enabled>false</enabled>
            <merge_across_nodes>true</merge_across_nodes>
        </ksm>
        <required_rng_sources/>
        <fencing_policy>
            <enabled>true</enabled>
            <skip_if_sd_active>
                <enabled>false</enabled>
            </skip_if_sd_active>
            <skip_if_connectivity_broken>
                <enabled>false</enabled>
                <threshold>50</threshold>
            </skip_if_connectivity_broken>
        </fencing_policy>
        <migration>
            <auto_converge>inherit</auto_converge>
            <compressed>inherit</compressed>
        </migration>
    </cluster>
    <display>
        <type>spice</type>
    </display>
    <template id="fee2c352-ef8b-47ed-8a1c-bbbb7c08a982"/>
</vm>

Response:

<fault>
<reason>Operation Failed</reason>
<detail>[Cannot add VM. Disk Profile iscsi_0 with id 1d9fe23e-4188-426c-a3e8-61a0f1afdd79 is not assigned to Storage Domain nfs_2.]</detail>
</fault>

Comment 7 Eyal Shenitzky 2016-08-01 08:31:53 UTC
Referencing a bug about the same error that was closed because it could not be reproduced manually, only through automation.

https://bugzilla.redhat.com/show_bug.cgi?id=1311610

Comment 8 Yanir Quinn 2016-08-01 10:01:06 UTC
Problem cause:

A. When a template has a disk that is allocated to more than one storage domain,
the template will contain, for that disk image:
 - A list of storage domains
 - A list of disk profiles, one belonging to each of those storage domains

B. Creating a VM from that template via REST will:
 - Select the first storage domain from the list
 - Select the first disk profile from the list

Note: VM creation via the UI selects a specific storage domain and a specific disk profile associated with that storage domain.
There is no option to leave these values empty, unlike a REST call, where these values can be omitted.

Consequence:
If the selected disk profile is not associated with the selected storage domain, the VM will not be created due to this inconsistency.
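
For illustration, the mismatch can be avoided by explicitly pinning a matching storage domain / disk profile pair per disk in the request, instead of relying on the defaults. A minimal sketch against the v3 REST API is shown below; all disk, storage domain and disk profile IDs are placeholders, and whether the <disk_profile> reference is accepted inside the <disks> override (and for thin-provisioned VMs in particular) is an assumption that should be verified against the API documentation.

<vm>
    <name>memory_hotplug</name>
    <cluster id="33b6e0a6-e16a-4370-bb34-3a8ede7cc400"/>
    <template id="fee2c352-ef8b-47ed-8a1c-bbbb7c08a982"/>
    <!-- Override the template disk placement so that the storage domain and
         the disk profile are chosen as a matching pair, rather than taking
         index 0 of two independent lists. All IDs below are placeholders. -->
    <disks>
        <disk id="TEMPLATE-DISK-ID">
            <storage_domains>
                <storage_domain id="NFS-2-STORAGE-DOMAIN-ID"/>
            </storage_domains>
            <!-- Assumption: a disk profile that belongs to the storage domain above -->
            <disk_profile id="NFS-2-DISK-PROFILE-ID"/>
        </disk>
    </disks>
</vm>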

Comment 9 Raz Tamir 2016-08-01 11:53:12 UTC
See also https://bugzilla.redhat.com/show_bug.cgi?id=1360355

Comment 10 Yanir Quinn 2016-09-08 10:11:08 UTC
Disk profile filtering according to user_disk_profile_permissions_view in the DB is wrong, causing a regression when adding new disks to VMs with non-default disk profiles.
The fix will be added on this BZ.

Comment 11 Nisim Simsolo 2016-09-11 15:11:19 UTC
The same issue occurred on my setup after upgrading from ovirt-engine-4.0.3.1-0.1.el7ev to ovirt-engine-4.0.4.1-0.1.el7ev.
This issue also affects import of VMs from external providers.
engine.log attached.

Comment 12 Nisim Simsolo 2016-09-11 15:13:45 UTC
Created attachment 1199884 [details]
engine.log

Comment 13 Yanir Quinn 2016-09-13 06:38:55 UTC
Target milestone is 4.0.5.

Comment 14 Nisim Simsolo 2016-09-13 07:20:22 UTC
A workaround for this issue is to assign the DiskCreator role on the master storage domain.
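
For reference, the role can be assigned through the storage domain's permissions sub-collection. A minimal sketch (v3 REST API, placeholder IDs; the exact element names should be checked against the API documentation):

POST /api/storagedomains/MASTER-STORAGE-DOMAIN-ID/permissions

<permission>
    <role>
        <name>DiskCreator</name>
    </role>
    <!-- Placeholder: the user that creates the VMs / disks -->
    <user id="USER-ID"/>
</permission>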

Comment 15 Tal Nisan 2016-09-22 11:10:37 UTC
*** Bug 1377696 has been marked as a duplicate of this bug. ***

Comment 16 Israel Pinto 2016-09-25 07:46:55 UTC
Verified with:
Engine: 4.0.4.4-0.1.el7ev
Host:
OS Version:RHEL - 7.2 - 13.0.el7ev
OS Description:Red Hat Enterprise Linux Server 7.2 (Maipo)
Kernel Version:3.10.0 - 327.36.1.el7.x86_64
KVM Version:2.3.0 - 31.el7_2.21
LIBVIRT Version:libvirt-1.2.17-13.el7_2.5
VDSM Version:vdsm-4.18.13-1.el7ev
SPICE Version:0.12.4 - 15.el7_2.2

Steps:
1. Create a template with a disk on the master storage domain and copy its disk to more than one storage domain
2. Put the master storage domain into maintenance
3. Create a VM from the template via the UI
4. Create a VM from the template via REST without setting a disk profile

Results:
The VM is created in both scenarios - PASS

Comment 17 Pavel Zhukov 2016-09-26 14:20:45 UTC
*** Bug 1379333 has been marked as a duplicate of this bug. ***

Comment 18 Tal Nisan 2016-09-27 08:24:03 UTC
*** Bug 1379333 has been marked as a duplicate of this bug. ***