Bug 1590866 - SDK allows to create template in one DC with disk in another DC
Summary: SDK allows to create template in one DC with disk in another DC
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: RestAPI
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.4.0
Assignee: Fedor Gavrilov
QA Contact: Petr Kubica
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-06-13 14:32 UTC by Evgheni Dereveanchin
Modified: 2020-05-20 20:01 UTC
5 users

Fixed In Version: ovirt-engine-4.3.5.3
Clone Of:
Environment:
Last Closed: 2020-05-20 20:01:03 UTC
oVirt Team: Storage
Embargoed:
pm-rhel: ovirt-4.4+
rbarry: ovirt-4.5?


Attachments


Links
oVirt gerrit 100462 (master, MERGED): core: validate storage domain belongs to the DC on template import (last updated 2020-03-23 19:10:49 UTC)
oVirt gerrit 101410 (ovirt-engine-4.3, MERGED): core: validate storage domain belongs to the DC on template import (last updated 2020-03-23 19:10:49 UTC)
oVirt gerrit 102717 (master, MERGED): core: validate SD and cluster belong to same DC on repo image import (last updated 2020-03-23 19:10:49 UTC)

Description Evgheni Dereveanchin 2018-06-13 14:32:14 UTC
Description of problem:
I hit an issue today when using Ansible to manage oVirt templates, caused by a typo in my playbook: I effectively created a template in one datacenter with a disk in another one.

Version-Release number of selected component (if applicable):
python-ovirt-engine-sdk4-4.2.6-2.el7.centos.x86_64
ansible-2.5.3-1.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. set up oVirt 4.2 with two datacenters (local storage in my case): 
dc1 (master storage domain dc1_local, cluster dc1)
dc2 (master storage domain dc2_local, cluster dc2)
2. create an ansible playbook to import a template from Glance and specify a cluster: dc1 and storage_domain: dc2_local
3. run the playbook

Actual results:
Playbook runs fine, resulting in a template in dc1 whose disk resides in dc2

Expected results:
Playbook errors out as there is no storage domain dc2_local in dc1

Additional info:
The problem may be much deeper since the ovirt-engine API should not allow for such a template to be defined. May need to re-assign this to the proper component after finding the root cause.

Here's the playbook snippet I used:
    - name: Import glance image as template
      ovirt_templates:
        auth: "{{ ovirt_auth }}"
        state: imported
        name: fc28-cloud-test
        image_disk: "Fedora 28 Cloud Base Image v1.1 for x86_64"
        template_image_disk_name: fc28-cloud-sda
        image_provider: ovirt-image-repository
        storage_domain: "dc2_local"
        cluster: "dc1"

Also a note: the template now cannot be deleted, since the DC it is assigned to obviously cannot remove a disk that lives on another DC's storage.
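For illustration only, a minimal sketch (plain Python, not actual oVirt engine code; names are hypothetical) of why removal gets stuck: template removal requires every disk's storage domain to be attached to the template's own data center, which cannot hold for a cross-DC disk.

```python
# Hypothetical sketch of the removal precondition: a template is removable
# only if every one of its disks sits on a storage domain attached to the
# template's own data center.

def can_remove_template(template_dc_id, disk_sd_dc_ids):
    """Return True only if each disk's storage domain lives in the
    template's data center (disk_sd_dc_ids is one DC id per disk)."""
    return all(dc_id == template_dc_id for dc_id in disk_sd_dc_ids)
```

With the reproducer above (template in dc1, single disk on dc2_local), `can_remove_template("dc1", ["dc2"])` is False, so the engine has no valid path to delete the disk.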

Comment 1 Ondra Machacek 2019-04-17 12:06:03 UTC
The problem is in ImportRepoImageCommand::validate. We need to check there whether the specified storage domain is in the specified datacenter, and fail if it is not.
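For illustration, a minimal Python sketch of the missing check (the real fix belongs in the engine's Java ImportRepoImageCommand::validate; the dict-based entity shapes here are hypothetical stand-ins for the engine's business entities):

```python
# Hypothetical validation sketch: reject a repo-image import when the target
# storage domain is not attached to the same data center as the target cluster.

def validate_import(storage_domain, cluster):
    """Return (ok, message). Both arguments are plain dicts with 'name'
    and 'data_center_id' keys, standing in for engine entities."""
    if storage_domain["data_center_id"] != cluster["data_center_id"]:
        return (False,
                "Storage domain '{}' is not attached to the data center of "
                "cluster '{}'".format(storage_domain["name"], cluster["name"]))
    return (True, "")
```

With the reporter's playbook values (storage_domain dc2_local in dc2, cluster dc1 in dc1), this check would fail the import instead of silently creating a cross-DC template.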

Comment 2 Petr Kubica 2019-07-17 11:52:07 UTC
It's still possible to create a template in <datacenter_1> with a disk on a storage domain in <datacenter_2>.

Tested:
python-ovirt-engine-sdk4-4.3.2-1.el7ev.x86_64 (older, but I don't think the SDK version matters) and 4.3.5.4-0.1.el7

Used the Ansible task mentioned in comment #0.

It's also not possible to delete the template:
fc28-cloud-test: Cannot remove Template. Storage Domain doesn't exist.

Comment 3 RHEL Program Management 2019-07-17 11:52:08 UTC
Target release should be set once a package build is known to fix an issue. Since this bug is not in the MODIFIED state, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 4 Sandro Bonazzola 2020-05-20 20:01:03 UTC
This bug is included in the oVirt 4.4.0 release, published on May 20th 2020.

Since the problem described in this bug report should be resolved in the oVirt 4.4.0 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

