Bug 1418758 - [ovirt-ansible-modules] adding disk to VM as bootable fail when there is already disk with the same name
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: ovirt-distribution
Classification: oVirt
Component: General
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
: ---
Assignee: Ondra Machacek
QA Contact: Petr Kubica
URL:
Whiteboard:
Depends On:
Blocks: 1405975
 
Reported: 2017-02-02 16:26 UTC by Petr Kubica
Modified: 2017-05-11 09:25 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-10 10:46:09 UTC
oVirt Team: Infra


Attachments
ansible playbook (3.30 KB, text/plain), 2017-02-02 16:26 UTC, Petr Kubica

Description Petr Kubica 2017-02-02 16:26:15 UTC
Created attachment 1247153 [details]
ansible playbook

Description of problem:
Adding a disk to a VM as bootable fails when a disk with the same name already exists.

It is unclear what should happen:

- name: Create disk 1
  ovirt_disks:
    name: myvm_disk1
    vm_name: myvm
    size: 20GiB
    format: cow
    interface: virtio
    bootable: True
    storage_domain: s1
    auth: "{{ ovirt_auth }}"

first case:
- there isn't any disk named myvm_disk1 in storage domain s1
- so create it and attach it to VM myvm

second case:
- a disk named myvm_disk1 already exists in storage domain s1 (as a template disk or a disk of another VM), but no disk with that name is attached to VM myvm
- so create a new disk and attach it to VM myvm?
!!OR!!
- try to attach any? all? disks of that name to the VM?

third case:
- a disk named myvm_disk1 already exists in storage domain s1 (as a template disk or a disk of another VM), and another disk with that name is attached to VM myvm
- so, based on Ansible's declarative model, should the module just check that a disk with that name is already attached to VM myvm?
!!OR!!
- try to attach any? all? disks of that name which aren't yet attached to VM myvm?

- This case is described in the attached playbook.yml - run it twice

Version-Release number of selected component (if applicable):
4.1.0

How reproducible:
always with the provided playbook

Steps to Reproduce:
1. Run attached playbook twice

Actual results:
Failed with "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach Virtual Disk. Disk myvm_disk1 in VM myvm is already marked as boot.]\". HTTP response code is 409."

Additional info:
The attached playbook fails on its second run, but it should pass with all tasks reporting ok.

Comment 1 Petr Kubica 2017-02-02 16:41:04 UTC
Maybe in the third case a "force" parameter is missing, to force creation of another disk for that VM.
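A rough sketch of what such a task could look like. Note "force" is NOT an existing ovirt_disks parameter at the time of this report; it only illustrates the behaviour proposed in this comment:

```yaml
# Hypothetical sketch: "force" does not exist in ovirt_disks here,
# it stands for the proposed "always create a new disk" behaviour.
- name: Create disk 1, even if a disk of that name already exists
  ovirt_disks:
    name: myvm_disk1
    vm_name: myvm
    size: 20GiB
    format: cow
    interface: virtio
    bootable: True
    storage_domain: s1
    force: True
    auth: "{{ ovirt_auth }}"
```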

Comment 2 Oved Ourfali 2017-02-03 07:37:44 UTC
Ansible is declarative, so I guess that it doesn't create the disk as it exists already, which is normal.

But, I'll let Ondra reply. 
Anyhow I don't see any high severity here.

Comment 3 Yaniv Kaul 2017-02-03 11:54:05 UTC
Why not open the bug upstream, btw?

Comment 4 Ondra Machacek 2017-02-03 18:12:33 UTC
I think we should implement a force parameter. That's, in my opinion, all we can do,
as the user must write the playbook properly and work with IDs to reliably work
with disks.
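A sketch of the ID-based approach described above, assuming the module returns the created disk's ID in its registered result and accepts an "id" parameter (both are assumptions about the module's interface, not confirmed in this report):

```yaml
- name: Create disk 1
  ovirt_disks:
    name: myvm_disk1
    vm_name: myvm
    size: 20GiB
    format: cow
    interface: virtio
    bootable: True
    storage_domain: s1
    auth: "{{ ovirt_auth }}"
  register: disk1

# On later runs, address the disk by its ID instead of its (ambiguous) name.
# Assumption: the module accepts an "id" parameter and "disk1.id" holds the ID.
- name: Ensure the same disk stays attached to myvm
  ovirt_disks:
    id: "{{ disk1.id }}"
    vm_name: myvm
    auth: "{{ ovirt_auth }}"
```

Addressing the disk by ID avoids the ambiguity of the second and third cases entirely, since disk names are not unique within a storage domain.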

Comment 5 Ondra Machacek 2017-02-10 10:46:09 UTC
As agreed, please open this issue on GitHub.

