Bug 2016638 - Document that "vm_name" parameter should be specified for Ansible "ovirt_disk" extend attached disk flow
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-ansible-collection
Version: 4.4.8
Hardware: x86_64
OS: All
Priority: low
Severity: low
Target Milestone: ovirt-4.5.2
Target Release: ---
Assignee: Pavel Bar
QA Contact: Barbora Dolezalova
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-10-22 13:24 UTC by Sam Wachira
Modified: 2022-10-12 10:02 UTC (History)
6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
"ovirt_disk" module documentation was updated by adding the following user tip: "If the disk is referenced by name and is attached to a VM, make sure to specify vm_name/vm_id to prevent extension of another disk that is not attached to the VM." So IMHO no release documentation / release notes are required.
Clone Of:
Environment:
Last Closed: 2022-09-08 11:29:03 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github oVirt ovirt-ansible-collection pull 559 0 None Merged Improving the "ovirt_disk" documentation for disk extend flow 2022-07-13 11:13:48 UTC
Red Hat Issue Tracker RHV-43869 0 None None None 2021-10-22 13:25:33 UTC
Red Hat Product Errata RHBA-2022:6394 0 None None None 2022-09-08 11:29:13 UTC

Description Sam Wachira 2021-10-22 13:24:33 UTC
Description of problem:
- The ovirt_disk Ansible module uses the 'size' parameter both to create new disks and increase the size of existing disks.
- In a scenario where an existing disk already attached to a virtual machine needs to be increased, specifying a larger size is accepted but the disk is not expanded if the 'vm_name' parameter is not defined.
- Increasing the size of an existing disk is successful if the 'vm_name' is defined in the play.

Version-Release number of selected component (if applicable):


How reproducible:
Fully

Steps to Reproduce:
In this example, the disk was 2GiB in size and needed to be increased to 3GiB.

[1] Write an Ansible play using the ovirt_disk Ansible module to extend a disk already attached to a VM, excluding the 'vm_name' parameter.
~~~
   - name: Extend virtual disk
     ovirt_disk:
       auth: "{{ ovirt_auth }}"
       name: myvm_Disk2
       storage_domain: vmstore1
       interface: virtio_scsi
       size: 3GiB
~~~

[2] Write an Ansible play using the ovirt_disk Ansible module to extend a disk already attached to a VM, excluding the 'vm_name' parameter and including the 'state' parameter.
~~~
   - name: Extend virtual disk
     ovirt_disk:
       auth: "{{ ovirt_auth }}"
       name: myvm_Disk2
       storage_domain: vmstore1
       interface: virtio_scsi
       size: 3GiB
       state: attached
~~~

[3] Write an Ansible play using the ovirt_disk Ansible module to extend a disk already attached to a VM, including the 'vm_name' parameter.
~~~
   - name: Extend virtual disk
     ovirt_disk:
       auth: "{{ ovirt_auth }}"
       name: myvm_Disk2
       storage_domain: vmstore1
       interface: virtio_scsi
       size: 3GiB
       vm_name: myvm
~~~

[4] Run the plays.

Actual results:
- Play [1] runs successfully but disk is not extended.
- Play [2] runs successfully but disk is not extended.
- Play [3] runs successfully and disk is extended.

Expected results:
- If it is mandatory to define the 'vm_name' parameter when extending a disk, an informational message stating this requirement should be displayed.
- Increasing the size of an existing disk should succeed without the 'vm_name' parameter.

Additional info:
- Documentation for the 'size' and 'vm_name' parameters needs to be updated to include this information (https://console.redhat.com/ansible/automation-hub/repo/published/redhat/rhv/content/module/ovirt_disk)

Comment 3 Arik 2022-05-24 12:51:24 UTC
It was confirmed that adding the vm_name makes it work, so there is a workaround and we don't see much value in fixing this.

Comment 4 Sam Wachira 2022-05-24 14:13:21 UTC
Hi Arik,

A workaround is not a fix.

If this won't be fixed, the least that can be done is updating the module docs so users know that both 'vm_name' and 'size' must be defined.
(https://console.redhat.com/ansible/automation-hub/repo/published/redhat/rhv/content/module/ovirt_disk)

Comment 5 Arik 2022-05-25 10:27:22 UTC
Sure, we can update the documentation to reflect this

Comment 6 Pavel Bar 2022-07-08 16:00:50 UTC
After some local testing on my side I see quite different behavior:
Disks (tested NFS & iSCSI) can be successfully extended *without* "vm_name" or "vm_id" specified, and with no errors, in all of these scenarios:
1) Disks are not attached to any VM.
2) Disks are attached to a non-running VM.
3) Disks are attached to a running VM (should this be allowed?).

The problems actually start to happen after "vm_name" or "vm_id" are specified...

1) Disk is attached to a VM, but a wrong (non-existing) "vm_id" is specified (same behavior for both running and non-running VMs).
Result: Ansible script finishes with an error, "changed=0".
BUT a new disk with the same name is created and is not attached to any VM.
TASK [Extend virtual disk 'Disk4_iSCSI_VM1'] *******************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Cannot attach Virtual Disk. VM is not found.]". HTTP response code is 400.
fatal: [engine1]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach Virtual Disk. VM is not found.]\". HTTP response code is 400."}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
engine1                    : ok=3    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

Note: wrong "vm_name" returns an error as one would expect:
  An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Exception: Entity 'VM2' was not found.

2) Trying to extend the disk and/or change the "interface" type (e.g., from "virtio" to "virtio_scsi") while the disk is attached to a running VM.
Results:
  a) Correct "vm_name" or "vm_id" is specified:
    i) Extend disk only - the disk is extended successfully, no error. Should this work?
    ii) Change "interface" type only - "interface" doesn't change and Ansible script finishes with an error, "changed=0".
TASK [Extend virtual disk 'Disk2_NFS_VM1'] *********************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Cannot edit Virtual Disk. At least one of the VMs is not down.]". HTTP response code is 409.
fatal: [engine1]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot edit Virtual Disk. At least one of the VMs is not down.]\". HTTP response code is 409."}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
engine1                    : ok=3    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

    iii) Extend disk + change "interface": the "interface" doesn't change and the Ansible script finishes with the above error, "changed=0", BUT the disk is extended successfully despite the error.

  b) "vm_name" and "vm_id" are *not* specified:
    Same as above, but errors are muted - disk is extended successfully, while "interface" doesn't change.

Comment 7 Arik 2022-07-10 12:09:46 UTC
(In reply to Pavel Bar from comment #6)
> After some local testing on my side I see a quite different behavior:
> Disks (tested NFS & iSCSI) can be successfully extended *without* "vm_name"
> or "vm_id" specified and with no errors in all the scenarios:
> 1) Disks are not attached to any VM.
> 2) Disks are attached to a non-running VM.
> 3) Disks are attached to a running VM  (???should this be allowed???).

Yes, this should work

> 
> The problems actually start to happen after "vm_name" or "vm_id" are
> specified...
> 
> 1) Disk is attached to a VM, but wrong (non-existing) "vm_id" is specified
> (same behavior for both running and not running VM).
> Result: Ansible script finishes with an error, "changed=0".
> BUT a new disk with the same name is created and is not attached to any VM.
> TASK [Extend virtual disk 'Disk4_iSCSI_VM1'] ****
> An exception occurred during task execution. To see the full traceback, use
> -vvv. The error was: ovirtsdk4.Error: Fault reason is "Operation Failed".
> Fault detail is "[Cannot attach Virtual Disk. VM is not found.]". HTTP
> response code is 400.
> fatal: [engine1]: FAILED! => {"changed": false, "msg": "Fault reason is
> \"Operation Failed\". Fault detail is \"[Cannot attach Virtual Disk. VM is
> not found.]\". HTTP response code is 400."}
> PLAY RECAP ****
> engine1                    : ok=3    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
> 
> Note: wrong "vm_name" returns an error as one would expect:
>   An exception occurred during task execution. To see the full traceback,
> use -vvv. The error was: Exception: Entity 'VM2' was not found.

That's expected

> 
> 2) Trying to extend the disk and/or change the "interface" type (e.g., from "virtio" to "virtio_scsi") while the disk is attached to a running VM.
> Results:
>   a) Correct "vm_name" or "vm_id" are specified:
>     i) Extend disk only - the disk is extended successfully, no error. Should this work?

That's also expected

>     ii) Change "interface" type only - "interface" doesn't change and
> Ansible script finishes with an error, "changed=0".
> TASK [Extend virtual disk 'Disk2_NFS_VM1'] ****
> An exception occurred during task execution. To see the full traceback, use
> -vvv. The error was: ovirtsdk4.Error: Fault reason is "Operation Failed".
> Fault detail is "[Cannot edit Virtual Disk. At least one of the VMs is not
> down.]". HTTP response code is 409.
> fatal: [engine1]: FAILED! => {"changed": false, "msg": "Fault reason is
> \"Operation Failed\". Fault detail is \"[Cannot edit Virtual Disk. At least
> one of the VMs is not down.]\". HTTP response code is 409."}
> PLAY RECAP ****
> engine1                    : ok=3    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Also expected

> 
>     iii) Extend disk + change "interface": the "interface" doesn't change
> and Ansible script finishes with an above error, "changed=0", BUT disk is
> extended successfully despite the error.

Also expected

> 
>   b) "vm_name" and "vm_id" are *not* specified:
>     Same as above, but errors are muted - disk is extended successfully,
> while "interface" doesn't change.

That's also expected

Pavel, I assume that since disk names are not unique, when the VM name wasn't specified we were not able to identify the right disk, and when both the disk name and the VM name were specified, the disk that the user intended to extend was actually extended.
Can you please confirm that? If that's the case, then the documentation should say something like: "make sure to specify the vm name so the operation would work on the right disk in case there are multiple disks with the same name".

Comment 8 Pavel Bar 2022-07-10 15:53:37 UTC
(In reply to Arik from comment #7)
> [...]
> Pavel, I assume that since disk names are not unique, when the vm name
> wasn't specified we were not able to identify the right disk and when both
> the disk name and the vm name were specified, the disk that the user
> intended to extend was actually extended
> Can you please confirm that? if that's the case then the documentation
> should say something "make sure to specify the vm name so the operation
> would work on the right disk in case there are multiple disks with the same
> name"

I tested an even more complex scenario:
4 disks with the same name, "Disk1":
1) 2 disks attached to the running VM1.
2) 1 disk attached to the non-running VM2.
3) 1 disk not attached to any VM.

Scenario 1 - providing the disk name only:
Result: the same disk is consistently chosen among the 4 disks. In my case it was the disk attached to VM2.
It looks like it is the one with the smallest disk id among the 4 disks with the same name.

Scenario 2 - providing the disk name & the name of the VM "VM1" (which has 2 disks attached):
Result: the same disk is consistently chosen among the 2 disks attached to VM "VM1".
Again, it looks like it is the one with the smallest disk id among the 2 disks with the same name attached to this VM.

The solution that I suggest, and which I tested and confirmed works:
Provide the id of the disk the user is interested in; "name", "vm_name", and "vm_id" are then not needed.
ovirt_disk:
  auth: "{{ ovirt_auth }}"
  id: c92f241e-be4c-40f3-92c0-80fb161c9776
  #name: Disk1
  storage_domain: iSCSI_SD2
  interface: virtio_scsi
  size: 6GiB
  #vm_name: VM1
  #vm_id: cf15a462-0c01-4649-ad7a-cd6b7e279561
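
Since the disk id is auto-generated, it usually has to be looked up first. A possible lookup sketch, assuming the "ovirt_disk_info" module from the same collection; the search-pattern syntax and all names here are illustrative assumptions, not taken from the tested environment:
~~~
# Hypothetical two-step flow: resolve the disk id, then extend by id.
- name: Find the disk by name and attached VM (pattern syntax is an assumption)
  ovirt_disk_info:
    auth: "{{ ovirt_auth }}"
    pattern: "name=Disk1 and vm_names=VM1"
  register: disk_info

- name: Extend the disk unambiguously by id
  ovirt_disk:
    auth: "{{ ovirt_auth }}"
    id: "{{ disk_info.ovirt_disks[0].id }}"
    storage_domain: iSCSI_SD2
    interface: virtio_scsi
    size: 6GiB
~~~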

Comment 9 Arik 2022-07-11 07:10:10 UTC
(In reply to Pavel Bar from comment #8)
> [...]
> Solution that I suggest and I tested that it works:
> Provide the disk id that describes the disk, that the user is interested in.
> No need in "name" or "vm_name" or "vm_id".
> ovirt_disk:
>   auth: "{{ ovirt_auth }}"
>   id: c92f241e-be4c-40f3-92c0-80fb161c9776
>   #name: Disk1
>   storage_domain: iSCSI_SD2
>   interface: virtio_scsi
>   size: 6GiB
>   #vm_name: VM1
>   #vm_id: cf15a462-0c01-4649-ad7a-cd6b7e279561

This approach of specifying the UUID of the disk when invoking operations on it may not always be applicable, since the UUID is auto-generated when the disk is created and is not specified by the user.

The results above make sense. I don't think it's so common to have multiple disks with the same name attached to a single VM; we can try to cover that possibility in the documentation, but I wouldn't insist on it if it over-complicates the instruction. Adding something to the documentation that recommends specifying vm_name/vm_id in order to choose the right disk when there are multiple disks with the same name (which is pretty common when provisioning from templates) would probably be enough.

Comment 10 Pavel Bar 2022-07-11 07:32:19 UTC
(In reply to Arik from comment #9)
> [...]
> 
> This approach of specifying the UUID of the disk when invoking operations on
> a disk may not always be applicable since the UUID is auto-generated when
> creating the disk and the UUID is not specified
> 
> The results above make sense. I don't think it's so common to have multiple
> disks with the same name attached to a VM - we can try to cover that
> possibility in the documentation but I wouldn't insist on that if it
> over-complicates the instruction. Adding something to the documentation that
> recommends to specify the vm_name/vm_id in order to choose the right disk
> when there are multiple disks with the same name (which is pretty common
> when provisioning from templates) would probably be enough

But the disk-extension flow is different from the disk-creation flow that you describe.
Maybe I would try to add a "tip". Something like:
  If there are multiple disks with the same name, use "vm_name"/"vm_id" or the disk "id" to specify the intended disk.

Comment 11 Arik 2022-07-11 09:26:57 UTC
(In reply to Pavel Bar from comment #10)
> But extending the disk flow is different from the "creating the disk" flow
> that you describe.

Right, my point was that using the UUID is not always easy: by the time you get to extending the disk, you may need to figure out what the generated UUID was, while the disk name is more "obvious".

> Maybe I would try to add a "tip". Something like:
>   If there are multiple disks with the same name, use "vm_name"/"vm_id" or
> disk "id" to specify the intended disk.

Or: if the disk is referenced by name and is attached to a VM, make sure to specify vm_name or vm_id to prevent extension of another disk with the same name

Comment 12 Pavel Bar 2022-07-11 09:39:41 UTC
(In reply to Arik from comment #11)
> (In reply to Pavel Bar from comment #10)
> > But extending the disk flow is different from the "creating the disk" flow
> > that you describe.
> 
> Right, my point was that using the UUID is not always easy because when you
> get to extend-disk you may be required to figure out what was the generated
> UUID while the disk name is more "obvious"
> 
> > Maybe I would try to add a "tip". Something like:
> >   If there are multiple disks with the same name, use "vm_name"/"vm_id" or
> > disk "id" to specify the intended disk.
> 
> Or: if the disk is referenced by name and is attached to a VM, make sure to
> specify vm_name or vm_id to prevent extension of another disk with the same
> name

I published the PR and added you to it.
Please check the phrasing there :)
I also noticed some, IMHO, unexpected behavior: in some cases, when using a correct "vm_id" (and not "vm_name"), a *new* disk is created instead of the existing disk with the provided name being updated. It happened on one VM that already had 2 disks with the same name, but didn't happen for another VM that also had 2 disks with the same name. On the 1st VM a 3rd disk was added; on the 2nd VM one of the existing disks was updated.
Real voodoo.

Comment 13 Pavel Bar 2022-07-13 11:16:28 UTC
Testing instructions.
The following additional note was added below the "size" parameter:
"If the disk is referenced by name and is attached to a VM, make sure to specify vm_name/vm_id to prevent extension of another disk that is not attached to the VM."
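
For reference, a play that follows this tip (using the illustrative names from the reproduction steps above) would look like:
~~~
# Referencing the disk by name together with vm_name disambiguates
# same-named disks, per the documented tip.
- name: Extend virtual disk safely
  ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: myvm_Disk2
    vm_name: myvm
    storage_domain: vmstore1
    interface: virtio_scsi
    size: 3GiB
~~~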

Comment 16 meital avital 2022-07-20 08:54:48 UTC
Due to QE capacity we are not going to cover this issue in our automation

Comment 18 Barbora Dolezalova 2022-08-10 12:26:41 UTC
The documentation (version 2.2.0) was updated and the user tip about vm_name/vm_id was added.
Verified.

Comment 22 errata-xmlrpc 2022-09-08 11:29:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHV Engine and Host Common Packages update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:6394

