Bug 1506484

Summary: [RFE] Allow attached disks registration using REST-API
Product: Red Hat Enterprise Virtualization Manager
Reporter: nijin ashok <nashok>
Component: ovirt-engine
Assignee: Eyal Shenitzky <eshenitz>
Status: CLOSED ERRATA
QA Contact: Natalie Gavrielov <ngavrilo>
Severity: high
Docs Contact:
Priority: medium
Version: 4.1.6
CC: eshenitz, gveitmic, jcoscia, lsurette, mkalinin, mlipchuk, ratamir, rbalakri, Rhev-m-bugs, srevivo, tnisan, ykaul, ylavi
Target Milestone: ovirt-4.2.0
Keywords: FutureFeature
Target Release: ---
Flags: lsvaty: testing_plan_complete-
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version: rhv-4.2.0-8
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-05-15 17:45:44 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description nijin ashok 2017-10-26 08:07:08 UTC
Description of problem:

Currently, RHV-M allows registering a disk through the API by sending a POST to https://<rhevm-engine>/api/storagedomains/storage-domain-uuid/disks;unregistered even if the disk is already part of a VM's OVF in the OVF_STORE.
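For illustration, a rough sketch of such a request using Python's requests library (the engine host, credentials, and CA path are placeholders; the UUIDs are just examples taken from the error below):

===
# Sketch only: register an unregistered disk on a storage domain via the REST API.
# Engine host, credentials, CA path and UUIDs below are placeholders.
import requests

ENGINE = "https://rhevm-engine.example.com"
SD_ID = "8e9d4d8f-b7a0-4d72-9cf5-591874a730e6"    # storage domain UUID
DISK_ID = "ca2bbda0-4b26-403f-8892-391862eb28fe"  # unregistered disk UUID

resp = requests.post(
    "%s/api/storagedomains/%s/disks;unregistered" % (ENGINE, SD_ID),
    data='<disk id="%s"/>' % DISK_ID,
    headers={"Content-Type": "application/xml"},
    auth=("admin@internal", "password"),
    verify="/etc/pki/ovirt-engine/ca.pem",  # engine CA certificate
)
resp.raise_for_status()
===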

This can cause duplicate entries in the OVF_STORE, after which the storage domain can no longer be attached. The portal, by contrast, only allows importing a disk that is not attached to any VM.

So if a user registers a disk that is already referenced in one of the OVFs and then creates a new VM using this disk, the OVF_STORE will end up with two VMs pointing to the same disk.

Detaching and re-attaching the storage domain will then fail with the error below.

===
2017-10-25 06:21:56,624 ERROR [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (org.ovirt.thread.pool-6-thread-9) [1598e81f] Command 'org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand' failed: CallableStatementCallback; SQL [{call insertunregistereddiskstovms(?, ?, ?, ?)}]; ERROR: duplicate key value violates unique constraint "idx_unregistered_disks_storage_to_vms_unique"
  Detail: Key (disk_id, storage_domain_id)=(ca2bbda0-4b26-403f-8892-391862eb28fe, 8e9d4d8f-b7a0-4d72-9cf5-591874a730e6) already exists.
  Where: SQL statement "INSERT INTO unregistered_disks_to_vms (
        disk_id,
        entity_id,
        entity_name,
        storage_domain_id
        )
    VALUES (
        v_disk_id,
        v_entity_id,
        v_entity_name,
        v_storage_domain_id
        )"
PL/pgSQL function insertunregistereddiskstovms(uuid,uuid,character varying,uuid) line 3 at SQL statement; nested exception is org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "idx_unregistered_disks_storage_to_vms_unique"
  Detail: Key (disk_id, storage_domain_id)=(ca2bbda0-4b26-403f-8892-391862eb28fe, 8e9d4d8f-b7a0-4d72-9cf5-591874a730e6) already exists.
===

So it is trying to add ca2bbda0-4b26-403f-8892-391862eb28fe twice, because there are two OVFs pointing to the same disk.


Version-Release number of selected component (if applicable):
ovirt-engine-4.1.6.2

How reproducible:

100%

Steps to Reproduce:

[1] Create a VM with a disk.

[2] Detach the storage domain so that RHV-M will initiate an OVF_STORE update (a sketch of the detach/attach REST calls follows these steps).

[3] Check the OVF_STORE and confirm that it contains the VM's OVF.

[4] Attach the storage domain again.

[5] Register the disk using the API, following the steps in the article https://access.redhat.com/solutions/1411133.

[6] Create a new VM with this disk.

[7] Detach the storage domain again. RHV-M will push new VM information to the OVF_STORE.

[8] Check the OVF_STORE again; there will now be two VM OVF files pointing to the same disk.

[9] Try to attach the storage domain and it fails with the duplicate error.
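
For reference, a rough sketch of steps 2 and 4 done through the REST API (host, credentials, CA path and UUIDs are placeholders; exact paths may differ between versions):

===
# Sketch only: deactivate/detach and re-attach a storage domain via the REST API.
# Host, credentials, CA path and UUIDs are placeholders.
import requests

ENGINE = "https://rhevm-engine.example.com"
AUTH = ("admin@internal", "password")
CA = "/etc/pki/ovirt-engine/ca.pem"
XML = {"Content-Type": "application/xml"}
DC_ID = "<data-center-uuid>"
SD_ID = "<storage-domain-uuid>"

base = "%s/api/datacenters/%s/storagedomains" % (ENGINE, DC_ID)

# Step 2: move the domain to maintenance, then detach it (triggers the OVF_STORE update).
requests.post("%s/%s/deactivate" % (base, SD_ID), data="<action/>",
              headers=XML, auth=AUTH, verify=CA)
requests.delete("%s/%s" % (base, SD_ID), auth=AUTH, verify=CA)

# Step 4: attach the domain back to the data center.
requests.post(base, data='<storage_domain id="%s"/>' % SD_ID,
              headers=XML, auth=AUTH, verify=CA)
===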


Actual results:

RHV-M allows registering the disk through the API even if the disk is attached to a VM whose OVF is present in the OVF_STORE.

Expected results:

The API should not allow this, just as the GUI does not allow it.

Additional info:

Comment 1 Tal Nisan 2017-10-26 08:35:31 UTC
Eyal, this looks similar to a bug you solved recently, can you please have a look?

Comment 4 Maor 2017-10-31 13:20:11 UTC
Looks like the same issue as https://bugzilla.redhat.com/1430865, which should be solved in oVirt 4.2.

Comment 5 Marina Kalinin 2017-10-31 14:36:51 UTC
After talking to Maor, we decided to turn this bug into an RFE and refine it to the following behavior:

1. The default behavior would be to prevent registering a disk if it is already associated with a VM in the OVF store.

2. Add a flag that overrides this default behavior and allows registering a disk even if it is associated with a VM in the OVF store. It is important to keep this optional functionality for any other DR scenario we cannot think of right now.

2.1. When this flag is specified, the registered disk would need to be disassociated from its original VM in the OVF store.
This way we avoid possible conflicts in the database in the future, whether we later try to import this VM into the new environment (probably resolved in bz#1446920), or try to add the SD back to the original Data Center (which today fails with a duplicate key value, as described in the bug description).


Background:
If you register a disk today that is associated with a VM in the OVF store, and later add this disk to another VM, the OVF store will be updated to contain 2 different VMs pointing to the same disk, which creates a conflict in the database. We could remove the index that restricts this, but that is not the right solution. We should prevent the conflict up front by adding all the needed restrictions in the software, as suggested in this request.

Comment 6 Maor 2017-10-31 16:11:16 UTC
(In reply to Marina from comment #5)
> After talking to Maor, we decided to make this bug RFE and refine it to the
> following behavior:
> 
> 1. Default behavior of registering a disk would be to prevent registering a
> disk, if it is already associated with a VM in the OVF store.

Just to clarify, today those disks are filtered through the GUI.

> [...]
> 
> Background:
> If you register a disk today, that is associated with a VM in the OVF store,
> and later you add this disk to another VM, OVF store would be updated to
> contain 2 different VMs pointing to the same disk, which is creating a
> conflict in the database. We can remove the index restricting it, but it is
> not the right solution. We should prevent the conflict initially by creating
> all needed restrictions in the software, as suggested in this request.

The index should be removed as part of the support for shareable disks in the future.

Comment 7 Eyal Shenitzky 2017-11-07 09:37:59 UTC
After talking with Yaniv and Allon,
we decided to provide a filter flag that prevents seeing the disks that are attached to a VM (the same behavior as the UI).

The user can turn off the flag and register an attached disk.

A validation will be added to VM registration: if any of the VM's disks already exist in the DB/environment, a proper message will appear and the registration of that disk will be skipped (this prevents the duplication in the DB).

Comment 8 Eyal Shenitzky 2017-11-19 06:00:19 UTC
The filter flag implementation was abandoned (due to REST-API backward compatibility).

Also implement the same behavior (validation during registration) for templates.

Comment 11 Sandro Bonazzola 2017-12-12 16:09:52 UTC
This bug is targeted 4.2.1 but is modified in 4.2.0.
Can you please check / verify this bug status and set target milestone and bug status accordingly?

Comment 12 Allon Mureinik 2017-12-12 17:05:24 UTC
(In reply to Sandro Bonazzola from comment #11)
> This bug is targeted 4.2.1 but is modified in 4.2.0.
> Can you please check / verify this bug status and set target milestone and
> bug status accordingly?
All the required patches are present in the upstream tag ovirt-engine-4.2.0 and/or the downstream tag rhv-4.2.0-8

Comment 14 Natalie Gavrielov 2017-12-20 15:24:13 UTC
Verified using: rhvm-4.2.0.2-0.1.el7.noarch

Tests performed for both file and block storage types (NFS, iSCSI).
Scenarios performed:
1. Create VM with disk.
2. Detach storage domain.
3. Attach storage domain.
4. Register disk.
5. Attach disk to another VM.
6. Detach storage domain.
7. Attach storage domain.

Option A:
8. Register second VM.
9. Register first VM.

Option B:
8. Register first VM.
9. Register second VM.

Option C:
8. Register disk
9. Register first VM.
10. Register second VM.

Option D:
8. Register disk.
9. Register second VM.
10. Register first VM.

Notes:
1. There is no longer a problem re-attaching the storage domain as described in comment 0.
2. The first VM to be registered will 'own' the disk; when registering the second VM, 'allow_partial_import' must be specified, otherwise the operation fails with the message: "Cannot import VM. The following disks already exist: disk_name. Please import as a clone".
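
For reference, a rough sketch of the second-VM registration with 'allow_partial_import' (host, credentials, and UUIDs are placeholders):

===
# Sketch only: register an unregistered VM and allow a partial import.
# Host, credentials, CA path and UUIDs are placeholders.
import requests

ENGINE = "https://rhevm-engine.example.com"
SD_ID = "<storage-domain-uuid>"
VM_ID = "<unregistered-vm-uuid>"
CLUSTER_ID = "<target-cluster-uuid>"

action = ('<action>'
          '<cluster id="%s"/>'
          '<allow_partial_import>true</allow_partial_import>'
          '</action>' % CLUSTER_ID)

resp = requests.post(
    "%s/api/storagedomains/%s/vms/%s/register" % (ENGINE, SD_ID, VM_ID),
    data=action,
    headers={"Content-Type": "application/xml"},
    auth=("admin@internal", "password"),
    verify="/etc/pki/ovirt-engine/ca.pem",
)
resp.raise_for_status()
===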

Comment 15 Natalie Gavrielov 2017-12-27 12:34:24 UTC
Also tested for templates:
Tests performed for both file and block storage types (NFS, GlusterFS, and iSCSI).
1. Create a VM with a disk.
2. Create a template out of that VM.
3. Detach the storage domain.
4. Attach the storage domain.
5. Register the template's disk.
6. Register the template using 'allow_partial_import' (see the sketch below).
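
A rough sketch of step 6, assuming the analogous template register action accepts the same flag (host, credentials, and UUIDs are placeholders):

===
# Sketch only: register an unregistered template with allow_partial_import.
# Host, credentials, CA path and UUIDs are placeholders.
import requests

ENGINE = "https://rhevm-engine.example.com"
SD_ID = "<storage-domain-uuid>"
TEMPLATE_ID = "<unregistered-template-uuid>"
CLUSTER_ID = "<target-cluster-uuid>"

action = ('<action>'
          '<cluster id="%s"/>'
          '<allow_partial_import>true</allow_partial_import>'
          '</action>' % CLUSTER_ID)

requests.post(
    "%s/api/storagedomains/%s/templates/%s/register" % (ENGINE, SD_ID, TEMPLATE_ID),
    data=action,
    headers={"Content-Type": "application/xml"},
    auth=("admin@internal", "password"),
    verify="/etc/pki/ovirt-engine/ca.pem",
).raise_for_status()
===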

build: rhvm-4.2.0.2-0.1.el7.noarch (rhv-4.2.0-12)

Comment 19 errata-xmlrpc 2018-05-15 17:45:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1488

Comment 20 Franta Kust 2019-05-16 13:05:37 UTC
BZ<2>Jira Resync