Description of problem:
There is no way to move a floating disk via the API.

Version-Release number of selected component (if applicable):
rhevm-restapi-3.1.0-32.el6ev

How reproducible:
always

Steps to Reproduce:
1. POST to /api/disks/id/move

Actual results:
HTTP 404

Expected results:
The disk move action is executed.

Additional info:
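A minimal sketch of the request being attempted (disk and storage domain IDs are placeholders; host and authentication headers omitted; the <action> body shape is borrowed from the implementation proposed later in this bug and is an assumption here, since the action does not exist yet in this build and the URL itself returns HTTP 404):

POST /api/disks/{disk:id}/move HTTP/1.1
Content-Type: application/xml

<action>
  <!-- assumed target storage domain for the floating disk -->
  <storage_domain id="{storage-domain:id}"/>
</action>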
Since this is blocking automated storage live migration tests, adding a TestBlocker.

There is no move action in /api/vms/{vm:id}/disks/{disk:id} either. Now I'm not sure about the FutureFeature.
(In reply to comment #1)
> Since it's blocking automatic storage live migration tests, adding a
> TestBlocker. There is no move action in /api/vms/{vm:id}/disks/{disk:id} as
> well. Now I'm not sure about the FutureFeature.

Move is a combination of copy + delete; why can't you use that workaround instead?
(In reply to comment #3)
> (In reply to comment #1)
> > Since it's blocking automatic storage live migration tests, adding a
> > TestBlocker. There is no move action in /api/vms/{vm:id}/disks/{disk:id} as
> > well. Now I'm not sure about the FutureFeature.
>
> move is combination of copy+delete, why can't you use this workaround
> instead?

Because we're also talking about live storage migration here, and copy does not initiate that.
Patch sent: http://gerrit.ovirt.org/#/c/10676/

Implemented as:

POST /api/vms/xxx/disks/yyy/move

<action>
  <storage_domain id="zzz"/>
</action>
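For illustration, a fuller request sketch based on the above (VM, disk, and storage domain IDs are placeholders; host and authentication omitted; the Content-Type header is an assumption, as the patch note only documents the URL and body):

POST /api/vms/{vm:id}/disks/{disk:id}/move HTTP/1.1
Content-Type: application/xml

<action>
  <!-- id of the storage domain the disk should be moved to -->
  <storage_domain id="{storage-domain:id}"/>
</action>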
Is there a use case for moving a template's disks as well?
(In reply to comment #6)
> is there a use-case for moving template's disks as well?

Currently, the engine only supports moving VM disks or floating disks.
(In reply to comment #6)
> is there a use-case for moving template's disks as well?

(In reply to comment #7)
> (In reply to comment #6)
> > is there a use-case for moving template's disks as well?
>
> Currently, the engine only supports moving vm's disks or floating disks.

Clearing needinfo.
Any ETA on when the patch will make it downstream into a new build for QE consumption? It's blocking us from writing new tests.
(In reply to comment #9)
> Any ETA when the patch will make it to downstream and new build for QE
> consumption? It's blocking us of making new tests.

Next build.
The URL defined in the documentation exists in rhevm-3.2.0-7.el6ev.noarch and works if the virtual machine is down. But the disk move fails when the machine is up and running (storage live migration), with a CanDoAction error that the storage domain doesn't exist:

2013-02-18 08:21:15,718 INFO  [org.ovirt.engine.core.bll.MoveDisksCommand] (ajp-/127.0.0.1:8702-20) [298bfbb6] Running command: MoveDisksCommand internal: false. Entities affected : ID: cab7c93b-78dd-4103-9d5a-c772480b7b5a Type: Disk
2013-02-18 08:21:15,743 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (ajp-/127.0.0.1:8702-20) [298bfbb6] Lock Acquired to object EngineLock [exclusiveLocks= key: e03733a0-23aa-4c47-8bc6-c8deb2ba0228 value: VM , sharedLocks= ]
2013-02-18 08:21:15,765 WARN  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (ajp-/127.0.0.1:8702-20) [298bfbb6] CanDoAction of action LiveMigrateVmDisks failed. Reasons:VAR__ACTION__MOVE,VAR__TYPE__VM_DISK,ACTION_TYPE_FAILED_STORAGE_DOMAIN_NOT_EXIST
2013-02-18 08:21:15,765 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (ajp-/127.0.0.1:8702-20) [298bfbb6] Lock freed to object EngineLock [exclusiveLocks= key: e03733a0-23aa-4c47-8bc6-c8deb2ba0228 value: VM , sharedLocks= ]
2013-02-18 08:21:15,767 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (ajp-/127.0.0.1:8702-20) Operation Failed: [Cannot move Virtual Machine Disk. Storage Domain doesn't exist.]

url: /api/vms/e03733a0-23aa-4c47-8bc6-c8deb2ba0228/disks/10bda735-b4c8-4129-bdfc-60b623ebcd4b/move

body:
<action>
  <async>false</async>
  <grace_period>
    <expiry>10</expiry>
  </grace_period>
  <storage_domain href="/api/storagedomains/43ae270f-f733-4bdb-93c6-6a3b089d180c" id="43ae270f-f733-4bdb-93c6-6a3b089d180c">
    <name>iscsi_1</name>
    <link href="/api/storagedomains/43ae270f-f733-4bdb-93c6-6a3b089d180c/permissions" rel="permissions"/>
    <type>data</type>
    <master>false</master>
    <storage>
      <type>iscsi</type>
      <volume_group id="VVleeE-oYfI-KqV5-zi9l-lgyl-lnxO-32CVsD">
        <logical_unit id="36006048c78acaa6eac8ebc05bf73dee1">
          <address>10.34.63.200</address>
          <port>3260</port>
          <target>iqn.1992-05.com.emc:ckm001201002300000-5-vnxe</target>
          <serial>SEMC_Celerra_EMC-Celerra-iSCSI-VLU-fs74_T5_LUN3_CKM00120100230</serial>
          <vendor_id>EMC</vendor_id>
          <product_id>Celerra</product_id>
          <lun_mapping>3</lun_mapping>
          <portal>10.34.63.200:3260,1</portal>
          <size>214748364800</size>
          <paths>0</paths>
          <volume_group_id>VVleeE-oYfI-KqV5-zi9l-lgyl-lnxO-32CVsD</volume_group_id>
          <storage_domain_id>43ae270f-f733-4bdb-93c6-6a3b089d180c</storage_domain_id>
        </logical_unit>
      </volume_group>
    </storage>
    <available>202937204736</available>
    <used>10737418240</used>
    <committed>25769803776</committed>
    <storage_format>v3</storage_format>
  </storage_domain>
</action>

/api/storagedomains/43ae270f-f733-4bdb-93c6-6a3b089d180c correctly shows the target storage domain.
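For comparison, a sketch of the same request using only the minimal body shown in the patch note above, referencing the target storage domain by id alone (whether this minimal form avoids the CanDoAction failure was not verified here; it is shown only to contrast with the full storage domain representation sent above):

POST /api/vms/e03733a0-23aa-4c47-8bc6-c8deb2ba0228/disks/10bda735-b4c8-4129-bdfc-60b623ebcd4b/move HTTP/1.1
Content-Type: application/xml

<action>
  <!-- target storage domain referenced by id only -->
  <storage_domain id="43ae270f-f733-4bdb-93c6-6a3b089d180c"/>
</action>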
Verified rhevm-3.2.0-10.14.beta1.el6ev.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0888.html