Bug 976672 - rest-api: No feedback that CanDoAction failed when live migrating image of stateless VM
Status: CLOSED DUPLICATE of bug 926959
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine-restapi
Version: 3.2.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Daniel Erez
QA Contact: Elena
Whiteboard: storage
Depends On:
Blocks:
Reported: 2013-06-21 03:29 EDT by Jakub Libosvar
Modified: 2016-02-10 15:18 EST
CC List: 10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-12-02 08:16:41 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Jakub Libosvar 2013-06-21 03:29:40 EDT
Description of problem:
When moving a VM's disk while the VM is running and stateless, there is no feedback to the REST API client that the action failed in the engine on CanDoAction.


Version-Release number of selected component (if applicable):
rhevm-3.2.1-0.31.el6ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Start a stateless VM.
2. While it is running, try to move (live migrate) its disk via the REST API (see the sketch after these steps).
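
A minimal reproduction against the REST API might look like the sketch below. The engine address, credentials and certificate handling are placeholders; the VM, disk and storage domain IDs are the ones from the log and request captured further down in this report, and it is assumed that an <action> body naming only the target storage domain is sufficient.

# Hedged sketch: reproduce via the REST API. Host, credentials and certificate
# handling are placeholders; the VM/disk/storage-domain IDs are taken from the
# log and request captured below. A minimal <action> body naming only the
# target storage domain is assumed to be sufficient.
import requests

ENGINE = "https://rhevm.example.com"            # hypothetical engine address
AUTH = ("admin@internal", "password")           # hypothetical credentials
VM = "870b4521-69f4-407e-9ddc-bb5de02a02a9"     # running, stateless VM
DISK = "aecf692f-9f3b-4958-9a54-e909c07562cc"   # disk to move
SD = "b58d59a4-c5ba-4ea9-8dff-29685175cc59"     # target storage domain

body = '<action><storage_domain id="%s"/></action>' % SD

resp = requests.post(
    "%s/api/vms/%s/disks/%s/move" % (ENGINE, VM, DISK),
    data=body,
    headers={"Content-Type": "application/xml", "Accept": "application/xml"},
    auth=AUTH,
    verify=False,   # lab setup only; skips certificate verification
)
print(resp.status_code)
print(resp.text)    # reports <state>complete</state> despite the CanDoAction failure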

Actual results:
The API reports the action as complete; there is no indication in the response that it failed.

Expected results:
The response should indicate that the action failed, i.e. the CanDoAction failure should be propagated back to the API client.

Additional info:
2013-06-21 09:20:05,871 INFO  [org.ovirt.engine.core.bll.MoveDisksCommand] (ajp-/127.0.0.1:8702-6) [62ee6a9c] Running command: MoveDisksCommand internal: false. Entities affected :  ID: aecf692f-9f3b-4958-9a54-e909c07562cc Type: Disk
2013-06-21 09:20:05,894 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (ajp-/127.0.0.1:8702-6) [62ee6a9c] Lock Acquired to object EngineLock [exclusiveLocks= key: 870b4521-69f4-407e-9ddc-bb5de02a02a9 value: VM
, sharedLocks= ]
2013-06-21 09:20:05,910 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (pool-4-thread-45) [62ee6a9c] Running command: LiveMigrateVmDisksCommandTask handler: LiveSnapshotTaskHandler internal: false. Entities affected :  ID: 691a2f9f-ceb4-49b5-b9b6-36c3312f5583 Type: Disk,  ID: b58d59a4-c5ba-4ea9-8dff-29685175cc59 Type: Storage
2013-06-21 09:20:05,919 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp-/127.0.0.1:8702-6) No string for UNASSIGNED type. Use default Log
2013-06-21 09:20:05,930 WARN  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (pool-4-thread-45) [62ee6a9c] CanDoAction of action CreateAllSnapshotsFromVm failed. Reasons:VAR__ACTION__CREATE,VAR__TYPE__SNAPSHOT,ACTION_TYPE_FAILED_VM_RUNNING_STATELESS
2013-06-21 09:20:05,934 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (pool-4-thread-45) [62ee6a9c] Lock freed to object EngineLock [exclusiveLocks= key: 870b4521-69f4-407e-9ddc-bb5de02a02a9 value: VM
, sharedLocks= ]
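
The only place the rejection is visible is the engine log. A small sketch to confirm it server-side, assuming the default log location /var/log/ovirt-engine/engine.log:

# Hedged sketch: confirm the server-side CanDoAction failure in engine.log.
# Assumes the engine's default log location.
LOG = "/var/log/ovirt-engine/engine.log"

with open(LOG) as log:
    for line in log:
        if "CanDoAction" in line and "failed" in line:
            print(line.rstrip())
# Expected match (as captured above):
#   CanDoAction of action CreateAllSnapshotsFromVm failed.
#   Reasons: ...,ACTION_TYPE_FAILED_VM_RUNNING_STATELESS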


Communication with API:
Action request content is --  url:/api/vms/870b4521-69f4-407e-9ddc-bb5de02a02a9/disks/aecf692f-9f3b-4958-9a54-e909c07562cc/move body:<action>
    <async>false</async>
    <grace_period>
        <expiry>10</expiry>
    </grace_period>
    <storage_domain href="/api/storagedomains/b58d59a4-c5ba-4ea9-8dff-29685175cc59" id="b58d59a4-c5ba-4ea9-8dff-29685175cc59">
        <name>iscsi_0</name>
        <link href="/api/storagedomains/b58d59a4-c5ba-4ea9-8dff-29685175cc59/permissions" rel="permissions"/>
        <link href="/api/storagedomains/b58d59a4-c5ba-4ea9-8dff-29685175cc59/disks" rel="disks"/>
        <type>data</type>
        <master>true</master>
        <storage>
            <type>iscsi</type>
            <volume_group id="mXnvaZ-0bW9-qpBU-xJUM-JcSl-M4ix-rRL6rb">
                <logical_unit id="36006048c14e9eb8f668dfc53ea5995ca">
                    <address>10.34.63.200</address>
                    <port>3260</port>
                    <target>iqn.1992-05.com.emc:ckm001201002300000-5-vnxe</target>
                    <serial>SEMC_Celerra_EMC-Celerra-iSCSI-VLU-fs62_T5_LUN7_CKM00120100230</serial>
                    <vendor_id>EMC</vendor_id>
                    <product_id>Celerra</product_id>
                    <lun_mapping>7</lun_mapping>
                    <portal>10.34.63.200:3260,1</portal>
                    <size>268435456000</size>
                    <paths>0</paths>
                    <volume_group_id>mXnvaZ-0bW9-qpBU-xJUM-JcSl-M4ix-rRL6rb</volume_group_id>
                    <storage_domain_id>b58d59a4-c5ba-4ea9-8dff-29685175cc59</storage_domain_id>
                </logical_unit>
            </volume_group>
        </storage>
        <available>263066746880</available>
        <used>4294967296</used>
        <committed>0</committed>
        <storage_format>v3</storage_format>
    </storage_domain>
</action>

Response body for action request is:
<action>
    <async>false</async>
    <grace_period>
        <expiry>10</expiry>
    </grace_period>
    <storage_domain href="/api/storagedomains/b58d59a4-c5ba-4ea9-8dff-29685175cc59" id="b58d59a4-c5ba-4ea9-8dff-29685175cc59">
        <name>iscsi_0</name>
        <link href="/api/storagedomains/b58d59a4-c5ba-4ea9-8dff-29685175cc59/permissions" rel="permissions"/>
        <link href="/api/storagedomains/b58d59a4-c5ba-4ea9-8dff-29685175cc59/disks" rel="disks"/>
        <type>data</type>
        <master>true</master>
        <storage>
            <type>iscsi</type>
            <volume_group id="mXnvaZ-0bW9-qpBU-xJUM-JcSl-M4ix-rRL6rb">
                <logical_unit id="36006048c14e9eb8f668dfc53ea5995ca">
                    <address>10.34.63.200</address>
                    <port>3260</port>
                    <target>iqn.1992-05.com.emc:ckm001201002300000-5-vnxe</target>
                    <serial>SEMC_Celerra_EMC-Celerra-iSCSI-VLU-fs62_T5_LUN7_CKM00120100230</serial>
                    <vendor_id>EMC</vendor_id>
                    <product_id>Celerra</product_id>
                    <lun_mapping>7</lun_mapping>
                    <portal>10.34.63.200:3260,1</portal>
                    <size>268435456000</size>
                    <paths>0</paths>
                    <volume_group_id>mXnvaZ-0bW9-qpBU-xJUM-JcSl-M4ix-rRL6rb</volume_group_id>
                    <storage_domain_id>b58d59a4-c5ba-4ea9-8dff-29685175cc59</storage_domain_id>
                </logical_unit>
            </volume_group>
        </storage>
        <available>263066746880</available>
        <used>4294967296</used>
        <committed>0</committed>
        <storage_format>v3</storage_format>
    </storage_domain>
    <status>
        <state>complete</state>
    </status>
</action>
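
To make the problem concrete: parsing the response above yields only the "complete" state, so a client has no way to tell that the live migration was rejected. A short sketch, continuing from the reproduction snippet in the steps above:

# Hedged sketch, continuing from the reproduction snippet: parse the <action>
# response and read the reported status.
import xml.etree.ElementTree as ET

action = ET.fromstring(resp.text)           # resp from the reproduction sketch
state = action.findtext("status/state")
print(state)                                # "complete" - no hint of the failure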
Comment 1 Itamar Heim 2013-12-01 14:48:11 EST
How can a CanDoAction not happen? Isn't this infra?
Comment 2 Allon Mureinik 2013-12-02 08:16:41 EST
The CDA (CanDoAction check) is (was) performed by an internal command, and its result is (was) not propagated back to the end user.

This was already solved in RHEVM 3.3 as part of bug 926959.
Closing as duplicate.

*** This bug has been marked as a duplicate of bug 926959 ***
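
For completeness, once the CanDoAction result is propagated to the client (RHEV-M 3.3, per bug 926959), the failure can be detected from the action response. A hedged sketch of such a check, assuming the failure surfaces as a non-"complete" state and/or a <fault> element, as other REST API errors do:

# Hedged sketch: detect a propagated CanDoAction failure, assuming the fixed
# engine reports it via <status><state> and/or a <fault> element.
import xml.etree.ElementTree as ET

def action_failed(action_xml):
    action = ET.fromstring(action_xml)
    state = action.findtext("status/state")
    fault = action.find("fault")
    if state != "complete" or fault is not None:
        detail = action.findtext("fault/detail") or state or "unknown"
        print("live migration rejected: %s" % detail)
        return True
    return False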
