Bug 1476119
Summary: RESTAPI - Updating a storage connection on a VM with status down fails

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | [oVirt] ovirt-engine | Reporter: | Avihai <aefrat> |
| Component: | BLL.Storage | Assignee: | Fedor Gavrilov <fgavrilo> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | Avihai <aefrat> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.2.0 | CC: | aefrat, bugs, ebenahar, frolland, tnisan |
| Target Milestone: | ovirt-4.3.3 | Keywords: | Automation |
| Target Release: | --- | Flags: | rule-engine: ovirt-4.3+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-03-12 05:10:57 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | engine & vdsm logs (attachment 1305796) | | |
The issue does not reproduce anymore. Closing as INSUFFICIENT_DATA: the QE environment (backend storage) has changed since the bug was opened, and there is no way to test it with that backend anymore.
Created attachment 1305796 [details]
engine & vdsm logs

Description of problem:
Via the REST API, trying to update the storage connection address from 10.35.146.129 to 10.35.146.161 fails. The REST response is: "Storage connection already exists".

Version-Release number of selected component (if applicable):
Engine - 4.2.0-0.0.master.20170723141021.git463826a.el7.centos
VDSM - 4.20.1-251

How reproducible:
100%

Steps to Reproduce (Automation TestCase5241):
1) Create an NFS domain SD1.
2) Create 2 VMs from a template, and add to each VM an additional iSCSI direct LUN disk (50G) from storage address 10.35.146.129. (The same storage exposes the same LUNs via other IPs as well, such as 10.35.146.161.)
3) Start both VMs and, once they are up, try to update the iSCSI storage connection to address 10.35.146.161; this fails as expected because the VMs are active.
4) Change both VMs to the Down state.
5) Via the REST API, try to update the iSCSI storage connection to address 10.35.146.161.

Actual results:
REST REQUEST:
2017-07-27 17:53:00,468 - MainThread - storageconnections - DEBUG - PUT request content is --
url: /ovirt-engine/api/storageconnections/f58d815a-d70f-4eae-94fc-27f2b2e87191
body:
<storage_connection>
    <address>10.35.146.161</address>
    <port>3260</port>
    <target>iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01</target>
    <type>iscsi</type>
</storage_connection>

REST RESPONSE:
2017-07-27 17:53:00,627 - MainThread - api_utils - ERROR - Failed to update element NOT as expected:
Status: 409
Reason: Conflict
Detail: [Cannot edit Storage Connection. Storage connection already exists.]

Expected results:
The storage connection update should succeed.
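For reference, a minimal sketch of how the failing PUT body from step 5 can be rebuilt, assuming only what the REST REQUEST log above shows. The engine host `engine.example.com` is a placeholder, not taken from this report; the connection ID, address, port, and target are copied from the log.

```python
import xml.etree.ElementTree as ET


def build_connection_update(address, port, target, conn_type="iscsi"):
    # Build the <storage_connection> body shown in the REST REQUEST log above.
    root = ET.Element("storage_connection")
    ET.SubElement(root, "address").text = address
    ET.SubElement(root, "port").text = str(port)
    ET.SubElement(root, "target").text = target
    ET.SubElement(root, "type").text = conn_type
    return ET.tostring(root, encoding="unicode")


CONN_ID = "f58d815a-d70f-4eae-94fc-27f2b2e87191"
# Placeholder engine host; substitute a real engine FQDN and credentials.
URL = "https://engine.example.com/ovirt-engine/api/storageconnections/" + CONN_ID

body = build_connection_update(
    "10.35.146.161", 3260,
    "iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01")
print(body)
```

Sending this body with an XML `Content-Type` in a PUT to the URL above (for example with `requests.put`) is what reproduces the 409 Conflict on the affected build.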
Additional info:
Before the update, on the single host of the current DC, you can see that the current connection to the storage is to IP 10.35.146.129:

# iscsiadm -m session
tcp: [207] 10.35.146.129:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00 (non-flash)

A GET request before and after the update shows that the only storage connection did not change:

<storage_connection href="/ovirt-engine/api/storageconnections/f58d815a-d70f-4eae-94fc-27f2b2e87191" id="f58d815a-d70f-4eae-94fc-27f2b2e87191">
    <address>10.35.146.129</address>
    <port>3260</port>
    <target>iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00</target>
    <type>iscsi</type>
</storage_connection>

Engine:
2017-07-27 17:53:00,619+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-14) [] Operation Failed: [Cannot edit Storage Connection. Storage connection already exists.]

VDSM:
2017-07-27 17:53:00,046+0300 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call VM.destroy succeeded in 0.86 seconds (__init__:592)
2017-07-27 17:53:00,058+0300 ERROR (jsonrpc/4) [api] FINISH destroy error=Virtual machine does not exist: {'vmId': '86b4a238-cbd8-4867-9a2b-ad747c63f820'} (api:119)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 117, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 301, in destroy
    res = self.vm.destroy(gracefulAttempts)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 129, in vm
    raise exception.NoSuchVM(vmId=self._UUID)
NoSuchVM: Virtual machine does not exist: {'vmId': '86b4a238-cbd8-4867-9a2b-ad747c63f820'}
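The before/after GET comparison above can be sketched as a small parse-and-check step. This is an illustrative verification, not the automation's actual code; the response body is copied verbatim from this report, while in a live check it would come from GET /ovirt-engine/api/storageconnections/&lt;id&gt;.

```python
import xml.etree.ElementTree as ET

# GET response body copied from this report (unchanged after the failed PUT).
GET_RESPONSE = """\
<storage_connection
    href="/ovirt-engine/api/storageconnections/f58d815a-d70f-4eae-94fc-27f2b2e87191"
    id="f58d815a-d70f-4eae-94fc-27f2b2e87191">
  <address>10.35.146.129</address>
  <port>3260</port>
  <target>iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00</target>
  <type>iscsi</type>
</storage_connection>
"""

conn = ET.fromstring(GET_RESPONSE)
# The PUT asked for 10.35.146.161, but the stored address is unchanged.
print(conn.findtext("address"))
print(conn.findtext("target"))
```

A check like this is how the automation detects that the update silently had no effect on the stored connection despite the explicit 409 from the engine.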