Bug 1269439 - [Cinder][Automation] Wrong Error and "disk leftovers"(in db) remain upon deleting 10* { vms+cinder disk }
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-3.6.1
Target Release: 3.6.0
Assigned To: Maor
QA Contact: Ori Gofen
Depends On:
Blocks:
Reported: 2015-10-07 07:14 EDT by Ori Gofen
Modified: 2016-03-10 07:05 EST (History)
8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
logs (10.45 MB, text/plain)
2015-10-07 07:14 EDT, Ori Gofen


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 47108 master MERGED core: Catch SC_NOT_FOUND status on cinder disk remove Never
oVirt gerrit 47125 ovirt-engine-3.6 MERGED core: Catch SC_NOT_FOUND status on cinder disk remove Never

Description Ori Gofen 2015-10-07 07:14:56 EDT
Created attachment 1080616 [details]
logs

Description of problem:
This bug is part of the 3_6_Storage_VM_Sanity automation plan and reproduces only with Cinder storage.

Had 10 VMs with one Cinder disk each, and deleted them one by one using the Python SDK:
for vm in self.api.vms.list():
    if vm.get_status().get_state() == 'up':
        try:
            vm.stop()
            sleep(5)
        except Exception:
            pass
    vm.delete()
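(As an aside, the fixed sleep(5) in the loop above can race with a slow shutdown. A small polling helper makes the wait deterministic; this is an illustrative sketch, not part of py-sdk:)

```python
import time


def wait_for_state(get_state, wanted, timeout=60, interval=2):
    """Poll get_state() until it returns `wanted` or `timeout` seconds elapse.

    Returns True if the wanted state was reached, False on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_state() == wanted:
            return True
        time.sleep(interval)
    # One last check in case the state flipped right at the deadline.
    return get_state() == wanted
```

With the SDK objects above this would be called as wait_for_state(lambda: vm.get_status().get_state(), 'down') before vm.delete(), instead of sleeping a fixed 5 seconds.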
During the operation, oVirt throws an exception while verifying the disk's status:

2015-10-07 12:53:35,242 INFO  [org.ovirt.engine.core.bll.RemoveDiskCommand] (ajp-/127.0.0.1:8702-15) [c4cee1b] Running command: RemoveDiskCommand internal: false. Entities affected :  ID: 28595cc1-7544-4f84-8b54-a02118e7d215 Type: DiskAction group DELETE_DISK with role type USER
2015-10-07 12:53:35,353 ERROR [org.ovirt.engine.core.bll.storage.AbstractCinderDiskCommandCallback] (DefaultQuartzScheduler_Worker-63) [439202a2] An exception occured while verifying status for volume id '{0}' with the following exception: {1}.: com.woorea.openstack.base.client.OpenStackResponseException: Not Found
 
The ERROR message that ends the transaction states:
"Message: Failed to remove disk vm_cinder_9_Disk_1 from storage domain cinder. The following entity id could not be deleted from the Cinder provider"

This message is wrong: oVirt did successfully delete the Ceph-backed disk through Cinder.

2015-10-07 12:53:35,486 INFO  [org.ovirt.engine.core.bll.storage.RemoveCinderDiskCommand] (pool-6-thread-4) [7208570e] Running command: RemoveCinderDiskCommand internal: true
. Entities affected :  ID: 00000000-0000-0000-0000-000000000000 Type: Storage
2015-10-07 12:53:37,701 ERROR [org.ovirt.engine.core.bll.storage.AbstractCinderDiskCommandCallback] (DefaultQuartzScheduler_Worker-96) [7208570e] Failed deleting volume/snaps
hot from Cinder. ID: 228de28a-8978-4f60-b08d-7930b231ba77
2015-10-07 12:53:37,701 ERROR [org.ovirt.engine.core.bll.storage.RemoveCinderDiskCommand] (DefaultQuartzScheduler_Worker-96) [7208570e] Ending command 'org.ovirt.engine.core.
bll.storage.RemoveCinderDiskCommand' with failure.
2015-10-07 12:53:37,701 ERROR [org.ovirt.engine.core.bll.storage.RemoveCinderDiskCommand] (DefaultQuartzScheduler_Worker-96) [7208570e] Could not volume id vm_cinder_9_Disk_1
 from Cinder which is related to disk 228de28a-8978-4f60-b08d-7930b231ba77
2015-10-07 12:53:37,743 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-96) [7208570e] Correlation ID: 439202a2, Call Stack: null, Custom Event ID: -1, Message: Failed to remove disk vm_cinder_9_Disk_1 from storage domain cinder. The following entity id could not be deleted from the Cinder provider '228de28a-8978-4f60-b08d-7930b231ba77'. (User: admin@internal).
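The merged patches ("core: Catch SC_NOT_FOUND status on cinder disk remove") make the engine treat a 404 from Cinder as "already deleted" rather than as a failure. A minimal Python sketch of that idempotent-delete pattern follows; the NotFound exception and the delete_volume client call are hypothetical stand-ins, not the actual engine or cinderclient code:

```python
class NotFound(Exception):
    """Stand-in for an HTTP 404 (SC_NOT_FOUND) response from the Cinder API."""


def remove_volume(client, volume_id):
    """Delete a volume, treating 'not found' as success.

    If the volume is already gone (e.g. a status poll runs after the
    backend finished the delete), a 404 means the desired end state is
    reached, so the operation is reported as successful instead of
    failing and leaving an 'illegal' database entry behind.
    """
    try:
        client.delete_volume(volume_id)
    except NotFound:
        # The volume no longer exists on the Cinder side -- goal reached.
        pass
    return True
```

Without the except branch, the second status check in the log above turns a successful backend delete into a command failure, which is exactly what leaves the stale database row.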

At the end of the operation, the 'vm_cinder_9_Disk_1' database entry is not deleted and wrongly gains the status 'illegal':

name                type    format  size  V_size  interface  domain  status
------------------  ------  ------  ----  ------  ---------  ------  -------
OVF_STORE           image   raw     0.12  0.13    ide        nfs_5   OK
OVF_STORE           image   raw     0.12  0.13    ide        nfs_5   OK
OVF_STORE           image   raw     0.12  0.13    ide        nfs-1   OK
OVF_STORE           image   raw     0.12  0.13    ide        nfs-1   OK
vm_cinder_9_Disk_1  cinder  raw     -     8       virtio     cinder  illegal

cinder list shows no disks, which is the expected result.

Version-Release number of selected component (if applicable):
rhevm-3.6-14

How reproducible:
100%

Steps to Reproduce:
1. Create 10 VMs, each with one Cinder disk
2. Delete all VMs using the Python SDK


Actual results:
A wrong error is displayed; at least one disk entry remains in the database after the operation and wrongly gains the status "illegal".

Expected results:
The operation completes successfully and all disks are removed from both Cinder and the engine database.

Additional info:
Comment 1 Ori Gofen 2016-01-21 05:06:08 EST
Verified on rhevm-3.6.1
Comment 2 Allon Mureinik 2016-03-10 05:45:32 EST
RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE
Comment 3 Allon Mureinik 2016-03-10 05:48:58 EST
RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE
Comment 4 Allon Mureinik 2016-03-10 07:05:28 EST
RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE
