Created attachment 1080616 [details]
logs

Description of problem:
This bug is part of the 3_6_Storage_VM_Sanity automation plan and reproduces only with Cinder storage. Ten VMs, each with one Cinder disk, were deleted one by one using the Python SDK:

    for vm in self.api.vms.list():
        if vm.get_status().get_state() == 'up':
            try:
                vm.stop()
                sleep(5)
            except Exception:
                pass
        vm.delete()

During the operation the engine throws an exception while verifying the disk's status:

2015-10-07 12:53:35,242 INFO  [org.ovirt.engine.core.bll.RemoveDiskCommand] (ajp-/127.0.0.1:8702-15) [c4cee1b] Running command: RemoveDiskCommand internal: false. Entities affected : ID: 28595cc1-7544-4f84-8b54-a02118e7d215 Type: DiskAction group DELETE_DISK with role type USER
2015-10-07 12:53:35,353 ERROR [org.ovirt.engine.core.bll.storage.AbstractCinderDiskCommandCallback] (DefaultQuartzScheduler_Worker-63) [439202a2] An exception occured while verifying status for volume id '{0}' with the following exception: {1}.: com.woorea.openstack.base.client.OpenStackResponseException: Not Found

The ERROR message that ends the transaction states:

"Message: Failed to remove disk vm_cinder_9_Disk_1 from storage domain cinder. The following entity id could not be deleted from the Cinder provider"

This message is misleading: oVirt did successfully delete that Ceph-backed disk through Cinder.

2015-10-07 12:53:35,486 INFO  [org.ovirt.engine.core.bll.storage.RemoveCinderDiskCommand] (pool-6-thread-4) [7208570e] Running command: RemoveCinderDiskCommand internal: true . Entities affected : ID: 00000000-0000-0000-0000-000000000000 Type: Storage
2015-10-07 12:53:37,701 ERROR [org.ovirt.engine.core.bll.storage.AbstractCinderDiskCommandCallback] (DefaultQuartzScheduler_Worker-96) [7208570e] Failed deleting volume/snapshot from Cinder. ID: 228de28a-8978-4f60-b08d-7930b231ba77
2015-10-07 12:53:37,701 ERROR [org.ovirt.engine.core.bll.storage.RemoveCinderDiskCommand] (DefaultQuartzScheduler_Worker-96) [7208570e] Ending command 'org.ovirt.engine.core.bll.storage.RemoveCinderDiskCommand' with failure.
2015-10-07 12:53:37,701 ERROR [org.ovirt.engine.core.bll.storage.RemoveCinderDiskCommand] (DefaultQuartzScheduler_Worker-96) [7208570e] Could not volume id vm_cinder_9_Disk_1 from Cinder which is related to disk 228de28a-8978-4f60-b08d-7930b231ba77
2015-10-07 12:53:37,743 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-96) [7208570e] Correlation ID: 439202a2, Call Stack: null, Custom Event ID: -1, Message: Failed to remove disk vm_cinder_9_Disk_1 from storage domain cinder. The following entity id could not be deleted from the Cinder provider '228de28a-8978-4f60-b08d-7930b231ba77'. (User: admin@internal).

At the end of the operation the 'vm_cinder_9_Disk_1' entry is not deleted from the engine database and is left in status "illegal":

name                 type    format  size  V_size  interface  domain  status
-------------------  ------  ------  ----  ------  ---------  ------  -------
OVF_STORE            image   raw     0.12  0.13    ide        nfs_5   OK
OVF_STORE            image   raw     0.12  0.13    ide        nfs_5   OK
OVF_STORE            image   raw     0.12  0.13    ide        nfs-1   OK
OVF_STORE            image   raw     0.12  0.13    ide        nfs-1   OK
vm_cinder_9_Disk_1   cinder  raw     8             virtio     cinder  illegal

'cinder list' reports no volumes, which is the expected result.
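The discrepancy can also be confirmed from the client side by listing the disks the engine still knows about after the deletion loop (a minimal sketch; the top-level disks collection and the get_alias()/get_status() accessors are assumed from the oVirt Python SDK v3, not taken from the test code):

    # List every disk the engine still reports after the deletion loop.
    # Expected here: the OVF_STORE disks plus vm_cinder_9_Disk_1 in state
    # 'illegal', even though 'cinder list' on the backend returns no volumes.
    for disk in api.disks.list():
        print(disk.get_alias(), disk.get_status().get_state())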
Version-Release number of selected component (if applicable):
rhevm-3.6-14

How reproducible:
100%

Steps to Reproduce:
1. Create 10 VMs, each with a Cinder disk.
2. Delete all VMs using the Python SDK (the loop shown in the description; a self-contained variant is sketched under Additional info).

Actual results:
A wrong error is displayed; at least one disk remains after the operation and is wrongly marked "illegal".

Expected results:
The operation succeeds and all disks are removed.

Additional info:
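For reference, a self-contained variant of the reproduction loop (a sketch assuming the oVirt Python SDK v3, ovirt-engine-sdk-python; the engine URL and credentials below are placeholders, not values from the actual test). It polls each VM until it reports 'down' instead of sleeping a fixed 5 seconds, so delete() is never issued against a VM that is still powering off:

    from time import sleep
    from ovirtsdk.api import API   # oVirt Python SDK v3 entry point (assumption)

    # Placeholder connection details -- substitute the environment under test.
    api = API(url='https://engine.example.com/api',
              username='admin@internal',
              password='password',
              insecure=True)

    for vm in api.vms.list():
        # Stop the VM and wait until it actually reaches 'down' before deleting,
        # instead of relying on a fixed sleep as the original test does.
        if vm.get_status().get_state() == 'up':
            try:
                vm.stop()
            except Exception:
                pass
            while api.vms.get(id=vm.get_id()).get_status().get_state() != 'down':
                sleep(2)
        vm.delete()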
Verified on rhevm-3.6.1
RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE