Red Hat Bugzilla – Bug 1292218
[Cinder] Unable to delete (an older) snapshot (that was committed) after a new snapshot was created, snapshot gets locked
Last modified: 2016-03-10 02:28:20 EST
Description of problem:
Unable to delete an older snapshot (that was committed) after a new snapshot was created. Snapshot is locked.
Version-Release number of selected component:
rhevm-126.96.36.199-0.1.el6.noarch (build 188.8.131.52)
Steps to Reproduce:
1. Create a VM with 1 disk: Cinder or NFS.
2. Create a snapshot.
3. Preview and commit this snapshot.
4. Create another snapshot.
5. Delete the committed snapshot from steps 2 and 3.
Actual results:
1. Snapshot status is locked; it is never removed, for both the Cinder and NFS VMs.
2. The following message appears in log (every ~10 seconds):
INFO [org.ovirt.engine.core.bll.RemoveAllCinderDisksCommandCallBack] (DefaultQuartzScheduler_Worker-42)  Waiting for child commands to complete
3. For NFS only, the following message appears in engine.log:
2015-12-16 20:06:47,502 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-48)  Correlation ID: 67cc9730, Job ID: a4c33e05-f656-4a44-a9cc-d0eeb419be0d, Call Stack: null, Custom Event ID: -1, Message: Snapshot 'snap1' deletion for VM 'vm' has been completed.
* Snapshot still appears in the UI and is locked.
** For the Cinder-based VM, no such message appears.
Expected results:
The delete snapshot operation should work (or, if deleting such a snapshot is impossible, the delete button should be disabled).
Created attachment 1106506 [details]
engine.log, vdsm.log, cinder logs
Natalie, which Cinder version are you using?
Basically we only support Cinder from Kilo version.
You can log in to Cinder and try to delete that snapshot. I suspect that this snapshot will not get deleted, since there is a known issue where a snapshot that has volumes depending on it cannot be deleted on the Juno version (I think on earlier versions as well).
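The suggested check above can be sketched with the cinder CLI. This is only an illustration: the snapshot ID is a placeholder, and it assumes credentials for the Cinder environment are already sourced (e.g. via an openrc file).

```shell
# List snapshots and locate the one the engine is stuck trying to remove
cinder snapshot-list

# Inspect its status and the volume it belongs to
# (<snapshot-id> is a placeholder for the actual UUID)
cinder snapshot-show <snapshot-id>

# Attempt the deletion directly against Cinder; on Juno this is
# expected to fail or leave the snapshot in error_deleting when
# dependent volumes exist
cinder snapshot-delete <snapshot-id>

# Re-check the status afterwards
cinder snapshot-show <snapshot-id>
```

If the direct deletion fails here as well, that would confirm the lock in the engine is caused by the underlying Cinder version rather than by RHEV itself.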
(In reply to Maor from comment #2)
> Natalie, Which Cinder version are you using?
> Basically we only support Cinder from Kilo version.
I tried to look for that info and didn't find it, and neither did Ori or Natalie.
Also, it is not mentioned on http://www.ovirt.org/Features/Cinder_Integration
Cinder/Ceph was tested using openstack-cinder-2014.2.3-6.el7ost.noarch, and if we tested over a non-supported version we have a big problem here, as we will probably need to retest.
BTW, if it is not supported let's block the option to use it as we do require the relevant pkgs on the hosts.
This is a known issue which we also mentioned in the Cinder RFE doc text:
https://bugzilla.redhat.com/1185826 - [RFE][oVirt] Add OpenStack Cinder storage domains with Ceph backend
"Integrating oVirt with Cinder to use Red Hat Ceph Storage.
Currently, oVirt supports the integration of Cinder only from OpenStack Kilo and above."
We have also discussed this issue at another Cinder bug, please take a look at https://bugzilla.redhat.com/show_bug.cgi?id=1255221#c10
Basically I think we should not block this option, since this is a Cinder issue in a specific version. We can't support every Cinder version with all of its issues, since there could be many versions: upstream, downstream, other vendors' distributions of Cinder, and so on.
One solution might be to provide the user an external VM/Container of the desired Cinder version, which we can support and maintain; otherwise we cannot rule out other Cinder bugs we might find along the way.
This is an implementation bug in a certain version(s) of Cinder. Since there's no reliable way of getting the version information from it (especially considering the multiplicity of distros), we don't really have a solution here other than clearly documenting it.
Maor - please add the required documentation.
As per comment 5, this doctext is already provided. Removing the flag.
Hello everybody, just to bring everybody onto the same page. This bug is obviously a Cinder bug, but a funny thing about comment #5 is that no one really checked whether this bug reproduces on Kilo or not; we decided we support Kilo based on... I really don't know based on what. My point, finally, is that all version-related bugs that were opened on Juno have reproduced on Kilo. QE efforts now are to reproduce those on Liberty and check whether this is an upstream bug or not.