Bug 1292218 - [Cinder] Unable to delete (an older) snapshot (that was committed) after a new snapshot was created, snapshot gets locked
Status: CLOSED CANTFIX
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 3.6.1.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-3.6.3
Target Release: 3.6.3
Assigned To: Maor
QA Contact: Aharon Canan
Whiteboard: storage
Depends On:
Blocks:
Reported: 2015-12-16 13:41 EST by Natalie Gavrielov
Modified: 2016-03-10 02:28 EST
CC List: 5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-23 07:46:41 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
amureini: ovirt-3.6.z?
ngavrilo: planning_ack?
ngavrilo: devel_ack?
ngavrilo: testing_ack?


Attachments
engine.log, vdsm.log, cinder logs (2.26 MB, application/x-gzip)
2015-12-16 13:42 EST, Natalie Gavrielov

Description Natalie Gavrielov 2015-12-16 13:41:06 EST
Description of problem:

Unable to delete an older snapshot (that was committed) after a new snapshot was created. Snapshot is locked.  
 

Version-Release number of selected component:
rhevm-3.6.1.3-0.1.el6.noarch (build 3.6.1.4)

How reproducible:
100%

Steps to Reproduce:

1. Create a VM with 1 disk: Cinder or NFS.
2. Create a snapshot.
3. Preview and commit this snapshot.
4. Create another snapshot.
5. Delete the committed snapshot from steps 2 and 3 (see the sketch below).
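
A rough, hypothetical equivalent of these steps driven through the oVirt 3.6 (v3) REST API instead of the UI is sketched below; the engine URL, credentials, IDs, and the previewsnapshot/commitsnapshot action paths are assumptions based on the v3 API conventions, not values taken from this bug.

# Reproduction sketch against the oVirt 3.6 (v3) REST API -- all URLs, ids and
# credentials below are placeholders; adjust to your environment.
import requests

ENGINE = "https://engine.example.com/ovirt-engine/api"   # hypothetical engine URL
AUTH = ("admin@internal", "password")                    # hypothetical credentials
VM_ID = "00000000-0000-0000-0000-000000000000"           # hypothetical VM id
SNAP_ID = "11111111-1111-1111-1111-111111111111"         # hypothetical snapshot id (from step 2)
HEADERS = {"Content-Type": "application/xml"}

def post(path, body):
    r = requests.post(ENGINE + path, data=body, headers=HEADERS,
                      auth=AUTH, verify=False)
    r.raise_for_status()
    return r.text

# Step 2: create a snapshot on the VM (Cinder- or NFS-backed disk).
post("/vms/%s/snapshots" % VM_ID,
     "<snapshot><description>snap1</description></snapshot>")

# Step 3: preview and then commit that snapshot (assumed v3 action endpoints).
post("/vms/%s/previewsnapshot" % VM_ID,
     '<action><snapshot id="%s"/></action>' % SNAP_ID)
post("/vms/%s/commitsnapshot" % VM_ID, "<action/>")

# Step 4: create another snapshot.
post("/vms/%s/snapshots" % VM_ID,
     "<snapshot><description>snap2</description></snapshot>")

# Step 5: delete the committed snapshot from steps 2-3 -- the call after which
# the snapshot stays in the Locked state described under "Actual results".
requests.delete("%s/vms/%s/snapshots/%s" % (ENGINE, VM_ID, SNAP_ID),
                auth=AUTH, verify=False).raise_for_status()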

Actual results:

1. The snapshot status is Locked and the snapshot is never removed, for both the Cinder-based and the NFS-based VM.
2. The following message appears in the engine log every ~10 seconds:
INFO  [org.ovirt.engine.core.bll.RemoveAllCinderDisksCommandCallBack] (DefaultQuartzScheduler_Worker-42) [] Waiting for child commands to complete
3. For NFS only, the following message appears in engine.log:
2015-12-16 20:06:47,502 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-48) [] Correlation ID: 67cc9730, Job ID: a4c33e05-f656-4a44-a9cc-d0eeb419be0d, Call Stack: null, Custom Event ID: -1, Message: Snapshot 'snap1' deletion for VM 'vm' has been completed.
* The snapshot still appears in the UI and is locked.
** For the Cinder-based VM there is no such message.

Expected results:

The delete snapshot operation should work (or, if it is impossible to delete such a snapshot, the delete button should be disabled).
Comment 1 Natalie Gavrielov 2015-12-16 13:42 EST
Created attachment 1106506 [details]
engine.log, vdsm.log, cinder logs
Comment 2 Maor 2015-12-17 04:38:56 EST
Natalie, which Cinder version are you using?
Basically, we only support Cinder from the Kilo version onward.

You can log in to Cinder and try to delete that snapshot directly - I suspect it will not get deleted, since there is a known issue on the Juno version (and, I think, earlier versions as well) where a snapshot that has volumes depending on it cannot be deleted.
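
For anyone who wants to run that check, a minimal sketch using python-cinderclient (v2 API) is below; the credentials, Keystone URL, and snapshot ID are placeholders, and the failure mode noted in the comments is the suspected Juno-era limitation rather than something verified here.

# Check from the Cinder side: fetch the snapshot, try to delete it directly,
# and watch its status. Placeholders throughout -- adjust to your environment.
from cinderclient import client

cinder = client.Client('2',
                       'admin',                            # hypothetical username
                       'password',                         # hypothetical password
                       'admin',                            # hypothetical tenant/project
                       'http://openstack-host:5000/v2.0')  # hypothetical Keystone URL

SNAP_ID = '11111111-1111-1111-1111-111111111111'           # hypothetical snapshot id

snap = cinder.volume_snapshots.get(SNAP_ID)
print(snap.id, snap.status, snap.volume_id)

# On Juno (and possibly earlier), deleting a snapshot that still has volumes
# depending on it is expected to fail, typically leaving the snapshot in an
# error state instead of removing it.
cinder.volume_snapshots.delete(snap)

print(cinder.volume_snapshots.get(SNAP_ID).status)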
Comment 3 Aharon Canan 2015-12-21 07:42:24 EST
(In reply to Maor from comment #2)
> Natalie, which Cinder version are you using?
> Basically, we only support Cinder from the Kilo version onward.

Since when?
I tried to find that information and couldn't, and neither could Ori or Natalie.
Also, it is not mentioned on http://www.ovirt.org/Features/Cinder_Integration

Cinder/Ceph was tested using openstack-cinder-2014.2.3-6.el7ost.noarch, and if we tested on an unsupported version we have a big problem here, as we will probably need to retest.
Comment 4 Aharon Canan 2015-12-21 07:49:50 EST
BTW, if it is not supported, let's block the option to use it, the same way we require the relevant packages on the hosts.
Comment 5 Maor 2015-12-21 09:40:23 EST
This is a known issue, which we also mentioned in the Cinder RFE doc text:
https://bugzilla.redhat.com/1185826 - [RFE][oVirt] Add OpenStack Cinder storage domains with Ceph backend
"Feature: 
Integrating oVirt with Cinder to use Red Hat Ceph Storage.
Currently, oVirt supports the integration of Cinder only from OpenStack Kilo and above."

We have also discussed this issue in another Cinder bug; please take a look at https://bugzilla.redhat.com/show_bug.cgi?id=1255221#c10

Basically, I think we should not block this option, since this is a Cinder issue in a specific version. We can't support every Cinder version with all of its issues, as there could be many versions: upstream, downstream, other vendors' distributions of Cinder, and so on.
One solution might be to provide the user with an external VM/container running a Cinder version that we can support and maintain; otherwise we cannot guarantee against other Cinder bugs we might find along the way.
Comment 6 Allon Mureinik 2015-12-23 07:46:41 EST
This is an implementation bug in certain versions of Cinder. Since there's no reliable way of getting the version information from it (especially considering the multiplicity of distros), we don't really have a solution here other than clearly documenting it.

Maor - please add the required documentation.
Comment 7 Allon Mureinik 2015-12-23 07:47:39 EST
As per comment 5, this doc text is already provided. Removing the flag.
Comment 8 Ori Gofen 2015-12-23 08:20:10 EST
Hello everybody, just to bring everyone onto the same page: this bug is obviously a Cinder bug, but the funny thing about comment #5 is that no one really checked whether this bug reproduces on Kilo or not. We decided we support Kilo based on... I really don't know based on what. My point, finally, is that all the version-related bugs that were opened on Juno have also reproduced on Kilo. QE's efforts now are to reproduce those on Liberty and check whether this is an upstream bug or not.
