Bug 1101504 - Failed to delete snapshot with glusterfs/lvm backend when volume in-use
Summary: Failed to delete snapshot with glusterfs/lvm backend when volume in-use
Keywords:
Status: CLOSED DUPLICATE of bug 1056037
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 5.0 (RHEL 7)
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0 (RHEL 7)
Assignee: RHOS Maint
QA Contact: Dafna Ron
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-05-27 11:04 UTC by bkopilov
Modified: 2016-04-26 15:00 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-05-27 11:24:13 UTC
Target Upstream Version:
Embargoed:


Attachments
cinder api.log (56.75 KB, text/x-log)
2014-05-27 11:17 UTC, bkopilov
no flags
cinder volume (11.89 KB, text/x-log)
2014-05-27 11:18 UTC, bkopilov
no flags
cinder scheduler.log (1.67 KB, text/x-log)
2014-05-27 11:19 UTC, bkopilov
no flags

Description bkopilov 2014-05-27 11:04:45 UTC
Description of problem:

When running the tempest automation tests, we see failures with the lvm and
glusterfs backends.

Test steps (the failing tempest test is quoted below):
#1 create a volume
#2 create an instance from an image
#3 attach the volume to the instance
#4 create a snapshot of the attached (in-use) volume, with force=True
#5 delete the snapshot - FAILS
    def test_snapshot_create_with_volume_in_use(self):
        # Create a snapshot when volume status is in-use
        # Create a test instance
        server_name = data_utils.rand_name('instance-')
        resp, server = self.servers_client.create_server(server_name,
                                                         self.image_ref,
                                                         self.flavor_ref)
        self.addCleanup(self.servers_client.delete_server, server['id'])
        self.servers_client.wait_for_server_status(server['id'], 'ACTIVE')
        mountpoint = '/dev/%s' % CONF.compute.volume_device_name
        resp, body = self.volumes_client.attach_volume(
            self.volume_origin['id'], server['id'], mountpoint)
        self.assertEqual(202, resp.status)
        self.volumes_client.wait_for_volume_status(self.volume_origin['id'],
                                                   'in-use')
        self.addCleanup(self._detach, self.volume_origin['id'])
        # Snapshot a volume even if it's attached to an instance
        snapshot = self.create_snapshot(self.volume_origin['id'],
                                        force=True)
        # Delete the snapshot and check the delete request was accepted
        resp, _ = self.snapshots_client.delete_snapshot(snapshot['id'])
        self.assertEqual(202, resp.status)
        self.snapshots_client.wait_for_resource_deletion(snapshot['id'])
        self.snapshots.remove(snapshot)





Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
#1 create a volume
#2 create an instance from an image
#3 attach the volume to the instance
#4 create a snapshot of the attached (in-use) volume, with force=True
#5 delete the snapshot - FAILS (a reproduction sketch follows below)
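
For reference, the same steps can be driven manually through the
python-cinderclient and python-novaclient bindings. The following is only a
sketch: the credentials, auth URL, image/flavor IDs and resource names are
placeholders (not values from this report), and the waits for resource states
are reduced to comments.

from cinderclient import client as cinder_client
from novaclient import client as nova_client

# Placeholder credentials/endpoint -- substitute your own environment.
AUTH = dict(username='admin', api_key='secret', project_id='admin',
            auth_url='http://controller:5000/v2.0')
cinder = cinder_client.Client('1', **AUTH)
nova = nova_client.Client('2', **AUTH)

# 1. create a volume (wait until it reaches 'available' before attaching)
volume = cinder.volumes.create(size=1, display_name='bz1101504-vol')

# 2. create an instance from an image (wait until it reaches ACTIVE);
#    '<image-id>' and '<flavor-id>' are placeholders
server = nova.servers.create('bz1101504-vm', '<image-id>', '<flavor-id>')

# 3. attach the volume to the instance; the volume moves to 'in-use'
nova.volumes.create_server_volume(server.id, volume.id, '/dev/vdb')

# 4. snapshot the attached volume; force=True is required while it is in-use
snap = cinder.volume_snapshots.create(volume.id, force=True,
                                      display_name='bz1101504-snap')

# 5. delete the snapshot -- this is the step that fails on the
#    lvm and glusterfs backends
cinder.volume_snapshots.delete(snap)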

Actual results:
The snapshot is not deleted; the delete fails (see the attached cinder logs).

Expected results:
The snapshot should be deleted.


Additional info:

Comment 1 bkopilov 2014-05-27 11:17:36 UTC
Created attachment 899473 [details]
cinder api.log

Comment 2 bkopilov 2014-05-27 11:18:32 UTC
Created attachment 899474 [details]
cinder volume

Comment 3 bkopilov 2014-05-27 11:19:20 UTC
Created attachment 899475 [details]
cinder scheduler.log

Comment 4 Dafna Ron 2014-05-27 11:24:13 UTC

*** This bug has been marked as a duplicate of bug 1056037 ***

