Bug 1434494 - Image-Volume Cache Caches Multiple Copies of Same Image on First Launch
Summary: Image-Volume Cache Caches Multiple Copies of Same Image on First Launch
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 8.0 (Liberty)
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 11.0 (Ocata)
Assignee: Alan Bishop
QA Contact: Tzach Shefi
URL:
Whiteboard:
Depends On:
Blocks: 1377891 1434499 1434500
 
Reported: 2017-03-21 15:44 UTC by Alan Bishop
Modified: 2021-12-10 15:01 UTC
CC List: 8 users

Fixed In Version: openstack-cinder-10.0.0-3.el7ost
Doc Type: Bug Fix
Doc Text:
Previously, concurrent requests to create a volume from the same image could each add an entry to the Block Storage service's image cache, wasting space with duplicate copies of the same image. This update adds a synchronization lock so that only the first request to create a volume from an image caches the image; all other requests use the cached copy.
Clone Of: 1377891
Cloned As: 1434499
Environment:
Last Closed: 2017-05-17 20:10:22 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1649636 0 None None None 2017-03-21 15:44:58 UTC
OpenStack gerrit 446590 0 None MERGED Prevent duplicate entries in the image cache 2020-09-30 03:19:48 UTC
OpenStack gerrit 448540 0 None MERGED Prevent duplicate entries in the image cache 2020-09-30 03:19:47 UTC
Red Hat Issue Tracker OSP-685 0 None None None 2021-12-10 15:01:09 UTC
Red Hat Product Errata RHEA-2017:1245 0 normal SHIPPED_LIVE Red Hat OpenStack Platform 11.0 Bug Fix and Enhancement Advisory 2017-05-17 23:01:50 UTC

Description Alan Bishop 2017-03-21 15:44:58 UTC
+++ This bug was initially created as a clone of Bug #1377891 +++

Description of problem:

If a customer enables the Image-Volume Cache in Cinder, it works correctly when you launch a single instance from an image that has never been cached before. The image gets cached, and subsequent instance launches rely on that cached image.

However, if you launch 2+ instances simultaneously, the volume cache stores an image cache copy for each instance instead of a single shared copy.

I suspect the cause is that when you launch 2 or more instances in Nova, each one checks for a cached image, finds none, and therefore caches its own copy. When launching a group of instances, the check needs to happen once, with all concurrent requests aware of its outcome, so that the image is cached only once (see the sketch below).
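
For illustration only, here is a minimal Python sketch of the kind of per-image synchronization that avoids this race. The names (get_or_create_cache_entry, create_cache_entry) are hypothetical, not Cinder's actual API; the real change is in the upstream reviews linked above.

import threading

# Hypothetical sketch: guard the cache lookup-and-populate step so that
# concurrent requests for the same image create only one cache entry.
_image_locks = {}
_image_locks_guard = threading.Lock()
_cache = {}  # image_id -> cached entry

def _lock_for(image_id):
    # One lock per image ID, so requests for different images do not
    # serialize on each other.
    with _image_locks_guard:
        return _image_locks.setdefault(image_id, threading.Lock())

def get_or_create_cache_entry(image_id, create_cache_entry):
    # Without the per-image lock, two concurrent callers can both see a
    # cache miss and both call create_cache_entry(), producing the
    # duplicate entries described in this bug.
    with _lock_for(image_id):
        if image_id not in _cache:
            _cache[image_id] = create_cache_entry(image_id)
        return _cache[image_id]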

Version-Release number of selected component (if applicable):
8.0

How reproducible:
100%

Steps to Reproduce:
1. Enable Image-Volume Cache in Cinder
2. Launch 2+ instances simultaneously using an image that has never been cached
3. Each instance ends up caching the image in the Volume-Cache

Actual results:


Expected results:
One image should be cached per glance image

Additional info:

Comment 2 Alan Bishop 2017-03-24 20:23:55 UTC
Patch has been merged to stable/ocata

Comment 3 Tzach Shefi 2017-04-06 09:31:04 UTC
Verified on HA
openstack-cinder-10.0.0-4.el7ost.noarch

Get the admin user and tenant (project) IDs, then configure them as the Cinder internal tenant and enable the image-volume cache:

# crudini --set /etc/cinder/cinder.conf tripleo_iscsi image_volume_cache_enabled True
# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_project_id 39bd74e6b7cd4e2c923d26ac5e75c198 
# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_user_id a4f9bd9cffa94c969a0cdcdc5d65827e
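
For reference, the crudini commands above should leave cinder.conf with entries like these (the tripleo_iscsi backend section name is specific to this deployment):

[DEFAULT]
cinder_internal_tenant_project_id = 39bd74e6b7cd4e2c923d26ac5e75c198
cinder_internal_tenant_user_id = a4f9bd9cffa94c969a0cdcdc5d65827e

[tripleo_iscsi]
image_volume_cache_enabled = True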

# pcs resource restart openstack-cinder-volume
(or: systemctl restart openstack-cinder-volume, if not HA)

Uploaded a Cirros image to glance. 

Then created three volumes from the same image:
# for i in 1 2 3; do cinder create 1 --image cirros; done

All three volumes were created fine, with only one image cache entry.
The cinder list output below shows 4 lines: the first is the cached image entry (only one such entry, as fixed by this bug), and the other 3 are the available volumes.


# cinder list --all-tenants                                                                                                                                                                                
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| ID                                   | Tenant ID                        | Status    | Name                                       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| 0ef914c9-0618-4864-b6cd-df65d56117b3 | 39bd74e6b7cd4e2c923d26ac5e75c198 | available | image-3ec64daa-2e57-42f4-abfd-e50435ab1fd5 | 1    | -           | false    |             |
| 1f5fa797-88f8-48fc-a604-0fe2251a1f97 | 39bd74e6b7cd4e2c923d26ac5e75c198 | available | -                                          | 1    | -           | true     |             |
| 6d07c625-d290-403f-93bd-d01e703b0083 | 39bd74e6b7cd4e2c923d26ac5e75c198 | available | -                                          | 1    | -           | true     |             |
| a68f63b4-7659-42c7-afca-106411ecfaf4 | 39bd74e6b7cd4e2c923d26ac5e75c198 | available | -                                          | 1    | -           | true     |             |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+

Comment 4 errata-xmlrpc 2017-05-17 20:10:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1245

