Bug 1377891

Summary: Image-Volume Cache Caches Multiple Copies of Same Image on First Launch
Product: Red Hat OpenStack Reporter: Benjamin Schmaus <bschmaus>
Component: openstack-cinder Assignee: Alan Bishop <abishop>
Status: CLOSED ERRATA QA Contact: Tzach Shefi <tshefi>
Severity: medium Docs Contact:
Priority: unspecified    
Version: 8.0 (Liberty) CC: abishop, eharney, mlopes, srevivo, tshefi
Target Milestone: --- Keywords: Triaged, ZStream
Target Release: 8.0 (Liberty)   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: openstack-cinder-7.0.3-5.el7ost Doc Type: Bug Fix
Doc Text:
Previously, concurrent requests to create a cinder volume from the same glance image could result in multiple entries in cinder's image cache, and the duplicate entries wasted space. This update adds a synchronization lock to prevent multiple entries in the image cache: the first request to create a cinder volume from a glance image populates the cache, and all other requests use the cached image. As a result, simultaneous requests to create a cinder volume from the same glance image no longer generate more than one entry in the cinder image cache.
Story Points: ---
Clone Of:
: 1434494 (view as bug list) Environment:
Last Closed: 2017-06-14 15:44:58 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1434494, 1434499, 1434500    
Bug Blocks:    
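The synchronization fix described in the Doc Text can be sketched with a per-image lock around the cache lookup. This is a minimal illustration, not the actual Cinder code; all names (`cache`, `created`, `ensure_cached`) are hypothetical:

```python
import threading
from collections import defaultdict

cache = {}     # image_id -> cached copy (stand-in for cinder's image-volume cache)
created = []   # records every cache entry actually created
_locks = defaultdict(threading.Lock)  # one lock per image id

def ensure_cached(image_id):
    # Serialize the check-then-create per image: only the first request
    # creates a cache entry; later requests hit the cache and reuse it.
    with _locks[image_id]:
        if image_id not in cache:
            created.append(image_id)          # stand-in for copying the image
            cache[image_id] = "cached-" + image_id
    return cache[image_id]

# Three simultaneous "create volume from image" requests for the same image.
threads = [threading.Thread(target=ensure_cached, args=("cirros",)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock held across both the check and the create, `created` always ends up with exactly one entry, matching the expected behavior of one cache entry per glance image.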

Description Benjamin Schmaus 2016-09-20 22:44:12 UTC
Description of problem:

If a customer enables Image-Volume Cache in Cinder, it works correctly when you launch a single instance using an image that has never been cached before.  The image gets cached, and subsequent instance launches rely on that cached image.

However, if you launch 2+ instances simultaneously, the volume cache stores an image cache copy for each instance instead of one.

I suspect the cause is that when you launch 2 or more instances in Nova, they all check for a cached image that is not there yet, so each nova instance caches its own copy.  When launching a group of instances, the check needs to happen once, with every thread aware of the result, so the image is cached only once.
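The suspected race is an unsynchronized check-then-act. A minimal sketch (hypothetical names, not actual Cinder code) where each thread stands in for one instance's volume-create request:

```python
import threading
import time

cache = {}    # image_id -> cached copy (stand-in for the image-volume cache)
created = []  # records every cache entry actually created

def ensure_cached(image_id):
    # Unsynchronized check-then-act: with 2+ concurrent requests, each
    # thread can observe a miss before any thread has recorded its entry.
    if image_id not in cache:
        time.sleep(0.01)  # stand-in for the slow image copy, widening the race window
        created.append(image_id)
        cache[image_id] = "cached-" + image_id

threads = [threading.Thread(target=ensure_cached, args=("cirros",)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# len(created) is typically > 1 here: each request cached its own copy.
```

Because all three threads pass the `if image_id not in cache` check before any of them populates the cache, multiple duplicate entries are created, which is the behavior reported above.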

Version-Release number of selected component (if applicable):
8.0

How reproducible:
100%

Steps to Reproduce:
1. Enable Image-Volume Cache in Cinder
2. Launch 2+ instances simultaneously using an image that has never been cached
3. Each instance will end up caching the image in the Volume-Cache

Actual results:


Expected results:
One image should be cached per glance image

Additional info:

Comment 4 Eric Harney 2016-09-27 15:16:02 UTC
Simpler reproducer after creating an internal tenant and adding it to cinder.conf.  (cinder_internal_tenant_project_id, cinder_internal_tenant_user_id, and [lvm]/image_volume_cache_enabled=True)

(keystone_admin) # cinder create 1 --image cirros &
(keystone_admin) # cinder create 1 --image cirros &

# cinder list --all-tenants --fields tenant_id,name,status
+--------------------------------------+----------------------------------+--------------------------------------------+-----------+
|                  ID                  |            tenant_id             |                    name                    |   status  |
+--------------------------------------+----------------------------------+--------------------------------------------+-----------+
| 54aaed15-efc9-4852-a930-3b90f5562408 | fa440d5fa7554f5ca7e026dc3bd7726c |                     -                      | available |
| a2ba496b-ae9e-4e74-a72f-06d39f2903c1 | 49822bc694764de38aa0c7e65344603f | image-bb9d8448-025c-4e1f-8519-505b75a819e4 | available |
| baa37d18-6c56-4f9f-9aaf-2aeb89ce0851 | 49822bc694764de38aa0c7e65344603f | image-bb9d8448-025c-4e1f-8519-505b75a819e4 | available |
| c2f583cb-fb03-4288-b5c4-641c875f26bd | fa440d5fa7554f5ca7e026dc3bd7726c |                     -                      | available |
+--------------------------------------+----------------------------------+--------------------------------------------+-----------+


This is purely a Cinder bug.

Comment 18 Tzach Shefi 2017-06-04 11:55:48 UTC
Verified, on version: 
openstack-cinder-7.0.3-6.el7ost.noarch


Configure steps:

# crudini --set /etc/cinder/cinder.conf tripleo_iscsi image_volume_cache_enabled True

openstack project list | grep admin
| 827db839a5c14d139a687f3fdbb0c481 | admin   |

# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_project_id 827db839a5c14d139a687f3fdbb0c481


# openstack user list | grep admin
| 491280cbc385437a843b656fb2f20bb3 | admin      |

# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_user_id 491280cbc385437a843b656fb2f20bb3

Restart Cinder volume service. 

Uploaded an image to Glance
# glance image-create --disk-format qcow2 --container-format bare --file cirros-0.3.5-x86_64-disk.img --name cirros

Verification steps:

Start a watch in another terminal:
# watch -n 5 -d cinder list --all-tenants

To create a few volumes at once:
$ for i in 1 2 3; do cinder create 1 --image 2a051fdb-f001-467b-9cbf-ac0e899d02c6  ; done

For the input above, three volumes are created; as seen below, we have 4 rows.
The third row is the image-cache entry, and as expected we only get one such row despite booting 3 volumes from the same image.

$ cinder list --all-tenants
+--------------------------------------+----------------------------------+-----------+------------------+--------------------------------------------+------+-------------+----------+-------------+-------------+
|                  ID                  |            Tenant ID             |   Status  | Migration Status |                    Name                    | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+----------------------------------+-----------+------------------+--------------------------------------------+------+-------------+----------+-------------+-------------+
| 1380c0ad-2c1d-48d8-88fe-16d7beb5914e | 827db839a5c14d139a687f3fdbb0c481 | available |        -         |                     -                      |  1   |      -      |   true   |    False    |             |
| 3d90b9d7-2897-4ce2-814d-674265d4dfce | 827db839a5c14d139a687f3fdbb0c481 | available |        -         |                     -                      |  1   |      -      |   true   |    False    |             |
| 8718e95e-7912-40ce-a4ec-153dc54330e5 | 827db839a5c14d139a687f3fdbb0c481 | available |        -         | image-2a051fdb-f001-467b-9cbf-ac0e899d02c6 |  1   |      -      |  false   |    False    |             |
| dc754c1a-0ebd-4d38-9cfe-03df2969c93d | 827db839a5c14d139a687f3fdbb0c481 | available |        -         |                     -                      |  1   |      -      |   true   |    False    |             |
+--------------------------------------+----------------------------------+-----------+------------------+--------------------------------------------+------+-------------+----------+-------------+-------------+

Comment 20 errata-xmlrpc 2017-06-14 15:44:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1467