+++ This bug was initially created as a clone of Bug #1434494 +++
+++ This bug was initially created as a clone of Bug #1377891 +++

Description of problem:

If a customer enables the Image-Volume Cache in Cinder, it works correctly when you launch a single instance from an image that has never been cached before: the image gets cached, and subsequent instance launches rely on that cached image. However, if you launch 2+ instances simultaneously, the volume cache stores an image-cache copy for each instance instead of just one.

I suspect the cause is that when 2 or more instances are launched in Nova, they all check for a cached image that is not there yet, so each instance caches its own copy. When launching a group of instances, the check needs to happen once, with all threads made aware of it, so that the image is cached only once.

Version-Release number of selected component (if applicable):
8.0

How reproducible:
100%

Steps to Reproduce:
1. Enable the Image-Volume Cache in Cinder
2. Launch 2+ instances simultaneously using an image that has never been cached
3. Each instance ends up caching the image in the Volume Cache

Actual results:
Each simultaneously launched instance creates its own cached copy of the image.

Expected results:
One image should be cached per Glance image.

Additional info:
Verified: only one image-cache entry shows up in cinder list.

# rpm -qa openstack-cinder
openstack-cinder-9.1.3-1.el7ost.noarch

In cinder.conf, enable and set these two options (in my case I used the admin project and admin user IDs):

cinder_internal_tenant_project_id = 5907b3d67.....
cinder_internal_tenant_user_id = .......

Under the backend section:

image_volume_cache_enabled = True

Restart the cinder-volume service. Uploaded an image.

# watch -n 5 -d cinder list --all-tenants

To create a few volumes at once:

$ for i in 1 2 3; do cinder create 1 --image cirros ; done

For the input above, three volumes are created. As seen below we have 4 rows: the fourth row is the image cache, and as expected we get only one such row despite booting 3 volumes from the same image.

# cinder list --all-tenants
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| ID                                   | Tenant ID                        | Status    | Name                                       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| 2c133ff1-ae57-4e63-96e2-2f48c8c145e9 | 5907b3d677114debba23c4e417612ee8 | available | -                                          | 1    | -           | true     |             |
| 4fffed5e-a2cb-4979-99db-6dbdc8c1073b | 5907b3d677114debba23c4e417612ee8 | available | -                                          | 1    | -           | true     |             |
| 86e692e5-c219-47f5-a5a9-c238d0775f4e | 5907b3d677114debba23c4e417612ee8 | available | -                                          | 1    | -           | true     |             |
| e167ab9a-fdc7-4439-bea4-630c6db44e92 | 5907b3d677114debba23c4e417612ee8 | available | image-382058da-44db-4e5d-a2ad-c8b92090dac4 | 1    | -           | false    |             |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
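The configuration steps above amount to the following cinder.conf fragment (a sketch: the section name and the IDs are placeholders, and the project/user IDs here were those of the admin project and admin user):

```ini
[DEFAULT]
# Tenant that owns the cached image-volumes; any valid project/user
# IDs work, the verification above used the admin ones.
cinder_internal_tenant_project_id = <project-id>
cinder_internal_tenant_user_id = <user-id>

[your-backend]
# Enable the image-volume cache on the backend section,
# then restart the cinder-volume service.
image_volume_cache_enabled = True
```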
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1591