Description of problem:

If a customer enables the Image-Volume Cache in Cinder, it works correctly when you launch a single instance using an image that has never been cached before: the image gets cached, and subsequent instance launches rely on that cached image. However, if you launch 2+ instances simultaneously, the volume cache stores a cached copy of the image for each instance instead of just one. I suspect the cause is that when 2 or more instances are launched in Nova, each one checks for a cached image that does not yet exist, so each instance caches its own copy. When launching a group of instances, the check needs to happen once, with all threads for each instance aware of it, so the image is cached only once.

Version-Release number of selected component (if applicable):
8.0

How reproducible:
100%

Steps to Reproduce:
1. Enable Image-Volume Cache in Cinder
2. Launch 2+ instances simultaneously using an image that has never been cached
3. Each instance will end up caching the image in the Volume-Cache

Actual results:
One cached image copy is stored per launched instance

Expected results:
One image should be cached per Glance image

Additional info:
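The race described above can be illustrated with a minimal Python sketch. This is not Cinder's actual implementation; the `cache` dict, the `create_volume_*` functions, and the entry names are all hypothetical stand-ins. It contrasts an unsynchronized check-then-cache (each concurrent request sees an empty cache and creates its own entry) with a lock-serialized check, where only the first request populates the cache:

```python
import threading

cache = {}                      # image_id -> cached entry (simulated)
cache_lock = threading.Lock()   # serializes the check-and-cache step
created = []                    # records every cache entry actually created

def create_volume_racy(image_id):
    # Unsynchronized: with concurrent requests, every request can pass
    # the "not in cache" check before any entry exists, so each one
    # creates (and records) its own cache copy.
    if image_id not in cache:
        entry = "cache-entry-%s" % threading.get_ident()
        created.append(entry)
        cache.setdefault(image_id, entry)

def create_volume_locked(image_id):
    # Holding the lock across check + insert means only the first
    # request creates the cache entry; the rest reuse it.
    with cache_lock:
        if image_id not in cache:
            entry = "cache-entry-%s" % threading.get_ident()
            created.append(entry)
            cache[image_id] = entry

threads = [threading.Thread(target=create_volume_locked, args=("cirros",))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(created))  # 1: one cache entry despite three concurrent requests
```

In real Cinder the serialization would have to work across volume requests (e.g. a per-image lock in the volume manager), not just across threads in one process, but the check-and-insert shape of the fix is the same.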
Simpler reproducer after creating an internal tenant and adding it to cinder.conf
(cinder_internal_tenant_project_id, cinder_internal_tenant_user_id, and
[lvm]/image_volume_cache_enabled=True):

(keystone_admin) # cinder create 1 --image cirros &
(keystone_admin) # cinder create 1 --image cirros &

# cinder list --all-tenants --fields tenant_id,name,status
+--------------------------------------+----------------------------------+--------------------------------------------+-----------+
| ID                                   | tenant_id                        | name                                       | status    |
+--------------------------------------+----------------------------------+--------------------------------------------+-----------+
| 54aaed15-efc9-4852-a930-3b90f5562408 | fa440d5fa7554f5ca7e026dc3bd7726c | -                                          | available |
| a2ba496b-ae9e-4e74-a72f-06d39f2903c1 | 49822bc694764de38aa0c7e65344603f | image-bb9d8448-025c-4e1f-8519-505b75a819e4 | available |
| baa37d18-6c56-4f9f-9aaf-2aeb89ce0851 | 49822bc694764de38aa0c7e65344603f | image-bb9d8448-025c-4e1f-8519-505b75a819e4 | available |
| c2f583cb-fb03-4288-b5c4-641c875f26bd | fa440d5fa7554f5ca7e026dc3bd7726c | -                                          | available |
+--------------------------------------+----------------------------------+--------------------------------------------+-----------+

This is purely a Cinder bug.
Verified on version: openstack-cinder-7.0.3-6.el7ost.noarch

Configure steps:

# crudini --set /etc/cinder/cinder.conf tripleo_iscsi image_volume_cache_enabled True

# openstack project list | grep admin
| 827db839a5c14d139a687f3fdbb0c481 | admin |
# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_project_id 827db839a5c14d139a687f3fdbb0c481

# openstack user list | grep admin
| 491280cbc385437a843b656fb2f20bb3 | admin |
# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_user_id 491280cbc385437a843b656fb2f20bb3

Restart the Cinder volume service.

Uploaded an image to Glance:
# glance image-create --disk-format qcow2 --container-format bare --file cirros-0.3.5-x86_64-disk.img --name cirros

Verification steps:

Start a watch on another terminal:
# watch -n 5 -d cinder list --all-tenants

Create a few volumes at once:
$ for i in 1 2 3; do cinder create 1 --image 2a051fdb-f001-467b-9cbf-ac0e899d02c6 ; done

The loop above creates three volumes, yet the listing below shows 4 rows. The third row is the image-cache volume, and as expected there is only one such row despite creating 3 volumes from the same image.
$ cinder list --all-tenants
+--------------------------------------+----------------------------------+-----------+------------------+--------------------------------------------+------+-------------+----------+-------------+-------------+
| ID                                   | Tenant ID                        | Status    | Migration Status | Name                                       | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+----------------------------------+-----------+------------------+--------------------------------------------+------+-------------+----------+-------------+-------------+
| 1380c0ad-2c1d-48d8-88fe-16d7beb5914e | 827db839a5c14d139a687f3fdbb0c481 | available | -                | -                                          | 1    | -           | true     | False       |             |
| 3d90b9d7-2897-4ce2-814d-674265d4dfce | 827db839a5c14d139a687f3fdbb0c481 | available | -                | -                                          | 1    | -           | true     | False       |             |
| 8718e95e-7912-40ce-a4ec-153dc54330e5 | 827db839a5c14d139a687f3fdbb0c481 | available | -                | image-2a051fdb-f001-467b-9cbf-ac0e899d02c6 | 1    | -           | false    | False       |             |
| dc754c1a-0ebd-4d38-9cfe-03df2969c93d | 827db839a5c14d139a687f3fdbb0c481 | available | -                | -                                          | 1    | -           | true     | False       |             |
+--------------------------------------+----------------------------------+-----------+------------------+--------------------------------------------+------+-------------+----------+-------------+-------------+
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1467