+++ This bug was initially created as a clone of Bug #1434499 +++
+++ This bug was initially created as a clone of Bug #1434494 +++
+++ This bug was initially created as a clone of Bug #1377891 +++

Description of problem:

If a customer enables the Image-Volume Cache in Cinder, it works correctly when a single instance is launched from an image that has never been cached before: the image gets cached, and subsequent instance launches reuse that cached image. However, if you launch 2+ instances simultaneously, the volume cache stores an image-cache copy for each instance instead of just one.

I suspect the cause is a race: when you launch 2 or more instances in Nova, each one checks for a cached image, finds none, and therefore caches its own copy. When launching a group of instances, the check needs to happen once, with all threads aware of each other, so the image is cached only once.

Version-Release number of selected component (if applicable):
8.0

How reproducible:
100%

Steps to Reproduce:
1. Enable the Image-Volume Cache in Cinder.
2. Launch 2+ instances simultaneously using an image that has never been cached.
3. Each instance ends up caching the image in the volume cache.

Actual results:
One cached image is created per instance.

Expected results:
One image should be cached per Glance image.

Additional info:
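The suspected check-then-act race can be sketched outside OpenStack. The script below is only an illustration of the pattern, not Cinder code: three concurrent "instances" each check an empty cache directory, all see nothing, and all cache their own copy.

```shell
#!/bin/sh
# Sketch of the unsynchronized check-then-act race behind this bug.
# The cache directory and file names here are made up for the demo.
CACHE_DIR=$(mktemp -d)

boot_instance() {
    # No cached image yet? Then "download" and cache it ourselves.
    # Every racer passes this check before any racer finishes caching.
    if ! ls "$CACHE_DIR"/image-* >/dev/null 2>&1; then
        sleep 1                          # simulate the slow image download
        : > "$CACHE_DIR/image-$1"        # each racer caches its own copy
    fi
}

# Launch three "instances" simultaneously, as in the reproducer.
for i in 1 2 3; do boot_instance "$i" & done
wait

set -- "$CACHE_DIR"/image-*
count=$#
echo "cached copies: $count"             # 3 copies instead of the expected 1
rm -rf "$CACHE_DIR"
```

Because all three checks happen before the first (slow) cache write completes, three copies are created, which matches the behavior reported above.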
Verified; just waiting for the bug to reach ON_QA to change its status.

On a pre-fix version:
# rpm -qa | grep openstack-cinder
openstack-cinder-8.1.1-5.el7ost.noarch

I set the needed config:
crudini --set /etc/cinder/cinder.conf tripleo_iscsi image_volume_cache_enabled True
crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_project_id e71ea623e38741d8b918e371953d3b26
crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_user_id e0c11e660a424fc9ac49d7e5a2773d32

Restarted the Cinder volume service. Booting up three volumes from the same image left a mess: three image caches were created.

[root@dhcp-4-58 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name                                       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| 1a798b7c-fa1f-4fa5-a28a-adfccbec816b | available | -                                          | 1    | -           | true     |             |
| 64dbd110-5ae6-4c83-bb57-9b6b38b77f37 | available | image-e9e8470c-a493-4d6d-8817-a03c93ea2117 | 1    | -           | false    |             |
| 8447efd6-97b5-4a57-8aa9-2f7e6ecc7381 | available | image-e9e8470c-a493-4d6d-8817-a03c93ea2117 | 1    | -           | false    |             |
| 9cc9cb11-dd7b-4081-9790-e68691179387 | available | image-e9e8470c-a493-4d6d-8817-a03c93ea2117 | 1    | -           | false    |             |
| c4371a21-c54b-4a41-bea7-ab61fad59f23 | available | -                                          | 1    | -           | true     |             |
| e2d92742-d908-4462-a8e2-54d77b498a9b | available | -                                          | 1    | -           | true     |             |
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+

Cherry-picked the fix, installed it, and restarted the services. Deleted all volumes and cached images, then reran the same cinder create commands. This time, as expected, only one image cache was created for all the volumes.
[root@dhcp-4-58 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name                                       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| 03ece507-0573-45f5-8a3d-604f7bd1f31d | available | -                                          | 1    | -           | true     |             |
| 45576619-9e2d-4d4f-b5c0-6e211aa479c6 | available | image-e9e8470c-a493-4d6d-8817-a03c93ea2117 | 1    | -           | false    |             |
| c4d9a275-8b0f-4443-b8ec-d5e9714e990b | available | -                                          | 1    | -           | true     |             |
| d2ca79c0-d716-47b5-891f-3040f12efa21 | available | -                                          | 1    | -           | true     |             |
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+

Once the bug lands ON_QA, I'll check that the fix is included and verify this formally.
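The fixed behavior corresponds to serializing the cache check and creation. The sketch below reuses the same toy "instances" as before but guards the check with a lock (an atomic `mkdir` serves as a poor man's mutex here; this is an illustration of the pattern, not the actual Cinder locking code).

```shell
#!/bin/sh
# Sketch only: serialize the check-then-cache step so the first booter
# caches the image and later booters find and reuse it. Names are made up.
CACHE_DIR=$(mktemp -d)
LOCK="$CACHE_DIR/.lock"

boot_instance() {
    # mkdir succeeds for exactly one caller at a time, so it works as a lock.
    while ! mkdir "$LOCK" 2>/dev/null; do sleep 0.1; done
    if ! ls "$CACHE_DIR"/image-* >/dev/null 2>&1; then
        sleep 1                          # simulate the slow image download
        : > "$CACHE_DIR/image-cache"     # only the first caller gets here
    fi
    rmdir "$LOCK"                        # release the lock
}

for i in 1 2 3; do boot_instance "$i" & done
wait

set -- "$CACHE_DIR"/image-*
count=$#
echo "cached copies: $count"             # 1: later booters reused the cache
rm -rf "$CACHE_DIR"
```

With the check held under the lock, the second and third "instances" see the cached image and skip creating their own, matching the single image-cache volume in the post-fix listing above.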
Verified on version: openstack-cinder-8.1.1-9.el7ost.noarch

Configure steps:
# crudini --set /etc/cinder/cinder.conf tripleo_iscsi image_volume_cache_enabled True
# openstack project list | grep admin
| 1886ce317c66428b8eb43b5ddf7a3230 | admin |
# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_project_id 1886ce317c66428b8eb43b5ddf7a3230
# openstack user list | grep admin
| 9e98de9119b84f048c986d93fec49d9c | admin |
# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_user_id 9e98de9119b84f048c986d93fec49d9c

Restart the Cinder volume service.

Upload an image to Glance:
# glance image-create --disk-format qcow2 --container-format bare --file cirros-0.3.5-x86_64-disk.img --name cirros

Verification steps:

Start a watch on another terminal:
# watch -n 5 -d cinder list --all-tenants

Create a few volumes at once:
$ for i in 1 2 3; do cinder create 1 --image cirros ; done

For the input above, three volumes are created. As seen below, we get four rows: the first is the image cache, and as expected there is only one such row despite booting three volumes from the same image.

+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| ID                                   | Tenant ID                        | Status    | Name                                       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| 73ab5b45-dba2-4689-b709-fdd7513bf68f | 1886ce317c66428b8eb43b5ddf7a3230 | available | image-d7f1365f-a776-48f6-87c3-326ccf2bdaa5 | 1    | -           | false    |             |
| d18d8472-da6d-4c6e-b623-6f82ef4996cf | 1886ce317c66428b8eb43b5ddf7a3230 | available | -                                          | 1    | -           | true     |             |
| d5b4324a-c0fc-4e3b-bffa-6cc5a04c4f3e | 1886ce317c66428b8eb43b5ddf7a3230 | available | -                                          | 1    | -           | true     |             |
| f821a7e7-e306-4901-99a1-ae38a312edb9 | 1886ce317c66428b8eb43b5ddf7a3230 | available | -                                          | 1    | -           | true     |             |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
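The pass condition (exactly one image-* row in the listing no matter how many volumes were booted) can also be checked mechanically instead of by eye. A sketch, run against an abbreviated, hypothetical copy of the table:

```shell
#!/bin/sh
# Count image-cache rows in a saved `cinder list` table: with '|' as the
# field separator, the Name column is field 5, and cache volumes are the
# ones whose Name starts with "image-". Sample rows below are shortened.
cache_rows=$(awk -F'|' '$5 ~ /^ *image-/ {n++} END {print n+0}' <<'EOF'
| ID       | Tenant ID | Status    | Name                                       | Size |
| 73ab5b45 | 1886ce31  | available | image-d7f1365f-a776-48f6-87c3-326ccf2bdaa5 | 1    |
| d18d8472 | 1886ce31  | available | -                                          | 1    |
| d5b4324a | 1886ce31  | available | -                                          | 1    |
| f821a7e7 | 1886ce31  | available | -                                          | 1    |
EOF
)
echo "image-cache rows: $cache_rows"     # 1 means the fix is working
```

A count greater than 1 for a single Glance image would indicate the pre-fix duplicate-caching behavior.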
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1459