Bug 1555188
Summary: Cinder's image volume cache sync lock prevents parallel image downloading
Product: Red Hat OpenStack
Component: openstack-cinder
Version: 10.0 (Newton)
Status: CLOSED ERRATA
Severity: high
Priority: medium
Reporter: Nilesh <nchandek>
Assignee: Alan Bishop <abishop>
QA Contact: Avi Avraham <aavraham>
Docs Contact: Kim Nylander <knylande>
CC: abishop, amcleod, cschwede, eharney, jgrosso, marjones, nchandek, srevivo, tshefi
Target Milestone: z9
Target Release: 10.0 (Newton)
Keywords: Triaged, ZStream
Hardware: All
OS: All
Fixed In Version: openstack-cinder-9.1.4-36.el7ost
Doc Type: Bug Fix
Doc Text:
Previously, Cinder used a synchronization lock to prevent duplicate entries in the volume image cache. The scope of this lock was too broad, which caused simultaneous requests to create a volume from an image to compete for the lock even when the image cache was not enabled. As a result, simultaneous requests to create a volume from an image were serialized rather than run in parallel.
With this update, the scope of the synchronization lock is reduced so that it takes effect only when the volume image cache is enabled. Simultaneous requests to create a volume from an image now run in parallel when the volume image cache is disabled. When the volume image cache is enabled, locking is minimized to ensure that only a single entry is created in the cache. (A sketch of this locking pattern follows the metadata below.)
Cloned As: 1572220 (view as bug list)
Last Closed: 2018-09-17 16:57:43 UTC
Type: Bug
Bug Blocks: 1575720, 1575758
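As referenced in the Doc Text above, here is a minimal Python sketch of the narrowed lock scope. The class, helper names, and the use of oslo_concurrency's lockutils are illustrative assumptions, not Cinder's actual implementation:

```python
# Hypothetical sketch of the fix: only take the synchronization lock
# when the volume image cache is enabled.
from oslo_concurrency import lockutils


class VolumeCreator(object):
    """Stand-in for Cinder's create-volume-from-image flow."""

    def __init__(self, driver, image_volume_cache=None):
        self.driver = driver
        # None when image_volume_cache_enabled = false in cinder.conf.
        self.image_volume_cache = image_volume_cache

    def create_from_image(self, context, volume, image_id, image_meta):
        if not self.image_volume_cache:
            # Cache disabled: no shared cache state to protect, so
            # simultaneous requests download the image in parallel.
            return self._create_from_image_download(
                context, volume, image_id, image_meta)

        # Cache enabled: serialize per image (and per backend host) so
        # that only one request downloads the image and creates the
        # single cache entry; the others wait, then reuse it.
        lock_name = 'image-cache-%s-%s' % (volume.host, image_id)
        with lockutils.lock(lock_name, external=True):
            return self._create_from_image_cache_or_download(
                context, volume, image_id, image_meta)
```

Before this fix, the equivalent lock was taken on every create-from-image request, which is why the requests in this report were serialized even with the cache disabled.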
Comment 1
Nilesh
2018-03-14 06:27:48 UTC
Just to be clear, is this referring to creating many instances from the same Glance image in a boot-from-volume scenario?

Yes, this refers to creating many instances from the same Glance image. The upstream patch has already been accepted. Updated the description and removed the FutureFeature keyword. The issue is actually a negative side effect of the fix for bug #1434499.

Verified on: openstack-cinder-9.1.4-41.el7ost.noarch

Uploaded a ~400MB qcow2 image to Glance. With the default setting (image_volume_cache_enabled = false), and with the backend changed to NFS for larger volume capacity (the image needs a 10G volume), created three 10G volumes. All were created simultaneously:

# cinder create 10 --image rhel   (run three times)

+--------------------------------------+-------------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+------+------+-------------+----------+-------------+
| 39e14288-bc2e-4d85-8fef-0e1bb917836c | downloading | - | 10 | - | false | |
| 768169d1-05a6-4ee1-8346-1aa0ccc2008d | downloading | - | 10 | - | false | |
| ae980556-e848-4f2f-ac7e-c33cf4bce97c | downloading | - | 10 | - | false | |
+--------------------------------------+-------------+------+------+-------------+----------+-------------+

After a short while, all three volumes became available at about the same time.

Now with the following set:

image_volume_cache_enabled = true
cinder_internal_tenant_project_id = ea6a84721c25439bbee43448e658942b
cinder_internal_tenant_user_id = 9206a87622184de58a7c00d32eae7bb3

Created the same three volumes at once:

+--------------------------------------+-------------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+------+------+-------------+----------+-------------+
| 005a72e4-e7a7-4598-97e9-647d751e5052 | downloading | - | 10 | - | false | |
| 7fb57ae5-9286-4bb6-a37c-58b48b72c786 | creating | - | 10 | - | false | |
| af4ecf70-ea4c-437a-82a6-df2f9a5275b9 | creating | - | 10 | - | false | |
| d865fef1-939b-4f9e-aefb-75c0152635b0 | deleting | - | 4 | - | false | |
+--------------------------------------+-------------+------+------+-------------+----------+-------------+

Ignore the volume in "deleting" state; it is not relevant to this test.
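One volume is "downloading" while the other two sit in "creating", which is consistent with the narrowed per-image lock described in the Doc Text: the first request downloads the image and populates the cache while the others wait. A minimal sketch of the branch taken while holding that lock, continuing the hypothetical example above (the get_entry()/create_entry() cache API shown here is assumed for illustration, not Cinder's actual image volume cache interface):

```python
# Continuation of the earlier sketch: the cache-enabled branch that runs
# while the per-image lock is held. The cache API is hypothetical.
def _create_from_image_cache_or_download(self, context, volume,
                                         image_id, image_meta):
    entry = self.image_volume_cache.get_entry(
        context, volume, image_id, image_meta)
    if entry:
        # Cache hit: clone from the cached image volume -- no download.
        return self.driver.create_cloned_volume(volume, entry['volume'])

    # Cache miss (first request in): download the image once, then
    # record the single cache entry that later requests will reuse.
    model_update = self._create_from_image_download(
        context, volume, image_id, image_meta)
    self.image_volume_cache.create_entry(
        context, volume, image_id, image_meta)
    return model_update
```

Under this scheme only the first request pays the download cost; the waiting requests then reuse the single cache entry, which matches the log output further below.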
A few seconds later, all three volumes are downloading:

+--------------------------------------+-------------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+------+------+-------------+----------+-------------+
| 005a72e4-e7a7-4598-97e9-647d751e5052 | downloading | - | 10 | - | false | |
| 7fb57ae5-9286-4bb6-a37c-58b48b72c786 | downloading | - | 10 | - | false | |
| af4ecf70-ea4c-437a-82a6-df2f9a5275b9 | downloading | - | 10 | - | false | |
| d865fef1-939b-4f9e-aefb-75c0152635b0 | deleting | - | 4 | - | false | |
+--------------------------------------+-------------+------+------+-------------+----------+-------------+

A few seconds after that, all three are available:

+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 005a72e4-e7a7-4598-97e9-647d751e5052 | available | - | 10 | - | true | |
| 7fb57ae5-9286-4bb6-a37c-58b48b72c786 | available | - | 10 | - | true | |
| af4ecf70-ea4c-437a-82a6-df2f9a5275b9 | available | - | 10 | - | true | |
| d865fef1-939b-4f9e-aefb-75c0152635b0 | deleting | - | 4 | - | false | |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

During this time, we can see that all volumes used the same cached image:

# tailf /var/log/cinder/volume.log | grep Temporary
2018-08-28 08:22:51.964 520487 DEBUG cinder.image.image_utils [req-28408103-1ca8-48be-9ccc-b460ff716c66 44a57f1bc9f74b3486c15a8d1f9fc313 5b6dd2c520754296a5a4fe0e2bc4f42e - default default] Temporary image a7ac7b17-b9f0-47bc-aeca-6d591cc05791 is fetched for user 44a57f1bc9f74b3486c15a8d1f9fc313. fetch /usr/lib/python2.7/site-packages/cinder/image/image_utils.py:630
2018-08-28 08:23:06.103 520487 DEBUG cinder.image.image_utils [req-d819c2e3-ca21-4020-b7b2-b6e73ecd4def 44a57f1bc9f74b3486c15a8d1f9fc313 5b6dd2c520754296a5a4fe0e2bc4f42e - default default] Temporary image a7ac7b17-b9f0-47bc-aeca-6d591cc05791 is fetched for user 44a57f1bc9f74b3486c15a8d1f9fc313. fetch /usr/lib/python2.7/site-packages/cinder/image/image_utils.py:630
2018-08-28 08:23:10.450 520487 DEBUG cinder.image.image_utils [req-58e4de83-f081-48fb-9f47-f2c68d0fdea2 44a57f1bc9f74b3486c15a8d1f9fc313 5b6dd2c520754296a5a4fe0e2bc4f42e - default default] Temporary image a7ac7b17-b9f0-47bc-aeca-6d591cc05791 is fetched for user 44a57f1bc9f74b3486c15a8d1f9fc313. fetch /usr/lib/python2.7/site-pa

OK to verify.

*** Bug 1625110 has been marked as a duplicate of this bug. ***

I don't know what you mean by "already suggested" or what info you seek, but I can tell you that the fix that allows parallel downloading of images (regardless of whether the image cache is enabled) will be available for OSP-10 in openstack-cinder-9.1.4-36.el7ost (see "Fixed In Version" at the top of this page).

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2717