Bug 1641111 - cinder always checks image_volume_cache_max_size_gb and image_volume_cache_max_count when either of them is specified.
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z3
Target Release: 13.0 (Queens)
Assignee: Alan Bishop
QA Contact: Avi Avraham
Docs Contact: Kim Nylander
Keywords: Triaged, ZStream
Depends On:
Blocks: 1622453 1642155
Reported: 2018-10-19 16:29 UTC by Alan Bishop
Modified: 2018-11-13 22:13 UTC (History)
7 users

The Block Storage service (cinder) uses two volume cache limit settings. When only one cache limit was configured, adding a new entry to the cache would always cause an existing entry to be ejected from the cache. Only a single entry would be cached, regardless of the configured cache limit. The Block Storage service now correctly handles volume cache limits.
Clone Of: 1622453
Last Closed: 2018-11-13 22:13:36 UTC




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:3601 None None None 2018-11-13 22:13 UTC
OpenStack gerrit 603145 None None None 2018-10-19 16:29 UTC
Launchpad 1792944 None None None 2018-10-19 16:29 UTC

Description Alan Bishop 2018-10-19 16:29:41 UTC
+++ This bug was initially created as a clone of Bug #1622453 +++

Description of problem:
cinder always checks image_volume_cache_max_size_gb and image_volume_cache_max_count when either of them is specified.

Steps to Reproduce:
1. Add the image_volume_cache_max_count parameter and set a value greater than 0.
2. Create volumes from images; the cache never holds more than 1 entry.

Actual results:
Only 1 image cache entry is available, and cinder logs the following message:
~~~
Reclaiming image-volume cache space; removing cache entry 
~~~

Expected results:
Image cache entries should be created up to the number configured in image_volume_cache_max_count.
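The intended limit check can be sketched as follows (a minimal reconstruction with assumed names, not cinder's actual code): a limit of 0 means that dimension is unbounded, whereas the buggy behavior effectively treated the unset limit as zero capacity, so every new cache entry forced an eviction.

```python
def cache_over_limit(current_size_gb, current_count,
                     max_size_gb=0, max_count=0):
    """Return True only when a *configured* limit is exceeded.

    A limit of 0 means that dimension is unbounded; an unset limit
    must never count as "over limit".
    """
    if max_size_gb and current_size_gb > max_size_gb:
        return True
    if max_count and current_count > max_count:
        return True
    return False

# With only max_count=2 set, two cached entries do not trigger eviction:
print(cache_over_limit(current_size_gb=5, current_count=2, max_count=2))  # False
```

With the bug, specifying only one of the two options caused the other (defaulting to 0) to be compared as a hard limit, so the cache could never hold more than one entry.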

Comment 6 Tzach Shefi 2018-11-04 06:18:44 UTC
Alan,
I'm a bit confused by my results; what am I missing here?

Configured image cache
image_volume_cache_max_count = 2 
restarted docker

Uploaded 3 different glance images.
Created three volumes, one from each image; the volumes were created fine.

Despite setting the max cache count to 2, after creating three volumes from three separate images, shouldn't I see only 2 cached images? Instead I noticed three of them:

cinder list --all-tenants | grep image-
| 28b45bc5-48ed-40f0-8bf7-e9ce39e35dd0 | a426863856c54f79befc54a7f1bd1f16 | available | image-3d2baba5-977a-4683-898f-4e13c5070fe0 | 1    | -           | false    |                                      |
| 594ac77a-4443-4967-8abe-d48b22d750de | a426863856c54f79befc54a7f1bd1f16 | available | image-1c73e4d0-48a0-46de-a3bb-c89af4435bde | 1    | -           | false    |                                      |
| 7b0dba2a-acfd-4de6-9b28-266d2829bddd | a426863856c54f79befc54a7f1bd1f16 | available | image-d7e24948-6ec2-4154-9241-b75ea87146d6 | 1    | -           | false    |                                      |

Deleted all volumes and purged the cache, only to reproduce the same results: again three volumes created plus three cached images.

Comment 8 Alan Bishop 2018-11-05 16:39:57 UTC
Tzach,

It was good for me to have access to the deployment because the problem is subtle. Basically, you need to enable the image volume cache in the correct location in cinder.conf, and it seems like you are trying to do it in the iscsi backend section. That won't work, and there's a cinder-volume.log message stating the image volume cache is disabled.

When the cache is enabled, you can monitor its behavior by looking for additional DEBUG log messages. There's a message that tells you the cache's current size and count, and the configured max size and max count.

Comment 9 Alan Bishop 2018-11-05 17:00:00 UTC
Sorry, ignore my previous comment. I was looking at an older log file when the cache had not been enabled. I'll post another update once I finish analyzing the latest logs.

Comment 10 Alan Bishop 2018-11-05 19:28:46 UTC
Tzach,

The issue you describe arose from confusion over the image volume cache options appearing twice in cinder.conf. Cinder added support for a separate [backend_defaults] section, which is where default values appear for settings that can be overridden by a specific backend. Unfortunately, you modified image_volume_cache_max_count in the original [DEFAULT] section, which is no longer relevant.

So, what you want to do is set the image_volume_cache_max_count value on L1627 in cinder.conf. That's the one in the [backend_defaults] section (which starts on L1409).
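To illustrate the point above, a hypothetical cinder.conf excerpt (values invented for this example) would place the cache options under [backend_defaults], where they apply to all backends unless overridden, rather than under [DEFAULT], where they are ignored:

```ini
# Wrong place: ignored for backend-overridable options
[DEFAULT]
# image_volume_cache_max_count = 2

# Right place: applies to all backends unless a backend section overrides it
[backend_defaults]
image_volume_cache_enabled = True
image_volume_cache_max_count = 2
image_volume_cache_max_size_gb = 10
```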

Comment 11 Tzach Shefi 2018-11-06 01:52:27 UTC
Thanks, learned something new about cinder.conf. 

So now I've set the cache max count on line L1627 and restarted docker.
Still got 3 cached image volumes.

Scratch that: I've even tried setting it directly in the LVM backend section, the last two lines of cinder.conf, and restarted docker; again, three cached images.

Now here's the fun part I just realized, which might explain things or might be a bug; I'm not sure.

This command:
#cinder create 1 --image cirros & cinder create 1 --image cirros.raw & cinder create 1 --image dsl

Produces 3 volumes + 3 cached images - not good.
Then again, maybe it's my use of "&" that complicates things; it simplifies testing into a one-liner, but the creates run in parallel.
Still, a user could issue this command; no reason it shouldn't work, right?

However these three when serialized:
#cinder create 1 --image cirros
#cinder create 1 --image cirros.raw
#cinder create 1 --image dsl

Produce 3 volumes + 1 cached image, which is better, great.
But shouldn't I still see the last 2 cached images? Why does only 1 remain?
Maybe because cirros is qcow2 and cirros.raw is raw?

Comment 12 Alan Bishop 2018-11-06 02:51:40 UTC
Just lowering a limit does not trigger removing anything in order to be under the new limit. The cache limits are enforced only when a new entry is added to the cache.

The best way to answer your questions about the behavior you saw is to examine the c-vol log. Each time a new entry is added to the cache, there's a DEBUG message that reports the current cache size and count, plus the cache limits. Then, if the cache is over its limits, you'll see additional messages about entries being deleted, along with the cache size and count after each deletion.
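The behavior described above can be sketched as follows (assumed names and data shapes, not cinder's actual API): limits are enforced only at the moment a new entry is added, evicting the oldest entries until the cache is back under its configured limits.

```python
def add_cache_entry(cache, entry, max_count=0, max_size_gb=0):
    """Append an (image_id, size_gb) entry, then evict oldest-first.

    Limits of 0 are treated as unbounded. Eviction happens only here,
    on add; merely lowering a limit never removes existing entries.
    """
    cache.append(entry)
    evicted = []

    def over_limit():
        total = sum(size for _, size in cache)
        return ((max_count and len(cache) > max_count) or
                (max_size_gb and total > max_size_gb))

    while over_limit():
        old = cache.pop(0)  # oldest entry first
        evicted.append(old)
        print("Reclaiming image-volume cache space; removing cache entry", old[0])
    return evicted

# Lowering max_count to 2 does nothing by itself; only the next add evicts:
cache = [("img-a", 1), ("img-b", 1), ("img-c", 1)]
add_cache_entry(cache, ("img-d", 1), max_count=2)
print([image_id for image_id, _ in cache])  # the two newest entries remain
```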

Comment 13 Tzach Shefi 2018-11-13 18:43:11 UTC
Verified on:
openstack-cinder-12.0.4-2.el7ost.noarch

Set max cache count = 2

Created three volumes, each from a different image; as expected, only two image-xxx volumes remain under cinder list, meaning only two cached images.

I noticed one of them being deleted and replaced with the "third" cached image.

Looks good to verify.

(overcloud) [stack@undercloud-0 ~]$ glance image-list
+--------------------------------------+------------+
| ID                                   | Name       |
+--------------------------------------+------------+
| a039371a-a9fd-430a-988e-bb0734c7430a | cirros     |
| 280aec02-819e-42d8-87ce-9461c255e28b | cirros.raw |
| 42fa341c-ac67-4c70-83ab-e7d7eaf55daa | dsl        |
+--------------------------------------+------------+

Three volume-create commands:

  cinder create 1 --image cirros --volume-type tripleo_iscsi --display-name cirros
  cinder create 1 --image cirros.raw --volume-type tripleo_iscsi --display-name cirros.raw
  cinder create 1 --image dsl --volume-type tripleo_iscsi --display-name dsl.iso

The only two cached images I have left:

| 83c15202-a8df-4336-9f33-bc8064f765ba | available | image-280aec02-819e-42d8-87ce-9461c255e28b | 1    | tripleo_iscsi  | false    |                                      |
| 9d527ce8-6614-42a1-8926-f13458e35b3f | available | image-42fa341c-ac67-4c70-83ab-e7d7eaf55daa |

Comment 15 errata-xmlrpc 2018-11-13 22:13:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:3601

