Bug 1560068 - Backport RBD volume stats patch from upstream
Summary: Backport RBD volume stats patch from upstream
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: beta
Target Release: 13.0 (Queens)
Assignee: Jon Bernard
QA Contact: Avi Avraham
Docs Contact: Kim Nylander
URL:
Whiteboard:
Depends On:
Blocks: 1560069 1560070
 
Reported: 2018-03-23 20:16 UTC by Jon Bernard
Modified: 2019-02-17 03:11 UTC
CC List: 7 users

Fixed In Version: openstack-cinder-12.0.1-0.20180418194611.c476898.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1560069
Environment:
Last Closed: 2018-06-27 13:48:15 UTC
Target Upstream Version:


Links
System ID Priority Status Summary Last Updated
OpenStack gerrit 541285 None MERGED RBD: Don't query Ceph on stats for exclusive pools 2020-07-09 05:36:04 UTC
OpenStack gerrit 561721 None MERGED RBD: Don't query Ceph on stats for exclusive pools 2020-07-09 05:36:04 UTC
Red Hat Bugzilla 1569091 None None None 2019-05-31 21:44:03 UTC
Red Hat Product Errata RHEA-2018:2086 None None None 2018-06-27 13:48:48 UTC

Internal Links: 1569091

Description Jon Bernard 2018-03-23 20:16:16 UTC
Gorka recently submitted a patch upstream to improve statistics collection for large numbers of RBD volumes in Cinder.  The patch is reasonably isolated, and earlier versions of RHOS would benefit greatly from it.
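
For reference, the patch (gerrit change 541285, linked above) adds an RBD driver option, rbd_exclusive_cinder_pool. A minimal sketch of enabling it, assuming the backend section in cinder.conf is named [ceph] and that crudini is available (both assumptions; adjust to the deployment's actual backend section name):

#crudini --set /etc/cinder/cinder.conf ceph rbd_exclusive_cinder_pool true
   (then restart the cinder-volume service/container for the change to take effect)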

Comment 7 Tzach Shefi 2018-05-06 10:44:08 UTC
Is there anything I can actually test to verify this,
other than just reporting that the code has landed in the RPM?

Comment 8 Gorka Eguileor 2018-05-07 09:01:43 UTC
To verify this, create a couple of huge empty images directly on the RBD pool used by Cinder (without going through Cinder), then check the stats Cinder reports with rbd_exclusive_cinder_pool set to True and to False.  You should see that with False the non-Cinder images are accounted for, and with True they are not.
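
A sketch of that flow, assuming the Cinder RBD pool is named "volumes" (comment 12 below walks through an actual run):

#rbd create --size 10240 volumes/huge-non-cinder-vol
#cinder get-pools --detail
   (compare allocated_capacity_gb and provisioned_capacity_gb with
    rbd_exclusive_cinder_pool set to False vs. True)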

Comment 12 Tzach Shefi 2018-05-16 09:06:00 UTC
Verified on:
openstack-cinder-12.0.1-0.20180418194613.c476898.el7ost.noarch

Gorka's tip:
If exclusive = false, you get allocated = size created in Cinder and provisioned = total size in the RBD pool.

If exclusive = true, you only get allocated = size created in Cinder,
which seems to be what you are getting in your tests.

Having understood this, the system is acting as expected.

With rbd_exclusive_cinder_pool = false (the default setting),
on the controller I created two images (bypassing Cinder):
#rbd create --size 10240 volumes/vol10G
#rbd create --size 12288 volumes/vol12G

#rbd -p volumes ls
   vol10G
   vol12G

As expected:
#cinder get-pools --detail
| allocated_capacity_gb       | 0     (0 because the images bypassed Cinder)
| provisioned_capacity_gb     | 22.0  (10G+12G)

Now I created a 1G volume from Cinder:
#cinder create 1

#cinder get-pools --detail
| allocated_capacity_gb       | 1     (the 1G Cinder volume)
| total_capacity_gb           | 36.12 (the backend's total capacity as reported by Ceph)

Now with rbd_exclusive_cinder_pool = true, after the required service/docker restart.
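
For reference, on a containerized OSP 13 node that restart is roughly the following; the container and pacemaker resource names here are assumptions and vary by deployment:

#docker restart openstack-cinder-volume
   (or, where cinder-volume is managed by pacemaker: #pcs resource restart openstack-cinder-volume)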

#cinder get-pools --detail
| allocated_capacity_gb       | 1     (correct: only the 1G Cinder volume)

While "total_capacity_gb" is missing as expected -> 
https://git.openstack.org/cgit/openstack/cinder-specs/tree/specs/queens/provisioning-improvements.rst?id=d42c319ebaf7500e421d27d3fb7293933ec3c788#n327                                                                 

Good to verify.

Comment 14 errata-xmlrpc 2018-06-27 13:48:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086

