Bug 1430084

Summary: [RFE] Method to separate Gnocchi Metrics pool due to high IO requirements
Product: Red Hat OpenStack
Reporter: Alex Krzos <akrzos>
Component: openstack-tripleo
Assignee: Sébastien Han <shan>
Status: CLOSED DUPLICATE
QA Contact: Yogev Rabl <yrabl>
Severity: unspecified
Docs Contact: Derek <dcadzow>
Priority: unspecified
Version: 10.0 (Newton)
CC: aschultz, flucifre, jdurgin, johfulto, jomurphy, mburns, rhel-osp-director-maint, shan
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: scale_lab
Last Closed: 2020-04-16 13:30:29 UTC
Type: Bug

Description Alex Krzos 2017-03-07 20:18:37 UTC
Description of problem:
Gnocchi uses a lot of IOPS and can therefore bog down any other Ceph pools co-located on the same Ceph storage that tripleo/director deploys. We should be able to determine which Ceph storage nodes store which pool. Ideally, the Gnocchi metrics pool could be placed on its own set of storage nodes so that the IOPS into the metrics pool do not slow down the other pools such as vms, images, backups, volumes, etc.

Additional info:
I can include graphs/data showing % I/O utilization on the disks at specific scales with a specific Gnocchi archive-policy if that is needed.

Comment 1 Julien Danjou 2017-07-24 13:42:27 UTC
You can specify a pool for each project, but IIUC you want pools to be created on different Ceph storage, which is no longer the case.

So that has nothing to do with telemetry, since we just need a pool name. The rest is Ceph deployment/configuration options.
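
For illustration, a minimal sketch of the Gnocchi side of that split, assuming the GnocchiRbdPoolName parameter from tripleo-heat-templates (that parameter name is an assumption for this sketch, not something stated in this bug):

    # Hedged sketch of a Heat environment file; names are illustrative.
    parameter_defaults:
      # Gnocchi only consumes a pool name; creating the pool and placing
      # it on dedicated storage is a Ceph deployment concern.
      GnocchiRbdPoolName: metrics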

Comment 2 Federico Lucifredi 2018-01-18 15:00:52 UTC
I read this as a request to have director configure the pool that Gnocchi consumes according to Ceph specs.

Seb, Josh: can you indicate what the next step is, if any?

Comment 3 Josh Durgin 2018-01-19 02:21:00 UTC
(In reply to Federico Lucifredi from comment #2)
> I read this as a request to have director configure the pool that Gnocchi
> consumes according to Ceph specs.

That sounds like the next step to me.

Comment 4 Sébastien Han 2018-01-19 08:27:32 UTC
Correct, OSPd needs to add a pool for "metrics" and apply the appropriate configuration for Gnocchi.
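
A hedged sketch of what adding that pool could look like in a Heat environment file, using the CephPools override mentioned later in this bug (the pg_num value is illustrative, not a recommendation):

    parameter_defaults:
      CephPools:
        - name: metrics    # dedicated pool for Gnocchi metric data
          pg_num: 128      # illustrative; size for the expected metric load
      # Point Gnocchi at the dedicated pool (assumed parameter, see above).
      GnocchiRbdPoolName: metrics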

Comment 5 John Fulton 2020-04-16 13:30:29 UTC
In OSP13/16 you can use director to deploy different tiers of storage for pools as described here:

 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/deploying_an_overcloud_with_containerized_red_hat_ceph/assembly_ceph-second-tier-storage

As per bz 1816989, this will be simpler to deploy with the newer versions of ceph-ansible available in RHCS 3/4, which map respectively to OSP 13/16.

So in the case of this bug, the method would be to use CephPools to override the gnocchi pool definition and apply a CRUSH rule that matches the faster devices (SSDs).
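
For illustration, a hedged sketch combining the two pieces: a CRUSH rule targeting the ssd device class, and a CephPools entry binding the gnocchi pool to it. The crush_rule_config/crush_rules variables and their keys are assumptions based on the ceph-ansible versions discussed above; exact names depend on the RHCS release.

    parameter_defaults:
      CephAnsibleExtraConfig:
        crush_rule_config: true
        crush_rules:
          - name: fast_ssd     # example rule restricted to SSD OSDs
            root: default
            type: host
            class: ssd
            default: false
      CephPools:
        - name: metrics
          pg_num: 128          # illustrative
          rule_name: fast_ssd  # place the Gnocchi pool on the SSD tier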

In bz 1793525 we'll be testing methods like this, so I'm closing this bug as a duplicate of it.

*** This bug has been marked as a duplicate of bug 1793525 ***