Bug 1432909 - Ceilometer Collector is a bottleneck for large scale clouds with Telemetry
Summary: Ceilometer Collector is a bottleneck for large scale clouds with Telemetry
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-ceilometer
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 12.0 (Pike)
Assignee: Julien Danjou
QA Contact: Sasha Smolyak
URL:
Whiteboard: scale_lab
Depends On:
Blocks:
 
Reported: 2017-03-16 11:40 UTC by Alex Krzos
Modified: 2020-05-14 15:45 UTC
CC: 5 users

Fixed In Version: openstack-ceilometer-9.0.2-0.20170925173740.1057885.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-13 21:17:04 UTC
Target Upstream Version:




Links
Red Hat Product Errata RHEA-2017:3462 (normal, SHIPPED_LIVE): Red Hat OpenStack Platform 12.0 Enhancement Advisory, last updated 2018-02-16 01:43:25 UTC

Description Alex Krzos 2017-03-16 11:40:34 UTC
Description of problem:
Recent scale lab testing found that ceilometer-collector was the bottleneck in taking messages off the message bus and posting them to the Gnocchi API so that they can then be processed by the Gnocchi metricd daemons.

For the hardware tested, going above 2,500 instances started to show symptoms of ceilometer-collector lagging in processing messages off the queue: typically unbounded memory growth if prefetch is set to 0 (unlimited), or a Gnocchi backlog that never reaches 0 backlogged work even though metricd can handle the capacity given to it.
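
One way to observe the backlog symptom, assuming the python-gnocchiclient CLI is installed and admin credentials are sourced (the exact field names in the output vary between Gnocchi releases), is to watch the measures backlog that Gnocchi reports while the instances are running:

    # Illustrative only: poll the metricd processing backlog every minute.
    # "gnocchi status" reports how many metrics and measures are still waiting
    # to be processed by the metricd workers.
    watch -n 60 gnocchi status

If that backlog keeps growing (or never returns to 0) while the metricd workers are not saturated, the ingestion side (ceilometer-collector posting to the Gnocchi API) is the likely bottleneck.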

Version-Release number of selected component (if applicable):
Newton GA (OSP10)

How reproducible:
Always with enough hardware to host instances

Steps to Reproduce:
1. Deploy Cloud with Telemetry Services
2. Deploy many instances in the cloud
3.

Actual results:
Going above 2,500 instances can show "lag" in processed data even though metricd is handling the capacity given to it.

Expected results:
To scale above 2,500 instances

Additional info:
Ceilometer-collector is removed in OSP Pike (OSP12). We need to test agent-notification to see whether this bottleneck moves further back or a new bottleneck appears.

Perhaps a combination of more ceilometer-collector workers, a higher rabbit_qos_prefetch_count, and a larger executor_thread_pool_size can squeeze more scale out of the setup, though time ran short for tuning these options.
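
As a minimal sketch of that tuning, assuming the stock Newton option names (the exact sections and defaults should be verified against the deployed ceilometer.conf and oslo.messaging version), the controller-side settings might look like:

    # /etc/ceilometer/ceilometer.conf -- illustrative values only
    [DEFAULT]
    # Size of the oslo.messaging executor thread pool per worker (upstream default is 64).
    executor_thread_pool_size = 128

    [collector]
    # Number of collector worker processes pulling samples off the queue.
    workers = 4

    [oslo_messaging_rabbit]
    # Cap on unacknowledged messages fetched per consumer; 0 means unlimited
    # prefetch, which is where the unbounded memory growth was observed.
    rabbit_qos_prefetch_count = 64

A bounded prefetch trades some throughput for a predictable memory footprint, while more workers and a larger thread pool raise the rate at which samples can be posted to the Gnocchi API.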

Comment 1 Julien Danjou 2017-09-14 13:22:34 UTC
The collector has been deprecated in OSP12 and is no longer installed.

Comment 5 Julien Danjou 2017-11-15 15:15:10 UTC
The Ceilometer collector is no longer deployed or installed in OSP12, so the bottleneck is gone.

Comment 8 errata-xmlrpc 2017-12-13 21:17:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462

