Description of problem:
Recent scale lab testing found that ceilometer-collector was the bottleneck in pulling messages off the message bus and posting them to the Gnocchi API so that they can then be processed by the Gnocchi metricd daemons. On the hardware tested, going above 2,500 instances started to show symptoms of ceilometer-collector lagging in processing messages off the queue: typically unbounded memory growth when prefetch is set to 0 (unlimited), or a Gnocchi backlog that never reaches 0, even though metricd can handle the capacity given to it.

Version-Release number of selected component (if applicable):
Newton GA (OSP10)

How reproducible:
Always, with enough hardware to host the instances

Steps to Reproduce:
1. Deploy a cloud with Telemetry services
2. Deploy many instances in the cloud
3.

Actual results:
Above 2,500 instances, processed data starts to lag even though metricd is handling the capacity.

Expected results:
Scale above 2,500 instances.

Additional info:
ceilometer-collector is removed in Pike (OSP12). We need to test agent-notification to see whether this bottleneck moves further back or a new bottleneck appears. A combination of more ceilometer-collector workers, rabbit_qos_prefetch_count, and executor_thread_pool_size might squeeze more scale out of the setup, but time ran short on attempting to tune these options.
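For reference, a minimal sketch of the tuning knobs mentioned above as they might appear in ceilometer.conf on the controllers. rabbit_qos_prefetch_count ([oslo_messaging_rabbit]) and executor_thread_pool_size ([DEFAULT]) are standard oslo.messaging options; the [collector]/workers option name and all values shown are assumptions for illustration, not validated settings from this testing.

  [DEFAULT]
  # Per-worker oslo.messaging executor thread pool (assumed starting
  # point; the upstream default is 64)
  executor_thread_pool_size = 128

  [collector]
  # Number of collector worker processes (section/option name assumed;
  # scale with available cores on the controller)
  workers = 8

  [oslo_messaging_rabbit]
  # Bound the number of unacknowledged messages each consumer prefetches,
  # instead of the unlimited prefetch (0) that led to memory growth
  rabbit_qos_prefetch_count = 64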
The collector has been deprecated in OSP12 and is not installed anymore.
The Ceilometer collector is no longer deployed or installed in OSP12. The bottleneck is gone.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:3462