Bug 1573500 - Keystone still configured for single process [NEEDINFO]
Summary: Keystone still configured for single process
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: puppet-keystone
Version: 12.0 (Pike)
Hardware: Unspecified
OS: Unspecified
Target Milestone: zstream
Target Release: 12.0 (Pike)
Assignee: Harry Rybacki
QA Contact: Pavan
Whiteboard: aos-scalability-310
Depends On:
Reported: 2018-05-01 14:18 UTC by Alex Krzos
Modified: 2018-12-05 18:54 UTC
CC List: 13 users

Fixed In Version: puppet-keystone-11.4.0-3.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2018-12-05 18:52:40 UTC
Target Upstream Version:
amcleod: needinfo? (hrybacki)

Attachments

System ID Private Priority Status Summary Last Updated
OpenStack gerrit 576453 0 None MERGED apache wsgi: Exchange defaults for workers and threads 2020-10-24 23:56:43 UTC
Red Hat Product Errata RHBA-2018:3789 0 None None None 2018-12-05 18:54:21 UTC

Description Alex Krzos 2018-05-01 14:18:29 UTC
Description of problem:
Keystone is configured as a single process on OSP11 / Ocata z5

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Install the undercloud and check how many Keystone admin/main API processes exist

Actual results:
Single process despite multi-core system

Expected results:
Several Keystone processes to handle greater load against Keystone on the undercloud

Additional info:

Anything larger than 10 nodes will suffer a slow overcloud install because the single Keystone process can only use ~1.2 cores while building an overcloud. For example, we recently built an 82-node cluster (3 controllers, 10 Ceph nodes, 69 computes) and the deploy exceeded the 4-hour timeout because Keystone spent most of its time stuck at ~1.2 cores. This can easily be tuned to use more cores, which greatly speeds up the deployment process; OSP10 even has this fix.
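
As a stopgap this can be tuned by hand on an already-deployed undercloud. A rough sketch only -- the processes value of 12 is an arbitrary illustration, not the value puppet-keystone should pick, and a later puppet run will overwrite the edit (the conf paths match the 10-keystone_wsgi_* files quoted in the output further down):

sed -i 's/processes=1/processes=12/' /etc/httpd/conf.d/10-keystone_wsgi_admin.conf
sed -i 's/processes=1/processes=12/' /etc/httpd/conf.d/10-keystone_wsgi_main.conf
systemctl restart httpd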

This was supposedly already fixed in OSP11, according to this bz:

[root@b04-h19-1029p ~]# cat /etc/version.json
{
    "osp_series": "ocata",
    "osp_version": "11",
    "rhos_release": "11-director",
    "build": "z5",
    "uc_build_date": "20180501-140811"
}
[root@b04-h19-1029p ~]# rpm -qa | grep keystone
[root@b04-h19-1029p ~]# ps afx | grep keystone
 47316 pts/1    S+     0:00      \_ grep --color=auto keystone
 18547 ?        Sl     0:14  \_ keystone-admin  -DFOREGROUND
 18548 ?        Sl     0:09  \_ keystone-main   -DFOREGROUND
[root@b04-h19-1029p ~]# grep processes /etc/httpd/conf.d/10-keystone_wsgi_*
/etc/httpd/conf.d/10-keystone_wsgi_admin.conf:  WSGIDaemonProcess keystone_admin display-name=keystone-admin group=keystone processes=1 threads=12 user=keystone
/etc/httpd/conf.d/10-keystone_wsgi_main.conf:  WSGIDaemonProcess keystone_main display-name=keystone-main group=keystone processes=1 threads=12 user=keystone
[root@b04-h19-1029p ~]# lscpu | grep "^CPU(s):"
CPU(s):                64

I may have opened this against the incorrect component. I believe this is *fixed* in puppet-keystone, but something is passing in the wrong parameters and I am not sure which component is doing that.
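
In the meantime the values can also be forced through undercloud hieradata instead of editing the rendered vhost files. A sketch only -- keystone::wsgi::apache::workers and keystone::wsgi::apache::threads are the puppet-keystone parameters behind the WSGIDaemonProcess line, but the worker count and file path below are illustrative assumptions, not recommended values, and this assumes the hieradata_override option is available in this release's undercloud.conf:

# /home/stack/keystone-workers.yaml (hypothetical override file)
keystone::wsgi::apache::workers: 12
keystone::wsgi::apache::threads: 1

# referenced from undercloud.conf before re-running the undercloud install:
# [DEFAULT]
# hieradata_override = /home/stack/keystone-workers.yaml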

Comment 8 Juan Antonio Osorio 2018-06-19 09:24:04 UTC
Alex, what's the undercloud.conf that you used to deploy?

Comment 9 Juan Antonio Osorio 2018-06-19 09:27:03 UTC
Alex, also, you are right, this is still an issue in OSP12/Pike. The fix landed for Queens though.
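
Going by the title of the upstream review ("apache wsgi: Exchange defaults for workers and threads"), the fix swaps the defaults so the process count scales with the host instead of the thread count. Roughly, the rendered vhost line should end up looking like the following -- illustrative only, since the exact worker count is derived from the host's CPU count by puppet:

WSGIDaemonProcess keystone_main display-name=keystone-main group=keystone processes=<derived from CPU count> threads=1 user=keystone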

Comment 10 Alex Krzos 2018-06-21 12:58:57 UTC
(In reply to Juan Antonio Osorio from comment #8)
> Alex, what's the undercloud.conf that you used to deploy?

For OSP11 or OSP12?

The OSP11 undercloud.conf is available here:

Comment 11 Harry Rybacki 2018-06-25 19:09:37 UTC
Upstream review has merged. Moving RHBZ to POST.

Comment 12 Harry Rybacki 2018-11-08 16:57:54 UTC
Downstream build complete. Updating FIV and moving to MODIFIED.

Comment 21 errata-xmlrpc 2018-12-05 18:52:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

