Bug 1008865 - keystone-all process reaches 100% CPU consumption
Summary: keystone-all process reaches 100% CPU consumption
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: RDO
Classification: Community
Component: openstack-keystone
Version: unspecified
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Adam Young
QA Contact: yeylon@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-09-17 08:56 UTC by Nir Magnezi
Modified: 2019-09-10 14:12 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-03-19 22:38:11 UTC
Embargoed:


Attachments
packstack answer file (13.33 KB, text/plain), 2013-09-17 09:01 UTC, Nir Magnezi
keystone.log (75.23 KB, text/x-log), 2013-09-17 09:02 UTC, Nir Magnezi

Description Nir Magnezi 2013-09-17 08:56:30 UTC
Description of problem:
=======================
keystone-all process reaches 100% CPU consumption.

I created a new user group and tried to move all users to that group (via Horizon).
The action got stuck and I had to refresh the page.
From that point I started to notice that each action takes a long time (around 10 sec at minimum), and that includes the simplest CLI commands such as "nova list", "cinder list", etc.
'top' shows that each time I perform an action, the keystone-all process CPU consumption climbs to 100% for ~10 seconds.
I removed all users from the user group I created and even restarted keystone - but the issue still reproduces.

Version-Release number of selected component (if applicable):
=============================================================
RDO Havana --> http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-6.noarch.rpm

openstack-packstack-2013.2.1-0.9.dev756.el6.noarch
openstack-keystone-2013.2-0.11.b3.el6.noarch

How reproducible:
=================
Reproduced several times with my setup.
Does not reproduce with every setup.

Steps to Reproduce:
===================
See problem description.

Actual results:
===============
keystone-all process reaches 100% CPU consumption for at least 10 sec, which causes actions to take a lot longer than usual.

Expected results:
=================
keystone-all process should not consume that amount of CPU.

Additional info:
================
Keystone log attached with debug and verbose enabled.

Comment 1 Nir Magnezi 2013-09-17 09:01:36 UTC
Created attachment 798678 [details]
packstack answer file

Comment 2 Nir Magnezi 2013-09-17 09:02:19 UTC
Created attachment 798679 [details]
keystone.log

Keystone log attached with debug and verbose enabled.

Comment 3 Nir Magnezi 2013-09-17 10:02:27 UTC
Ran mysql query: 
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id FROM token WHERE token.expires > NOW() AND token.valid = 0

Copy & paste from mysql: 1294 rows in set (0.55 sec)

It took 55 seconds to get the full response (not a typo).
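
For anyone hitting the same slowdown: a quick way to see whether that query can use an index is EXPLAIN. A minimal sketch, assuming the packstack-default "keystone" database; with no index on token.expires the plan typically reports a full table scan (type: ALL):

  USE keystone;
  EXPLAIN SELECT token.id, token.expires, token.extra, token.valid,
                 token.user_id, token.trust_id
            FROM token
           WHERE token.expires > NOW() AND token.valid = 0;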

Comment 4 Alan Pevec 2013-09-17 10:04:29 UTC
So how do you measure 55s?

Comment 5 Alan Pevec 2013-09-17 10:32:22 UTC
You can also try profiling:
http://dev.mysql.com/doc/refman/5.1/en/show-profile.html
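
For reference, a minimal profiling session against the query from comment 3 might look like the following (sketch only; SHOW PROFILE is the MySQL 5.1-era mechanism, and the "keystone" database name is the packstack default):

  SET profiling = 1;        -- enable per-statement profiling for this session
  USE keystone;
  SELECT token.id FROM token
   WHERE token.expires > NOW() AND token.valid = 0;
  SHOW PROFILES;            -- recent statements with their total durations
  SHOW PROFILE FOR QUERY 1; -- per-stage timing; use the Query_ID that SHOW PROFILES reports for the SELECT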

Comment 6 Adam Young 2013-10-08 17:28:54 UTC
Please see upstream bug related to MySQL setup:  

https://bugs.launchpad.net/keystone/+bug/1182481

And confirm that you have set up MySQL appropriately. If that fixes it, please close this issue.

Comment 7 Nir Magnezi 2013-10-24 09:25:07 UTC
(In reply to Adam Young from comment #6)
> Please see upstream bug related to MySQL setup:  
> 
> https://bugs.launchpad.net/keystone/+bug/1182481
> 
> And confirm that you have set up MySQL appropriately. If that fixes it,
> please close this issue.

Adam,

I followed your comment[1] and I still see high CPU utilization by keystone-all.

[1] https://bugs.launchpad.net/keystone/+bug/1182481/comments/3
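
For context: the token table in this setup already holds well over a thousand revoked-but-unexpired rows (comment 3), and on Havana it keeps growing unless it is purged. A common mitigation at the time, alongside the index discussed below, was to flush expired tokens periodically; keystone-manage token_flush does this in releases that ship it, and against the SQL token backend it boils down to roughly the following (sketch only):

  -- Roughly what a periodic token purge does with the SQL token backend;
  -- run from cron, or use keystone-manage token_flush where available.
  DELETE FROM token WHERE expires < NOW();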

Comment 8 Adam Young 2013-12-17 19:40:14 UTC
This should have been fixed by the upstream change to create the index on the token table.
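
For the record, the upstream change referenced here indexes the expiry column, so the expires > NOW() filter in the token queries stops scanning the whole table. The effect is roughly the following (sketch; the index name is illustrative, the real change ships as a Keystone schema migration):

  -- Roughly what the upstream fix amounts to: index the expiry column so
  -- token-validation queries no longer do a full table scan.
  CREATE INDEX ix_token_expires ON token (expires);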

Comment 10 ARVINDSHARMA 2015-08-20 22:57:21 UTC
Hello All,

We are using Juno on RHEL 7 and we are also facing the same issue :(

All requests take a long time...

Please help us :)


This is from OS controller node:

KiB Mem : 26376526+total, 23464988+free, 19374300 used,  9741064 buff/cache
KiB Swap:  4194300 total,  4194300 free,        0 used. 24385766+avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 5631 keystone  20   0 2838596 684948   7804 S 115.7  0.3  57:34.22 httpd
keystone is consuming 100% CPU

Comment 11 Adam Young 2016-03-19 22:38:11 UTC
Please see comment 8. If there is still a problem, please open a new bug with the context, logs, and steps to replicate.

