Bug 990584 - [Doc] Keystone SQL Backend does not remove expired tokens
Summary: [Doc] Keystone SQL Backend does not remove expired tokens
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: doc-Installation_and_Configuration_Guide
Version: 2.0 (Folsom)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z2
Target Release: 4.0
Assignee: Bruce Reeler
QA Contact: ecs-bugs
URL:
Whiteboard:
Depends On:
Blocks: 908355 1011091 1011093 1029671
 
Reported: 2013-07-31 13:39 UTC by Stephen Gordon
Modified: 2020-03-11 14:49 UTC
CC: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-03-04 00:04:32 UTC
Target Upstream Version:
Embargoed:



Description Stephen Gordon 2013-07-31 13:39:15 UTC
Cloned for documentation impact; refer to Bug #908355 for implementation details.

Comment 2 Summer Long 2013-10-23 03:53:40 UTC
Jeff Dexter:"> It was recently brought to my attention that the token db that keystone keeps does not rid itself of expired tokens. On the gss test sytem, i had over 140,000 tokens in the DB and that is for very small and inactive deployment.
Is there discussion about keystone keeping these tokens forever, or should it be purging them at some point, or some way to manage the DB other then to go in and delete expired tokens?"
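
A quick way to gauge how large the token table has grown is to count its rows directly. This is only a sketch; the database and user names ("keystone") are assumptions based on the default SQL backend packaging:

----------------------------------------------------------
# Count the rows currently held in the Keystone token table
mysql -u keystone -p keystone -e "SELECT COUNT(*) FROM token;"
----------------------------------------------------------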

Steve Gordon:"In RHELOSP 3 (Grizzly) they need to be removed manually from the database, in RHELOSP 4 (Havana) [1] it will be possible to instead use the command "keystone-manage token_flush" provided as a result of this upstream blueprint: https://blueprints.launchpad.net/keystone/+spec/keystone-manage-token-flush"

Comment 3 Summer Long 2013-11-26 01:52:36 UTC
Updating priority to match severity.

Comment 4 Jeremy Agee 2013-12-02 21:12:55 UTC
The current recommendation from devel is to run the keystone-manage token_flush command each minute to remove expired tokens. If this is not done on a frequent basis, a few things can occur: 1) The database can fill up. 2) The database can have locking issues while token cleanup is occurring on a large dataset, which results in no new tokens being issued for the duration of the SQL table lock.

We can suggest creating the following file and restarting the cron daemon.

/etc/cron.d/keystone
---------------------------------------------------------- 
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Clean up expired tokens in the database
* * * * *     keystone    /usr/bin/keystone-manage token_flush >/var/log/keystone/cron.log 2>&1
----------------------------------------------------------

service crond restart
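
Before relying on the cron job, the command can be run once by hand to confirm it completes cleanly; this is a sketch assuming the packaged defaults (a "keystone" system user and the log path used in the cron entry above):

----------------------------------------------------------
# Run the flush once as the keystone user to confirm it completes without errors
su -s /bin/sh -c "/usr/bin/keystone-manage token_flush" keystone

# After the cron job has run, check its output for errors
tail /var/log/keystone/cron.log
----------------------------------------------------------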

Comment 7 Martin Lopes 2014-01-14 06:30:20 UTC
This bug is being assigned to Bruce Reeler, who is now the designated docs specialist for OpenStack Identity Service.

Comment 8 Bruce Reeler 2014-02-19 07:14:53 UTC
From the dev bug it looks like the upstream patch to fix this has not yet been accepted, and the dev bug has been moved to version 5.0. So I am adding the "keystone-manage token_flush command has to be run every minute" guidance to the ICG for now; we will see what happens with version 5.0, and if it is still an issue there it will also be added to the Configuration Reference.

For QA: See the note in section 9.4.1; the added sentence is: "It is recommended that this command be run approximately once per minute."

