Bug 990584 - [Doc] Keystone SQL Backend does not remove expired tokens
Product: Red Hat OpenStack
Classification: Red Hat
Component: doc-Installation_and_Configuration_Guide
Version: 2.0 (Folsom)
Hardware: Unspecified OS: Unspecified
Priority: high Severity: high
Target Milestone: z2
Target Release: 4.0
Assigned To: Bruce Reeler
Keywords: Documentation, Triaged
Depends On:
Blocks: 908355 1011091 1011093 1029671
Reported: 2013-07-31 09:39 EDT by Stephen Gordon
Modified: 2017-01-17 22:48 EST (History)
5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2014-03-03 19:04:32 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Stephen Gordon 2013-07-31 09:39:15 EDT
Cloned for documentation impact; refer to Bug #908355 for implementation details.
Comment 2 Summer Long 2013-10-22 23:53:40 EDT
Jeff Dexter: "It was recently brought to my attention that the token DB that Keystone keeps does not rid itself of expired tokens. On the GSS test system, I had over 140,000 tokens in the DB, and that is for a very small and inactive deployment.
Is there discussion about Keystone keeping these tokens forever, or should it be purging them at some point, or some way to manage the DB other than going in and deleting expired tokens?"

Steve Gordon: "In RHELOSP 3 (Grizzly) they need to be removed manually from the database; in RHELOSP 4 (Havana) it will instead be possible to use the "keystone-manage token_flush" command, provided as a result of this upstream blueprint: https://blueprints.launchpad.net/keystone/+spec/keystone-manage-token-flush"
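For RHELOSP 3 (Grizzly), where no token_flush command exists, the manual removal mentioned above means deleting expired rows directly with SQL. A minimal sketch, assuming a MySQL backend with the default database name `keystone` and the `token` table's `expires` column; the database name, table layout, and credentials are assumptions to adjust for the actual deployment:

```shell
# Manually remove expired tokens from the Keystone SQL backend (Grizzly).
# Assumes the default 'keystone' database and a 'token' table with an
# 'expires' column stored in UTC; adjust names and credentials as needed.
mysql -u keystone -p keystone \
    -e "DELETE FROM token WHERE expires < UTC_TIMESTAMP();"
```

Because this requires a live database connection, it is shown here as a command fragment only; in Havana and later, keystone-manage token_flush replaces this step.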
Comment 3 Summer Long 2013-11-25 20:52:36 EST
Updating priority to match severity.
Comment 4 Jeremy Agee 2013-12-02 16:12:55 EST
Current recommendation from devel is to run the keystone-manage token_flush command each minute to remove tokens. If this is not done frequently, a few things can occur: 1) The database can fill up. 2) The database can have locking issues while token cleanup is occurring on a large dataset, which results in no new tokens being issued during the SQL table lock time.

We can suggest creating the following cron entry and restarting the cron daemon:


# Clean up expired tokens in the database
* * * * *     keystone    /usr/bin/keystone-manage token_flush >/var/log/keystone/cron.log 2>&1

service crond restart
Comment 7 Martin Lopes 2014-01-14 01:30:20 EST
This bug is being assigned to Bruce Reeler, who is now the designated docs specialist for OpenStack Identity Service.
Comment 8 Bruce Reeler 2014-02-19 02:14:53 EST
From the dev bug it looks like the upstream patch to fix this has not yet been accepted, and the dev bug has been moved to version 5.0. So I am adding the "keystone-manage token_flush command has to be run every minute" recommendation to the ICG for now; will see what happens with respect to version 5.0, and if it is still an issue for 5.0, will add it to the Configuration Reference.

For QA: See the note in section 9.4.1; added sentence: "It is recommended that this command be run approximately once per minute."
