Bug 1127788 - [RFE] keystone-manage token_flush fails when there is a huge number of tokens to flush
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-keystone
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Adam Young
QA Contact: yeylon@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-08-07 14:41 UTC by Eduard Barrera
Modified: 2016-04-27 04:21 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-03-28 17:52:35 UTC
Target Upstream Version:
Embargoed:



Description Eduard Barrera 2014-08-07 14:41:27 UTC
Description of problem:

We have a case where a user is reporting a very large database.

We suggested running:

# keystone-manage token_flush

mysql> select count(*) from token;

+----------+
| count(*) |
+----------+
|  1769653 |
+----------+
1 row in set (15.01 sec)


2014-08-07 09:23:24.201 6690 TRACE keystone.common.wsgi OperationalError: (OperationalError) (1206, 'The total number of locks exceeds the lock table size') 'INSERT INTO token (id, expires, extra, valid, user_id, trust_id) VALUES (%s, %s, %s, %s, %s, %s)' ('c276e3b5f2672a3195ff75dde4e70f96', datetime.datetime(2014, 8, 8, 7, 23, 24), '{"bind": null, "token_data": {"access": {"token": {"issued_at": "2014-08-07T07:23:24.155615", "expires": "2014-08-08T07:23:24Z", "id": "MIIC8QYJKoZIhvcNAQcCoIIC4jCCAt4CAQExCTAHBgUrDgMCGjCCAUcGCSqGSIb3DQEHAaCCATgEggE0eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wOC0wN1QwNzoyMzoyNC4xNTU2MTUiLCAiZXhwaXJlcyI6ICIyMDE0LTA4LTA4VDA3OjIzOjI0WiIsICJpZCI6ICJwbGFjZWhvbGRlciJ9LCAic2VydmljZUNhdGFsb2ciOiBbXSwgInVzZXIiOiB7InVzZXJuYW1lIjogImFkbWluIiwgInJvbGVzX2xpbmtzIjogW10sICJpZCI6ICJlN2FjMDcyZDk3MDk0YWRhYTM2YzVkYmUzOWJjNWQ1ZSIsICJyb2xlcyI6IFtdLCAibmFtZSI6ICJhZG1pbiJ9LCAibWV0YWRhdGEiOiB7ImlzX2FkbWluIjogMCwgInJvbGVzIjogW119fX0xggGBMIIBfQIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVVbnNldDEOMAwGA1UEBwwFVW5zZXQxDjAMBgNVBAoMBVVuc2V0MRgwFgYDVQQDDA93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEggEAJryQKWxsPich1mUE7SDrUhF0eX5cUIeSeFcaPfFZDLttr0WAI8HpTiiUc4l6sVJGHq5Nr1b-SA9Ybg+Hp8aI0NwRqoTQxemvRCEwS+OAGObHpVLjiJelJO0B4YBwJcOZpegWgSZZT5cCnPYENcDuMotNG1Eb4GcAqNwYRtjA9748UDcRhkJtjrJTNGdg+CHvmOMngf0mN0ouNGnk58XrtLtLxlq6PvTY8vptUkuKronigTnfTXk53IjgZVCp+vEhddkjylSCX7EsQdGYnEUMFQkJY36snH0c1d9HuKEjhiityl2Fphe31crIPAAjl1TcCgtMNW4ItyzVeUDkeyxJbA=="}, "serviceCatalog": [], "user": {"username": "admin", "roles_links": [], "id": "e7ac072d97094adaa36c5dbe39bc5d5e", "roles": [], "name": "admin"}, "metadata": {"is_admin": 0, "roles": []}}}, "user": {"id": "e7ac072d97094adaa36c5dbe39bc5d5e", "enabled": true, "email": "admin@localdomain", "name": "admin", "tenantId": "eae2382282114f44a5331eb8020bdb5e"}, "key": "MIIC8QYJKoZIhvcNAQcCoIIC4jCCAt4CAQExCTAHBgUrDgMCGjCCAUcGCSqGSIb3DQEHAaCCATgEggE0eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wOC0wN1QwNzoyMzoyNC4xNTU2MTUiLCAiZXhwaXJlcyI6ICIyMDE0LTA4LTA4VDA3OjIzOjI0WiIsICJpZCI6ICJwbGFjZWhvbGRlciJ9LCAic2VydmljZUNhdGFsb2ciOiBbXSwgInVzZXIiOiB7InVzZXJuYW1lIjogImFkbWluIiwgInJvbGVzX2xpbmtzIjogW10sICJpZCI6ICJlN2FjMDcyZDk3MDk0YWRhYTM2YzVkYmUzOWJjNWQ1ZSIsICJyb2xlcyI6IFtdLCAibmFtZSI6ICJhZG1pbiJ9LCAibWV0YWRhdGEiOiB7ImlzX2FkbWluIjogMCwgInJvbGVzIjogW119fX0xggGBMIIBfQIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVVbnNldDEOMAwGA1UEBwwFVW5zZXQxDjAMBgNVBAoMBVVuc2V0MRgwFgYDVQQDDA93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEggEAJryQKWxsPich1mUE7SDrUhF0eX5cUIeSeFcaPfFZDLttr0WAI8HpTiiUc4l6sVJGHq5Nr1b-SA9Ybg+Hp8aI0NwRqoTQxemvRCEwS+OAGObHpVLjiJelJO0B4YBwJcOZpegWgSZZT5cCnPYENcDuMotNG1Eb4GcAqNwYRtjA9748UDcRhkJtjrJTNGdg+CHvmOMngf0mN0ouNGnk58XrtLtLxlq6PvTY8vptUkuKronigTnfTXk53IjgZVCp+vEhddkjylSCX7EsQdGYnEUMFQkJY36snH0c1d9HuKEjhiityl2Fphe31crIPAAjl1TcCgtMNW4ItyzVeUDkeyxJbA==", "token_version": "v2.0", "tenant": null, "metadata": {"roles": []}}', 1, 'e7ac072d97094adaa36c5dbe39bc5d5e', None)
2014-08-07 09:23:24.201 6690 TRACE keystone.common.wsgi 
2014-08-07 09:23:54.171 25983 CRITICAL keystone [-] (OperationalError) (1206, 'The total number of locks exceeds the lock table size') 'DELETE FROM token WHERE token.expires < %s' (datetime.datetime(2014, 8, 7, 7, 11, 9, 264712),)
2014-08-07 09:29:01.332 26910 CRITICAL keystone [-] (OperationalError) (1206, 'The total number of locks exceeds the lock table size') 'DELETE FROM token WHERE token.expires < %s' (datetime.datetime(2014, 8, 7, 7, 24, 28, 200182),)

It was caused by the huge number of tokens to flush (over 1.7 million rows in the token table, as shown above).


I guess this happened because innodb_buffer_pool_size got exhausted.
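For what it's worth, MySQL error 1206 (ER_LOCK_TABLE_FULL) means the InnoDB lock table, which is allocated from the buffer pool, ran out of room, so checking innodb_buffer_pool_size is a reasonable first step. A minimal check, assuming the mysql client can authenticate non-interactively (credentials and config paths are site-specific):

# Illustrative only: show the current InnoDB buffer pool size in bytes
mysql -N -B -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

# Raising it on the MySQL versions shipped at the time means editing the [mysqld]
# section of the server config (e.g. /etc/my.cnf) and restarting mysqld:
#   innodb_buffer_pool_size = 1G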


The workaround was to run the delete manually against the database, with a LIMIT, as many times as necessary:

DELETE FROM token WHERE NOT DATE_SUB(CURDATE(),INTERVAL 2 DAY) <= expires LIMIT 10000;
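
For reference, a minimal sketch of that workaround as a loop, assuming the database is named keystone and the mysql client can authenticate non-interactively (e.g. via ~/.my.cnf); it repeats the batched delete until a pass removes no rows:

#!/bin/bash
# Sketch only: delete expired tokens in batches of 10000 until none are left.
# ROW_COUNT() reports how many rows the preceding DELETE removed on this connection.
while true; do
    deleted=$(mysql -N -B keystone -e \
        "DELETE FROM token WHERE NOT DATE_SUB(CURDATE(), INTERVAL 2 DAY) <= expires LIMIT 10000;
         SELECT ROW_COUNT();")
    echo "deleted ${deleted} rows"
    [ "${deleted}" -eq 0 ] && break
    sleep 1   # pause between batches so other transactions can acquire locks
done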

Is it possible to add a --limit parameter to keystone-manage token_flush?


Thanks

Comment 2 Udi Kalifon 2014-08-07 14:50:07 UTC
We recommend running the token flush once a minute via a cron job, to prevent the database from inflating. Packstack sets up this cron job:
*/1 * * * * /usr/bin/keystone-manage token_flush >/dev/null 2>&1

Comment 5 Adam Young 2016-01-12 15:33:30 UTC
Note that upstream Keystone is moving to Fernet tokens, which do not have to be persisted to the database. Since there is a workaround (a one-time flush) for people with large token tables, and the right solution is to run token_flush on a periodic basis, there is no support from upstream Keystone development for a more complex flush mechanism.

Comment 6 Adam Young 2016-03-28 17:52:35 UTC
Note that the token flush code in the SQL backend has had a batch_size parameter for several releases:

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/token/persistence/backends/sql.py#n278

This was reported against 4.0, which is no longer accepting backports, but the desired feature is present in later versions of the product.
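
For anyone who wants to confirm that on an installed system, one option is to look for batch_size in the SQL token backend; the site-packages path below is a guess and differs across distributions and Python versions:

# Illustrative only: check whether the installed keystone carries the batched flush
grep -n "batch_size" \
    /usr/lib/python2.7/site-packages/keystone/token/persistence/backends/sql.py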

