Bug 1470363 - [Neutron] Keystone authentication issues
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo
Version: 12.0 (Pike)
Hardware: All  OS: All
Priority: unspecified  Severity: high
Assigned To: Brent Eagles
Arik Chernetsky
Status: Reopened
Reported: 2017-07-12 15:53 EDT by Joe Talerico
Modified: 2018-07-02 09:07 EDT (History)
CC: 7 users

Last Closed: 2018-04-26 14:18:17 EDT
Type: Bug

Attachments: None
Description Joe Talerico 2017-07-12 15:53:58 EDT
I set the component to openstack-tripleo, as I am not sure whether this is an issue with:

A) How Neutron is creating tokens
B) How HAProxy is configured
C) How Keystone is configured (not enough workers?)

Also, with Pike, Fernet tokens are now in use instead of the UUID tokens used previously.
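For reference, the token provider mentioned above is selected in keystone.conf; a minimal sketch (the file path and the fernet default are typical for recent releases, but the exact value depends on how the deployment was configured):

```
# /etc/keystone/keystone.conf (sketch)
[token]
provider = fernet   # previously: uuid
```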

Description of problem:

Running the performance scenarios we had run against the previous version(s), we see:
2017-07-12 18:20:31.007 122345 ERROR keystonemiddleware.auth_token [-] Bad response code while validating token: 408: RequestTimeout: Request Timeout (HTTP 408)
2017-07-12 18:20:31.007 122345 WARNING keystonemiddleware.auth_token [-] Identity response: <html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
: RequestTimeout: Request Timeout (HTTP 408)
2017-07-12 18:20:31.007 122345 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Failed to fetch token data from identity server: ServiceError: Failed to fetch token data from identity server
2017-07-12 18:20:31.008 122345 INFO neutron.wsgi [-] "GET /v2.0/networks.json HTTP/1.1" status: 503  len: 362 time: 13.8438971

Version-Release number of selected component (if applicable):
[heat-admin@overcloud-controller-0 ~]$ rpm -qa | grep neutron
[heat-admin@overcloud-controller-0 ~]$ rpm -qa | grep keystone

How reproducible:

Steps to Reproduce:
1. Execute the Rally create-and-list-networks scenario with times: 1000, concurrency: 64
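The step above can be expressed as a Rally task file; this is a sketch assuming the standard NeutronNetworks.create_and_list_networks scenario (the context values are illustrative, not taken from the original run):

```
{
    "NeutronNetworks.create_and_list_networks": [
        {
            "runner": {
                "type": "constant",
                "times": 1000,
                "concurrency": 64
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                }
            }
        }
    ]
}
```

Run with something like `rally task start create-list-network.json`.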

Actual results:
|                                                   Response Times (sec)                                                    |
| Action                 | Min (sec) | Median (sec) | 90%ile (sec) | 95%ile (sec) | Max (sec) | Avg (sec) | Success | Count |
| neutron.create_network | 0.965     | 1.447        | 39.376       | 63.021       | 117.748   | 11.164    | 98.0%   | 1000  |
| neutron.list_networks  | 0.535     | 11.913       | 31.061       | 42.132       | 119.78    | 15.354    | 99.8%   | 982   |
| total                  | 2.35      | 16.511       | 63.54        | 85.171       | 192.939   | 26.519    | 98.0%   | 1000  |

Expected results:
100% success -- as we had with Ocata : http://kibana.scalelab.redhat.com/goto/a1fba39fcf85294f46be49656dcf5f45

Additional info:

I have tried bumping the following values so far:
- timeout  http-request 20s (was 10) -- problem still exists.

- processes: 24 (was 12) -- problem still exists.
Comment 1 Joe Talerico 2017-07-13 10:19:56 EDT
keystone-admin: 24 processes (was 12)
keystone-main: 24 processes (was 12)
haproxy: timeout http-request 20s (was 10s) 
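For concreteness, the haproxy change above corresponds to something like the following in the keystone section of haproxy.cfg (the listen-section name, addresses, and server lines are illustrative; the TripleO-generated config may differ):

```
# /etc/haproxy/haproxy.cfg (sketch)
listen keystone_admin
    bind 192.0.2.10:35357            # VIP and port illustrative
    timeout http-request 20s         # raised from 10s in this deployment
    server controller-0 192.0.2.11:35357 check
```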

^ this configuration helps -- far fewer BADREQs (75% fewer)

Additional information: prior to the changes, I would see:
Jul 12 15:35:44 localhost haproxy[709736]: [12/Jul/2017:19:35:24.951] keystone_admin keystone_admin/<NOSRV> -1/-1/-1/-1/20000 408 212 - - cR-- 910/42/3/0/3 0/0 "<BADREQ>"
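The timing fields in that log line (`-1/-1/-1/-1/20000`) are haproxy's Tq/Tw/Tc/Tr/Tt; Tq = -1 means a complete request was never received before `timeout http-request` fired, which matches the `cR--` termination state (client-side timeout while waiting for the request). A small sketch that decodes those fields, assuming the default haproxy HTTP log format:

```python
import re

LOG = ('Jul 12 15:35:44 localhost haproxy[709736]: '
       '[12/Jul/2017:19:35:24.951] keystone_admin keystone_admin/<NOSRV> '
       '-1/-1/-1/-1/20000 408 212 - - cR-- 910/42/3/0/3 0/0 "<BADREQ>"')

def decode_timings(line):
    """Extract the Tq/Tw/Tc/Tr/Tt timing fields, HTTP status, and
    termination state from a default-format haproxy HTTP log line."""
    m = re.search(
        r'(-?\d+)/(-?\d+)/(-?\d+)/(-?\d+)/(-?\d+) (\d+) \d+ - - (\S{4})',
        line)
    tq, tw, tc, tr, tt = (int(x) for x in m.groups()[:5])
    return {'Tq': tq, 'Tw': tw, 'Tc': tc, 'Tr': tr, 'Tt': tt,
            'status': int(m.group(6)), 'term_state': m.group(7)}

fields = decode_timings(LOG)
# Tq == -1: the client (here, a service talking to keystone through haproxy)
# never sent a complete request; the leading 'c' in the termination state
# confirms a client-side timeout.
print(fields)
```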
Comment 2 Ryan O'Hara 2017-07-17 10:37:03 EDT
I talked with Joe quite a bit about this and here are my thoughts:

The 408 error here is happening because of the http-request timeout. It seems that increasing it to 30s makes the 408 errors go away, but I am reluctant to increase this timeout because that appears to just mask the actual problem.

The neutron client issues a request, which goes through haproxy. The neutron server that receives this request in turn creates a keystone request, which also goes through haproxy, and that is where things go badly. Say that 'timeout http-request' is set to 20s. That means the client (in this case neutron itself is the "client", since it is sending the request to keystone) has 20s to send a complete HTTP request to haproxy before a timeout occurs. Read all about it here [1].

Note that one of the more interesting things about this timeout is it only applies to the HTTP header. From [1]:

"Note that this timeout only applies to the header part of the request, and
not to any data. As soon as the empty line is received, this timeout is not
used anymore."

So it seems like neutron's HTTP request to keystone (via haproxy) is not even sending the header within the timeout. That seems strange.

1.  http://cbonte.github.io/haproxy-dconv/1.5/configuration.html#timeout%20http-request
Comment 3 Joe Talerico 2017-07-17 16:26:29 EDT
(In reply to Joe Talerico from comment #0)
> Expected results:
> 100% success -- as we had with Ocata :
> http://kibana.scalelab.redhat.com/goto/a1fba39fcf85294f46be49656dcf5f45
Rethinking the results in the kibana link above... 

Ocata TripleO deployed Neutron incorrectly [1] -- so my Ocata deployment had 32 Neutron workers (not 12).

After increasing the worker count to 32 for neutron across all 3 controllers, I see [2]. No errors; however, list_networks is taking roughly double the time it previously did (not 100% sure on this, since I only had a single run to compare against; I am gathering more data).

[1] https://review.openstack.org/#/c/481587/2
[2] http://kibana.scalelab.redhat.com/goto/aaa4ff5ba8be7a4137ad0bc2f50438dc
[3] http://kibana.scalelab.redhat.com/goto/452b7a3998cc110e336358f3ea0da324
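The worker count discussed above is set in neutron.conf; a minimal sketch (the api_workers value of 32 comes from the comment, while setting rpc_workers to match is an assumption, not something stated in this bug):

```
# /etc/neutron/neutron.conf (sketch)
[DEFAULT]
api_workers = 32
rpc_workers = 32   # assumption: often scaled alongside api_workers
```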
Comment 5 Brent Eagles 2018-01-24 10:05:17 EST
This appears to be a worker-count-related issue, so I'm closing this as a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1468018, which is currently ON_DEV with a patch posted upstream (see https://review.openstack.org/536957).

*** This bug has been marked as a duplicate of bug 1468018 ***
Comment 6 Brent Eagles 2018-01-24 12:40:32 EST
Re-opening. Joe pointed out that this is probably not related to the worker count issue in Ocata; this was found against Pike.
Comment 8 Brent Eagles 2018-02-19 23:20:19 EST
@Joe, what are the chances of repeating this scenario? I would like to see if we can capture a "slice" of the logs across the system for when this is happening.
Comment 9 Assaf Muller 2018-04-26 14:18:17 EDT
Let's re-open when anyone has access to HW and can reproduce, then we'll have folks jump on the live system.
