Bug 1703045 - Move Octavia deployment away from keystone public endpoint
Summary: Move Octavia deployment away from keystone public endpoint
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 14.0 (Rocky)
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: z4
Target Release: 14.0 (Rocky)
Assignee: Gregory Thiemonge
QA Contact: Bruna Bonguardo
URL:
Whiteboard:
Depends On:
Blocks: 1771472
 
Reported: 2019-04-25 11:17 UTC by Mauro Oddi
Modified: 2020-10-26 12:25 UTC
CC List: 7 users

Fixed In Version: openstack-tripleo-heat-templates-9.3.1-0.20190513171765.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1771472 (view as bug list)
Environment:
Last Closed: 2019-11-06 16:47:36 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1835171 0 None None None 2019-07-26 14:39:26 UTC
OpenStack gerrit 669195 0 'None' MERGED Add internal keystone endpoint in octavia variables 2020-12-04 15:03:25 UTC
OpenStack gerrit 670820 0 'None' MERGED Use internal endpoints for configuration in octavia nodes 2020-12-04 15:03:25 UTC
OpenStack gerrit 674747 0 'None' MERGED Add internal keystone endpoint in octavia variables 2020-12-04 15:03:25 UTC
OpenStack gerrit 674748 0 'None' MERGED Use internal endpoints for configuration in octavia nodes 2020-12-04 15:03:25 UTC
OpenStack gerrit 674749 0 'None' MERGED Add internal keystone endpoint in octavia variables 2020-12-04 15:03:52 UTC
OpenStack gerrit 674750 0 'None' MERGED Add internal keystone endpoint in octavia variables 2020-12-04 15:03:26 UTC
OpenStack gerrit 674751 0 'None' MERGED Use internal endpoints for configuration in octavia nodes 2020-12-04 15:03:52 UTC
OpenStack gerrit 674752 0 'None' MERGED Use internal endpoints for configuration in octavia nodes 2020-12-04 15:03:27 UTC
Red Hat Product Errata RHBA-2019:3745 0 None None None 2019-11-06 16:48:10 UTC

Description Mauro Oddi 2019-04-25 11:17:19 UTC
Description of problem:
The current Octavia deployment sends API requests to the public Keystone endpoint, even though this traffic could be considered internal (it originates from the nodes on which the services are installed) or administrative.

As deployed, this causes problems in environments where the nodes running Octavia do not have direct access to the public endpoints, or where routing is asymmetric.
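
For illustration, the desired behaviour is for the Octavia configuration playbooks to receive the internal Keystone endpoint instead of (or in addition to) the public one. A minimal sketch of the idea, with hypothetical variable names and addresses:

# Illustrative sketch only -- variable names and addresses are hypothetical,
# not taken verbatim from tripleo-heat-templates or tripleo-common.
extra_vars:
  # Public keystone endpoint, currently the only one passed to the playbooks:
  os_auth_url: "https://203.0.113.10:13000/v3"
  # Internal keystone endpoint, reachable from nodes on the internal API network:
  os_int_auth_url: "https://172.17.1.10:5000/v3"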

Version-Release number of selected component (if applicable):
Red Hat OpenStack Platform release 14.0.2 RC

openstack-tripleo-validations-9.3.1-0.20190119052820.f400181.el7ost.noarch
python2-tripleo-common-9.5.0-1.el7ost.noarch
python-tripleoclient-heat-installer-10.6.1-0.20190117233304.e780899.el7ost.noarch
openstack-tripleo-puppet-elements-9.0.1-1.el7ost.noarch
openstack-tripleo-image-elements-9.0.1-0.20181102144447.9f1c800.el7ost.noarch
openstack-tripleo-common-9.5.0-1.el7ost.noarch
ansible-role-tripleo-modify-image-1.0.1-0.20190226052419.9014df9.el7ost.noarch
openstack-tripleo-heat-templates-9.3.1-0.20190314162756.d0a6cb1.el7ost.noarch
puppet-tripleo-9.4.0-0.20190307172344.b5220a7.el7ost.noarch
openstack-tripleo-common-containers-9.5.0-1.el7ost.noarch

python2-octaviaclient-1.6.0-0.20180816134808.64d007f.el7ost.noarch
puppet-octavia-13.3.2-0.20181205202806.a89884b.el7ost.noarch
octavia-amphora-image-x86_64-14.0-20190402.1.el7ost.noarch

How reproducible:
always

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 Yuri Obshansky 2019-06-06 21:34:55 UTC
I've encountered a similar problem when trying to install
OSP 13 with a Spine-Leaf network topology + OVN + Octavia + 2 Networker nodes.
compose - 2019-01-10.1
openstack-tripleo-common-containers-8.6.6-8.el7ost.noarch
openstack-tripleo-common-8.6.6-8.el7ost.noarch
openstack-tripleo-ui-8.3.2-2.el7ost.noarch
python-tripleoclient-9.2.6-5.el7ost.noarch
openstack-tripleo-puppet-elements-8.0.1-1.el7ost.noarch
openstack-tripleo-validations-8.4.4-1.el7ost.noarch
puppet-tripleo-8.3.6-7.el7ost.noarch
ansible-tripleo-ipsec-8.1.1-0.20180308133440.8f5369a.el7ost.noarch
openstack-tripleo-image-elements-8.0.1-2.el7ost.noarch
openstack-tripleo-heat-templates-8.0.7-21.el7ost.noarch

puppet-octavia-12.4.0-7.el7ost.noarch
octavia-amphora-image-13.0-20190109.1.el7ost.noarch
python2-octaviaclient-1.4.0-1.1.el7ost.noarch
octavia-amphora-image-x86_64-13.0-20190109.1.el7ost.noarch

Roles configuration (roles_data.yaml):

- name: Controller0
...
  ServicesDefault:
...
    - OS::TripleO::Services::OctaviaApi
...

- name: Networker0
...
  ServicesDefault:
...
    - OS::TripleO::Services::OctaviaDeploymentConfig
    - OS::TripleO::Services::OctaviaHealthManager
    - OS::TripleO::Services::OctaviaHousekeeping
    - OS::TripleO::Services::OctaviaWorker
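
For context, a default TripleO Networker role is typically attached only to the internal networks, so the public Keystone endpoint is not directly routable from these nodes. Illustrative sketch, not taken from this deployment's roles_data.yaml:

- name: Networker0
  networks:
    - InternalApi
    - Tenant
  # No External network here, so the public keystone VIP is not reachable from
  # the Networker nodes -- consistent with the "Network is unreachable" error below.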

Deployment failed with an error in Mistral:
overcloud.AllNodesDeploySteps.WorkflowTasks_Step5_Execution:
  resource_type: OS::TripleO::WorkflowSteps
  physical_resource_id: c1120606-31b9-47bc-8cbf-d543beb08e1e
  status: CREATE_FAILED
  status_reason: |
    resources.WorkflowTasks_Step5_Execution: ERROR

More details about the Octavia failure:

2019-06-06 15:56:02.263 31313 INFO workflow_trace [req-fd58419e-01c7-49bc-93ba-af2f1fd87dac 78f7841a1c30427dbee402e4fd056f55 d70b2b1ee37b4acf8b250
e047e8db948 - default default] Workflow 'tripleo.overcloud.workflow_tasks.step5' [RUNNING -> ERROR, msg=Failure caused by error in tasks: octavia_
post_workflow

  octavia_post_workflow [task_ex_id=9b26f380-741d-4995-9d8e-1f2cc6e24133] -> Failure caused by error in tasks: config_octavia

  config_octavia [task_ex_id=29b1a410-2566-4a53-a64b-c0c5b0f5be81] -> Failed to run action [action_ex_id=6710acf0-b9fe-435d-beb4-9ba4e132a961, act
ion_cls='<class 'mistral.actions.action_factory.AnsiblePlaybookAction'>', attributes='{}', params='{u'remote_user': u'tripleo-admin', u'ssh_extra_
args': u'-o UserKnownHostsFile=/dev/null', u'inventory': {u'octavia_nodes': {u'hosts': {u'192.168.24.20': {}, u'192.168.24.36': {}}}}, u'become_us
er': u'root', u'extra_vars': {u'client_cert_path': u'/etc/octavia/certs/client.pem', u'mgmt_port_dev': u'o-hm0', u'lb_mgmt_subnet_name': u'lb-mgmt
-subnet', u'lb_mgmt_net_name': u'lb-mgmt-net', u'os_project_name': u'admin', u'os_username': u'admin', u'os_identity_api_version': u'3', u'amp_ima
ge_tag': u'amphora-image', u'ca_passphrase': u'mpTCUTuw7Kr3WTzGJX8grWsMr', u'auth_project_name': u'service', u'lb_mgmt_subnet_gateway': u'172.24.0
.1', u'lb_mgmt_subnet_pool_start': u'172.24.0.2', u'lb_mgmt_subnet_pool_end': u'172.24.255.254', u'os_auth_url': u'https://10.0.10.100:13000/v3',
u'ca_private_key_path': u'/etc/octavia/certs/private/cakey.pem', u'lb_sec_group_name': u'lb-mgmt-subnet', u'os_password': u'xUKFxkHwwqUkRgxAecCg4T
Qvw', u'os_auth_type': u'password', u'lb_mgmt_subnet_cidr': u'172.24.0.0/16', u'ca_cert_path': u'/etc/octavia/certs/ca_01.pem', u'generate_certs':
 True}, u'verbosity': 0, u'extra_env_variables': {u'ANSIBLE_SSH_RETRIES': u'3', u'ANSIBLE_HOST_KEY_CHECKING': u'False'}, u'ssh_private_key': u'---
--BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEA3ZA5ubbSQhIybnX1GaDWeqa4XEeo8rYgCNseSdB/rfw19ZJ7\nLqjopsadqcH8IrIdFN0MBWRlcYe0BAxP3O7C1CU5+Yl/RTa9cV
XP9S8m2WEhhwYY\np/wxLnxzFRaCjMWolRwuQZb+j/JSd6c5Z/+en502lVmybFv/QYXredT0ZtwG5GJb\nuLZ9LqVLYaGDZ/y9BWGZDztwVjWBmWlQG5PSU+MAgpjxNtj4AlGCVqHNRdnvLS4e
\nXw+XJXDs5IQMU+yFrqrNP8rVvW2CJwjJs42dN70QyFDUIMvRkYS+vDCOWFp9L5CI\nyjfEFZOir8O93hR4Yp3kY2kzr+6XZt5+yfl/qwIDAQABAoIBADTolgBVOgxxD/30\nyRzfnZgYa/oN
Wrjq6Od0e90gnvzLN4929VeFGlmJIGlDW5RleDBdQNugx+C+iSxW\nTFPz6C6E3T1Lqkq68a440bo5EkviuADoYcbroEr7iPfGMlKveLxyyFD9X7i8IxlT\no4/EKPrwrfIoQ0VBCsl402x6gA
p3mpyi5sUa2d2xDkIs4xmJ+qvHLKzRqg6YrNTu\n5FCRRhsARw/IcJXxLmMu0g67i9O1kMUGbs6XX8x5JMBigNY+i6RS97JTIu+ddXpv\njT7pOOVwY0EymWyDIKyb9pPnsuJjYw2JFaDEANU/
zpv6Vd6ojjpx32RSuqPWNZzI\nYhrEEGECgYEA9JSZkHPk9EPVLdPhUP9BETKUgto2/m7jpP51QRywS1OXT4AUzPFS\nN+uKsufKZ0meK0/LB35HP1Jet7OJeJShpXPMbG2sCWR13fGsiR4NZZ
8iW6gy6kIJ\nN2BBAHzDLRzFFdLkzq30BwfNMWNoIMp5PQsxmAkLSABGBgkOVOI4+hkCgYEA5+iC\nTS6921oQVkuUusswhy+V0pV9guD2AWqzyNy1KIOLa6bShbmrtrGRPAYG5HGHbbb9\n73
BeXuTBP8yFb2NOYNqLNvTRZ7OucEpY0oVdR65v+DMHZR/jVM6NEn4jfrVIClYX\nvCyHwsqRFe1NreBNIAHvdT6OfNYV4T3qWrsgCGMCgYBKVELbKK2DIn5OAB9wqzJO\nFK4XmlOuPWsHgKGH
2T0ml0/bxFQN+KUBA59SQak8fJ4KEaTlMRZcAx9v+qsjrx/1\nFV0h8q6e6B3+Bm1l+nEd2h/p9RMMKGd+ocz/Zes28ZBf0ojg2vLXlCJjCQ/jL0Vr\nLNS0nMMF7bdaLDRjzaB9OQKBgA8ZYV
p7H5tnisbDlwRudFNo8r1KRGjAEuRWuSvr\nytO/dNVmgDB6vUZg207oKYy4I5QuJOxxCYPuKvLncwykj5bYw9WpLPUuir3+6TeT\nvVYMcnfbgrC/2cJMzHyWv+LhFLavkk4LLC+vlrCxyav3
fa4G0jt0/jv8iGIo8NhF\ndLl3AoGBANVn9ySPH9JnyCgUhaTIJtJbL3CDGwmH8VdDrx4hgmML5/7d53cQDEX3\n7ml3GpCsfv+vWB/QgC9q0iMOL8reuNaaT1Unj+52v+jbD6GVNsx+54SrlI
1zF11d\nYYKn3NBgJI9XkIr5NK1VE48xiopCdVM5DNClC8YjJYDPb7Xp5JKt\n-----END RSA PRIVATE KEY-----\n', u'become': True, u'ssh_common_args': u'-o StrictHo
stKeyChecking=no', u'playbook': u'/usr/share/tripleo-common/playbooks/octavia-files.yaml'}']
 Unexpected error while running command.
TASK [octavia-overcloud-config : include_tasks] ********************************\nThursday 06 June 2019  15:55:57 -0400 (0:00:03.675)       0:00:03.754 ********* \nincluded: /usr/share/open
stack-tripleo-common/playbooks/roles/octavia-overcloud-config/tasks/network.yml for 192.168.24.20\n\nTASK [octavia-overcloud-config : create manag
ement network for load balancers] ***\nThursday 06 June 2019  15:55:57 -0400 (0:00:00.047)       0:00:03.802 ********* \nfatal: [192.168.24.20]: F
AILED! => {"changed": false, "cmd": "if [[ $(openstack network show lb-mgmt-net > /dev/null; echo $?) -eq 1 ]]; then\\n openstack network create -
f value -c id lb-mgmt-net\\n fi", "delta": "0:00:01.964978", "end": "2019-06-06 19:55:59.512877", "failed": true, "msg": "non-zero return code", "
rc": 1, "start": "2019-06-06 19:55:57.547899", "stderr": "Failed to discover available identity versions when contacting https://10.0.10.100:13000
/v3. Attempting to parse version from URL.\\nUnable to establish connection to https://10.0.10.100:13000/v3/auth/tokens: HTTPSConnectionPool(host=
\'10.0.10.100\', port=13000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError(\'<requests.packages.urllib3.connection
.VerifiedHTTPSConnection object at 0x7f6e2aff7910>: Failed to establish a new connection: [Errno 101] Network is unreachable\',))\\nFailed to disc
over available identity versions when contacting https://10.0.10.100:13000/v3. Attempting to parse version from URL.\\nUnable to establish connect
ion to https://10.0.10.100:13000/v3/auth/tokens: HTTPSConnectionPool(host=\'10.0.10.100\', port=13000): Max retries exceeded with url: /v3/auth/to
kens (Caused by NewConnectionError(\'<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f894e7f4910>: Failed to establish
a new connection: [Errno 101] Network is unreachable\',))", "stderr_lines": ["Failed to discover available identity versions when contacting https
://10.0.10.100:13000/v3. Attempting to parse version from URL.", "Unable to establish connection to https://10.0.10.100:13000/v3/auth/tokens: HTTP
SConnectionPool(host=\'10.0.10.100\', port=13000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError(\'<requests.packag
es.urllib3.connection.VerifiedHTTPSConnection object at 0x7f6e2aff7910>: Failed to establish a new connection: [Errno 101] Network is unreachable\
',))", "Failed to discover available identity versions when contacting https://10.0.10.100:13000/v3. Attempting to parse version from URL.", "Unab
le to establish connection to https://10.0.10.100:13000/v3/auth/tokens: HTTPSConnectionPool(host=\'10.0.10.100\', port=13000): Max retries exceede
d with url: /v3/auth/tokens (Caused by NewConnectionError(\'<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f894e7f4910
>: Failed to establish a new connection: [Errno 101] Network is unreachable\',))"], "stdout": "", "stdout_lines": []}\n\nPLAY RECAP **************
*******************************************************\n192.168.24.20              : ok=2    changed=0    unreachable=0    failed=1   \n\nThursda
y 06 June 2019  15:55:59 -0400 (0:00:02.385)       0:00:06.188 ********* \n=======================================================================
======== \n'
Stderr: u' [WARNING]: Could not match supplied host pattern, ignoring: undercloud\n'
2019-06-06 15:55:59.598 31393 ERROR mistral.executors.default_executor Traceback (most recent call last):
2019-06-06 15:55:59.598 31393 ERROR mistral.executors.default_executor   File "/usr/lib/python2.7/site-packages/mistral/executors/default_executor
.py", line 114, in run_action
2019-06-06 15:55:59.598 31393 ERROR mistral.executors.default_executor     result = action.run(action_ctx)
2019-06-06 15:55:59.598 31393 ERROR mistral.executors.default_executor   File "/usr/lib/python2.7/site-packages/tripleo_common/actions/ansible.py"
, line 541, in run
2019-06-06 15:55:59.598 31393 ERROR mistral.executors.default_executor     log_errors=processutils.LogErrors.ALL)
2019-06-06 15:55:59.598 31393 ERROR mistral.executors.default_executor   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py",
 line 424, in execute
2019-06-06 15:55:59.598 31393 ERROR mistral.executors.default_executor     cmd=sanitized_cmd)
2019-06-06 15:55:59.598 31393 ERROR mistral.executors.default_executor ProcessExecutionError: Unexpected error while running command.
2019-06-06 15:55:59.598 31393 ERROR mistral.executors.default_executor Command: ansible-playbook /usr/share/tripleo-common/playbooks/octavia-files:

Comment 4 Emblem Parade 2019-08-05 23:37:22 UTC
I'm trying to apply this patch in RDO Stein but am having trouble with "octavia-files.yaml". The problem is that it is part of the container images being downloaded, so replacing it on the TripleO host won't make a difference.

What I can do is replace it in the container image mount after it's deployed, and then restart the container service.

Could you tell me which exact container I need to update?

Comment 5 Emblem Parade 2019-08-05 23:44:18 UTC
It looks like it could be one of these container images:

* nova_scheduler
* mistral_executor
* mistral_event_engine
* mistral_engine
* mistral_api

Comment 6 Emblem Parade 2019-08-07 21:03:56 UTC
The patch does not work for me -- I get the same "Failed to establish a new connection" error.

It was hard to apply the patch, but I patched all the container images mentioned above as well as the TripleO machine.

Also note that I think you are missing something in the patch: in "octavia-files.yaml" you should probably also patch line 22 to use os_int_auth_url. In any case, the patch does not work even with my addition.
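
For illustration, the kind of change meant here would be to prefer the internal auth URL when it is defined, roughly like this (not the verbatim content of octavia-files.yaml; variable names other than os_int_auth_url come from the extra_vars shown in comment 3):

  environment:
    # Prefer the internal keystone endpoint when defined; otherwise fall
    # back to the public one (illustrative only):
    OS_AUTH_URL: "{{ os_int_auth_url | default(os_auth_url) }}"
    OS_USERNAME: "{{ os_username }}"
    OS_PROJECT_NAME: "{{ os_project_name }}"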

Please advise on next steps! This is a blocker for a project at Red Hat's CTO office.

Comment 10 errata-xmlrpc 2019-11-06 16:47:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3745

