Bug 1683329 - OpenStack doesn't deploy properly with latest OCP 3.11.82 ERRATA
Summary: OpenStack doesn't deploy properly with latest OCP 3.11.82 ERRATA
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.11.0
Hardware: Unspecified
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 3.11.z
Assignee: Tzu-Mainn Chen
QA Contact: weiwei jiang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-26 15:47 UTC by Pedro Amoedo
Modified: 2023-09-14 05:24 UTC

Fixed In Version: openshift-ansible-3.11.91-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-16 07:46:49 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:3695 0 None None None 2020-09-16 07:47:04 UTC

Description Pedro Amoedo 2019-02-26 15:47:44 UTC
Description of problem:

With the latest OCP v3.11.82 errata[1], which includes several OpenStack-specific fixes such as the new "openshift_openstack_node_subnet_name" variable[2], the OpenStack Ansible playbook (/usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml) fails with the following error:

TASK [openshift_openstack : Handle the Stack (create/delete)] ******************
Monday 25 February 2019  13:23:08 +0000 (0:00:00.033)       0:00:04.547 *******
FAILED - RETRYING: Handle the Stack (create/delete) (5 retries left).
FAILED - RETRYING: Handle the Stack (create/delete) (4 retries left).
FAILED - RETRYING: Handle the Stack (create/delete) (3 retries left).
FAILED - RETRYING: Handle the Stack (create/delete) (2 retries left).
FAILED - RETRYING: Handle the Stack (create/delete) (1 retries left).
fatal: [localhost]: FAILED! => {"attempts": 5, "changed": false, "msg": "(400) Client Error for url: https://<obfuscated>", "response": {"code": 400, "error": {"message": "The specified reference \"subnet\" (in interface.Properties.subnet_id) is incorrect.", "traceback": null, "type": "InvalidTemplateReference"}, "explanation": "The server could not comply with the request since it is either malformed or otherwise incorrect.", "title": "Bad Request"}}
...ignoring
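
The InvalidTemplateReference error means the generated Heat template refers to a resource named "subnet" that is not defined in the stack. A hypothetical minimal Heat fragment illustrating this failure mode is sketched below; the resource names mirror the error message, and the network value is a placeholder, neither is taken from the actual generated template:

```yaml
# Hypothetical Heat fragment; names and values are illustrative only.
heat_template_version: 2016-10-14

resources:
  interface:
    type: OS::Neutron::Port
    properties:
      network: placeholder-net
      fixed_ips:
        # get_resource only resolves resources defined in this template.
        # When a pre-existing subnet is used, no "subnet" resource exists
        # in the stack, so Heat rejects the template with
        # InvalidTemplateReference, as seen in the failure above.
        - subnet_id: { get_resource: subnet }
```

For a pre-existing subnet, the template would instead need to receive the subnet name or ID directly (e.g. as a parameter), which is what "openshift_openstack_node_subnet_name" is meant to supply.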

Version-Release number of selected component (if applicable):
OSP 13
OCP 3.11.82

How reproducible:

Running the /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml Ansible playbook on top of OSP 13.

NOTE: I will provide more inventory details privately.
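
For context, a minimal sketch of the inventory variables involved (group_vars/all.yml style) is shown below; all values are placeholders, not the reporter's actual inventory:

```yaml
# Hypothetical inventory snippet (group_vars/all.yml); values are placeholders.
openshift_openstack_clusterid: openshift
openshift_openstack_public_dns_domain: example.com
# Pre-existing subnet for node ports; this is the variable introduced by [2]
# whose handling appears to break the generated Heat template in 3.11.82.
openshift_openstack_node_subnet_name: my-existing-subnet
```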

Steps to Reproduce:
1. Install openshift-ansible from the OCP 3.11.82 errata on an OSP 13 environment.
2. Run the /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml playbook.
3. Observe the "Handle the Stack (create/delete)" task fail with InvalidTemplateReference.

Actual results:

The cluster is not deployed correctly on OpenStack (see the attached Ansible log).

Expected results:

The cluster deploys successfully with the same Ansible variables, as it did with the previous OCP 3.11.69 release.

Additional info:

[1] - https://access.redhat.com/errata/RHBA-2019:0326
[2] - https://bugzilla.redhat.com/show_bug.cgi?id=1667270

Comment 11 weiwei jiang 2019-03-15 08:34:45 UTC
Hi, the OpenShift QE side currently has no resources to cover provider_network scenarios, since these require enabling the Neutron internal DNS.

Comment 16 errata-xmlrpc 2020-09-16 07:46:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 3.11.286 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3695

Comment 17 Red Hat Bugzilla 2023-09-14 05:24:30 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

