Bug 1846862 - Support for amphora to ovn-octavia upgrades
Summary: Support for amphora to ovn-octavia upgrades
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Luis Tomas Bolivar
QA Contact: GenadiC
URL:
Whiteboard:
Depends On:
Blocks: 1847181
 
Reported: 2020-06-15 07:06 UTC by Luis Tomas Bolivar
Modified: 2020-10-27 16:07 UTC (History)
2 users

Fixed In Version:
Doc Type: Release Note
Doc Text:
It is now possible to upgrade OpenShift clusters that were deployed before the OVN Octavia driver was made available on the OpenStack side. The only action needed is to ensure the ovn Octavia driver is available and then trigger the migration by removing the following annotation from the kuryr-config ConfigMap in the openshift-kuryr namespace: networkoperator.openshift.io/kuryr-octavia-provider: default. Note that this has an impact on the connectivity that goes through those services, as the load balancer backing each of them needs to be recreated (the amphora load balancer is removed and an ovn load balancer is created). Thus a few seconds of downtime is expected.
Clone Of:
Environment:
Last Closed: 2020-10-27 16:06:58 UTC
Target Upstream Version:
Embargoed:


Attachments
kuryr-controller logs during octavia upgrade (63.62 KB, text/plain)
2020-07-21 14:05 UTC, rlobillo
NP test results after Octavia upgrade (1.03 MB, application/gzip)
2020-07-21 14:06 UTC, rlobillo
conformance test results after octavia upgrade (551.04 KB, application/gzip)
2020-07-21 14:07 UTC, rlobillo


Links
System ID Private Priority Status Summary Last Updated
Github openshift kuryr-kubernetes pull 283 0 None closed Bug 1846862: Add support for amphora to ovn-octavia upgrade 2020-10-15 19:14:33 UTC
OpenStack gerrit 732735 0 None MERGED Add support for amphora to ovn-octavia upgrade 2020-10-15 19:14:22 UTC
Red Hat Product Errata RHBA-2020:4196 0 None None None 2020-10-27 16:07:22 UTC

Description Luis Tomas Bolivar 2020-06-15 07:06:15 UTC
If the ovn-octavia driver is available, there should be an option to use it on existing deployments, so that the resource-consumption problem of having one amphora VM per service is avoided.

Comment 4 rlobillo 2020-07-21 14:05:51 UTC
Created attachment 1701896 [details]
kuryr-controller logs during octavia upgrade

Comment 5 rlobillo 2020-07-21 14:06:59 UTC
Created attachment 1701897 [details]
NP test results after Octavia upgrade

Comment 6 rlobillo 2020-07-21 14:07:44 UTC
Created attachment 1701898 [details]
conformance test results after octavia upgrade

Comment 7 rlobillo 2020-07-21 14:08:50 UTC
Tested on OCP4.6.0-0.nightly-2020-07-15-091743 over OSP16 (RHOS-16.1-RHEL-8-20200701.n.0) with OVN.

# 1. disable ovn-octavia on all the controllers:

Modify octavia.conf so that only the amphora providers remain enabled:

enabled_provider_drivers = amphora: The Octavia Amphora driver.,octavia: Deprecated alias of the Octavia Amphora driver.
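The octavia.conf edit can also be scripted. A minimal sketch, operating on a scratch copy so it can be tried anywhere; on a real controller the target is the octavia.conf path shown in the transcripts below, and the octavia_* services must still be restarted afterwards:

```shell
# Work on a scratch copy of octavia.conf; on a controller the target is
# /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[api_settings]
enabled_provider_drivers = amphora: The Octavia Amphora driver.,octavia: Deprecated alias of the Octavia Amphora driver.,ovn: Octavia OVN driver.
EOF

# Drop the ovn entry from the comma-separated provider list
sed -i 's/,ovn: Octavia OVN driver\.//' "$cfg"

grep '^enabled_provider_drivers' "$cfg"
```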


[root@controller-0 ~]# vi /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf
[root@controller-0 ~]# docker restart octavia_worker
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
7b466c70d77806a31b7a7dc9877fd9560a7a624d15bfd8d4fc4c5b70ad10c424
[root@controller-0 ~]# docker restart octavia_housekeeping
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
bce35cb4431d9dfa64a5e9a90ebdc6a68cf700ea2dabf8ddcb87d5a81804629d
[root@controller-0 ~]# docker restart octavia_api
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
4337252d0f34cc505dbb22a86b8651cc558fb6da4ff87d8d47e72de4371dba78
[root@controller-0 ~]# docker restart octavia_health_manager
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
c7b64a7a7e147336e6b4a09a8ae45ab813f99f5e01a7cf48d1a1a244660ce2b6

[root@controller-1 ~]# vi /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf
[root@controller-1 ~]# docker restart octavia_worker && docker restart octavia_housekeeping && docker restart octavia_api && docker restart octavia_health_manager
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
3d2987c0aa65931958e88e7f405747559cc8bb3f695aba312506a14850c1dbe5
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
e68f833fd9351f330cefbeade00cfba66a8c332ab319f436fcb60d8a4c927a9a
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
015385bbd8efc2d7057e5e1e0370f847b01831d05033b7715dd521e5ef6b6cf2
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
94de52ab929c32236a1b64b22fb30090b481763c0cfd66317d7c27142d8f18d2

[root@controller-2 ~]# vi /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf
[root@controller-2 ~]# docker restart octavia_worker && docker restart octavia_housekeeping && docker restart octavia_api && docker restart octavia_health_manager
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
036fce7d254a7350c0795b6b4e5f6344e03ccb49da405213e452898d71748ab8
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
8f87a6ca3c0d84dea34002b5a67da61d443b6541cdb303b0c02ce1b255cf23e2
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
ebb1846e064fca2e3496620c7eb6ed7d2f2310d8ce2329dd89971d3e4dcf50e6
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
2e09d88716795030b611ce3d0b3b4bd9d7b2d272c64456c6914c3bc98f167b1d

(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer provider list
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
+---------+-------------------------------------------------+

# 2. OCP4.6 installed with amphora provider:

[stack@undercloud-0 ~]$ 
[stack@undercloud-0 ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-07-15-091743   True        False         24h     Cluster version is 4.6.0-0.nightly-2020-07-15-091743

# 3. Check environment:

oc new-project test
oc run --image kuryr/demo demo
oc run --image kuryr/demo demo-allowed-caller
oc run --image kuryr/demo demo-caller
oc expose pod/demo --port 80 --target-port 8080

[stack@undercloud-0 ~]$ oc get all
NAME                      READY   STATUS    RESTARTS   AGE
pod/demo                  1/1     Running   0          70m
pod/demo-allowed-caller   1/1     Running   0          70m
pod/demo-caller           1/1     Running   0          70m

NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/demo   ClusterIP   172.30.1.244   <none>        80/TCP    70m
[stack@undercloud-0 ~]$ oc rsh pod/demo-caller curl 172.30.1.244
demo: HELLO! I AM ALIVE!!!


(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer show test/demo
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2020-07-16T13:44:40                  |
| description         |                                      |
| flavor_id           | None                                 |
| id                  | 2ff07f99-5fc0-4d6e-8a7d-9b63d7aab899 |
| listeners           | 6d0075d5-82f6-4521-912a-bf2983bac511 |
| name                | test/demo                            |
| operating_status    | ONLINE                               |
| pools               | 439b45ac-e9d0-4071-b849-a601d5fdf522 |
| project_id          | 4bf77b3c42f24d78b3e08e12e25d192d     |
| provider            | amphora                              |
| provisioning_status | ACTIVE                               |
| updated_at          | 2020-07-16T13:46:39                  |
| vip_address         | 172.30.1.244                         |
| vip_network_id      | 6ca9f29d-9847-42c5-8cd3-7aef1ede6e08 |
| vip_port_id         | c53a7618-e404-41ca-8d41-e94626f92a60 |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 9cdc4b67-cd0b-46e4-9fe8-ba73d21404a4 |
+---------------------+--------------------------------------+

# 4. Run NP and Conformance tests. All expected tests pass:

- np_results_OCP4.6withAmphoras.tgz
- conformance_results_OCP4.6withAmphoras.tgz


# 5. Keep a periodic curl running:

[stack@undercloud-0 ~]$ while (true); do test=$(date; oc rsh pod/demo-caller curl 172.30.1.244); echo $test; sleep 1; done
Fri Jul 17 11:09:21 EDT 2020 demo: HELLO! I AM ALIVE!!!
Fri Jul 17 11:09:23 EDT 2020 demo: HELLO! I AM ALIVE!!!
Fri Jul 17 11:09:24 EDT 2020 demo: HELLO! I AM ALIVE!!!
Fri Jul 17 11:09:26 EDT 2020 demo: HELLO! I AM ALIVE!!!
Fri Jul 17 11:09:27 EDT 2020 demo: HELLO! I AM ALIVE!!!
Fri Jul 17 11:09:28 EDT 2020 demo: HELLO! I AM ALIVE!!!


# 6. Restore ovn-octavia provider:

Modify octavia.conf to re-enable the ovn provider:

enabled_provider_drivers = amphora: The Octavia Amphora driver.,octavia: Deprecated alias of the Octavia Amphora driver.,ovn: Octavia OVN driver.


[root@controller-0 ~]# vi /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf
[root@controller-0 ~]# docker restart octavia_worker && docker restart octavia_housekeeping && docker restart octavia_api && docker restart octavia_health_manager
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
7b466c70d77806a31b7a7dc9877fd9560a7a624d15bfd8d4fc4c5b70ad10c424
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
bce35cb4431d9dfa64a5e9a90ebdc6a68cf700ea2dabf8ddcb87d5a81804629d
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
4337252d0f34cc505dbb22a86b8651cc558fb6da4ff87d8d47e72de4371dba78
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
c7b64a7a7e147336e6b4a09a8ae45ab813f99f5e01a7cf48d1a1a244660ce2b6

[root@controller-1 ~]# vi /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf
[root@controller-1 ~]# docker restart octavia_worker && docker restart octavia_housekeeping && docker restart octavia_api && docker restart octavia_health_manager
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
3d2987c0aa65931958e88e7f405747559cc8bb3f695aba312506a14850c1dbe5
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
e68f833fd9351f330cefbeade00cfba66a8c332ab319f436fcb60d8a4c927a9a
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
015385bbd8efc2d7057e5e1e0370f847b01831d05033b7715dd521e5ef6b6cf2
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
94de52ab929c32236a1b64b22fb30090b481763c0cfd66317d7c27142d8f18d2

[root@controller-2 ~]# vi /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf
[root@controller-2 ~]# docker restart octavia_worker && docker restart octavia_housekeeping && docker restart octavia_api && docker restart octavia_health_manager
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
036fce7d254a7350c0795b6b4e5f6344e03ccb49da405213e452898d71748ab8
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
8f87a6ca3c0d84dea34002b5a67da61d443b6541cdb303b0c02ce1b255cf23e2
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
ebb1846e064fca2e3496620c7eb6ed7d2f2310d8ce2329dd89971d3e4dcf50e6
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
2e09d88716795030b611ce3d0b3b4bd9d7b2d272c64456c6914c3bc98f167b1d

(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer provider list
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+

# 7. Execute upgrade instructions (https://github.com/openshift/openshift-docs/pull/22878/files?short_path=fd5ff84#diff-fd5ff84e354300166f38470df667938d)

# 7.1 Check that the ovn-octavia driver is available:

(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer provider list
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+

# 7.2 Edit the kuryr-config ConfigMap to trigger the recreation. You only need to delete the annotation stating that the default kuryr-octavia-provider is in use.

Before:

kind: ConfigMap
metadata:
  annotations:
    networkoperator.openshift.io/kuryr-octavia-provider: default
    networkoperator.openshift.io/kuryr-octavia-version: v2.13
  creationTimestamp: "2020-06-24T14:53:02Z"

After:

kind: ConfigMap
metadata:
  annotations:
    networkoperator.openshift.io/kuryr-octavia-version: v2.13
  creationTimestamp: "2020-07-15T12:28:08Z"


(undercloud) [stack@undercloud-0 ~]$ oc -n openshift-kuryr edit cm kuryr-config
configmap/kuryr-config edited
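Instead of `oc edit`, the annotation can be removed non-interactively; a trailing `-` on the annotation key tells `oc annotate` to delete it (a sketch; requires a logged-in oc session against the cluster):

```shell
# Remove the provider annotation from the kuryr-config ConfigMap.
# The Cluster Network Operator then triggers the amphora -> ovn migration.
oc -n openshift-kuryr annotate configmap kuryr-config \
    networkoperator.openshift.io/kuryr-octavia-provider-
```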

# 7.3 Wait until the Cluster Network Operator reconcile loop runs and detects the modification, triggering the recreation of the kuryr-controller and kuryr-cni pods

# 7.4 Check that the kuryr-config ConfigMap annotation has been re-added, now indicating that the ovn driver is in use:

kind: ConfigMap
metadata:
  annotations:
    networkoperator.openshift.io/kuryr-octavia-provider: ovn
    networkoperator.openshift.io/kuryr-octavia-version: v2.13
  creationTimestamp: "2020-07-15T12:28:08Z"

The change above happened around the following timestamp:
(overcloud) [stack@undercloud-0 ~]$ date
Fri Jul 17 11:26:48 EDT 2020


# 7.5 Wait until all the load balancers have been recreated. Only one amphora load balancer should remain (the one created by the Cluster Network Operator); the rest should be of ovn type:
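The wait in this step can be automated by polling the provider column until a single amphora load balancer remains. A sketch; the `openstack` call is stubbed out here so the loop shape can be exercised locally:

```shell
# Count amphora-provider load balancers. On a real deployment this would be:
#   openstack loadbalancer list -f value -c provider | grep -c amphora
# Stubbed here to the expected post-migration value.
count_amphora() {
    echo 1
}

# Poll until only the CNO-created amphora LB is left.
while [ "$(count_amphora)" -gt 1 ]; do
    sleep 10
done
echo "migration to ovn complete"
```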

(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer list -f yaml
- id: f41605ec-8dad-4edb-9bbd-520df7c0e55f
  name: ostest-8lk72-kuryr-api-loadbalancer
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: amphora
  provisioning_status: ACTIVE
  vip_address: 172.30.0.1
- id: f073806a-591b-4749-9218-cbd5a3ae7fe9
  name: openshift-kube-storage-version-migrator-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.58.30
- id: 2f3cc972-1e42-4552-adc8-38fca729823c
  name: openshift-kube-scheduler-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.134.52
- id: 920e5e6c-3faa-4312-846f-f64054c82314
  name: openshift-config-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.95.123
- id: 42824c4a-7ae3-48cc-8218-25ac75d9bad5
  name: openshift-etcd-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.223.26
- id: 8685f77b-7aaf-4b53-8737-3c867f568c9b
  name: openshift-service-ca-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.212.163
- id: 32ec39ef-9bb8-4687-b6a9-b702fafa046f
  name: openshift-kube-apiserver-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.247.219
- id: 51d5c1e5-e77b-4a6b-a377-3175e7434b56
  name: openshift-authentication-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.65.188
- id: c834430a-206d-4e7b-8e24-6c823a37e23a
  name: openshift-dns-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.129.210
- id: 177b8f42-09f1-4e7a-835c-59912fc07709
  name: openshift-apiserver-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.48.60
- id: 5c0d3858-e7a1-4ad9-ba7b-ac5867d524fd
  name: openshift-controller-manager-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.19.7
- id: a082af25-7c07-40d1-ad8e-75530623af5f
  name: openshift-machine-config-operator/machine-config-daemon
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.47.191
- id: 1e9f4aba-1b08-4232-a96a-c805a16d4014
  name: openshift-kube-controller-manager-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.251.181
- id: fd05186e-3a8c-4cb9-8cd5-d6169e67dedd
  name: openshift-etcd/etcd
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.201.157
- id: 7982c876-4bc5-4212-8e51-fe91e81d2672
  name: openshift-console/downloads
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.125.143
- id: 949fd8dc-b4ef-4704-9b2b-33896c739b74
  name: openshift-cluster-version/cluster-version-operator
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.216.149
- id: 7016bd98-068a-478a-a634-bed201568993
  name: openshift-operator-lifecycle-manager/olm-operator-metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.8.82
- id: 64e364da-26ae-4aa9-a1e8-a594d69b67fd
  name: openshift-kube-controller-manager/kube-controller-manager
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.13.186
- id: 25431cab-cfcd-4ea7-a591-d8e4a6f601d8
  name: openshift-operator-lifecycle-manager/catalog-operator-metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.66.172
- id: 7330a0b5-be03-4b64-8605-971b2012d7c6
  name: openshift-multus/multus-admission-controller
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.10.81
- id: e04011db-8442-4bab-b1c2-dccc2548cce7
  name: openshift-kube-scheduler/scheduler
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.156.51
- id: 5f030eff-8395-4ceb-bdfe-632f49a084cf
  name: openshift-apiserver/api
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.246.39
- id: 81d50100-fd8c-49e4-9781-75cf7e34026f
  name: openshift-kube-apiserver/apiserver
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.82.54
- id: e1f100ca-9aa7-4356-8482-9549fdeb4c7e
  name: openshift-dns/dns-default
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.0.10
- id: 37d615d2-af39-4102-83d1-245ef43e2cd0
  name: openshift-operator-lifecycle-manager/packageserver-service
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.100.7
- id: f3d15898-68d8-4b5c-9fcd-efc3f812fefd
  name: openshift-insights/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.251.144
- id: e5331c59-7399-490f-8485-be5438f295ac
  name: openshift-controller-manager/controller-manager
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.93.195
- id: 01898116-d473-4154-b126-845cf1eb2d0c
  name: openshift-marketplace/marketplace-operator-metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.112.93
- id: 22d0d650-a8c3-478f-92dd-259a93b3e8dc
  name: openshift-cluster-storage-operator/csi-snapshot-controller-operator-metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.158.201
- id: 843a552f-7af4-451b-88de-f1336041b9f4
  name: e2e-services-3272/externalname-service
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.56.91
- id: 3be4296a-7e42-4f9a-a312-6be2b0548b0f
  name: openshift-console-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.137.36
- id: 232311bd-52d6-47a8-a368-b5e94a47c109
  name: openshift-monitoring/grafana
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.33.186
- id: f65691f4-2436-4417-a404-1a339deadcd7
  name: openshift-marketplace/redhat-operators
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.236.181
- id: f5957d12-21a0-4876-9d78-19cc3ddc8af2
  name: openshift-image-registry/image-registry
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.248.189
- id: 13f61237-e219-4d1d-bf99-715e65579420
  name: openshift-cloud-credential-operator/cco-metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.253.246
- id: 79bb146f-d6a3-4e9d-a4cf-fe46ae8bf664
  name: openshift-marketplace/redhat-marketplace
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.233.197
- id: 40d38fd4-4da5-49a7-bf8f-b262b71097d4
  name: openshift-machine-api/machine-api-operator
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.227.81
- id: 29981f26-58e7-4fad-aab6-03664ab2f075
  name: openshift-ingress-operator/metrics
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.176.83
- id: 75d2c24f-3d4e-4a28-9f60-09be0f123cbd
  name: openshift-monitoring/prometheus-k8s
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.118.46
- id: 1343f96f-b03c-4865-b853-b43e8e58fa8b
  name: openshift-marketplace/certified-operators
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.81.217
- id: 55db61bc-6f7d-4c50-9dcf-4162b8153c9a
  name: openshift-machine-api/cluster-autoscaler-operator
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.186.8
- id: bb88a98a-c7c5-4685-b704-71057ec61aa0
  name: openshift-console/console
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.193.236
- id: 725271c5-e67f-4ba4-8d7c-4e7b7aec82ef
  name: openshift-machine-api/machine-api-controllers
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.113.43
- id: 5751463b-5e54-4595-bd3c-11effe2c9b44
  name: openshift-marketplace/community-operators
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.4.212
- id: 0fb08a96-20c3-407d-9303-948ea7946648
  name: openshift-authentication/oauth-openshift
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.220.222
- id: eab30336-34c0-4ebd-94a3-410ebf6df18f
  name: openshift-monitoring/alertmanager-main
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.62.209
- id: 7be48bcd-170d-4bee-9aac-e102ca414701
  name: openshift-ingress/router-internal-default
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.82.118
- id: 115589e1-c5b6-4095-8e5d-a2239a674491
  name: openshift-monitoring/prometheus-adapter
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.119.80
- id: 2a8afe2a-1e2c-4392-b299-18e1488d298a
  name: openshift-machine-api/machine-api-operator-webhook
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.14.45
- id: 5f7aa449-9adf-4471-b004-d30b80224239
  name: test/demo
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.1.244
- id: b81c017f-302e-4594-b8a7-fd87f1c79f61
  name: openshift-monitoring/thanos-querier
  project_id: 4bf77b3c42f24d78b3e08e12e25d192d
  provider: ovn
  provisioning_status: ACTIVE
  vip_address: 172.30.34.253



# 8. Check that the previously created service is still operative with the ovn provider:

(overcloud) [stack@undercloud-0 ~]$ oc rsh pod/demo-caller curl 172.30.1.244
demo: HELLO! I AM ALIVE!!!

backed by an OVN load balancer:

(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer show test/demo
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2020-07-17T15:28:14                  |
| description         |                                      |
| flavor_id           | None                                 |
| id                  | 5f7aa449-9adf-4471-b004-d30b80224239 |
| listeners           | a151dac0-f903-4455-8998-3ea051e7a1f5 |
| name                | test/demo                            |
| operating_status    | ONLINE                               |
| pools               | 010cd765-35d9-4bc7-8b88-b49ebdbc7737 |
| project_id          | 4bf77b3c42f24d78b3e08e12e25d192d     |
| provider            | ovn                                  |
| provisioning_status | ACTIVE                               |
| updated_at          | 2020-07-17T15:28:40                  |
| vip_address         | 172.30.1.244                         |
| vip_network_id      | 6ca9f29d-9847-42c5-8cd3-7aef1ede6e08 |
| vip_port_id         | f0443731-fd84-46d2-8940-653a12415ed3 |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 9cdc4b67-cd0b-46e4-9fe8-ba73d21404a4 |
+---------------------+--------------------------------------+

Downtime for the service was two minutes and 10 seconds:

Fri Jul 17 11:27:18 EDT 2020 demo: HELLO! I AM ALIVE!!!
Fri Jul 17 11:27:20 EDT 2020 demo: HELLO! I AM ALIVE!!!
Fri Jul 17 11:27:21 EDT 2020 curl: (7) Failed to connect to 172.30.1.244 port 80: Operation timed out
Fri Jul 17 11:29:31 EDT 2020 demo: HELLO! I AM ALIVE!!!
Fri Jul 17 11:29:33 EDT 2020 demo: HELLO! I AM ALIVE!!!
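The two minutes and 10 seconds come from the gap between the failed probe at 11:27:21 and the first successful one at 11:29:31; a quick check with GNU date (assumed available):

```shell
# Gap between the last failed probe and the first successful one.
# Both timestamps share the same timezone, so the difference is exact.
t_fail=$(date -d 'Fri Jul 17 11:27:21 EDT 2020' +%s)
t_ok=$(date -d 'Fri Jul 17 11:29:31 EDT 2020' +%s)
echo "downtime: $((t_ok - t_fail))s"
```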

Kuryr controller logs during the upgrade are attached (kuryr_controller_duringOctaviaUpgrade.log)

openshift-kuryr pods were recreated by the CNO, and no restarts have been observed since:

(overcloud) [stack@undercloud-0 ~]$ oc get all -n openshift-kuryr
NAME                                    READY   STATUS    RESTARTS   AGE
pod/kuryr-cni-d2smv                     1/1     Running   0          70s
pod/kuryr-cni-hvvpj                     1/1     Running   0          106s
pod/kuryr-cni-rz2kv                     0/1     Running   0          7s
pod/kuryr-cni-s8ntb                     1/1     Running   0          2m18s
pod/kuryr-cni-w5bgj                     1/1     Running   0          2m52s
pod/kuryr-cni-z5brg                     1/1     Running   0          30s
pod/kuryr-controller-5887d9995f-bb44w   1/1     Running   0          3m4s

# 9. Run NP and Conformance tests. All expected tests pass:

- np_results_OCP4.6afterOctaviaUpgrade.tgz
- conformance_results_OCP4.6afterOctaviaUpgrade.tgz

Comment 9 errata-xmlrpc 2020-10-27 16:06:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

