PR https://github.com/openshift/release/pull/12497 added an upgrade job that requires a LB. We're seeing that on occasion the destroy command fails to delete the LB, which in turn leaves orphaned resources behind after the command hits its timeout:

Destroying 2ivf5i6t-67d14-g4krv
Destroying cluster using openshift-install
DEBUG OpenShift Installer unreleased-master-4123-g69f0bbc18e8c6b1a6e278c54efa2def9b210033a
DEBUG Built from commit 69f0bbc18e8c6b1a6e278c54efa2def9b210033a
DEBUG Removing interfaces from custom router
DEBUG Deleting openstack containers
DEBUG Deleting openstack floating ips
DEBUG Deleting openstack load balancers
DEBUG Deleting openstack routers
DEBUG Deleting openstack networks
DEBUG Deleting openstack ports
DEBUG Deleting openstack base image
DEBUG Deleting OpenStack volumes
DEBUG Deleting openstack server groups
DEBUG Deleting openstack subnet-pools
DEBUG Deleting openstack trunks
DEBUG Deleting openstack servers
DEBUG Deleting openstack subnets
DEBUG Deleting openstack security-groups
DEBUG Exiting deleting openstack floating ips
DEBUG Exiting deleting openstack trunks
DEBUG Exiting deleting openstack ports
DEBUG Exiting removal of interfaces from custom router
DEBUG goroutine deleteFloatingIPs complete
DEBUG goroutine deleteTrunks complete
DEBUG goroutine deletePorts complete
DEBUG Exiting deleting openstack subnet-pools
DEBUG goroutine deleteSubnetPools complete
DEBUG Deleting Subnet: "58eff673-8824-4ccd-a5ee-e4620c296039"
DEBUG Exiting deleting openstack base image
DEBUG goroutine deleteImages complete
DEBUG Deleting network: "d26e7be0-7276-4d56-84c6-f3d47099bde6"
DEBUG Exiting deleting openstack server groups
DEBUG goroutine deleteServerGroups complete
DEBUG Exiting deleting openstack security-groups
DEBUG goroutine deleteSecurityGroups complete
DEBUG Exiting deleting openstack routers
DEBUG goroutine deleteRouters complete
DEBUG Deleting Subnet "58eff673-8824-4ccd-a5ee-e4620c296039" failed: Expected HTTP response code [] when accessing [DELETE https://kaizen.massopen.cloud:13696/v2.0/subnets/58eff673-8824-4ccd-a5ee-e4620c296039], but got 409 instead
DEBUG {"NeutronError": {"message": "Unable to complete operation on subnet 58eff673-8824-4ccd-a5ee-e4620c296039: One or more ports have an IP allocation from this subnet.", "type": "SubnetInUse", "detail": ""}}
DEBUG Exiting deleting openstack subnets
DEBUG Deleting Network "d26e7be0-7276-4d56-84c6-f3d47099bde6" failed: Expected HTTP response code [] when accessing [DELETE https://kaizen.massopen.cloud:13696/v2.0/networks/d26e7be0-7276-4d56-84c6-f3d47099bde6], but got 409 instead
DEBUG {"NeutronError": {"message": "Unable to complete operation on network d26e7be0-7276-4d56-84c6-f3d47099bde6. There are one or more ports still in use on the network.", "type": "NetworkInUse", "detail": ""}}
DEBUG Exiting deleting openstack networks
DEBUG Exiting deleting openstack load balancers
DEBUG goroutine deleteLoadBalancers complete
DEBUG Exiting deleting openstack servers
DEBUG goroutine deleteServers complete
DEBUG Exiting deleting OpenStack volumes
DEBUG goroutine deleteVolumes complete
DEBUG Exiting deleting openstack containers
DEBUG goroutine deleteContainers complete
DEBUG Deleting openstack subnets
DEBUG Deleting openstack networks
DEBUG Deleting Subnet: "58eff673-8824-4ccd-a5ee-e4620c296039"
DEBUG Deleting network: "d26e7be0-7276-4d56-84c6-f3d47099bde6"
DEBUG Deleting Network "d26e7be0-7276-4d56-84c6-f3d47099bde6" failed: Expected HTTP response code [] when accessing [DELETE https://kaizen.massopen.cloud:13696/v2.0/networks/d26e7be0-7276-4d56-84c6-f3d47099bde6], but got 409 instead
DEBUG {"NeutronError": {"message": "Unable to complete operation on network d26e7be0-7276-4d56-84c6-f3d47099bde6. There are one or more ports still in use on the network.", "type": "NetworkInUse", "detail": ""}}
DEBUG Exiting deleting openstack networks
DEBUG Deleting Subnet "58eff673-8824-4ccd-a5ee-e4620c296039" failed: Expected HTTP response code [] when accessing [DELETE https://kaizen.massopen.cloud:13696/v2.0/subnets/58eff673-8824-4ccd-a5ee-e4620c296039], but got 409 instead
DEBUG {"NeutronError": {"message": "Unable to complete operation on subnet 58eff673-8824-4ccd-a5ee-e4620c296039: One or more ports have an IP allocation from this subnet.", "type": "SubnetInUse", "detail": ""}}
DEBUG Exiting deleting openstack subnets
DEBUG Deleting openstack networks
DEBUG Deleting openstack subnets
DEBUG Deleting network: "d26e7be0-7276-4d56-84c6-f3d47099bde6"
DEBUG Deleting Subnet: "58eff673-8824-4ccd-a5ee-e4620c296039"
DEBUG Deleting Network "d26e7be0-7276-4d56-84c6-f3d47099bde6" failed: Expected HTTP response code [] when accessing [DELETE https://kaizen.massopen.cloud:13696/v2.0/networks/d26e7be0-7276-4d56-84c6-f3d47099bde6], but got 409 instead
DEBUG {"NeutronError": {"message": "Unable to complete operation on network d26e7be0-7276-4d56-84c6-f3d47099bde6. There are one or more ports still in use on the network.", "type": "NetworkInUse", "detail": ""}}
DEBUG Exiting deleting openstack networks
DEBUG Deleting Subnet "58eff673-8824-4ccd-a5ee-e4620c296039" failed: Expected HTTP response code [] when accessing [DELETE https://kaizen.massopen.cloud:13696/v2.0/subnets/58eff673-8824-4ccd-a5ee-e4620c296039], but got 409 instead
DEBUG {"NeutronError": {"message": "Unable to complete operation on subnet 58eff673-8824-4ccd-a5ee-e4620c296039: One or more ports have an IP allocation from this subnet.", "type": "SubnetInUse", "detail": ""}}
DEBUG Exiting deleting openstack subnets
DEBUG Deleting openstack networks
DEBUG Deleting openstack subnets
DEBUG Deleting network: "d26e7be0-7276-4d56-84c6-f3d47099bde6"
DEBUG Deleting Network "d26e7be0-7276-4d56-84c6-f3d47099bde6" failed: Expected HTTP response code [] when accessing [DELETE https://kaizen.massopen.cloud:13696/v2.0/networks/d26e7be0-7276-4d56-84c6-f3d47099bde6], but got 409 instead
DEBUG {"NeutronError": {"message": "Unable to complete operation on network d26e7be0-7276-4d56-84c6-f3d47099bde6. There are one or more ports still in use on the network.", "type": "NetworkInUse", "detail": ""}}
DEBUG Exiting deleting openstack networks
DEBUG Deleting Subnet: "58eff673-8824-4ccd-a5ee-e4620c296039"
DEBUG Deleting Subnet "58eff673-8824-4ccd-a5ee-e4620c296039" failed: Expected HTTP response code [] when accessing [DELETE https://kaizen.massopen.cloud:13696/v2.0/subnets/58eff673-8824-4ccd-a5ee-e4620c296039], but got 409 instead
DEBUG {"NeutronError": {"message": "Unable to complete operation on subnet 58eff673-8824-4ccd-a5ee-e4620c296039: One or more ports have an IP allocation from this subnet.", "type": "SubnetInUse", "detail": ""}}
DEBUG Exiting deleting openstack subnets
^C

The destroy command doesn't complete because the LB still exists, even though the deleteLoadBalancers function returned. Note the vip_network_id and vip_subnet_id below: the LB's VIP port is what keeps failing the network and subnet deletions with 409:

moc-ci ❯ openstack loadbalancer show a221f92521de34116b17c44aaf204070
+---------------------+--------------------------------------------------------------+
| Field               | Value                                                        |
+---------------------+--------------------------------------------------------------+
| admin_state_up      | True                                                         |
| availability_zone   |                                                              |
| created_at          | 2021-01-14T13:37:52                                          |
| description         | Kubernetes external service a221f92521de34116b17c44aaf204070 |
| flavor_id           |                                                              |
| id                  | 47faf9c3-bceb-46a3-8380-f3f19c34e569                         |
| listeners           | 04d67458-c07d-4895-9ed1-8b32d7a8d66e                         |
| name                | a221f92521de34116b17c44aaf204070                             |
| operating_status    | ONLINE                                                       |
| pools               | 51f138a8-bfe6-4ee8-8593-2fb3097849ae                         |
| project_id          | 593227d1d5d04cba8847d5b6b742e0a7                             |
| provider            | octavia                                                      |
| provisioning_status | ACTIVE                                                       |
| updated_at          | 2021-01-14T13:39:53                                          |
| vip_address         | 10.0.0.244                                                   |
| vip_network_id      | d26e7be0-7276-4d56-84c6-f3d47099bde6                         |
| vip_port_id         | b1f8fe0a-6a46-46f1-a2f7-342216066846                         |
| vip_qos_policy_id   | None                                                         |
| vip_subnet_id       | 58eff673-8824-4ccd-a5ee-e4620c296039                         |
+---------------------+--------------------------------------------------------------+
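For completeness, confirming what holds the network is just a matter of listing its ports; a minimal gophercloud sketch (a hypothetical helper, not installer code) would reveal the LB's VIP port b1f8fe0a-6a46-46f1-a2f7-342216066846 still allocated on the network:

package sketch

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/ports"
)

// listBlockingPorts prints the ports that keep a network from being deleted.
// For the failure above, networkID would be d26e7be0-7276-4d56-84c6-f3d47099bde6.
func listBlockingPorts(client *gophercloud.ServiceClient, networkID string) error {
	allPages, err := ports.List(client, ports.ListOpts{NetworkID: networkID}).AllPages()
	if err != nil {
		return err
	}
	allPorts, err := ports.ExtractPorts(allPages)
	if err != nil {
		return err
	}
	for _, p := range allPorts {
		// The Octavia VIP port shows up with an "Octavia" device owner.
		fmt.Printf("%s %s owner=%s\n", p.ID, p.Name, p.DeviceOwner)
	}
	return nil
}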
Looking at the code of deleteLoadBalancers() [1]: Octavia doesn't support tags until API v2.5, and MOC only has v2.0:

curl https://kaizen.massopen.cloud:13876
{"versions": [{"status": "SUPPORTED", "updated": "2014-12-11T00:00:00Z", "id": "v1", "links": [{"href": "https://kaizen.massopen.cloud:13876/v1", "rel": "self"}]}, {"status": "CURRENT", "updated": "2017-06-22T00:00:00Z", "id": "v2.0", "links": [{"href": "https://kaizen.massopen.cloud:13876/v2.0", "rel": "self"}]}]}

We thus need to set the clusterID in the LB description when creating the LB; otherwise the LB isn't picked up by the `destroy` command, which causes the job cleanup to hang.

[1] https://github.com/openshift/installer/blob/9162bd29bf7a50bc927a151f0372c5ee05a592a5/pkg/destroy/openstack/openstack.go#L910
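For reference, a minimal sketch of what description-based matching could look like with gophercloud on clouds whose Octavia predates v2.5; the function name and the way the cluster ID is matched are illustrative, not the actual installer patch:

package sketch

import (
	"strings"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

// deleteLoadBalancersByDescription is an illustrative fallback for clouds
// without tag support: match the cluster ID that was written into the LB
// description at creation time, then cascade-delete.
func deleteLoadBalancersByDescription(client *gophercloud.ServiceClient, clusterID string) error {
	allPages, err := loadbalancers.List(client, loadbalancers.ListOpts{}).AllPages()
	if err != nil {
		return err
	}
	lbs, err := loadbalancers.ExtractLoadBalancers(allPages)
	if err != nil {
		return err
	}
	for _, lb := range lbs {
		// Only touch LBs whose description carries our cluster ID.
		if !strings.Contains(lb.Description, clusterID) {
			continue
		}
		// Cascade deletion also removes the listeners, pools and members.
		opts := loadbalancers.DeleteOpts{Cascade: true}
		if err := loadbalancers.Delete(client, lb.ID, opts).ExtractErr(); err != nil {
			return err
		}
	}
	return nil
}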
For the in-tree cloud provider, the LB description is hardcoded to "Kubernetes external service <service>", where <service> is the name of the service in k8s. The in-tree cloud provider also doesn't allow setting tags on the LB. From an OpenStack point of view, nothing ties the LB to the cluster:

https://github.com/openshift/kubernetes/blob/442a69c/staging/src/k8s.io/legacy-cloud-providers/openstack/openstack_loadbalancer.go#L446

For now we should perhaps document this as a known issue - "Always delete the services in openshift before destroying the cluster" - and make sure that the CI job cleans up the LBs before attempting to destroy the cluster.
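In the meantime, a CI cleanup pass can only key on indirect ties, such as the VIP network (cf. vip_network_id in the `show` output above, which points at the cluster network). A hedged gophercloud sketch, with a hypothetical helper name and clusterNetworkID parameter:

package sketch

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

// cleanupServiceLoadBalancers sketches a pre-destroy CI cleanup: Service LBs
// created by the in-tree provider carry no cluster marker, but their VIP port
// lives on the cluster network, so match on vip_network_id.
func cleanupServiceLoadBalancers(client *gophercloud.ServiceClient, clusterNetworkID string) error {
	allPages, err := loadbalancers.List(client, loadbalancers.ListOpts{}).AllPages()
	if err != nil {
		return err
	}
	lbs, err := loadbalancers.ExtractLoadBalancers(allPages)
	if err != nil {
		return err
	}
	for _, lb := range lbs {
		if lb.VipNetworkID != clusterNetworkID {
			continue
		}
		opts := loadbalancers.DeleteOpts{Cascade: true}
		if err := loadbalancers.Delete(client, lb.ID, opts).ExtractErr(); err != nil {
			return err
		}
	}
	return nil
}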
Checked with 4.7.0-0.nightly-2021-02-06-084550: the load balancers are destroyed correctly, so moving to Verified.

./openshift-install 4.7.0-0.nightly-2021-02-06-084550
02-08 17:33:35 built from commit c0489117068cb00c5222bb0762a87605f41ebe04
02-08 17:33:35 release image registry.ci.openshift.org/ocp/release@sha256:271a2f30dfd8837d7da480108d2abb019a209be4d40b6e8f916195b3cca35ec7

# cat /tmp/tmp.DdLKXreMRl/.openshift_install.log | grep -i balancer
level=debug msg=Deleting openstack load balancers
level=debug msg=Deleting LoadBalancer "dc71a5cd-bc01-4b00-8af1-6d36b5993df3"
level=debug msg=Deleting LoadBalancer "db996089-4720-49df-8e9f-491d92335299"
level=debug msg=Deleting LoadBalancer "b2f3c5fa-bcb1-46db-8df8-c644127bb8f4"
level=debug msg=Deleting LoadBalancer "1877dc6b-e923-4998-82ef-c476e0ba5b8b"
level=debug msg=Deleting LoadBalancer "0e4d843b-8ed7-48e0-a1ce-09a5565eb2d6"
level=debug msg=Deleting LoadBalancer "07af97ff-14fb-4cbf-a7ee-05c0bde1861c"
level=debug msg=Deleting LoadBalancer "40ab2263-19bd-4d2c-8f0d-fc9e9d58f973"
level=debug msg=Deleting LoadBalancer "03c3ca2d-40c3-44aa-bd21-9442436272c6"
level=debug msg=Deleting LoadBalancer "d39e0a74-a516-4d33-9413-eac2ceb944ef"
level=debug msg=Deleting LoadBalancer "19142d2b-4cf2-4372-aa80-719651ce68c3"
level=debug msg=Deleting LoadBalancer "a7de2a23-25d9-4387-bcf9-be5daccdf061"
level=debug msg=Deleting LoadBalancer "9f666e50-269a-415d-8617-3020a46e04f3"
level=debug msg=Deleting LoadBalancer "a754f098-bd6f-47ea-8470-1102e33cb777"
level=debug msg=Deleting LoadBalancer "9e3e4588-705a-41b1-a289-86ab36e902c2"
level=debug msg=Deleting LoadBalancer "004711ac-8ecd-4a19-a824-35bf9d9752eb"
level=debug msg=Deleting LoadBalancer "fa7140eb-b492-46f3-bb19-05d1ed2830a7"
level=debug msg=Deleting LoadBalancer "b5aee9c7-9d83-48ac-94ea-cfcb05b9576e"
level=debug msg=Deleting LoadBalancer "849e0e11-c887-4ec3-9af1-9d1ab30fda7c"
level=debug msg=Deleting LoadBalancer "d3949a0b-3486-40d9-98a3-78ef91cd453d"
level=debug msg=Deleting LoadBalancer "878407c5-e238-4177-b96b-cb396b6b09eb"
level=debug msg=Deleting LoadBalancer "81841a87-349d-4037-a0b2-baea34793081"
level=debug msg=Deleting LoadBalancer "5f284555-34c2-4870-8813-985fb624d225"
level=debug msg=Deleting LoadBalancer "78e245a9-cf8e-48b3-b8aa-89c39ad7d80c"
level=debug msg=Deleting LoadBalancer "9b3b270c-2399-4942-b9eb-3ecb3a0faf65"
level=debug msg=Deleting LoadBalancer "9545694a-4d3f-4276-8d3e-75aef68ef64d"
level=debug msg=Deleting LoadBalancer "37f8d943-6dcc-447f-912b-eae7b504159c"
level=debug msg=Deleting LoadBalancer "220aa335-5bc8-483c-8c5a-cd6bafcfa6db"
level=debug msg=Deleting LoadBalancer "c01498e2-a7fd-407d-8df3-c2a0d0e193c6"
level=debug msg=Deleting LoadBalancer "75d8ba15-cd73-4057-8773-0738cb1d441d"
level=debug msg=Deleting LoadBalancer "6766c68b-7aea-4054-803c-1a5c1febceca"
level=debug msg=Deleting LoadBalancer "1c67d820-dc2e-447b-973c-947d64e2fe73"
level=debug msg=Deleting LoadBalancer "044cd0e1-4f8e-4e87-a693-9fdaf382e360"
level=debug msg=Deleting LoadBalancer "2b72d78d-1053-478d-90d1-d76c97dae6b8"
level=debug msg=Deleting LoadBalancer "bc8baf58-448f-4b64-a02f-ffee11924618"
level=debug msg=Deleting LoadBalancer "41a92b32-3e6b-4fc7-8f99-4f38d7c4cc4a"
level=debug msg=Deleting LoadBalancer "90fd8641-ba9f-4795-a88e-a4881c19cf14"
level=debug msg=Deleting LoadBalancer "96b04e9a-3134-4b6b-be70-d1f790f9dcef"
level=debug msg=Deleting LoadBalancer "806211f8-f461-4b88-a8f9-e7b807136b28"
level=debug msg=Deleting LoadBalancer "6a221b74-9b1e-4df5-b369-62f05581eee2"
level=debug msg=Deleting LoadBalancer "bab4c822-3a96-4613-be96-cdffdef145d3"
level=debug msg=Deleting LoadBalancer "751d4d80-7b85-46ff-9bb9-69d28538d17d"
level=debug msg=Deleting LoadBalancer "87f2b39b-6d63-45fd-a2ad-7ff11aa3dffe"
level=debug msg=Deleting LoadBalancer "e63d7ab0-ced5-48d0-8da7-077333efae80"
level=debug msg=Deleting LoadBalancer "4e7b4594-05c5-481b-a029-46f3bb374525"
level=debug msg=Deleting LoadBalancer "db9f2eb8-cba7-41b2-95b0-564378e6b7ed"
level=debug msg=Deleting LoadBalancer "c3381636-a914-4bcc-b900-8cd58a73612c"
level=debug msg=Deleting LoadBalancer "cfe742b9-1caf-4e49-ac53-2e1d71ac22a6"
level=debug msg=Deleting LoadBalancer "74f44e90-79ed-4e35-9104-ad719bfe7d2c"
level=debug msg=Deleting LoadBalancer "f1765f9b-84b8-4c6a-9f18-b898b75fce7a"
level=debug msg=Deleting LoadBalancer "9fcbc0ed-a85d-43a4-8303-48fc1b0cf426"
level=debug msg=Deleting LoadBalancer "1862e746-878a-400d-870a-334d170650c9"
level=debug msg=Deleting LoadBalancer "606ad090-960b-4a27-8055-170fe8094500"
level=debug msg=Deleting LoadBalancer "91d2af72-75d3-4b1a-b31a-e8d7014ba82c"
level=debug msg=Deleting LoadBalancer "923cf976-2ee3-4420-a674-e350c059c3a7"
level=debug msg=Deleting LoadBalancer "2eb50ded-bf8a-4f5c-81a1-5a377e1e55d2"
level=debug msg=Exiting deleting openstack load balancers
level=debug msg=Deleting openstack load balancers
level=debug msg=Deleting LoadBalancer "2b72d78d-1053-478d-90d1-d76c97dae6b8"
level=debug msg=Deleting load balancer "2b72d78d-1053-478d-90d1-d76c97dae6b8" failed: Expected HTTP response code [] when accessing [DELETE https://10.46.22.24:13876/v2.0/lbaas/loadbalancers/2b72d78d-1053-478d-90d1-d76c97dae6b8?cascade=true], but got 409 instead
level=debug msg={"debuginfo": null, "faultcode": "Client", "faultstring": "Invalid state PENDING_DELETE of loadbalancer resource 2b72d78d-1053-478d-90d1-d76c97dae6b8"}
level=debug msg=Exiting deleting openstack load balancers
level=debug msg=Deleting openstack load balancers
level=debug msg=Exiting deleting openstack load balancers
level=debug msg=goroutine deleteLoadBalancers complete

# Before destroy:
(shiftstack) [stack@undercloud-0 ~]$ openstack loadbalancer list
+--------------------------------------+-----------------------------------------------------------------------------+----------------------------------+----------------+---------------------+----------+
| id                                   | name                                                                        | project_id                       | vip_address    | provisioning_status | provider |
+--------------------------------------+-----------------------------------------------------------------------------+----------------------------------+----------------+---------------------+----------+
| dc71a5cd-bc01-4b00-8af1-6d36b5993df3 | wj47ios208kr-lvl7m-kuryr-api-loadbalancer | 75604224364d40f0b076625b139dc6e3 | 172.30.0.1 | ACTIVE | amphora |
| db996089-4720-49df-8e9f-491d92335299 | openshift-authentication-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.73.40 | ACTIVE | amphora |
| b2f3c5fa-bcb1-46db-8df8-c644127bb8f4 | openshift-network-diagnostics/network-check-target | 75604224364d40f0b076625b139dc6e3 | 172.30.110.244 | ACTIVE | amphora |
| 1877dc6b-e923-4998-82ef-c476e0ba5b8b | openshift-kube-apiserver-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.143.84 | ACTIVE | amphora |
| 0e4d843b-8ed7-48e0-a1ce-09a5565eb2d6 | openshift-controller-manager-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.24.54 | ACTIVE | amphora |
| 07af97ff-14fb-4cbf-a7ee-05c0bde1861c | openshift-kube-storage-version-migrator-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.59.130 | ACTIVE | amphora |
| 40ab2263-19bd-4d2c-8f0d-fc9e9d58f973 | openshift-cluster-storage-operator/cluster-storage-operator-metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.1.61 | ACTIVE | amphora |
| 03c3ca2d-40c3-44aa-bd21-9442436272c6 | openshift-cluster-storage-operator/csi-snapshot-controller-operator-metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.233.155 | ACTIVE | amphora |
| d39e0a74-a516-4d33-9413-eac2ceb944ef | openshift-kube-scheduler-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.75.251 | ACTIVE | amphora |
| 19142d2b-4cf2-4372-aa80-719651ce68c3 | openshift-config-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.226.75 | ACTIVE | amphora |
| a7de2a23-25d9-4387-bcf9-be5daccdf061 | openshift-kube-controller-manager-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.144.249 | ACTIVE | amphora |
| 9f666e50-269a-415d-8617-3020a46e04f3 | openshift-service-ca-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.226.245 | ACTIVE | amphora |
| a754f098-bd6f-47ea-8470-1102e33cb777 | openshift-insights/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.124.131 | ACTIVE | amphora |
| 9e3e4588-705a-41b1-a289-86ab36e902c2 | openshift-apiserver-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.46.187 | ACTIVE | amphora |
| 004711ac-8ecd-4a19-a824-35bf9d9752eb | openshift-etcd-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.10.12 | ACTIVE | amphora |
| fa7140eb-b492-46f3-bb19-05d1ed2830a7 | openshift-controller-manager/controller-manager | 75604224364d40f0b076625b139dc6e3 | 172.30.8.186 | ACTIVE | amphora |
| b5aee9c7-9d83-48ac-94ea-cfcb05b9576e | openshift-cluster-storage-operator/csi-snapshot-webhook | 75604224364d40f0b076625b139dc6e3 | 172.30.113.54 | ACTIVE | amphora |
| 849e0e11-c887-4ec3-9af1-9d1ab30fda7c | openshift-multus/multus-admission-controller | 75604224364d40f0b076625b139dc6e3 | 172.30.96.76 | ACTIVE | amphora |
| d3949a0b-3486-40d9-98a3-78ef91cd453d | openshift-machine-config-operator/machine-config-daemon | 75604224364d40f0b076625b139dc6e3 | 172.30.229.79 | ACTIVE | amphora |
| 878407c5-e238-4177-b96b-cb396b6b09eb | openshift-marketplace/marketplace-operator-metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.190.61 | ACTIVE | amphora |
| 81841a87-349d-4037-a0b2-baea34793081 | openshift-ingress-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.238.116 | ACTIVE | amphora |
| 5f284555-34c2-4870-8813-985fb624d225 | openshift-cloud-credential-operator/cco-metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.27.69 | ACTIVE | amphora |
| 78e245a9-cf8e-48b3-b8aa-89c39ad7d80c | openshift-machine-api/cluster-autoscaler-operator | 75604224364d40f0b076625b139dc6e3 | 172.30.198.53 | ACTIVE | amphora |
| 9b3b270c-2399-4942-b9eb-3ecb3a0faf65 | openshift-operator-lifecycle-manager/catalog-operator-metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.227.171 | ACTIVE | amphora |
| 9545694a-4d3f-4276-8d3e-75aef68ef64d | openshift-dns-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.184.246 | ACTIVE | amphora |
| 37f8d943-6dcc-447f-912b-eae7b504159c | openshift-operator-lifecycle-manager/olm-operator-metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.239.150 | ACTIVE | amphora |
| 220aa335-5bc8-483c-8c5a-cd6bafcfa6db | openshift-cluster-version/cluster-version-operator | 75604224364d40f0b076625b139dc6e3 | 172.30.65.227 | ACTIVE | amphora |
| c01498e2-a7fd-407d-8df3-c2a0d0e193c6 | openshift-kube-controller-manager/kube-controller-manager | 75604224364d40f0b076625b139dc6e3 | 172.30.65.221 | ACTIVE | amphora |
| 75d8ba15-cd73-4057-8773-0738cb1d441d | openshift-machine-api/machine-api-operator | 75604224364d40f0b076625b139dc6e3 | 172.30.142.107 | ACTIVE | amphora |
| 6766c68b-7aea-4054-803c-1a5c1febceca | openshift-kube-apiserver/apiserver | 75604224364d40f0b076625b139dc6e3 | 172.30.238.140 | ACTIVE | amphora |
| 1c67d820-dc2e-447b-973c-947d64e2fe73 | openshift-etcd/etcd | 75604224364d40f0b076625b139dc6e3 | 172.30.84.66 | ACTIVE | amphora |
| 044cd0e1-4f8e-4e87-a693-9fdaf382e360 | openshift-kube-scheduler/scheduler | 75604224364d40f0b076625b139dc6e3 | 172.30.107.246 | ACTIVE | amphora |
| 2b72d78d-1053-478d-90d1-d76c97dae6b8 | openshift-dns/dns-default | 75604224364d40f0b076625b139dc6e3 | 172.30.0.10 | ACTIVE | amphora |
| bc8baf58-448f-4b64-a02f-ffee11924618 | openshift-apiserver/check-endpoints | 75604224364d40f0b076625b139dc6e3 | 172.30.151.85 | ACTIVE | amphora |
| 41a92b32-3e6b-4fc7-8f99-4f38d7c4cc4a | openshift-apiserver/api | 75604224364d40f0b076625b139dc6e3 | 172.30.161.253 | ACTIVE | amphora |
| 90fd8641-ba9f-4795-a88e-a4881c19cf14 | openshift-oauth-apiserver/api | 75604224364d40f0b076625b139dc6e3 | 172.30.170.11 | ACTIVE | amphora |
| 96b04e9a-3134-4b6b-be70-d1f790f9dcef | openshift-machine-api/machine-api-controllers | 75604224364d40f0b076625b139dc6e3 | 172.30.38.203 | ACTIVE | amphora |
| 806211f8-f461-4b88-a8f9-e7b807136b28 | openshift-machine-api/machine-api-operator-webhook | 75604224364d40f0b076625b139dc6e3 | 172.30.73.95 | ACTIVE | amphora |
| 6a221b74-9b1e-4df5-b369-62f05581eee2 | openshift-console-operator/metrics | 75604224364d40f0b076625b139dc6e3 | 172.30.8.170 | ACTIVE | amphora |
| bab4c822-3a96-4613-be96-cdffdef145d3 | openshift-ingress-canary/ingress-canary | 75604224364d40f0b076625b139dc6e3 | 172.30.240.131 | ACTIVE | amphora |
| 751d4d80-7b85-46ff-9bb9-69d28538d17d | openshift-ingress/router-internal-default | 75604224364d40f0b076625b139dc6e3 | 172.30.174.248 | ACTIVE | amphora |
| 87f2b39b-6d63-45fd-a2ad-7ff11aa3dffe | openshift-monitoring/prometheus-adapter | 75604224364d40f0b076625b139dc6e3 | 172.30.53.71 | ACTIVE | amphora |
| e63d7ab0-ced5-48d0-8da7-077333efae80 | openshift-authentication/oauth-openshift | 75604224364d40f0b076625b139dc6e3 | 172.30.181.48 | ACTIVE | amphora |
| 4e7b4594-05c5-481b-a029-46f3bb374525 | openshift-console/downloads | 75604224364d40f0b076625b139dc6e3 | 172.30.195.224 | ACTIVE | amphora |
| db9f2eb8-cba7-41b2-95b0-564378e6b7ed | openshift-image-registry/image-registry | 75604224364d40f0b076625b139dc6e3 | 172.30.177.64 | ACTIVE | amphora |
| c3381636-a914-4bcc-b900-8cd58a73612c | openshift-monitoring/alertmanager-main | 75604224364d40f0b076625b139dc6e3 | 172.30.126.254 | ACTIVE | amphora |
| cfe742b9-1caf-4e49-ac53-2e1d71ac22a6 | openshift-marketplace/redhat-marketplace | 75604224364d40f0b076625b139dc6e3 | 172.30.54.87 | ACTIVE | amphora |
| 74f44e90-79ed-4e35-9104-ad719bfe7d2c | openshift-marketplace/redhat-operators | 75604224364d40f0b076625b139dc6e3 | 172.30.104.111 | ACTIVE | amphora |
| f1765f9b-84b8-4c6a-9f18-b898b75fce7a | openshift-marketplace/community-operators | 75604224364d40f0b076625b139dc6e3 | 172.30.126.9 | ACTIVE | amphora |
| 9fcbc0ed-a85d-43a4-8303-48fc1b0cf426 | openshift-marketplace/certified-operators | 75604224364d40f0b076625b139dc6e3 | 172.30.65.191 | ACTIVE | amphora |
| 1862e746-878a-400d-870a-334d170650c9 | openshift-monitoring/prometheus-k8s | 75604224364d40f0b076625b139dc6e3 | 172.30.213.113 | ACTIVE | amphora |
| 606ad090-960b-4a27-8055-170fe8094500 | openshift-monitoring/thanos-querier | 75604224364d40f0b076625b139dc6e3 | 172.30.127.185 | ACTIVE | amphora |
| 91d2af72-75d3-4b1a-b31a-e8d7014ba82c | openshift-monitoring/grafana | 75604224364d40f0b076625b139dc6e3 | 172.30.57.253 | ACTIVE | amphora |
| 923cf976-2ee3-4420-a674-e350c059c3a7 | openshift-console/console | 75604224364d40f0b076625b139dc6e3 | 172.30.50.226 | ACTIVE | amphora |
| 2eb50ded-bf8a-4f5c-81a1-5a377e1e55d2 | openshift-operator-lifecycle-manager/packageserver-service | 75604224364d40f0b076625b139dc6e3 | 172.30.18.118 | ACTIVE | amphora |
+--------------------------------------+-----------------------------------------------------------------------------+----------------------------------+----------------+---------------------+----------+

# After destroy:
(shiftstack) [stack@undercloud-0 ~]$ openstack loadbalancer list
(shiftstack) [stack@undercloud-0 ~]$
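Side note on the 409 in the middle of the log above: the first pass had already put the LB into PENDING_DELETE via a cascade delete, so the retry pass got a 409 and the destroyer simply looped until the LB was gone. A rough sketch of that retry behaviour (illustrative only, not the installer's actual code):

package sketch

import (
	"time"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

// deleteWithRetry keeps issuing cascade deletes until the LB is gone,
// treating 409 (e.g. "Invalid state PENDING_DELETE") as "in progress".
func deleteWithRetry(client *gophercloud.ServiceClient, lbID string) error {
	for {
		err := loadbalancers.Delete(client, lbID, loadbalancers.DeleteOpts{Cascade: true}).ExtractErr()
		if err == nil {
			return nil
		}
		switch err.(type) {
		case gophercloud.ErrDefault404:
			return nil // already deleted
		case gophercloud.ErrDefault409:
			time.Sleep(10 * time.Second) // deletion already in flight; poll again
		default:
			return err
		}
	}
}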
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5633