Bug 1718375
Summary: | Rolling out OVS daemonset changes takes a very long time. | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Greg Blomquist <gblomqui>
Component: | Master | Assignee: | David Eads <deads>
Status: | CLOSED ERRATA | QA Contact: | zhou ying <yinzhou>
Severity: | high | Docs Contact: |
Priority: | unspecified | |
Version: | 4.1.0 | CC: | aos-bugs, cdc, danw, dcbw, deads, eparis, jokerman, mifiedle, mmccomas, pmuller, sponnaga, wking, xxia
Target Milestone: | --- | Keywords: | OSE41z_next
Target Release: | 4.1.z | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | 4.1.2 | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 1714699 | Environment: |
Last Closed: | 2019-06-19 06:45:34 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1714699 | |
Bug Blocks: | | |
Description
Greg Blomquist 2019-06-07 15:08:15 UTC
*** Bug 1719423 has been marked as a duplicate of this bug. ***

https://openshift-release.svc.ci.openshift.org/releasestream/4.1.0-0.ci/release/4.1.0-0.ci-2019-06-11-232837 has the fix (although I'm not moving this to ON_QA, since that's supposed to happen via ART tooling):

`$ oc adm release info --changelog ~/.local/lib/go/src --changes-from quay.io/openshift-release-dev/ocp-release:4.1.1 registry.svc.ci.openshift.org/ocp/release:4.1.0-0.ci-2019-06-11-232837`

...

### [cli, cli-artifacts, deployer, hyperkube, hypershift, node, tests](https://github.com/openshift/origin)

* [Bug 1718375](https://bugzilla.redhat.com/show_bug.cgi?id=1718375): hardcode a small list of mappings to allow SDN to rebootstrap [#23086](https://github.com/openshift/origin/pull/23086)
* [Full changelog](https://github.com/openshift/origin/compare/4b47d33946729fd09e16da3562355922aedc89b8...3915504e7f6f91c47b242aeb80802f143757b1d7)

...

re: comment 3: Whether by tooling or manually, it should go ON_QA when it is available in a downstream (nightly) build.

Confirmed with the latest payload, 4.1.0-0.nightly-2019-06-12-205455: the issue can no longer be reproduced; in most runs the OVS pods finish restarting within about 3 minutes.

Steps:

`oc set env ds/ovs key=value6 -n openshift-sdn`

`oc -n openshift-sdn get pods -o wide -l app=ovs -w`

```
NAME        READY   STATUS              RESTARTS   AGE     IP             NODE                                         NOMINATED NODE   READINESS GATES
ovs-2lnmh   1/1     Running             0          4m9s    10.0.150.238   ip-10-0-150-238.us-east-2.compute.internal   <none>           <none>
ovs-7t9px   1/1     Running             0          63s     10.0.136.231   ip-10-0-136-231.us-east-2.compute.internal   <none>           <none>
ovs-9c9qr   1/1     Terminating         0          2m16s   10.0.172.221   ip-10-0-172-221.us-east-2.compute.internal   <none>           <none>
ovs-9vz6m   1/1     Running             0          3m29s   10.0.134.221   ip-10-0-134-221.us-east-2.compute.internal   <none>           <none>
ovs-f7nfg   1/1     Running             0          97s     10.0.157.39    ip-10-0-157-39.us-east-2.compute.internal    <none>           <none>
ovs-hhg2n   1/1     Running             0          2m48s   10.0.173.160   ip-10-0-173-160.us-east-2.compute.internal   <none>           <none>
ovs-9c9qr   0/1     Terminating         0          2m45s   10.0.172.221   ip-10-0-172-221.us-east-2.compute.internal   <none>           <none>
ovs-9c9qr   0/1     Terminating         0          2m56s   10.0.172.221   ip-10-0-172-221.us-east-2.compute.internal   <none>           <none>
ovs-9c9qr   0/1     Terminating         0          2m56s   10.0.172.221   ip-10-0-172-221.us-east-2.compute.internal   <none>           <none>
ovs-2lqdw   0/1     Pending             0          0s      <none>         <none>                                       <none>           <none>
ovs-2lqdw   0/1     Pending             0          0s      <none>         ip-10-0-172-221.us-east-2.compute.internal   <none>           <none>
ovs-2lqdw   0/1     ContainerCreating   0          0s      10.0.172.221   ip-10-0-172-221.us-east-2.compute.internal   <none>           <none>
ovs-2lqdw   1/1     Running             0          2s      10.0.172.221   ip-10-0-172-221.us-east-2.compute.internal   <none>           <none>
ovs-hhg2n   1/1     Terminating         0          3m30s   10.0.173.160   ip-10-0-173-160.us-east-2.compute.internal   <none>           <none>
ovs-hhg2n   0/1     Terminating         0          4m1s    10.0.173.160   ip-10-0-173-160.us-east-2.compute.internal   <none>           <none>
ovs-hhg2n   0/1     Terminating         0          4m10s   10.0.173.160   ip-10-0-173-160.us-east-2.compute.internal   <none>           <none>
ovs-hhg2n   0/1     Terminating         0          4m10s   10.0.173.160   ip-10-0-173-160.us-east-2.compute.internal   <none>           <none>
ovs-lngh7   0/1     Pending             0          0s      <none>         <none>                                       <none>           <none>
ovs-lngh7   0/1     Pending             0          0s      <none>         ip-10-0-173-160.us-east-2.compute.internal   <none>           <none>
ovs-lngh7   0/1     ContainerCreating   0          0s      10.0.173.160   ip-10-0-173-160.us-east-2.compute.internal   <none>           <none>
ovs-lngh7   1/1     Running             0          2s      10.0.173.160   ip-10-0-173-160.us-east-2.compute.internal   <none>           <none>
ovs-2lnmh   1/1     Terminating         0          5m33s   10.0.150.238   ip-10-0-150-238.us-east-2.compute.internal   <none>           <none>
ovs-2lnmh   0/1     Terminating         0          6m3s    10.0.150.238   ip-10-0-150-238.us-east-2.compute.internal   <none>           <none>
ovs-2lnmh   0/1     Terminating         0          6m4s    10.0.150.238   ip-10-0-150-238.us-east-2.compute.internal   <none>           <none>
ovs-2lnmh   0/1     Terminating         0          6m4s    10.0.150.238   ip-10-0-150-238.us-east-2.compute.internal   <none>           <none>
ovs-6f4j9   0/1     Pending             0          0s      <none>         <none>                                       <none>           <none>
ovs-6f4j9   0/1     Pending             0          0s      <none>         ip-10-0-150-238.us-east-2.compute.internal   <none>           <none>
ovs-6f4j9   0/1     ContainerCreating   0          0s      10.0.150.238   ip-10-0-150-238.us-east-2.compute.internal   <none>           <none>
ovs-6f4j9   1/1     Running             0          1s      10.0.150.238   ip-10-0-150-238.us-east-2.compute.internal   <none>           <none>
ovs-9vz6m   1/1     Terminating         0          5m25s   10.0.134.221   ip-10-0-134-221.us-east-2.compute.internal   <none>           <none>
ovs-9vz6m   0/1     Terminating         0          5m56s   10.0.134.221   ip-10-0-134-221.us-east-2.compute.internal   <none>           <none>
ovs-9vz6m   0/1     Terminating         0          5m57s   10.0.134.221   ip-10-0-134-221.us-east-2.compute.internal   <none>           <none>
ovs-9vz6m   0/1     Terminating         0          5m57s   10.0.134.221   ip-10-0-134-221.us-east-2.compute.internal   <none>           <none>
ovs-5gqm8   0/1     Pending             0          0s      <none>         <none>                                       <none>           <none>
ovs-5gqm8   0/1     Pending             0          0s      <none>         ip-10-0-134-221.us-east-2.compute.internal   <none>           <none>
ovs-5gqm8   0/1     ContainerCreating   0          0s      10.0.134.221   ip-10-0-134-221.us-east-2.compute.internal   <none>           <none>
ovs-5gqm8   1/1     Running             0          1s      10.0.134.221   ip-10-0-134-221.us-east-2.compute.internal   <none>           <none>
ovs-7t9px   1/1     Terminating         0          3m32s   10.0.136.231   ip-10-0-136-231.us-east-2.compute.internal   <none>           <none>
ovs-7t9px   0/1     Terminating         0          4m3s    10.0.136.231   ip-10-0-136-231.us-east-2.compute.internal   <none>           <none>
ovs-7t9px   0/1     Terminating         0          4m10s   10.0.136.231   ip-10-0-136-231.us-east-2.compute.internal   <none>           <none>
ovs-7t9px   0/1     Terminating         0          4m10s   10.0.136.231   ip-10-0-136-231.us-east-2.compute.internal   <none>           <none>
ovs-4dd74   0/1     Pending             0          0s      <none>         <none>                                       <none>           <none>
ovs-4dd74   0/1     Pending             0          0s      <none>         ip-10-0-136-231.us-east-2.compute.internal   <none>           <none>
ovs-4dd74   0/1     ContainerCreating   0          0s      10.0.136.231   ip-10-0-136-231.us-east-2.compute.internal   <none>           <none>
ovs-4dd74   1/1     Running             0          1s      10.0.136.231   ip-10-0-136-231.us-east-2.compute.internal   <none>           <none>
ovs-f7nfg   1/1     Terminating         0          4m45s   10.0.157.39    ip-10-0-157-39.us-east-2.compute.internal    <none>           <none>
ovs-f7nfg   0/1     Terminating         0          5m16s   10.0.157.39    ip-10-0-157-39.us-east-2.compute.internal    <none>           <none>
ovs-f7nfg   0/1     Terminating         0          5m17s   10.0.157.39    ip-10-0-157-39.us-east-2.compute.internal    <none>           <none>
ovs-f7nfg   0/1     Terminating         0          5m17s   10.0.157.39    ip-10-0-157-39.us-east-2.compute.internal    <none>           <none>
ovs-vvhhr   0/1     Pending             0          0s      <none>         <none>                                       <none>           <none>
ovs-vvhhr   0/1     Pending             0          0s      <none>         ip-10-0-157-39.us-east-2.compute.internal    <none>           <none>
ovs-vvhhr   0/1     ContainerCreating   0          0s      10.0.157.39    ip-10-0-157-39.us-east-2.compute.internal    <none>           <none>
ovs-vvhhr   1/1     Running             0          1s      10.0.157.39    ip-10-0-157-39.us-east-2.compute.internal    <none>           <none>
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1382
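For anyone re-checking this on a later payload, here is a minimal sketch of how the same rollout could be timed end to end instead of eyeballing the watch output. It assumes the `ovs` DaemonSet in the `openshift-sdn` namespace and the env-var trigger used in the steps above; `TEST_ROLLOUT_TRIGGER` is just an illustrative variable name, not something the verification actually used.

```
# Sketch only: force a new DaemonSet revision and time the full rollout.
# Any env change on the pod template triggers a rolling update.
start=$(date +%s)
oc -n openshift-sdn set env ds/ovs TEST_ROLLOUT_TRIGGER=value7
oc -n openshift-sdn rollout status ds/ovs   # blocks until every node runs the updated pod
end=$(date +%s)
echo "OVS DaemonSet rollout took $((end - start)) seconds"
```

With the fix in place, the reported rollout time should stay in the range seen in the verification above (a few minutes) rather than the very long times described in the summary.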