Bug 2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
Summary: [OVN-K] If pod creation fails, retry doesn't work as expected.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.10.0
Assignee: Surya Seetharaman
QA Contact: Anurag saxena
URL:
Whiteboard:
Depends On:
Blocks: 2053310
 
Reported: 2022-01-22 21:16 UTC by Surya Seetharaman
Modified: 2022-03-10 16:42 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2053310 (view as bug list)
Environment:
Last Closed: 2022-03-10 16:41:47 UTC
Target Upstream Version:
Embargoed:




Links
GitHub openshift/ovn-kubernetes pull 926 (Merged): Bug 2043961: Fix pod-creation-retry (last updated 2022-02-10 22:25:43 UTC)
Red Hat Product Errata RHSA-2022:0056 (last updated 2022-03-10 16:42:03 UTC)

Description Surya Seetharaman 2022-01-22 21:16:23 UTC
Description of problem:

In the steps of addLogicalPort, if any of the intermediate steps fails, the pod add event normally gets added to the oc.retryPods cache and the request is retried until it succeeds.

Currently this is not happening: only some pods are retried, while others are never retried at all.
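
For context, here is a minimal Go sketch of the intended retry flow. It is illustrative only, not the actual ovn-kubernetes code: apart from addLogicalPort and the retryPods cache, every type, field, and function name below is hypothetical.

package main

import (
	"errors"
	"fmt"
	"sync"
)

type pod struct{ namespace, name string }

type controller struct {
	mu        sync.Mutex
	retryPods map[string]pod // failed pod adds, keyed by "namespace/name"
}

// addLogicalPort stands in for the multi-step logical switch port setup;
// here it always fails to mimic an intermediate error.
func (c *controller) addLogicalPort(p pod) error {
	return errors.New("simulated failure in an intermediate step")
}

// handlePodAdd is what the pod-add handler is expected to do: on any
// addLogicalPort error the pod must stay in retryPods, never be dropped.
func (c *controller) handlePodAdd(p pod) {
	k := p.namespace + "/" + p.name
	c.mu.Lock()
	defer c.mu.Unlock()
	if err := c.addLogicalPort(p); err != nil {
		c.retryPods[k] = p // queue for the next retry pass
		return
	}
	delete(c.retryPods, k) // success: clear any stale retry entry
}

// retryPending replays every queued pod; in the real controller this would
// run periodically until each pod succeeds.
func (c *controller) retryPending() {
	c.mu.Lock()
	pending := make([]pod, 0, len(c.retryPods))
	for _, p := range c.retryPods {
		pending = append(pending, p)
	}
	c.mu.Unlock()
	for _, p := range pending {
		c.handlePodAdd(p)
	}
}

func main() {
	c := &controller{retryPods: map[string]pod{}}
	c.handlePodAdd(pod{"openshift-apiserver", "apiserver-865c74cbcd-rfcts"})
	c.retryPending()
	fmt.Println("pods still queued for retry:", len(c.retryPods))
}

In this sketch a failed add always stays queued; the reported behavior corresponds to some entries vanishing from retryPods before any retry ever runs for them.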


Version-Release number of selected component (if applicable):


How reproducible: Always, on both OCP and KIND


Steps to Reproduce:
1. Create a cluster and a bunch of pods, and make the pods fail in one of the addLogicalPort steps.
2. Observe that only some of the pods get retried, while others are silently removed from the retryPods queue/cache.


Additional info:

Originally observed while trying to debug migration jobs on PR https://github.com/openshift/cluster-network-operator/pull/1254. Later found that it's reproducible on KIND as well.

Sample Run:
[surya@hidden-temple Downloads]$ omg get pods -n openshift-apiserver -owide
NAME                        READY  STATUS   RESTARTS  AGE    IP           NODE
apiserver-865c74cbcd-2c2wt  2/2    Running  1         1h5m   10.129.2.14  ip-10-0-136-71.us-west-2.compute.internal
apiserver-865c74cbcd-cb7p8  2/2    Running  3         55m    10.131.0.17  ip-10-0-137-79.us-west-2.compute.internal
apiserver-865c74cbcd-rfcts  0/2    Running  1         1h10m               ip-10-0-237-148.us-west-2.compute.internal

Two API server pods eventually came up through retries, while the third one was never retried.

https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_cluster-network-operator/1288/pull-ci-openshift-cluster-network-operator-master-e2e-network-migration/1484625495885615104

Comment 4 Surya Seetharaman 2022-01-22 23:13:55 UTC
https://github.com/ovn-org/ovn-kubernetes/pull/2765

Comment 12 errata-xmlrpc 2022-03-10 16:41:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056

