Bug 1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
Summary: network-check-target causes upgrade to fail from 4.6.18 to 4.7
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.8.0
Assignee: Federico Paolinelli
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-02-23 17:37 UTC by Joseph Callen
Modified: 2021-07-27 22:48 UTC
CC List: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 22:47:39 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift cluster-network-operator pull 1004 (open): "Bug 1931997: mark network-check-target non critical", last updated 2021-03-05 10:29:39 UTC
Red Hat Product Errata RHSA-2021:2438, last updated 2021-07-27 22:48:02 UTC

Description Joseph Callen 2021-02-23 17:37:24 UTC
Description of problem:

The CI cluster for vSphere (which runs in AWS) stopped upgrading because the network-check-target pod on a single worker would not start.


namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-55-206.us-west-2.compute.internal/kube-scheduler/kube-scheduler/logs/current.log:2021-02-23T16:30:00.602724318Z I0223 16:30:00.602688       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="openshift-network-diagnostics/network-check-target-hn8nr" err="0/9 nodes are available: 1 Insufficient memory, 8 node(s) didn't match Pod's node affinity."


$ oc get pod
NAME                                    READY   STATUS    RESTARTS   AGE
network-check-source-7b56ddbc7b-s5zxm   1/1     Running   0          98m
...
network-check-target-hn8nr              0/1     Pending   0          98m
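
The scheduler's reasoning for a Pending pod like this is normally visible in the pod's events; a sketch of how one would inspect it (the exact invocation is not part of the original report):

oc describe pod network-check-target-hn8nr -n openshift-network-diagnostics

The Events section should repeat the "0/9 nodes are available" message seen in the kube-scheduler log above.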


Workaround (cordon and drain the affected node, then reboot it from a debug shell):

oc adm cordon ip-10-0-52-209.us-west-2.compute.internal
oc adm drain --force --ignore-daemonsets --delete-local-data ip-10-0-52-209.us-west-2.compute.internal
oc debug node/ip-10-0-52-209.us-west-2.compute.internal
systemctl reboot
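
The captured history ends at the reboot; presumably the node also needs to be uncordoned afterwards so it can accept workloads again (the standard counterpart to the cordon above, not shown in the original history):

oc adm uncordon ip-10-0-52-209.us-west-2.compute.internal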





Comment 1 Michael Gugino 2021-02-23 17:39:22 UTC
The pod in question has a priority of 0, so it will not preempt other pods.  This pod runs as part of a daemonset.

Other system daemonsets run with a very high priority.  For example, MachineConfigDaemon:

  priority: 2000001000
  priorityClassName: system-node-critical

If this component is required to run on each host, we should increase its priority. If it is optional, we should find a way to keep its rollout from blocking upgrades when a node lacks the capacity to schedule it.
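
For illustration only, a minimal sketch of what raising the priority would look like in the DaemonSet pod spec. The metadata names are taken from the pods above, the image is a placeholder, and note that the eventual fix in the linked PR took the other path, marking the daemonset non-critical (see comment 4) rather than raising its priority:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: network-check-target            # name inferred from the pods above
  namespace: openshift-network-diagnostics
spec:
  selector:
    matchLabels:
      app: network-check-target
  template:
    metadata:
      labels:
        app: network-check-target
    spec:
      # system-node-critical resolves to priority 2000001000, the same
      # value shown for the MachineConfigDaemon above; pods carrying it
      # may preempt lower-priority pods instead of sitting Pending.
      priorityClassName: system-node-critical
      containers:
      - name: network-check-target
        image: registry.example/network-check-target:tag   # placeholder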

Comment 4 zhaozhanqi 2021-03-22 06:19:16 UTC
Verified this bug on 4.8.0-0.nightly-2021-03-21-224928

$ oc get ds -n openshift-network-diagnostics -o yaml | grep crit
      networkoperator.openshift.io/non-critical: ""
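
As a cross-check, the annotation's value can also be read directly with jsonpath (a sketch, assuming the daemonset is named network-check-target as the pod names above suggest; dots in the annotation key are backslash-escaped):

oc get ds network-check-target -n openshift-network-diagnostics \
  -o jsonpath='{.metadata.annotations.networkoperator\.openshift\.io/non-critical}'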

Comment 5 kevin 2021-03-22 07:41:44 UTC
Hello, can this fix be backported to OCP 4.7.x?

Comment 8 errata-xmlrpc 2021-07-27 22:47:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

