Bug 1945739

Summary: Endurance cluster has NotReady and SchedulingDisabled nodes after upgrade
Product: OpenShift Container Platform
Component: Node
Sub component: Kubelet
Version: 4.6
Target Release: 4.8.0
Reporter: Ben Parees <bparees>
Assignee: Elana Hashman <ehashman>
QA Contact: Sunil Choudhary <schoudha>
CC: aos-bugs, nagrawal, rphillips
Status: CLOSED DUPLICATE
Severity: medium
Priority: medium
Type: Bug
Last Closed: 2021-04-27 21:27:23 UTC
Bug Blocks: 1946306

Description Ben Parees 2021-04-01 19:53:42 UTC
Description of problem:
Cluster was upgraded from
4.6.0-0.nightly-2021-03-21-131139
to
4.6.0-0.nightly-2021-03-27-052141

Cluster now has two NotReady worker nodes and one with scheduling disabled:

$ oc get nodes
NAME                                         STATUS                     ROLES    AGE   VERSION
ip-10-0-136-59.us-east-2.compute.internal    NotReady                   worker   20d   v1.19.0+263ee0d
ip-10-0-147-192.us-east-2.compute.internal   Ready                      master   20d   v1.19.0+a5a0987
ip-10-0-178-43.us-east-2.compute.internal    NotReady                   worker   20d   v1.19.0+263ee0d
ip-10-0-191-180.us-east-2.compute.internal   Ready                      master   20d   v1.19.0+a5a0987
ip-10-0-214-241.us-east-2.compute.internal   Ready                      master   20d   v1.19.0+a5a0987
ip-10-0-246-183.us-east-2.compute.internal   Ready,SchedulingDisabled   worker   20d   v1.19.0+263ee0d
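The per-node condition details shown further below come from inspecting the affected nodes individually; a sketch of the commands one would typically run, using a node name from the output above (the grep filter is an assumption about the column layout):

```shell
# Dump the full status (including Conditions) of one of the NotReady workers
oc describe node ip-10-0-136-59.us-east-2.compute.internal

# Quick filter for nodes that are not plainly Ready
# (keeps NotReady and Ready,SchedulingDisabled rows)
oc get nodes | grep -v ' Ready '
```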



Version-Release number of selected component (if applicable):
4.6.0-0.nightly-2021-03-27-052141

How reproducible:
unknown


Additional info:


kubelet appears to have died on 2 nodes:

Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Tue, 30 Mar 2021 02:14:51 -0400   Tue, 30 Mar 2021 02:13:27 -0400   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Tue, 30 Mar 2021 02:14:51 -0400   Tue, 30 Mar 2021 02:13:27 -0400   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Tue, 30 Mar 2021 02:14:51 -0400   Tue, 30 Mar 2021 02:13:27 -0400   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Tue, 30 Mar 2021 02:14:51 -0400   Tue, 30 Mar 2021 02:15:32 -0400   NodeStatusUnknown   Kubelet stopped posting node status.



Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Tue, 30 Mar 2021 13:05:29 -0400   Tue, 30 Mar 2021 13:06:20 -0400   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Tue, 30 Mar 2021 13:05:29 -0400   Tue, 30 Mar 2021 13:06:20 -0400   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Tue, 30 Mar 2021 13:05:29 -0400   Tue, 30 Mar 2021 13:06:20 -0400   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Tue, 30 Mar 2021 13:05:29 -0400   Tue, 30 Mar 2021 13:06:20 -0400   NodeStatusUnknown   Kubelet stopped posting node status.
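"Kubelet stopped posting node status" only tells us the API server lost the heartbeat; confirming the kubelet process actually died requires looking at the service on the node itself. A hedged sketch (node name taken from the output above; `oc debug` needs a working container runtime on the node, so SSH is the fallback if the debug pod cannot be scheduled):

```shell
# Open a debug shell on one of the NotReady workers
oc debug node/ip-10-0-136-59.us-east-2.compute.internal

# Inside the debug pod:
chroot /host
systemctl status kubelet    # active, failed, or dead?
# Logs around the time the heartbeats stopped (timestamps from the conditions above)
journalctl -u kubelet --since "2021-03-30 02:00:00" --no-pager | tail -n 100
```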



and one node is failing to drain:
                    machineconfiguration.openshift.io/reason:
                      failed to drain node (5 tries): timed out waiting for the condition: [error when evicting pod "pod-submit-status-2-10": pods "pod-submit-s...
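The drain failure above means an eviction kept timing out during the machine-config rollout; a sketch of how one might identify what is blocking it (the node name is from this report, the full blocking pod name is truncated in the annotation, and `<machine-config-daemon-pod>` is a placeholder to fill in from the `get pods` output):

```shell
# Pods still running on the node that refuses to drain
oc get pods --all-namespaces --field-selector \
  spec.nodeName=ip-10-0-246-183.us-east-2.compute.internal

# Locate the machine-config-daemon pod on that node, then read its drain errors
oc -n openshift-machine-config-operator get pods -o wide | grep ip-10-0-246-183
oc -n openshift-machine-config-operator logs <machine-config-daemon-pod> \
  -c machine-config-daemon | grep -i drain
```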