Bug 1861275 - [BM][IPI] kube-controller-manager-openshift-master* pods in a CrashLoopBackOff state
Keywords:
Status: CLOSED DUPLICATE of bug 1858498
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-controller-manager
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: Tomáš Nožička
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-28 08:12 UTC by Yurii Prokulevych
Modified: 2020-07-28 14:06 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-28 14:06:06 UTC
Target Upstream Version:
Embargoed:



Description Yurii Prokulevych 2020-07-28 08:12:35 UTC
Description of problem:
-----------------------
The kube-controller-manager-openshift-master-* pods on all 3 masters are in a CrashLoopBackOff state.

oc get po -n openshift-kube-controller-manager
NAME                                         READY   STATUS             RESTARTS   AGE
installer-2-openshift-master-0               0/1     Completed          0          63m
installer-3-openshift-master-2               0/1     Completed          0          63m
installer-4-openshift-master-0               0/1     Completed          0          62m
installer-4-openshift-master-1               0/1     Completed          0          62m
installer-4-openshift-master-2               0/1     Completed          0          61m
installer-5-openshift-master-0               0/1     Completed          0          58m
installer-5-openshift-master-2               0/1     Completed          0          57m
installer-6-openshift-master-0               0/1     Completed          0          56m
installer-6-openshift-master-1               0/1     Completed          0          57m
installer-6-openshift-master-2               0/1     Completed          0          56m
installer-7-openshift-master-0               0/1     Completed          0          55m
installer-7-openshift-master-1               0/1     Completed          0          53m
installer-7-openshift-master-2               0/1     Completed          0          54m
installer-8-openshift-master-0               0/1     Completed          0          53m
installer-8-openshift-master-1               0/1     Completed          0          51m
installer-8-openshift-master-2               0/1     Completed          0          52m
kube-controller-manager-openshift-master-0   3/4     CrashLoopBackOff   9          53m
kube-controller-manager-openshift-master-1   3/4     CrashLoopBackOff   12         8m53s
kube-controller-manager-openshift-master-2   3/4     CrashLoopBackOff   12         52m
revision-pruner-2-openshift-master-0         0/1     Completed          0          63m
revision-pruner-3-openshift-master-2         0/1     Completed          0          62m
revision-pruner-4-openshift-master-0         0/1     Completed          0          61m
revision-pruner-4-openshift-master-1         0/1     Completed          0          62m
revision-pruner-4-openshift-master-2         0/1     Completed          0          61m
revision-pruner-5-openshift-master-0         0/1     Completed          0          57m
revision-pruner-5-openshift-master-2         0/1     Completed          0          57m
revision-pruner-6-openshift-master-0         0/1     Completed          0          56m
revision-pruner-6-openshift-master-1         0/1     Completed          0          56m
revision-pruner-6-openshift-master-2         0/1     Completed          0          55m
revision-pruner-7-openshift-master-0         0/1     Completed          0          54m
revision-pruner-7-openshift-master-1         0/1     Completed          0          53m
revision-pruner-7-openshift-master-2         0/1     Completed          0          54m
revision-pruner-8-openshift-master-0         0/1     Completed          0          52m
revision-pruner-8-openshift-master-1         0/1     Completed          0          51m
revision-pruner-8-openshift-master-2         0/1     Completed          0          52m

The problem is with the `kube-controller-manager-recovery-controller` container:
...
  kube-controller-manager-recovery-controller:
    Container ID:  cri-o://6150eaa9a48d4dd1e2209834b833e0f51d9e7cda52a6076cb51a346548ce4959
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:74e4c487ba8ecc2c94f70e13242d6dc35791dcdcee5cfb2f30540535ea6f492f
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:74e4c487ba8ecc2c94f70e13242d6dc35791dcdcee5cfb2f30540535ea6f492f
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -euxo
      pipefail
      -c
    Args:
      timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 9443 \))" ]; do sleep 1; done'

      exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2

    State:       Waiting
      Reason:    CrashLoopBackOff
    Last State:  Terminated
      Reason:    Error
      Message:   43                [::ffff:127.0.0.1]:54906
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:50652
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:56182
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:54816
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:53864
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:33360
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:52776
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:33094
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:44188
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:43930
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:52902
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:47664
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:53996
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:54260
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:57754
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:53476
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:52096
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:55096
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:42180
ESTAB      0      0       [::ffff:127.0.0.1]:9443                [::ffff:127.0.0.1]:50602              ' ']'
+ sleep 1

      Exit Code:    124
      Started:      Tue, 28 Jul 2020 11:01:02 +0300
      Finished:     Tue, 28 Jul 2020 11:04:02 +0300
    Ready:          False
    Restart Count:  9
    Requests:
      cpu:     5m
      memory:  50Mi
    Environment:
      POD_NAMESPACE:  openshift-kube-controller-manager (v1:metadata.namespace)
    Mounts:
      /etc/kubernetes/static-pod-certs from cert-dir (rw)
      /etc/kubernetes/static-pod-resources from resource-dir (rw)

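For reference, the crash traces back to the pre-exec wait loop shown in the Args above: `ss -Htanop \( sport = 9443 \)` lists every TCP socket with source port 9443, including the ESTAB connections visible in the Last State message, not only a LISTEN socket. While that output stays non-empty the loop keeps sleeping, and after 3 minutes GNU `timeout` kills it, which is where the exit code 124 comes from. A minimal sketch of that failure mode (assuming GNU coreutils `timeout`; the never-clearing test condition stands in for `ss` output that never becomes empty):

```shell
#!/usr/bin/env bash
# Simulate the recovery controller's wait loop: the condition never
# clears, standing in for port 9443 sockets that never go away.
timeout 1 /bin/bash -c 'while [ -n "not-empty" ]; do sleep 0.2; done'
status=$?
# GNU timeout exits 124 when it had to kill the command -- the same
# "Exit Code: 124" recorded in the container's Last State above.
echo "exit status: ${status}"
```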

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
4.5.3

Steps to Reproduce:
-------------------
1. Deploy a BM IPI cluster in a disconnected environment with an IPv6 provisioning network


Actual results:
---------------
Deployment succeeds, but the `kube-controller-manager` operator flaps between `Degraded` and not-`Degraded` states because the `kube-controller-manager-openshift-master-*` pods are in a `CrashLoopBackOff` state.

Expected results:
-----------------
The `kube-controller-manager-openshift-master-*` pods are in a `Running` state.

Comment 2 Tomáš Nožička 2020-07-28 14:06:06 UTC

*** This bug has been marked as a duplicate of bug 1858498 ***

