Bug 1867931 - [BM][IPI] ovnkube-node pods in CrashLoopBackOff - failed to get default gateway interface
Summary: [BM][IPI] ovnkube-node pods in CrashLoopBackOff - failed to get default gateway interface
Keywords:
Status: CLOSED DUPLICATE of bug 1866464
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Ben Bennett
QA Contact: Anurag saxena
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-11 08:32 UTC by Yurii Prokulevych
Modified: 2020-08-11 10:28 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-08-11 10:28:14 UTC
Target Upstream Version:
Embargoed:



Description Yurii Prokulevych 2020-08-11 08:32:48 UTC
Description of problem:
-----------------------
Installation of a single-stack IPv6 baremetal (BM) IPI cluster fails. Only two cluster operators report any status:

oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication
cloud-credential                                     True        False         False      61m
cluster-autoscaler
config-operator
console
csi-snapshot-controller
dns
etcd
image-registry
ingress
insights
kube-apiserver
kube-controller-manager
kube-scheduler
kube-storage-version-migrator
machine-api
machine-approver
machine-config
marketplace
monitoring
network                                              False       True          True       50m
node-tuning
openshift-apiserver
openshift-controller-manager
openshift-samples
operator-lifecycle-manager
operator-lifecycle-manager-catalog
operator-lifecycle-manager-packageserver
service-ca
storage

oc describe co network
Name:         network
Namespace:
Labels:       <none>
Annotations:  network.operator.openshift.io/last-seen-state:
                {"DaemonsetStates":[{"Namespace":"openshift-ovn-kubernetes","Name":"ovnkube-node","LastSeenStatus":{"currentNumberScheduled":3,"numberMiss...
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2020-08-11T07:22:41Z
  Generation:          1
  Managed Fields:
    API Version:  config.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
      f:status:
        .:
        f:extension:
        f:versions:
    Manager:      cluster-version-operator
    Operation:    Update
    Time:         2020-08-11T07:22:41Z
    API Version:  config.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:network.operator.openshift.io/last-seen-state:
      f:status:
        f:conditions:
        f:relatedObjects:
    Manager:         cluster-network-operator
    Operation:       Update
    Time:            2020-08-11T08:24:49Z
  Resource Version:  14775
  Self Link:         /apis/config.openshift.io/v1/clusteroperators/network
  UID:               30964321-028b-44d5-960a-ba75e1e25fa0
Spec:
Status:
  Conditions:
    Last Transition Time:  2020-08-11T07:38:32Z
    Message:               DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-lnhgh is in CrashLoopBackOff State
DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-ccbbv is in CrashLoopBackOff State
DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-f4mjg is in CrashLoopBackOff State
DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2020-08-11T07:33:33Z
    Reason:                RolloutHung
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2020-08-11T07:33:26Z
    Status:                True
    Type:                  Upgradeable
    Last Transition Time:  2020-08-11T07:33:32Z
    Message:               DaemonSet "openshift-multus/network-metrics-daemon" is not available (awaiting 3 nodes)
DaemonSet "openshift-multus/multus-admission-controller" is waiting for other operators to become ready
DaemonSet "openshift-ovn-kubernetes/ovnkube-master-metrics" is not available (awaiting 3 nodes)
DaemonSet "openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 3 nodes)
DaemonSet "openshift-ovn-kubernetes/ovnkube-node-metrics" is not available (awaiting 3 nodes)
    Reason:                Deploying
    Status:                True
    Type:                  Progressing
    Last Transition Time:  2020-08-11T07:33:32Z
    Message:               The network is starting up
    Reason:                Startup
    Status:                False
    Type:                  Available
  Extension:               <nil>
  Related Objects:
    Group:
    Name:       applied-cluster
    Namespace:  openshift-network-operator
    Resource:   configmaps
    Group:      apiextensions.k8s.io
    Name:       network-attachment-definitions.k8s.cni.cncf.io
    Resource:   customresourcedefinitions
    Group:      apiextensions.k8s.io
    Name:       ippools.whereabouts.cni.cncf.io
    Resource:   customresourcedefinitions
    Group:      apiextensions.k8s.io
    Name:       overlappingrangeipreservations.whereabouts.cni.cncf.io
    Resource:   customresourcedefinitions
    Group:
    Name:       openshift-multus
    Resource:   namespaces
    Group:      rbac.authorization.k8s.io
    Name:       multus
    Resource:   clusterroles
    Group:
    Name:       multus
    Namespace:  openshift-multus
    Resource:   serviceaccounts
    Group:      rbac.authorization.k8s.io
    Name:       multus
    Resource:   clusterrolebindings
    Group:      rbac.authorization.k8s.io
    Name:       multus-whereabouts
    Resource:   clusterrolebindings
    Group:      rbac.authorization.k8s.io
    Name:       whereabouts-cni
    Resource:   clusterroles
    Group:
    Name:       cni-binary-copy-script
    Namespace:  openshift-multus
    Resource:   configmaps
    Group:      apps
    Name:       multus
    Namespace:  openshift-multus
    Resource:   daemonsets
    Group:
    Name:       metrics-daemon-sa
    Namespace:  openshift-multus
    Resource:   serviceaccounts
    Group:      rbac.authorization.k8s.io
    Name:       metrics-daemon-role
    Resource:   clusterroles
    Group:      rbac.authorization.k8s.io
    Name:       metrics-daemon-sa-rolebinding
    Resource:   clusterrolebindings
    Group:      apps
    Name:       network-metrics-daemon
    Namespace:  openshift-multus
    Resource:   daemonsets
    Group:      monitoring.coreos.com
    Name:       monitor-network
    Namespace:  openshift-multus
    Resource:   servicemonitors
    Group:
    Name:       network-metrics-service
    Namespace:  openshift-multus
    Resource:   services
    Group:      rbac.authorization.k8s.io
    Name:       prometheus-k8s
    Namespace:  openshift-multus
    Resource:   roles
    Group:      rbac.authorization.k8s.io
    Name:       prometheus-k8s
    Namespace:  openshift-multus
    Resource:   rolebindings
    Group:
    Name:       multus-admission-controller
    Namespace:  openshift-multus
    Resource:   services
    Group:      rbac.authorization.k8s.io
    Name:       multus-admission-controller-webhook
    Resource:   clusterroles
    Group:      rbac.authorization.k8s.io
    Name:       multus-admission-controller-webhook
    Resource:   clusterrolebindings
    Group:      admissionregistration.k8s.io
    Name:       multus.openshift.io
    Resource:   validatingwebhookconfigurations
    Group:
    Name:       openshift-service-ca
    Namespace:  openshift-network-operator
    Resource:   configmaps
    Group:      apps
    Name:       multus-admission-controller
    Namespace:  openshift-multus
    Resource:   daemonsets
    Group:      monitoring.coreos.com
    Name:       monitor-multus-admission-controller
    Namespace:  openshift-multus
    Resource:   servicemonitors
    Group:      rbac.authorization.k8s.io
    Name:       prometheus-k8s
    Namespace:  openshift-multus
    Resource:   roles
    Group:      rbac.authorization.k8s.io
    Name:       prometheus-k8s
    Namespace:  openshift-multus
    Resource:   rolebindings
    Group:      monitoring.coreos.com
    Name:       prometheus-k8s-rules
    Namespace:  openshift-multus
    Resource:   prometheusrules
    Group:
    Name:       openshift-ovn-kubernetes
    Resource:   namespaces
    Group:      apiextensions.k8s.io
    Name:       egressfirewalls.k8s.ovn.org
    Resource:   customresourcedefinitions
    Group:      apiextensions.k8s.io
    Name:       egressips.k8s.ovn.org
    Resource:   customresourcedefinitions
    Group:
    Name:       ovn-kubernetes-node
    Namespace:  openshift-ovn-kubernetes
    Resource:   serviceaccounts
    Group:      rbac.authorization.k8s.io
    Name:       openshift-ovn-kubernetes-node
    Resource:   clusterroles
    Group:      rbac.authorization.k8s.io
    Name:       openshift-ovn-kubernetes-node
    Resource:   clusterrolebindings
    Group:
    Name:       ovn-kubernetes-controller
    Namespace:  openshift-ovn-kubernetes
    Resource:   serviceaccounts
    Group:      rbac.authorization.k8s.io
    Name:       openshift-ovn-kubernetes-controller
    Resource:   clusterroles
    Group:      rbac.authorization.k8s.io
    Name:       openshift-ovn-kubernetes-controller
    Resource:   clusterrolebindings
    Group:      rbac.authorization.k8s.io
    Name:       openshift-ovn-kubernetes-sbdb
    Namespace:  openshift-ovn-kubernetes
    Resource:   roles
    Group:      rbac.authorization.k8s.io
    Name:       openshift-ovn-kubernetes-sbdb
    Namespace:  openshift-ovn-kubernetes
    Resource:   rolebindings
    Group:      rbac.authorization.k8s.io
    Name:       openshift-ovn-kubernetes-metrics
    Resource:   clusterroles
    Group:
    Name:       openshift-ovn-kubernetes-metrics
    Namespace:  openshift-ovn-kubernetes
    Resource:   serviceaccounts
    Group:      rbac.authorization.k8s.io
    Name:       openshift-ovn-kubernetes-metrics
    Resource:   clusterrolebindings
    Group:
    Name:       ovnkube-config
    Namespace:  openshift-ovn-kubernetes
    Resource:   configmaps
    Group:
    Name:       ovnkube-db
    Namespace:  openshift-ovn-kubernetes
    Resource:   services
    Group:      apps
    Name:       ovs-node
    Namespace:  openshift-ovn-kubernetes
    Resource:   daemonsets
    Group:      network.operator.openshift.io
    Name:       ovn
    Namespace:  openshift-ovn-kubernetes
    Resource:   operatorpkis
    Group:      monitoring.coreos.com
    Name:       master-rules
    Namespace:  openshift-ovn-kubernetes
    Resource:   prometheusrules
    Group:      monitoring.coreos.com
    Name:       networking-rules
    Namespace:  openshift-ovn-kubernetes
    Resource:   prometheusrules
    Group:      monitoring.coreos.com
    Name:       monitor-ovn-master-metrics
    Namespace:  openshift-ovn-kubernetes
    Resource:   servicemonitors
    Group:
    Name:       ovn-kubernetes-master-metrics
    Namespace:  openshift-ovn-kubernetes
    Resource:   services
    Group:      monitoring.coreos.com
    Name:       monitor-ovn-node-metrics
    Namespace:  openshift-ovn-kubernetes
    Resource:   servicemonitors
    Group:
    Name:       ovn-kubernetes-node-metrics
    Namespace:  openshift-ovn-kubernetes
    Resource:   services
    Group:      rbac.authorization.k8s.io
    Name:       prometheus-k8s
    Namespace:  openshift-ovn-kubernetes
    Resource:   roles
    Group:      rbac.authorization.k8s.io
    Name:       prometheus-k8s
    Namespace:  openshift-ovn-kubernetes
    Resource:   rolebindings
    Group:      policy
    Name:       ovn-raft-quorum-guard
    Namespace:  openshift-ovn-kubernetes
    Resource:   poddisruptionbudgets
    Group:      apps
    Name:       ovnkube-master
    Namespace:  openshift-ovn-kubernetes
    Resource:   daemonsets
    Group:      apps
    Name:       ovnkube-master-metrics
    Namespace:  openshift-ovn-kubernetes
    Resource:   daemonsets
    Group:      apps
    Name:       ovnkube-node
    Namespace:  openshift-ovn-kubernetes
    Resource:   daemonsets
    Group:      apps
    Name:       ovnkube-node-metrics
    Namespace:  openshift-ovn-kubernetes
    Resource:   daemonsets
    Group:
    Name:       openshift-network-operator
    Resource:   namespaces
Events:         <none>

oc get po -n openshift-ovn-kubernetes -o wide
NAME                           READY   STATUS              RESTARTS   AGE   IP                    NODE                                              NOMINATED NODE   READINESS GATES
ovnkube-master-7t9cd           4/4     Running             2          54m   fd2e:6f44:5dd8::123   master-0-1.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovnkube-master-kwh8f           4/4     Running             0          54m   fd2e:6f44:5dd8::145   master-0-2.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovnkube-master-metrics-hblwf   0/1     ContainerCreating   0          54m   fd2e:6f44:5dd8::123   master-0-1.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovnkube-master-metrics-nbkcv   0/1     ContainerCreating   0          54m   fd2e:6f44:5dd8::13c   master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovnkube-master-metrics-wq7bc   0/1     ContainerCreating   0          54m   fd2e:6f44:5dd8::145   master-0-2.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovnkube-master-x5t4q           4/4     Running             0          54m   fd2e:6f44:5dd8::13c   master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovnkube-node-ccbbv             1/2     CrashLoopBackOff    15         54m   fd2e:6f44:5dd8::13c   master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovnkube-node-f4mjg             1/2     CrashLoopBackOff    15         54m   fd2e:6f44:5dd8::123   master-0-1.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovnkube-node-lnhgh             1/2     CrashLoopBackOff    15         54m   fd2e:6f44:5dd8::145   master-0-2.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovnkube-node-metrics-h62zl     0/1     ContainerCreating   0          54m   fd2e:6f44:5dd8::13c   master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovnkube-node-metrics-pgfls     0/1     ContainerCreating   0          54m   fd2e:6f44:5dd8::123   master-0-1.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovnkube-node-metrics-q2b7c     0/1     ContainerCreating   0          54m   fd2e:6f44:5dd8::145   master-0-2.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovs-node-nscjp                 1/1     Running             0          54m   fd2e:6f44:5dd8::13c   master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovs-node-z8lm4                 1/1     Running             0          54m   fd2e:6f44:5dd8::123   master-0-1.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>
ovs-node-zsrjh                 1/1     Running             0          54m   fd2e:6f44:5dd8::145   master-0-2.ocp-edge-cluster-0.qe.lab.redhat.com   <none>           <none>

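The crash reason is not visible in the listing itself; the pod description below carries the truncated fatal message, and for reference the full log of the last failed run can also be pulled directly from the crashing container (output not captured here):

oc logs -n openshift-ovn-kubernetes ovnkube-node-ccbbv -c ovnkube-node --previous
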
oc describe po ovnkube-node-ccbbv  -n openshift-ovn-kubernetes
Name:                 ovnkube-node-ccbbv
Namespace:            openshift-ovn-kubernetes
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com/fd2e:6f44:5dd8::13c
Start Time:           Tue, 11 Aug 2020 07:33:33 +0000
Labels:               app=ovnkube-node
                      component=network
                      controller-revision-hash=f8b799587
                      kubernetes.io/os=linux
                      openshift.io/component=network
                      pod-template-generation=1
                      type=infra
Annotations:          <none>
Status:               Running
IP:                   fd2e:6f44:5dd8::13c
IPs:
  IP:           fd2e:6f44:5dd8::13c
Controlled By:  DaemonSet/ovnkube-node
Containers:
  ovn-controller:
    Container ID:  cri-o://c6d5cc4c96916f2c64037b46c1898af6189881f66fcbe5b0c6cfd4cdabe1c440
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e4e5b6369bf17839a0e8e1bfb2b5bf6cce7b4631d83ec6c06d7ec329f8e4457
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e4e5b6369bf17839a0e8e1bfb2b5bf6cce7b4631d83ec6c06d7ec329f8e4457
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      set -e
      if [[ -f "/env/${K8S_NODE}" ]]; then
        set -o allexport
        source "/env/${K8S_NODE}"
        set +o allexport
      fi
      echo "$(date -Iseconds) - starting ovn-controller"
      exec ovn-controller unix:/var/run/openvswitch/db.sock -vfile:off \
        --no-chdir --pidfile=/var/run/ovn/ovn-controller.pid \
        -p /ovn-cert/tls.key -c /ovn-cert/tls.crt -C /ovn-ca/ca-bundle.crt \
        -vconsole:"${OVN_LOG_LEVEL}"

    State:          Running
      Started:      Tue, 11 Aug 2020 07:33:50 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  300Mi
    Environment:
      OVN_LOG_LEVEL:  info
      K8S_NODE:        (v1:spec.nodeName)
    Mounts:
      /env from env-overrides (rw)
      /etc/openvswitch from etc-openvswitch (rw)
      /etc/ovn/ from etc-openvswitch (rw)
      /ovn-ca from ovn-ca (rw)
      /ovn-cert from ovn-cert (rw)
      /run/openvswitch from run-openvswitch (rw)
      /run/ovn/ from run-ovn (rw)
      /var/lib/openvswitch from var-lib-openvswitch (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from ovn-kubernetes-node-token-lq9tv (ro)
  ovnkube-node:
    Container ID:  cri-o://64ce52ccf000a8bde204d968f04f6515eb4862db00c8a3f53f1600829ea5677a
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e4e5b6369bf17839a0e8e1bfb2b5bf6cce7b4631d83ec6c06d7ec329f8e4457
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e4e5b6369bf17839a0e8e1bfb2b5bf6cce7b4631d83ec6c06d7ec329f8e4457
    Port:          29103/TCP
    Host Port:     29103/TCP
    Command:
      /bin/bash
      -c
      set -xe
      if [[ -f "/env/${K8S_NODE}" ]]; then
        set -o allexport
        source "/env/${K8S_NODE}"
        set +o allexport
      fi
      echo "I$(date "+%m%d %H:%M:%S.%N") - waiting for db_ip addresses"
      cp -f /usr/libexec/cni/ovn-k8s-cni-overlay /cni-bin-dir/
      ovn_config_namespace=openshift-ovn-kubernetes
      echo "I$(date "+%m%d %H:%M:%S.%N") - disable conntrack on geneve port"
      iptables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK
      iptables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK
      retries=0
      while true; do
        db_ip=$(kubectl get ep -n ${ovn_config_namespace} ovnkube-db -o jsonpath='{.subsets[0].addresses[0].ip}')
        if [[ -n "${db_ip}" ]]; then
          break
        fi
        (( retries += 1 ))
        if [[ "${retries}" -gt 40 ]]; then
          echo "E$(date "+%m%d %H:%M:%S.%N") - db endpoint never came up"
          exit 1
        fi
        echo "I$(date "+%m%d %H:%M:%S.%N") - waiting for db endpoint"
        sleep 5
      done
      echo "I$(date "+%m%d %H:%M:%S.%N") - starting ovnkube-node db_ip ${db_ip}"
      hybrid_overlay_flags=
      if [[ -n "" ]]; then
        hybrid_overlay_flags="--enable-hybrid-overlay --no-hostsubnet-nodes=kubernetes.io/os=windows"
        if [[ -n "" ]]; then
          hybrid_overlay_flags="${hybrid_overlay_flags} --hybrid-overlay-cluster-subnets="
        fi
        if [[ -n "" ]]; then
          hybrid_overlay_flags="${hybrid_overlay_flags} --hybrid-overlay-vxlan-port="
        fi
      fi

      exec /usr/bin/ovnkube --init-node "${K8S_NODE}" \
        --nb-address "ssl:[fd2e:6f44:5dd8::123]:9641,ssl:[fd2e:6f44:5dd8::13c]:9641,ssl:[fd2e:6f44:5dd8::145]:9641" \
        --sb-address "ssl:[fd2e:6f44:5dd8::123]:9642,ssl:[fd2e:6f44:5dd8::13c]:9642,ssl:[fd2e:6f44:5dd8::145]:9642" \
        --nb-client-privkey /ovn-cert/tls.key \
        --nb-client-cert /ovn-cert/tls.crt \
        --nb-client-cacert /ovn-ca/ca-bundle.crt \
        --nb-cert-common-name "ovn" \
        --sb-client-privkey /ovn-cert/tls.key \
        --sb-client-cert /ovn-cert/tls.crt \
        --sb-client-cacert /ovn-ca/ca-bundle.crt \
        --sb-cert-common-name "ovn" \
        --config-file=/run/ovnkube-config/ovnkube.conf \
        --loglevel "${OVN_KUBE_LOG_LEVEL}" \
        --inactivity-probe="${OVN_CONTROLLER_INACTIVITY_PROBE}" \
        ${hybrid_overlay_flags} \
        --gateway-mode shared \
        --gateway-interface br-ex \
        --metrics-bind-address "127.0.0.1:29103"

    State:       Waiting
      Reason:    CrashLoopBackOff
    Last State:  Terminated
      Reason:    Error
      Message:   ut=15 set Open_vSwitch . external_ids:ovn-remote="ssl:[fd2e:6f44:5dd8::123]:9642,ssl:[fd2e:6f44:5dd8::13c]:9642,ssl:[fd2e:6f44:5dd8::145]:9642"
I0811 08:26:07.531837  117553 ovs.go:157] exec(1): /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . external_ids:ovn-encap-type=geneve external_ids:ovn-encap-ip=fd2e:6f44:5dd8::13c external_ids:ovn-r>
I0811 08:26:07.536485  117553 ovs.go:160] exec(1): stdout: ""
I0811 08:26:07.536572  117553 ovs.go:161] exec(1): stderr: ""
I0811 08:26:07.538963  117553 node.go:205] Node master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com ready for ovn initialization with subnet fd01:0:0:1::/64
I0811 08:26:07.539152  117553 ovs.go:157] exec(2): /usr/bin/ovs-appctl --timeout=15 -t /var/run/ovn/ovn-controller.4436.ctl connection-status
I0811 08:26:07.542105  117553 ovs.go:160] exec(2): stdout: "connected\n"
I0811 08:26:07.542181  117553 ovs.go:161] exec(2): stderr: ""
I0811 08:26:07.542225  117553 node.go:119] Node master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com connection status = connected
I0811 08:26:07.542250  117553 ovs.go:157] exec(3): /usr/bin/ovs-vsctl --timeout=15 -- br-exists br-int
I0811 08:26:07.546717  117553 ovs.go:160] exec(3): stdout: ""
I0811 08:26:07.546775  117553 ovs.go:161] exec(3): stderr: ""
I0811 08:26:07.546816  117553 ovs.go:157] exec(4): /usr/bin/ovs-ofctl dump-aggregate br-int
I0811 08:26:07.550185  117553 ovs.go:160] exec(4): stdout: "NXST_AGGREGATE reply (xid=0x4): packet_count=0 byte_count=0 flow_count=7\n"
I0811 08:26:07.550300  117553 ovs.go:161] exec(4): stderr: ""
I0811 08:26:07.550413  117553 factory.go:668] Added *v1.Service event handler 1
I0811 08:26:07.550451  117553 factory.go:668] Added *v1.Endpoints event handler 2
I0811 08:26:07.550479  117553 factory.go:668] Added *v1.Service event handler 3
F0811 08:26:07.550667  117553 ovnkube.go:129] failed to get default gateway interface

      Exit Code:    1
      Started:      Tue, 11 Aug 2020 08:26:06 +0000
      Finished:     Tue, 11 Aug 2020 08:26:07 +0000
    Ready:          False
    Restart Count:  15
    Requests:
      cpu:      10m
      memory:   300Mi
    Readiness:  exec [test -f /etc/cni/net.d/10-ovn-kubernetes.conf] delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:
      KUBERNETES_SERVICE_PORT:          6443
      KUBERNETES_SERVICE_HOST:          api-int.ocp-edge-cluster-0.qe.lab.redhat.com
      OVN_CONTROLLER_INACTIVITY_PROBE:  30000
      OVN_KUBE_LOG_LEVEL:               4
      K8S_NODE:                          (v1:spec.nodeName)
    Mounts:
      /cni-bin-dir from host-cni-bin (rw)
      /env from env-overrides (rw)
      /etc/cni/net.d from host-cni-netd (rw)
      /etc/openvswitch from etc-openvswitch (rw)
      /etc/ovn/ from etc-openvswitch (rw)
      /host from host-slash (ro)
      /ovn-ca from ovn-ca (rw)
      /ovn-cert from ovn-cert (rw)
      /run/netns from host-run-netns (ro)
      /run/openvswitch from run-openvswitch (rw)
      /run/ovn-kubernetes/ from host-run-ovn-kubernetes (rw)
      /run/ovn/ from run-ovn (rw)
      /run/ovnkube-config/ from ovnkube-config (rw)
      /var/lib/cni/networks/ovn-k8s-cni-overlay from host-var-lib-cni-networks-ovn-kubernetes (rw)
      /var/lib/openvswitch from var-lib-openvswitch (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from ovn-kubernetes-node-token-lq9tv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  host-slash:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:
  host-run-netns:
    Type:          HostPath (bare host directory volume)
    Path:          /run/netns
    HostPathType:
  var-lib-openvswitch:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/openvswitch/data
    HostPathType:
  etc-openvswitch:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/openvswitch/etc
    HostPathType:
  run-openvswitch:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/openvswitch
    HostPathType:
  run-ovn:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/ovn
    HostPathType:
  host-run-ovn-kubernetes:
    Type:          HostPath (bare host directory volume)
    Path:          /run/ovn-kubernetes
    HostPathType:
  host-cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/bin
    HostPathType:
  host-cni-netd:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/multus/cni/net.d
    HostPathType:
  host-var-lib-cni-networks-ovn-kubernetes:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks/ovn-k8s-cni-overlay
    HostPathType:
  ovnkube-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ovnkube-config
    Optional:  false
  env-overrides:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      env-overrides
    Optional:  true
  ovn-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ovn-ca
    Optional:  false
  ovn-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ovn-cert
    Optional:    false
  ovn-kubernetes-node-token-lq9tv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ovn-kubernetes-node-token-lq9tv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     op=Exists
Events:
  Type     Reason     Age                    From                                                      Message
  ----     ------     ----                   ----                                                      -------
  Normal   Scheduled  54m                    default-scheduler                                         Successfully assigned openshift-ovn-kubernetes/ovnkube-node-ccbbv to master-0-0.ocp-edge-cluster-0>
  Normal   Pulling    54m                    kubelet, master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com  Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e4e5b6369bf17839a0e8e1bfb2b5>
  Normal   Pulled     54m                    kubelet, master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com  Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e4e5b6369bf17839>
  Normal   Created    54m                    kubelet, master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com  Created container ovn-controller
  Normal   Started    54m                    kubelet, master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com  Started container ovn-controller
  Warning  Unhealthy  54m (x6 over 54m)      kubelet, master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com  Readiness probe failed:
  Normal   Pulled     53m (x4 over 54m)      kubelet, master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com  Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e4e5b6369bf17839a0e8e1bfb2>
  Normal   Created    53m (x4 over 54m)      kubelet, master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com  Created container ovnkube-node
  Normal   Started    53m (x4 over 54m)      kubelet, master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com  Started container ovnkube-node
  Warning  BackOff    4m54s (x226 over 54m)  kubelet, master-0-0.ocp-edge-cluster-0.qe.lab.redhat.com  Back-off restarting failed container
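
The "Readiness probe failed" events look like a downstream symptom rather than a separate fault: the probe only tests for /etc/cni/net.d/10-ovn-kubernetes.conf, which ovnkube-node presumably writes once it finishes initializing, so the file never appears while the container keeps exiting on the gateway lookup.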

Routing table from master node:
[core@master-0-0 ~]$ ip r
[core@master-0-0 ~]$ ip -6 r
::1 dev lo proto kernel metric 256 pref medium
fd00:1101::a97e:4aba:21d7:b54f dev enp4s0 proto kernel metric 100 pref medium
fd00:1101::/64 dev enp4s0 proto ra metric 100 pref medium
fd2e:6f44:5dd8::13c dev br-ex proto kernel metric 800 pref medium
fd2e:6f44:5dd8::/64 dev br-ex proto ra metric 800 pref medium
fe80::/64 dev enp4s0 proto kernel metric 100 pref medium
fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium
fe80::/64 dev br-ex proto kernel metric 800 pref medium
default via fe80::5054:ff:fe7d:1535 dev br-ex proto ra metric 800 pref medium
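
The fatal log line ("failed to get default gateway interface"), read against the routing tables above, suggests the mechanism: the node has no IPv4 routes at all ("ip r" prints nothing), and the only default route lives in the IPv6 table ("default via fe80::5054:ff:fe7d:1535 dev br-ex"). A gateway lookup that only scans IPv4 routes would come up empty on this node. The following minimal Go sketch illustrates that failure mode using the vishvananda/netlink library; it is an illustration of the hypothesis, not the actual ovn-kubernetes code, and defaultGatewayInterface is a hypothetical helper name:

package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

// defaultGatewayInterface returns the name of the interface carrying the
// default route for the given address family (netlink.FAMILY_V4 or
// netlink.FAMILY_V6).
func defaultGatewayInterface(family int) (string, error) {
	routes, err := netlink.RouteList(nil, family) // nil link = all interfaces
	if err != nil {
		return "", err
	}
	for _, r := range routes {
		// A default route has no destination prefix and a gateway set.
		if r.Dst == nil && r.Gw != nil {
			link, err := netlink.LinkByIndex(r.LinkIndex)
			if err != nil {
				return "", err
			}
			return link.Attrs().Name, nil
		}
	}
	return "", fmt.Errorf("failed to get default gateway interface")
}

func main() {
	// On the node above the IPv4 table is empty, so this path fails ...
	if _, err := defaultGatewayInterface(netlink.FAMILY_V4); err != nil {
		fmt.Println("IPv4 lookup:", err)
	}
	// ... while the IPv6 table would resolve to br-ex.
	if name, err := defaultGatewayInterface(netlink.FAMILY_V6); err == nil {
		fmt.Println("IPv6 default gateway interface:", name)
	}
}

If the lookup is hard-wired to IPv4, the error matches what the pod reports, and it is consistent with Comment 3's note that IPv6 installs broke after OVN changes (tracked in bug 1866464).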




Version-Release number of selected component (if applicable):
-------------------------------------------------------------
4.6.0-0.nightly-2020-08-11-032013

Steps to Reproduce:
-------------------
1. Deploy single-stack IPv6 env

Actual results:
---------------
Deployment fails

Expected results:
-----------------
Deployment succeeds


Additional info:
----------------
Virtual setup: 3 masters + 2 workers

Comment 2 Stephen Benjamin 2020-08-11 10:28:14 UTC

*** This bug has been marked as a duplicate of bug 1866464 ***

Comment 3 Stephen Benjamin 2020-08-11 10:28:46 UTC
IPv6 installs are broken due to OVN changes; the bug linked above is where they are being fixed.

