Bug 1881868 - apiserver pods on Init:0/1 status because of timeout errors on CRIO
Summary: apiserver pods on Init:0/1 status because of timeout errors on CRIO
Keywords:
Status: CLOSED DUPLICATE of bug 1785399
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 4.7.0
Assignee: Martin André
QA Contact: weiwei jiang
URL:
Whiteboard:
Depends On:
Blocks: 1887744
 
Reported: 2020-09-23 09:14 UTC by David Sanz
Modified: 2021-01-12 07:47 UTC
CC: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-19 10:28:57 UTC
Target Upstream Version:
Embargoed:


Attachments
last crio and kubelet (8.66 MB, text/plain), 2020-10-02 11:50 UTC, David Sanz

Description David Sanz 2020-09-23 09:14:55 UTC
Description of problem:
apiserver pod initialization is delayed for more than 15 minutes, causing the installation to fail when it reaches the timeout limit

$ oc get pods -n openshift-apiserver
NAME                         READY   STATUS     RESTARTS   AGE
apiserver-5f4cc6ff8d-tfbnc   2/2     Running    0          20m
apiserver-7d995cb749-jhhwz   0/2     Init:0/1   0          16m
apiserver-7d995cb749-mzkmj   2/2     Running    0          17m


$ oc describe pod apiserver-7d995cb749-jhhwz
Name:                 apiserver-7d995cb749-jhhwz
Namespace:            openshift-apiserver
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 mrnd-13-46-2nbs4-master-0/192.168.2.200
Start Time:           Wed, 23 Sep 2020 10:49:13 +0200
Labels:               apiserver=true
                      app=openshift-apiserver-a
                      openshift-apiserver-anti-affinity=true
                      pod-template-hash=7d995cb749
                      revision=2
Annotations:          operator.openshift.io/dep-desired.generation: 3
                      operator.openshift.io/dep-openshift-apiserver.config.configmap: YELGoA==
                      operator.openshift.io/dep-openshift-apiserver.etcd-client.secret: ve1gQA==
                      operator.openshift.io/dep-openshift-apiserver.etcd-serving-ca.configmap: qeuRpw==
                      operator.openshift.io/dep-openshift-apiserver.image-import-ca.configmap: UMwN7A==
                      operator.openshift.io/dep-openshift-apiserver.trusted-ca-bundle.configmap: bNt55g==
Status:               Pending
IP:                   
IPs:                  <none>
Controlled By:        ReplicaSet/apiserver-7d995cb749
Init Containers:
  fix-audit-permissions:
    Container ID:  
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6e7777b7b1de5859732bc6158a8e030e721ad34ef855b835cc1e643d81dcbc2
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      chmod 0700 /var/log/openshift-apiserver
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/log/openshift-apiserver from audit-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from openshift-apiserver-sa-token-2hbdx (ro)
Containers:
  openshift-apiserver:
    Container ID:  
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6e7777b7b1de5859732bc6158a8e030e721ad34ef855b835cc1e643d81dcbc2
    Image ID:      
    Port:          8443/TCP
    Host Port:     0/TCP
    Command:
      /bin/bash
      -ec
    Args:
      if [ -s /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem ]; then
        echo "Copying system trust bundle"
        cp -f /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
      fi
      exec openshift-apiserver start --config=/var/run/configmaps/config/config.yaml -v=2
      
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     200Mi
    Liveness:     http-get https://:8443/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:8443/healthz delay=0s timeout=1s period=10s #success=1 #failure=10
    Environment:  <none>
    Mounts:
      /var/lib/kubelet/config.json from node-pullsecrets (ro)
      /var/log/openshift-apiserver from audit-dir (rw)
      /var/run/configmaps/audit from audit (rw)
      /var/run/configmaps/config from config (rw)
      /var/run/configmaps/etcd-serving-ca from etcd-serving-ca (rw)
      /var/run/configmaps/image-import-ca from image-import-ca (rw)
      /var/run/configmaps/trusted-ca-bundle from trusted-ca-bundle (rw)
      /var/run/secrets/encryption-config from encryption-config (rw)
      /var/run/secrets/etcd-client from etcd-client (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from openshift-apiserver-sa-token-2hbdx (ro)
      /var/run/secrets/serving-cert from serving-cert (rw)
  openshift-apiserver-check-endpoints:
    Container ID:  
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cff3766265459fcee9114fc7dfe4de72251e708a90a98f9fccf72818b0bb0178
    Image ID:      
    Port:          17698/TCP
    Host Port:     0/TCP
    Command:
      cluster-kube-apiserver-operator
      check-endpoints
    Args:
      --listen
      0.0.0.0:17698
      --namespace
      $(POD_NAMESPACE)
      --v
      2
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  50Mi
    Environment:
      POD_NAME:       apiserver-7d995cb749-jhhwz (v1:metadata.name)
      POD_NAMESPACE:  openshift-apiserver (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from openshift-apiserver-sa-token-2hbdx (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  node-pullsecrets:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/config.json
    HostPathType:  File
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      config
    Optional:  false
  audit:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      audit-2
    Optional:  false
  etcd-client:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  etcd-client
    Optional:    false
  etcd-serving-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      etcd-serving-ca
    Optional:  false
  image-import-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      image-import-ca
    Optional:  true
  serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  serving-cert
    Optional:    false
  trusted-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      trusted-ca-bundle
    Optional:  true
  encryption-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  encryption-config-2
    Optional:    true
  audit-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/openshift-apiserver
    HostPathType:  
  openshift-apiserver-sa-token-2hbdx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  openshift-apiserver-sa-token-2hbdx
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 120s
                 node.kubernetes.io/unreachable:NoExecute for 120s
Events:
  Type     Reason                  Age                   From                                Message
  ----     ------                  ----                  ----                                -------
  Warning  FailedScheduling        <unknown>                                                 0/3 nodes are available: 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't match pod anti-affinity rules.
  Warning  FailedScheduling        <unknown>                                                 0/3 nodes are available: 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't match pod anti-affinity rules.
  Warning  FailedScheduling        <unknown>                                                 0/3 nodes are available: 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't match pod anti-affinity rules.
  Normal   Scheduled               <unknown>                                                 Successfully assigned openshift-apiserver/apiserver-7d995cb749-jhhwz to mrnd-13-46-2nbs4-master-0
  Warning  FailedCreatePodSandBox  6m57s                 kubelet, mrnd-13-46-2nbs4-master-0  Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Warning  FailedCreatePodSandBox  6m45s                 kubelet, mrnd-13-46-2nbs4-master-0  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_apiserver-7d995cb749-jhhwz_openshift-apiserver_9dc37c02-4e93-4d6f-a6ae-c9a518b87f3b_0 for id 751bf733f1ac692992bc798c6dd398264c33d475d13b5678bed958d22a4c5391: name is reserved
  Warning  FailedCreatePodSandBox  6m33s                 kubelet, mrnd-13-46-2nbs4-master-0  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_apiserver-7d995cb749-jhhwz_openshift-apiserver_9dc37c02-4e93-4d6f-a6ae-c9a518b87f3b_0 for id 67c7ae3d36e1c73cf7dd14003c8e15f026aa2ae0303e23d249b233a4224d876f: name is reserved
  Warning  FailedCreatePodSandBox  6m19s                 kubelet, mrnd-13-46-2nbs4-master-0  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_apiserver-7d995cb749-jhhwz_openshift-apiserver_9dc37c02-4e93-4d6f-a6ae-c9a518b87f3b_0 for id 802568f90a5e54d4e03e3f0b02cc9f3e0bd9511a8d051ab96adfebc463c95b25: name is reserved
  Warning  FailedCreatePodSandBox  6m5s                  kubelet, mrnd-13-46-2nbs4-master-0  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_apiserver-7d995cb749-jhhwz_openshift-apiserver_9dc37c02-4e93-4d6f-a6ae-c9a518b87f3b_0 for id bba00dc595d3acb03a61db38a8a952e1865017c38e42ad59f94c9ca949a3ad7f: name is reserved
  Warning  FailedCreatePodSandBox  5m54s                 kubelet, mrnd-13-46-2nbs4-master-0  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_apiserver-7d995cb749-jhhwz_openshift-apiserver_9dc37c02-4e93-4d6f-a6ae-c9a518b87f3b_0 for id 11f1162de654f1b8221fb6a9285245725b0d427d8414b312c62a9e5e486e7481: name is reserved
  Warning  FailedCreatePodSandBox  5m39s                 kubelet, mrnd-13-46-2nbs4-master-0  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_apiserver-7d995cb749-jhhwz_openshift-apiserver_9dc37c02-4e93-4d6f-a6ae-c9a518b87f3b_0 for id fa12b37df015e77e5aa60ea7222b4c603d47436869d3470ca0ef7167b8fc08d4: name is reserved
  Warning  FailedCreatePodSandBox  5m25s                 kubelet, mrnd-13-46-2nbs4-master-0  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_apiserver-7d995cb749-jhhwz_openshift-apiserver_9dc37c02-4e93-4d6f-a6ae-c9a518b87f3b_0 for id 6a18db3736d69969970b74ee19d3b5ab862dcddd3009aeab2246a2563a24eaec: name is reserved
  Warning  FailedCreatePodSandBox  5m10s                 kubelet, mrnd-13-46-2nbs4-master-0  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_apiserver-7d995cb749-jhhwz_openshift-apiserver_9dc37c02-4e93-4d6f-a6ae-c9a518b87f3b_0 for id a6a1bcd63e43b0cf04b9953210acff6b339c5ae18d85d7b85a93986123c4fac9: name is reserved
  Warning  FailedCreatePodSandBox  86s (x17 over 4m56s)  kubelet, mrnd-13-46-2nbs4-master-0  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_apiserver-7d995cb749-jhhwz_openshift-apiserver_9dc37c02-4e93-4d6f-a6ae-c9a518b87f3b_0 for id c65fb95981a7e26a5d574ad6c193bfbcb1ab4450e34770e0d0393178e88ced0f: name is reserved



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Install a default IPI cluster on OSP (see the installer sketch below)
2. Wait for the installation to finish
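For reference, a minimal sketch of the installer invocation assumed by step 1 (the directory name and log level are illustrative; the install-config prompts select the OpenStack platform):

$ openshift-install create install-config --dir ./osp-ipi
$ openshift-install create cluster --dir ./osp-ipi --log-level=debug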

Actual results:
Pod initialization takes too much time to complete

Expected results:


Additional info:

Comment 1 Peter Hunt 2020-09-23 15:54:32 UTC
Can you get the CRI-O and kubelet logs from when this was happening?
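For reference, a hedged sketch of commands commonly used to collect those logs from the affected master (node name taken from the pod description above; exact flags can vary by release):

$ oc adm node-logs mrnd-13-46-2nbs4-master-0 -u crio > crio.log
$ oc adm node-logs mrnd-13-46-2nbs4-master-0 -u kubelet > kubelet.log
# alternatively, from a debug shell on the node
$ oc debug node/mrnd-13-46-2nbs4-master-0 -- chroot /host journalctl -u crio -u kubelet --no-pager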

Comment 3 Peter Hunt 2020-09-24 14:13:29 UTC
I am seeing a lot of 
`Multus: [openshift-kube-scheduler/installer-3-mrnd-16-46-un-ghfk7-master-0]: PollImmediate error waiting for ReadinessIndicatorFile: timed out waiting for the condition`
and 
`Error while removing pod from CNI network \"multus-cni-network\": netplugin failed with no error message: context deadline exceeded`

I believe the issues are due to the network plugins not responding to CRI-O's requests.

Multus team, can you PTAL?
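As a minimal sketch, one way to check whether the default network plugin ever wrote its CNI configuration for Multus to find (the paths are the usual OpenShift defaults and are assumptions here; the node name comes from the quoted message, and <multus-pod> is a placeholder):

$ oc debug node/mrnd-16-46-un-ghfk7-master-0 -- chroot /host ls -l /etc/cni/net.d /var/run/multus/cni/net.d
$ oc get pods -n openshift-multus -o wide | grep mrnd-16-46-un-ghfk7-master-0
$ oc logs -n openshift-multus <multus-pod>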

Comment 4 Douglas Smith 2020-09-24 14:53:41 UTC
This needs an error-message improvement (a change is pending upstream), but when this happens:

```
Multus: [openshift-apiserver/apiserver-c549ffd8d-65qr5]: PollImmediate error waiting for ReadinessIndicatorFile: timed out waiting for the condition
```

This means that Multus timed out waiting for the default network provider to be ready (typically openshift-sdn or OVN).

David, do you have the must-gather, or specifically the logs of the ovn-k or openshift-sdn pods (depending on the deployment type)?
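A hedged sketch of how those could be gathered (the daemonset and container names are the usual ones for this release and may differ):

$ oc adm must-gather --dest-dir=./must-gather
# openshift-sdn deployments
$ oc logs -n openshift-sdn ds/sdn -c sdn
# OVN-Kubernetes deployments
$ oc logs -n openshift-ovn-kubernetes ds/ovnkube-node -c ovnkube-node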

Comment 10 Ben Bennett 2020-09-25 20:49:30 UTC

*** This bug has been marked as a duplicate of bug 1785399 ***

Comment 12 David Sanz 2020-09-29 08:35:22 UTC
We hit this bug in more than 20% of installations on OSP 16; we need to retarget it to 4.6.

Comment 13 Andrew Stoycos 2020-09-30 14:06:33 UTC
@David Can you attach the OVS pod logs as well? (A complete must-gather would be even better, if possible.) We need to determine for sure whether this is a network plugin issue or something deeper (i.e., related to https://bugzilla.redhat.com/show_bug.cgi?id=1785399).

Comment 14 Andrew Stoycos 2020-09-30 14:56:29 UTC
Please make sure to try to recreate this with a nightly build newer than 4.6.0-0.nightly-2020-09-30-091659; some OVS daemon changes went in that could fix any network plugin timeout issues.

Comment 15 David Sanz 2020-10-01 10:14:50 UTC
Using 4.6.0-0.nightly-2020-10-01-070841, the pod creation problems now extend to the following pods:

NAMESPACE                                          NAME                                                     READY   STATUS              RESTARTS   AGE
openshift-controller-manager                       controller-manager-49x9c                                 0/1     ContainerCreating   0          15m
openshift-etcd                                     installer-3-mrnd-13-46-un-8b9v5-master-1                 0/1     ContainerCreating   0          11m
openshift-kube-apiserver                           installer-3-mrnd-13-46-un-8b9v5-master-1                 0/1     ContainerCreating   0          12m
openshift-kube-controller-manager                  installer-5-mrnd-13-46-un-8b9v5-master-1                 0/1     ContainerCreating   0          11m
openshift-kube-scheduler                           installer-6-mrnd-13-46-un-8b9v5-master-1                 0/1     ContainerCreating   0          11m


All with the same error:

  Type     Reason                  Age                   From                                   Message
  ----     ------                  ----                  ----                                   -------
  Warning  FailedCreatePodSandBox  5m53s                 kubelet, mrnd-13-46-un-8b9v5-master-1  Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Warning  FailedCreatePodSandBox  5m41s                 kubelet, mrnd-13-46-un-8b9v5-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_installer-3-mrnd-13-46-un-8b9v5-master-1_openshift-etcd_4c126433-9d7c-4a2e-a613-53b29464157d_0 for id c771d2dc56335f6e2bc7efa8437d9f08ea57d099df5df2320600d588799840b0: name is reserved
  Warning  FailedCreatePodSandBox  5m27s                 kubelet, mrnd-13-46-un-8b9v5-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_installer-3-mrnd-13-46-un-8b9v5-master-1_openshift-etcd_4c126433-9d7c-4a2e-a613-53b29464157d_0 for id e9dc5e2d3a7ea633d350ba1f239ee8c924d04401dd63f2854baa806ed485bf6c: name is reserved
  Warning  FailedCreatePodSandBox  5m16s                 kubelet, mrnd-13-46-un-8b9v5-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_installer-3-mrnd-13-46-un-8b9v5-master-1_openshift-etcd_4c126433-9d7c-4a2e-a613-53b29464157d_0 for id 40f0e89543abd727664eed9cbf14e3314abea72373e075fb285873434744e84c: name is reserved
  Warning  FailedCreatePodSandBox  5m4s                  kubelet, mrnd-13-46-un-8b9v5-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_installer-3-mrnd-13-46-un-8b9v5-master-1_openshift-etcd_4c126433-9d7c-4a2e-a613-53b29464157d_0 for id 3a063c50ccecad975a199d28859f1effffd997d07b66dd19d7657c039cee04c3: name is reserved
  Warning  FailedCreatePodSandBox  4m49s                 kubelet, mrnd-13-46-un-8b9v5-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_installer-3-mrnd-13-46-un-8b9v5-master-1_openshift-etcd_4c126433-9d7c-4a2e-a613-53b29464157d_0 for id 9fbf2ed500945c4f1e1909d80786d2ca050f9b72d8e0c1311c681a85908a19fb: name is reserved
  Warning  FailedCreatePodSandBox  4m34s                 kubelet, mrnd-13-46-un-8b9v5-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_installer-3-mrnd-13-46-un-8b9v5-master-1_openshift-etcd_4c126433-9d7c-4a2e-a613-53b29464157d_0 for id 52261871bcc83af63b858cd29a3525c8dc9f68c7a992d92dec855e8808de3528: name is reserved
  Warning  FailedCreatePodSandBox  4m20s                 kubelet, mrnd-13-46-un-8b9v5-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_installer-3-mrnd-13-46-un-8b9v5-master-1_openshift-etcd_4c126433-9d7c-4a2e-a613-53b29464157d_0 for id 7df244d61562b1d8b58d2fea8587b177d55e1b4ccd58801943f28aedfeaf1d0e: name is reserved
  Warning  FailedCreatePodSandBox  4m6s                  kubelet, mrnd-13-46-un-8b9v5-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_installer-3-mrnd-13-46-un-8b9v5-master-1_openshift-etcd_4c126433-9d7c-4a2e-a613-53b29464157d_0 for id eaff66c932d468fab8e09634ef22629e3f6bde4278534619dc47c492ca923afd: name is reserved
  Warning  FailedCreatePodSandBox  33s (x17 over 3m51s)  kubelet, mrnd-13-46-un-8b9v5-master-1  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_installer-3-mrnd-13-46-un-8b9v5-master-1_openshift-etcd_4c126433-9d7c-4a2e-a613-53b29464157d_0 for id 4189986ed0f143c3e660eb1eed4c43979447b6c3fd2773b97663dfcd3acd0b9f: name is reserved





[morenod@morenod-laptop ~]$ oc get pods -A -o wide | grep ovs | grep master-1
openshift-sdn                                      ovs-twn2g                                                1/1     Running             0          22m     192.168.0.173   mrnd-13-46-un-8b9v5-master-1         <none>           <none>


[morenod@morenod-laptop ~]$ oc logs -f ovs-twn2g -n openshift-sdn
openvswitch is running in systemd
==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:49:11.733Z|00007|dpdk|INFO|DPDK Disabled - Use other_config:dpdk-init to enable
2020-10-01T09:49:11.738Z|00008|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.13.2
2020-10-01T09:49:52.166Z|00009|memory|INFO|50044 kB peak resident set size after 40.5 seconds
2020-10-01T09:50:12.599Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2020-10-01T09:50:12.646Z|00002|ovs_numa|INFO|Discovered 8 CPU cores on NUMA node 0
2020-10-01T09:50:12.646Z|00003|ovs_numa|INFO|Discovered 1 NUMA nodes and 8 CPU cores
2020-10-01T09:50:12.646Z|00004|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2020-10-01T09:50:12.646Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2020-10-01T09:50:12.648Z|00006|dpdk|INFO|DPDK Disabled - Use other_config:dpdk-init to enable
2020-10-01T09:50:12.651Z|00007|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.13.2

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:50:12.413Z|00005|jsonrpc|WARN|unix#1: receive error: Connection reset by peer
2020-10-01T09:50:12.413Z|00006|reconnect|WARN|unix#1: connection dropped (Connection reset by peer)
2020-10-01T09:50:12.424Z|00007|jsonrpc|WARN|unix#2: receive error: Connection reset by peer
2020-10-01T09:50:12.424Z|00008|reconnect|WARN|unix#2: connection dropped (Connection reset by peer)
2020-10-01T09:50:12.455Z|00009|jsonrpc|WARN|unix#4: receive error: Connection reset by peer
2020-10-01T09:50:12.455Z|00010|reconnect|WARN|unix#4: connection dropped (Connection reset by peer)
2020-10-01T09:50:13.188Z|00011|jsonrpc|WARN|unix#9: receive error: Connection reset by peer
2020-10-01T09:50:13.188Z|00012|reconnect|WARN|unix#9: connection dropped (Connection reset by peer)
2020-10-01T09:50:22.348Z|00013|memory|INFO|7580 kB peak resident set size after 10.0 seconds
2020-10-01T09:50:22.348Z|00014|memory|INFO|cells:39 monitors:3 sessions:2
2020-10-01T09:51:24.275Z|00015|jsonrpc|WARN|unix#11: receive error: Connection reset by peer
2020-10-01T09:51:24.275Z|00016|reconnect|WARN|unix#11: connection dropped (Connection reset by peer)
2020-10-01T09:51:24.289Z|00017|jsonrpc|WARN|unix#12: receive error: Connection reset by peer
2020-10-01T09:51:24.289Z|00018|reconnect|WARN|unix#12: connection dropped (Connection reset by peer)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:51:24.298Z|00008|memory|INFO|50320 kB peak resident set size after 71.7 seconds
2020-10-01T09:51:24.313Z|00009|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation
2020-10-01T09:51:24.313Z|00010|ofproto_dpif|INFO|system@ovs-system: VLAN header stack length probed as 2
2020-10-01T09:51:24.313Z|00011|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 1
2020-10-01T09:51:24.313Z|00012|ofproto_dpif|INFO|system@ovs-system: Datapath supports truncate action
2020-10-01T09:51:24.313Z|00013|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids
2020-10-01T09:51:24.313Z|00014|ofproto_dpif|INFO|system@ovs-system: Datapath supports clone action
2020-10-01T09:51:24.313Z|00015|ofproto_dpif|INFO|system@ovs-system: Max sample nesting level probed as 10
2020-10-01T09:51:24.313Z|00016|ofproto_dpif|INFO|system@ovs-system: Datapath supports eventmask in conntrack action
2020-10-01T09:51:24.313Z|00017|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_clear action
2020-10-01T09:51:24.314Z|00018|ofproto_dpif|INFO|system@ovs-system: Max dp_hash algorithm probed to be 0
2020-10-01T09:51:24.314Z|00019|ofproto_dpif|INFO|system@ovs-system: Datapath supports check_pkt_len action
2020-10-01T09:51:24.314Z|00020|ofproto_dpif|INFO|system@ovs-system: Datapath does not support timeout policy in conntrack action
2020-10-01T09:51:24.314Z|00021|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state
2020-10-01T09:51:24.314Z|00022|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_zone
2020-10-01T09:51:24.314Z|00023|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_mark
2020-10-01T09:51:24.314Z|00024|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_label
2020-10-01T09:51:24.314Z|00025|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state_nat
2020-10-01T09:51:24.314Z|00026|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_orig_tuple
2020-10-01T09:51:24.314Z|00027|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_orig_tuple6
2020-10-01T09:51:24.314Z|00028|ofproto_dpif|INFO|system@ovs-system: Datapath does not support IPv6 ND Extensions
2020-10-01T09:51:24.352Z|00029|bridge|INFO|bridge br0: added interface br0 on port 65534
2020-10-01T09:51:24.352Z|00030|bridge|INFO|bridge br0: using datapath ID 0000ba0052709f48
2020-10-01T09:51:24.353Z|00031|connmgr|INFO|br0: added service controller "punix:/var/run/openvswitch/br0.mgmt"
2020-10-01T09:51:24.400Z|00032|bridge|INFO|bridge br0: added interface vxlan0 on port 1
2020-10-01T09:51:24.425Z|00033|netdev|WARN|failed to set MTU for network device tun0: No such device
2020-10-01T09:51:24.428Z|00034|bridge|INFO|bridge br0: added interface tun0 on port 2
2020-10-01T09:51:24.460Z|00035|connmgr|INFO|br0<->unix#3: 46 flow_mods in the last 0 s (46 adds)
2020-10-01T09:51:24.490Z|00036|connmgr|INFO|br0<->unix#6: 2 flow_mods in the last 0 s (2 adds)
2020-10-01T09:51:24.534Z|00037|connmgr|INFO|br0<->unix#10: 1 flow_mods in the last 0 s (1 adds)
2020-10-01T09:51:24.733Z|00038|connmgr|INFO|br0<->unix#18: 3 flow_mods in the last 0 s (3 adds)
2020-10-01T09:51:24.756Z|00039|connmgr|INFO|br0<->unix#21: 1 flow_mods in the last 0 s (1 adds)
2020-10-01T09:51:24.778Z|00040|connmgr|INFO|br0<->unix#24: 3 flow_mods in the last 0 s (3 adds)
2020-10-01T09:51:24.799Z|00041|connmgr|INFO|br0<->unix#27: 1 flow_mods in the last 0 s (1 adds)
2020-10-01T09:51:39.871Z|00042|bridge|INFO|bridge br0: added interface veth2a690890 on port 3
2020-10-01T09:51:39.898Z|00043|connmgr|INFO|br0<->unix#30: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:51:39.943Z|00044|connmgr|INFO|br0<->unix#34: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:39.945Z|00045|connmgr|INFO|br0<->unix#36: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:39.968Z|00046|bridge|INFO|bridge br0: added interface veth2fb158b1 on port 4
2020-10-01T09:51:40.000Z|00047|connmgr|INFO|br0<->unix#39: 5 flow_mods in the last 0 s (5 adds)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:51:40.033Z|00019|jsonrpc|WARN|unix#33: send error: Broken pipe
2020-10-01T09:51:40.033Z|00020|reconnect|WARN|unix#33: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:51:40.066Z|00048|connmgr|INFO|br0<->unix#43: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:40.066Z|00049|connmgr|INFO|br0<->unix#45: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:40.094Z|00050|bridge|INFO|bridge br0: added interface veth5948b28d on port 5
2020-10-01T09:51:40.187Z|00051|connmgr|INFO|br0<->unix#48: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:51:40.228Z|00052|connmgr|INFO|br0<->unix#51: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:40.251Z|00053|bridge|INFO|bridge br0: added interface vethbd70276f on port 6
2020-10-01T09:51:40.280Z|00054|connmgr|INFO|br0<->unix#54: 5 flow_mods in the last 0 s (5 adds)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:51:40.288Z|00021|jsonrpc|WARN|unix#41: receive error: Connection reset by peer
2020-10-01T09:51:40.288Z|00022|reconnect|WARN|unix#41: connection dropped (Connection reset by peer)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:51:40.320Z|00055|connmgr|INFO|br0<->unix#57: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:40.336Z|00056|bridge|INFO|bridge br0: added interface vethbd096485 on port 7
2020-10-01T09:51:40.370Z|00057|connmgr|INFO|br0<->unix#60: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:51:40.412Z|00058|connmgr|INFO|br0<->unix#63: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:40.429Z|00059|bridge|INFO|bridge br0: added interface veth26db4011 on port 8

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:51:40.435Z|00023|jsonrpc|WARN|unix#50: receive error: Connection reset by peer
2020-10-01T09:51:40.436Z|00024|reconnect|WARN|unix#50: connection dropped (Connection reset by peer)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:51:40.456Z|00060|connmgr|INFO|br0<->unix#66: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:51:40.508Z|00061|connmgr|INFO|br0<->unix#69: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:40.529Z|00062|bridge|INFO|bridge br0: added interface veth0079f7a1 on port 9
2020-10-01T09:51:40.566Z|00063|connmgr|INFO|br0<->unix#72: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:51:40.601Z|00064|connmgr|INFO|br0<->unix#75: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:40.619Z|00065|bridge|INFO|bridge br0: added interface veth37a47698 on port 10
2020-10-01T09:51:40.646Z|00066|connmgr|INFO|br0<->unix#78: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:51:40.686Z|00067|connmgr|INFO|br0<->unix#81: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:40.710Z|00068|bridge|INFO|bridge br0: added interface veth8795514d on port 11

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:51:40.718Z|00025|jsonrpc|WARN|unix#65: receive error: Connection reset by peer
2020-10-01T09:51:40.718Z|00026|reconnect|WARN|unix#65: connection dropped (Connection reset by peer)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:51:40.739Z|00069|connmgr|INFO|br0<->unix#84: 5 flow_mods in the last 0 s (5 adds)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:51:40.746Z|00027|reconnect|WARN|unix#66: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:51:40.778Z|00070|connmgr|INFO|br0<->unix#87: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:40.795Z|00071|bridge|INFO|bridge br0: added interface vethbcb558e6 on port 12
2020-10-01T09:51:40.822Z|00072|connmgr|INFO|br0<->unix#90: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:51:40.860Z|00073|connmgr|INFO|br0<->unix#93: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:41.087Z|00074|connmgr|INFO|br0<->unix#96: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:41.111Z|00075|connmgr|INFO|br0<->unix#99: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:41.134Z|00076|connmgr|INFO|br0<->unix#102: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:41.155Z|00077|connmgr|INFO|br0<->unix#105: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:41.176Z|00078|connmgr|INFO|br0<->unix#108: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:41.199Z|00079|connmgr|INFO|br0<->unix#111: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:41.219Z|00080|connmgr|INFO|br0<->unix#114: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:41.241Z|00081|connmgr|INFO|br0<->unix#117: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:50.792Z|00082|bridge|INFO|bridge br0: added interface veth4dc633cd on port 13
2020-10-01T09:51:50.861Z|00083|connmgr|INFO|br0<->unix#120: 5 flow_mods in the last 0 s (5 adds)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:51:50.897Z|00028|jsonrpc|WARN|Dropped 1 log messages in last 10 seconds (most recently, 10 seconds ago) due to excessive rate
2020-10-01T09:51:50.897Z|00029|jsonrpc|WARN|unix#80: receive error: Connection reset by peer
2020-10-01T09:51:50.897Z|00030|reconnect|WARN|unix#80: connection dropped (Connection reset by peer)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:51:50.964Z|00084|connmgr|INFO|br0<->unix#125: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:50.964Z|00085|connmgr|INFO|br0<->unix#126: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:50.994Z|00086|bridge|INFO|bridge br0: added interface vethc4979dbf on port 14

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:51:51.007Z|00031|reconnect|WARN|unix#82: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:51:51.041Z|00087|connmgr|INFO|br0<->unix#129: 5 flow_mods in the last 0 s (5 adds)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:51:51.051Z|00032|reconnect|WARN|unix#83: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:51:51.116Z|00088|connmgr|INFO|br0<->unix#133: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:51.126Z|00089|connmgr|INFO|br0<->unix#135: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:51.166Z|00090|bridge|INFO|bridge br0: added interface vethbb634c38 on port 15
2020-10-01T09:51:51.210Z|00091|connmgr|INFO|br0<->unix#138: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:51:51.268Z|00092|connmgr|INFO|br0<->unix#141: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:51.303Z|00093|bridge|INFO|bridge br0: added interface vethaaecf1e0 on port 16
2020-10-01T09:51:51.338Z|00094|connmgr|INFO|br0<->unix#144: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:51:51.389Z|00095|connmgr|INFO|br0<->unix#147: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:52.147Z|00096|connmgr|INFO|br0<->unix#150: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:51:57.969Z|00097|bridge|INFO|bridge br0: added interface veth436a7322 on port 17
2020-10-01T09:51:58.008Z|00098|connmgr|INFO|br0<->unix#156: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:51:58.062Z|00099|connmgr|INFO|br0<->unix#161: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:51:58.062Z|00100|connmgr|INFO|br0<->unix#162: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:52:04.762Z|00101|bridge|INFO|bridge br0: added interface vethb5aee088 on port 18
2020-10-01T09:52:04.796Z|00102|connmgr|INFO|br0<->unix#165: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:52:04.839Z|00103|connmgr|INFO|br0<->unix#168: 2 flow_mods in the last 0 s (2 deletes)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:52:09.086Z|00033|jsonrpc|WARN|Dropped 2 log messages in last 18 seconds (most recently, 18 seconds ago) due to excessive rate
2020-10-01T09:52:09.086Z|00034|jsonrpc|WARN|unix#111: send error: Broken pipe
2020-10-01T09:52:09.086Z|00035|reconnect|WARN|unix#111: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:52:11.704Z|00104|bridge|INFO|bridge br0: added interface vethb1c1441a on port 19
2020-10-01T09:52:11.750Z|00105|connmgr|INFO|br0<->unix#171: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:52:11.794Z|00106|connmgr|INFO|br0<->unix#175: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:52:11.839Z|00107|bridge|INFO|bridge br0: added interface veth8e374e65 on port 20
2020-10-01T09:52:11.844Z|00108|connmgr|INFO|br0<->unix#177: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:52:11.869Z|00109|connmgr|INFO|br0<->unix#180: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:52:11.916Z|00110|connmgr|INFO|br0<->unix#184: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:52:11.918Z|00111|connmgr|INFO|br0<->unix#186: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:52:12.481Z|00112|bridge|INFO|bridge br0: added interface veth8568d585 on port 21
2020-10-01T09:52:12.516Z|00113|connmgr|INFO|br0<->unix#189: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:52:12.555Z|00114|connmgr|INFO|br0<->unix#192: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:52:15.390Z|00115|bridge|INFO|bridge br0: added interface veth4c170ca8 on port 22
2020-10-01T09:52:15.430Z|00116|connmgr|INFO|br0<->unix#195: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:52:15.479Z|00117|connmgr|INFO|br0<->unix#199: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:52:15.480Z|00118|connmgr|INFO|br0<->unix#201: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:52:19.591Z|00119|bridge|INFO|bridge br0: added interface veth39e4529a on port 23
2020-10-01T09:52:19.644Z|00120|connmgr|INFO|br0<->unix#204: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:52:19.702Z|00121|connmgr|INFO|br0<->unix#208: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:52:19.707Z|00122|connmgr|INFO|br0<->unix#210: 2 flow_mods in the last 0 s (2 deletes)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:52:29.091Z|00036|jsonrpc|WARN|unix#142: send error: Broken pipe
2020-10-01T09:52:29.092Z|00037|reconnect|WARN|unix#142: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:52:31.727Z|00123|bridge|INFO|bridge br0: added interface veth897a989d on port 24
2020-10-01T09:52:31.754Z|00124|connmgr|INFO|br0<->unix#216: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:52:31.801Z|00125|connmgr|INFO|br0<->unix#221: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:52:31.801Z|00126|connmgr|INFO|br0<->unix#222: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:52:33.795Z|00127|bridge|INFO|bridge br0: added interface vethafc8bcb3 on port 25
2020-10-01T09:52:33.823Z|00128|connmgr|INFO|br0<->unix#225: 5 flow_mods in the last 0 s (5 adds)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:52:33.835Z|00038|jsonrpc|WARN|unix#151: send error: Broken pipe
2020-10-01T09:52:33.835Z|00039|reconnect|WARN|unix#151: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:52:33.863Z|00129|connmgr|INFO|br0<->unix#228: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:52:33.884Z|00130|connmgr|INFO|br0<->unix#231: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:52:33.910Z|00131|connmgr|INFO|br0<->unix#234: 4 flow_mods in the last 0 s (4 deletes)
2020-10-01T09:52:33.936Z|00132|bridge|INFO|bridge br0: deleted interface veth897a989d on port 24

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:52:34.059Z|00040|reconnect|WARN|unix#158: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:52:43.771Z|00133|bridge|INFO|bridge br0: added interface vethdc5f58f2 on port 26

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:52:43.781Z|00041|jsonrpc|WARN|Dropped 1 log messages in last 9 seconds (most recently, 9 seconds ago) due to excessive rate
2020-10-01T09:52:43.781Z|00042|jsonrpc|WARN|unix#161: send error: Broken pipe
2020-10-01T09:52:43.781Z|00043|reconnect|WARN|unix#161: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:52:43.803Z|00134|connmgr|INFO|br0<->unix#237: 5 flow_mods in the last 0 s (5 adds)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:52:43.812Z|00044|reconnect|WARN|unix#162: connection dropped (Broken pipe)
2020-10-01T09:52:43.827Z|00045|reconnect|WARN|unix#164: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:52:43.850Z|00135|connmgr|INFO|br0<->unix#240: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:53:04.284Z|00136|bridge|INFO|bridge br0: added interface veth9bd50ac5 on port 27
2020-10-01T09:53:04.322Z|00137|connmgr|INFO|br0<->unix#246: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:53:04.381Z|00138|connmgr|INFO|br0<->unix#250: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:53:04.382Z|00139|connmgr|INFO|br0<->unix#252: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:53:10.727Z|00140|connmgr|INFO|br0<->unix#255: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:53:10.778Z|00141|connmgr|INFO|br0<->unix#258: 4 flow_mods in the last 0 s (4 deletes)
2020-10-01T09:53:10.806Z|00142|bridge|INFO|bridge br0: deleted interface veth9bd50ac5 on port 27
2020-10-01T09:53:15.323Z|00143|bridge|INFO|bridge br0: added interface veth1c283b07 on port 28
2020-10-01T09:53:15.353Z|00144|connmgr|INFO|br0<->unix#261: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:53:15.402Z|00145|connmgr|INFO|br0<->unix#266: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:53:15.403Z|00146|connmgr|INFO|br0<->unix#267: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:53:22.557Z|00147|connmgr|INFO|br0<->unix#270: 2 flow_mods in the last 0 s (2 deletes)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:53:22.567Z|00046|jsonrpc|WARN|Dropped 2 log messages in last 39 seconds (most recently, 39 seconds ago) due to excessive rate
2020-10-01T09:53:22.567Z|00047|jsonrpc|WARN|unix#190: receive error: Connection reset by peer
2020-10-01T09:53:22.567Z|00048|reconnect|WARN|unix#190: connection dropped (Connection reset by peer)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:53:22.591Z|00148|connmgr|INFO|br0<->unix#273: 4 flow_mods in the last 0 s (4 deletes)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:53:22.608Z|00049|jsonrpc|WARN|unix#192: receive error: Connection reset by peer
2020-10-01T09:53:22.608Z|00050|reconnect|WARN|unix#192: connection dropped (Connection reset by peer)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:53:22.624Z|00149|bridge|INFO|bridge br0: deleted interface vethafc8bcb3 on port 25
2020-10-01T09:53:28.863Z|00150|connmgr|INFO|br0<->unix#280: 2 flow_mods in the last 0 s (2 deletes)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:53:28.876Z|00051|jsonrpc|WARN|unix#198: send error: Broken pipe
2020-10-01T09:53:28.876Z|00052|reconnect|WARN|unix#198: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:53:28.899Z|00151|connmgr|INFO|br0<->unix#283: 4 flow_mods in the last 0 s (4 deletes)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:53:28.911Z|00053|jsonrpc|WARN|unix#199: send error: Broken pipe
2020-10-01T09:53:28.911Z|00054|reconnect|WARN|unix#199: connection dropped (Broken pipe)
2020-10-01T09:53:28.916Z|00055|reconnect|WARN|unix#200: connection dropped (Broken pipe)
2020-10-01T09:53:28.923Z|00056|reconnect|WARN|unix#201: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:53:28.927Z|00152|bridge|INFO|bridge br0: deleted interface veth1c283b07 on port 28
2020-10-01T09:53:34.309Z|00153|bridge|INFO|bridge br0: added interface veth67d0ca3b on port 29
2020-10-01T09:53:34.341Z|00154|connmgr|INFO|br0<->unix#286: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:53:34.387Z|00155|connmgr|INFO|br0<->unix#289: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:53:35.274Z|00156|bridge|INFO|bridge br0: added interface vethfe2f5c8e on port 30
2020-10-01T09:53:35.305Z|00157|connmgr|INFO|br0<->unix#292: 5 flow_mods in the last 0 s (5 adds)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:53:35.314Z|00057|reconnect|WARN|unix#212: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:53:35.348Z|00158|connmgr|INFO|br0<->unix#295: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:53:40.917Z|00159|connmgr|INFO|br0<->unix#298: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:53:40.944Z|00160|connmgr|INFO|br0<->unix#301: 4 flow_mods in the last 0 s (4 deletes)
2020-10-01T09:53:40.972Z|00161|bridge|INFO|bridge br0: deleted interface vethfe2f5c8e on port 30
2020-10-01T09:53:46.492Z|00162|bridge|INFO|bridge br0: added interface veth1a2006db on port 31
2020-10-01T09:53:46.528Z|00163|connmgr|INFO|br0<->unix#304: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:53:46.576Z|00164|connmgr|INFO|br0<->unix#308: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:53:46.578Z|00165|connmgr|INFO|br0<->unix#310: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:53:52.983Z|00166|connmgr|INFO|br0<->unix#313: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:53:53.018Z|00167|connmgr|INFO|br0<->unix#316: 4 flow_mods in the last 0 s (4 deletes)
2020-10-01T09:53:53.054Z|00168|bridge|INFO|bridge br0: deleted interface veth1a2006db on port 31
2020-10-01T09:53:57.111Z|00169|bridge|INFO|bridge br0: added interface vethd9416ca6 on port 32
2020-10-01T09:53:57.143Z|00170|connmgr|INFO|br0<->unix#322: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:53:57.185Z|00171|connmgr|INFO|br0<->unix#325: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:53:59.900Z|00172|bridge|INFO|bridge br0: added interface vethf04d93fc on port 33
2020-10-01T09:53:59.950Z|00173|connmgr|INFO|br0<->unix#328: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:54:00.022Z|00174|connmgr|INFO|br0<->unix#332: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:54:00.031Z|00175|connmgr|INFO|br0<->unix#334: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:54:02.042Z|00176|connmgr|INFO|br0<->unix#337: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:54:02.084Z|00177|connmgr|INFO|br0<->unix#340: 4 flow_mods in the last 0 s (4 deletes)
2020-10-01T09:54:02.120Z|00178|bridge|INFO|bridge br0: deleted interface vethd9416ca6 on port 32
2020-10-01T09:54:03.108Z|00179|bridge|INFO|bridge br0: added interface vethde136a96 on port 34
2020-10-01T09:54:03.144Z|00180|connmgr|INFO|br0<->unix#343: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:54:03.199Z|00181|connmgr|INFO|br0<->unix#348: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:54:03.201Z|00182|connmgr|INFO|br0<->unix#349: 2 flow_mods in the last 0 s (1 adds, 1 deletes)
2020-10-01T09:54:10.089Z|00183|connmgr|INFO|br0<->unix#352: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:54:10.138Z|00184|connmgr|INFO|br0<->unix#355: 4 flow_mods in the last 0 s (4 deletes)
2020-10-01T09:54:10.182Z|00185|bridge|INFO|bridge br0: deleted interface vethde136a96 on port 34
2020-10-01T09:54:11.190Z|00186|bridge|INFO|bridge br0: added interface vethdd44deb6 on port 35
2020-10-01T09:54:11.232Z|00187|connmgr|INFO|br0<->unix#358: 5 flow_mods in the last 0 s (5 adds)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:54:11.254Z|00058|jsonrpc|WARN|Dropped 3 log messages in last 43 seconds (most recently, 36 seconds ago) due to excessive rate
2020-10-01T09:54:11.254Z|00059|jsonrpc|WARN|unix#267: send error: Broken pipe
2020-10-01T09:54:11.254Z|00060|reconnect|WARN|unix#267: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:54:11.295Z|00188|connmgr|INFO|br0<->unix#361: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:54:17.201Z|00189|connmgr|INFO|br0<->unix#364: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:54:17.242Z|00190|connmgr|INFO|br0<->unix#367: 4 flow_mods in the last 0 s (4 deletes)
2020-10-01T09:54:17.283Z|00191|bridge|INFO|bridge br0: deleted interface vethdd44deb6 on port 35
2020-10-01T09:54:32.673Z|00192|bridge|INFO|bridge br0: added interface veth310031aa on port 36
2020-10-01T09:54:32.712Z|00193|connmgr|INFO|br0<->unix#373: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:54:32.753Z|00194|connmgr|INFO|br0<->unix#376: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:54:38.266Z|00195|connmgr|INFO|br0<->unix#379: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:54:38.305Z|00196|connmgr|INFO|br0<->unix#382: 4 flow_mods in the last 0 s (4 deletes)
2020-10-01T09:54:38.347Z|00197|bridge|INFO|bridge br0: deleted interface veth310031aa on port 36
2020-10-01T09:54:43.331Z|00198|connmgr|INFO|br0<->unix#385: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:54:43.361Z|00199|connmgr|INFO|br0<->unix#388: 4 flow_mods in the last 0 s (4 deletes)
2020-10-01T09:54:43.391Z|00200|bridge|INFO|bridge br0: deleted interface veth67d0ca3b on port 29

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:55:59.069Z|00061|jsonrpc|WARN|unix#318: send error: Broken pipe
2020-10-01T09:55:59.069Z|00062|reconnect|WARN|unix#318: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:56:03.133Z|00201|bridge|INFO|bridge br0: added interface veth76fa0912 on port 37
2020-10-01T09:56:03.170Z|00202|connmgr|INFO|br0<->unix#401: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:56:03.209Z|00203|connmgr|INFO|br0<->unix#404: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:56:08.667Z|00204|connmgr|INFO|br0<->unix#407: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:56:08.712Z|00205|connmgr|INFO|br0<->unix#410: 4 flow_mods in the last 0 s (4 deletes)
2020-10-01T09:56:08.739Z|00206|bridge|INFO|bridge br0: deleted interface veth76fa0912 on port 37
2020-10-01T09:56:14.959Z|00207|connmgr|INFO|br0<->unix#413: 3 flow_mods in the last 0 s (3 adds)
2020-10-01T09:56:14.982Z|00208|connmgr|INFO|br0<->unix#416: 1 flow_mods in the last 0 s (1 adds)

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:56:44.090Z|00063|jsonrpc|WARN|unix#339: send error: Broken pipe
2020-10-01T09:56:44.091Z|00064|reconnect|WARN|unix#339: connection dropped (Broken pipe)

==> /host/var/log/openvswitch/ovs-vswitchd.log <==
2020-10-01T09:57:07.348Z|00209|bridge|INFO|bridge br0: added interface veth896712d7 on port 38
2020-10-01T09:57:07.385Z|00210|connmgr|INFO|br0<->unix#425: 5 flow_mods in the last 0 s (5 adds)
2020-10-01T09:57:07.435Z|00211|connmgr|INFO|br0<->unix#428: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:57:12.965Z|00212|connmgr|INFO|br0<->unix#431: 2 flow_mods in the last 0 s (2 deletes)
2020-10-01T09:57:12.999Z|00213|connmgr|INFO|br0<->unix#434: 4 flow_mods in the last 0 s (4 deletes)
2020-10-01T09:57:13.027Z|00214|bridge|INFO|bridge br0: deleted interface veth896712d7 on port 38

==> /host/var/log/openvswitch/ovsdb-server.log <==
2020-10-01T09:57:54.112Z|00065|jsonrpc|WARN|unix#367: send error: Broken pipe
2020-10-01T09:57:54.112Z|00066|reconnect|WARN|unix#367: connection dropped (Broken pipe)
2020-10-01T09:58:25.126Z|00067|jsonrpc|WARN|unix#377: receive error: Connection reset by peer
2020-10-01T09:58:25.127Z|00068|reconnect|WARN|unix#377: connection dropped (Connection reset by peer)
2020-10-01T09:59:49.092Z|00069|jsonrpc|WARN|unix#398: send error: Broken pipe
2020-10-01T09:59:49.092Z|00070|reconnect|WARN|unix#398: connection dropped (Broken pipe)
2020-10-01T10:01:59.073Z|00071|jsonrpc|WARN|unix#434: send error: Broken pipe
2020-10-01T10:01:59.073Z|00072|reconnect|WARN|unix#434: connection dropped (Broken pipe)
2020-10-01T10:02:34.080Z|00073|jsonrpc|WARN|unix#443: send error: Broken pipe
2020-10-01T10:02:34.081Z|00074|reconnect|WARN|unix#443: connection dropped (Broken pipe)
2020-10-01T10:02:44.079Z|00075|jsonrpc|WARN|unix#445: send error: Broken pipe
2020-10-01T10:02:44.079Z|00076|reconnect|WARN|unix#445: connection dropped (Broken pipe)
2020-10-01T10:04:44.079Z|00077|jsonrpc|WARN|unix#477: send error: Broken pipe
2020-10-01T10:04:44.080Z|00078|reconnect|WARN|unix#477: connection dropped (Broken pipe)
2020-10-01T10:05:39.078Z|00079|jsonrpc|WARN|unix#492: send error: Broken pipe
2020-10-01T10:05:39.078Z|00080|reconnect|WARN|unix#492: connection dropped (Broken pipe)
2020-10-01T10:07:04.073Z|00081|jsonrpc|WARN|unix#515: send error: Broken pipe
2020-10-01T10:07:04.074Z|00082|reconnect|WARN|unix#515: connection dropped (Broken pipe)
2020-10-01T10:10:29.078Z|00083|jsonrpc|WARN|unix#570: send error: Broken pipe
2020-10-01T10:10:29.078Z|00084|reconnect|WARN|unix#570: connection dropped (Broken pipe)
2020-10-01T10:13:29.070Z|00085|jsonrpc|WARN|unix#618: send error: Broken pipe
2020-10-01T10:13:29.071Z|00086|reconnect|WARN|unix#618: connection dropped (Broken pipe)
2020-10-01T10:13:59.067Z|00087|jsonrpc|WARN|unix#626: send error: Broken pipe
2020-10-01T10:13:59.067Z|00088|reconnect|WARN|unix#626: connection dropped (Broken pipe)
2020-10-01T10:14:04.072Z|00089|jsonrpc|WARN|unix#627: send error: Broken pipe
2020-10-01T10:14:04.072Z|00090|reconnect|WARN|unix#627: connection dropped (Broken pipe)

Comment 16 Andrew Stoycos 2020-10-01 14:46:56 UTC
Thanks David,

OVS looks like it is up, which points away from the problems seen in the previously attached CRI-O logs:

-> `Sep 24 08:29:53 mrnd-16-46-un-ghfk7-master-0 hyperkube[1795]: I0924 08:26:28.415967   36036 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)
Sep 24 08:29:53 mrnd-16-46-un-ghfk7-master-0 hyperkube[1795]: F0924 08:26:28.416008   36036 cmd.go:111] Failed to start sdn: node SDN setup failed: timed out waiting for the condition`


Can you post the CRI-O + kubelet logs for this current scenario as well?

Comment 17 Dan Williams 2020-10-01 20:12:25 UTC
@David, if you can get the ovnkube-node and ovn-controller container logs for the "Kubelet may be retrying requests that are timing out in CRI-O due to system load: error reserving pod name k8s_installer-3-mrnd-13-46-un-8b9v5-master-1_openshift-etcd_4c126433-9d7c-4a2e-a613-53b29464157d_0 for id c771d2dc56335f6e2bc7efa8437d9f08ea57d099df5df2320600d588799840b0: name is reserved" failures too, that would be very useful.
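A minimal sketch of pulling those two container logs from the ovnkube-node pod on the affected node, assuming an OVN-Kubernetes deployment (<ovnkube-node-pod> is a placeholder):

$ oc get pods -n openshift-ovn-kubernetes -o wide | grep master-1
$ oc logs -n openshift-ovn-kubernetes <ovnkube-node-pod> -c ovnkube-node > ovnkube-node.log
$ oc logs -n openshift-ovn-kubernetes <ovnkube-node-pod> -c ovn-controller > ovn-controller.log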

Comment 19 Dan Williams 2020-10-02 04:55:56 UTC
(In reply to Joseph Callen from comment #18)
> Was just going through failures in vsphere getting similar issues as
> comment: #15 (https://bugzilla.redhat.com/show_bug.cgi?id=1881868#c15)
> 
> https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/
> periodic-ci-openshift-release-master-ocp-4.6-e2e-vsphere/1311643284136792064/
> artifacts/e2e-vsphere/ipi-install-vsphere/
> See log bundle.

That bundle only has one of those messages:

control-plane/172.31.255.221/journals/kubelet.log:Oct 01 12:37:36 ci-op-v038cp0k-0aec4-sl2sd-master-1 hyperkube[1887]: I1001 12:37:36.202218    1887 event.go:291] "Event occurred" object="openshift-controller-manager/controller-manager-zjddx" kind="Pod" apiVersion="v1" type="Warning" reason="FailedCreatePodSandBox" message="Failed to create pod sandbox: rpc error: code = Unknown desc = error reading container (probably exited) json message: EOF"

But the pod is only in the API for *0.5* seconds, so no wonder we're getting error messages; kubelet/CRI-O/etc. can't work that fast:

control-plane/172.31.255.221/journals/kubelet.log:Oct 01 12:37:34 ci-op-v038cp0k-0aec4-sl2sd-master-1 hyperkube[1887]: I1001 12:37:34.001806    1887 config.go:383] Receiving a new pod "controller-manager-zjddx_openshift-controller-manager(1cc59d45-b264-4563-b8ad-95e05efbccc7)"
control-plane/172.31.255.221/journals/kubelet.log:Oct 01 12:37:34 ci-op-v038cp0k-0aec4-sl2sd-master-1 hyperkube[1887]: I1001 12:37:34.002098    1887 kubelet.go:1834] SyncLoop (ADD, "api"): "controller-manager-zjddx_openshift-controller-manager(1cc59d45-b264-4563-b8ad-95e05efbccc7)"
control-plane/172.31.255.221/journals/kubelet.log:Oct 01 12:37:34 ci-op-v038cp0k-0aec4-sl2sd-master-1 hyperkube[1887]: I1001 12:37:34.332308    1887 kuberuntime_manager.go:425] No sandbox for pod "controller-manager-zjddx_openshift-controller-manager(1cc59d45-b264-4563-b8ad-95e05efbccc7)" can be found. Need to start a new one
control-plane/172.31.255.221/journals/kubelet.log:Oct 01 12:37:34 ci-op-v038cp0k-0aec4-sl2sd-master-1 hyperkube[1887]: I1001 12:37:34.332556    1887 kuberuntime_manager.go:724] Creating PodSandbox for pod "controller-manager-zjddx_openshift-controller-manager(1cc59d45-b264-4563-b8ad-95e05efbccc7)"
control-plane/172.31.255.221/journals/kubelet.log:Oct 01 12:37:34 ci-op-v038cp0k-0aec4-sl2sd-master-1 hyperkube[1887]: I1001 12:37:34.525839    1887 kubelet.go:1850] SyncLoop (DELETE, "api"): "controller-manager-zjddx_openshift-controller-manager(1cc59d45-b264-4563-b8ad-95e05efbccc7)"

The messages should be harmless, though, and the pod should get retried. If it doesn't get retried, that is a kubelet (or operator) bug.
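A minimal sketch of how one might confirm the deleted pod was replaced and the sandbox creation retried, assuming access to the affected cluster (namespace taken from the log lines above):

$ oc get pods -n openshift-controller-manager -o wide
$ oc get events -n openshift-controller-manager --sort-by=.lastTimestamp | grep -i sandbox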

Comment 22 David Sanz 2020-10-02 11:50:51 UTC
Created attachment 1718422 [details]
last crio and kubelet

Comment 23 Dan Williams 2020-10-02 13:23:40 UTC
(In reply to David Sanz from comment #22)
> Created attachment 1718422 [details]
> last crio and kubelet

Any chance to grab the sdn-ovs and sdn containers (instead of ovnkube-node)?

Comment 26 Dan Winship 2020-10-02 20:03:50 UTC
So are all OSP installs failing for everyone? If not, what is it that makes your OSP installs different from the ones that succeed?

