Red Hat Bugzilla – Attachment 1520616 Details for Bug 1654044
OCP 3.11: pods end up in CrashLoopBackOff state after a rolling reboot of the node
Testing log
Description: Testing log
Filename: Testing-log.txt
MIME Type: text/plain
Creator: Weibin Liang
Created: 2019-01-14 22:03:03 UTC
Size: 32.19 KB
[root@ip-172-18-1-141 ec2-user]# oc get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default docker-registry-1-mkqkp 1/1 Running 0 1h 10.130.0.4 ip-172-18-12-157.ec2.internal <none>
default registry-console-1-n827c 1/1 Running 1 1h 10.128.0.12 ip-172-18-1-141.ec2.internal <none>
default router-1-wlsml 1/1 Running 0 1h 172.18.12.157 ip-172-18-12-157.ec2.internal <none>
install-test mongodb-1-fjrcn 1/1 Running 0 1h 10.131.0.7 ip-172-18-5-173.ec2.internal <none>
install-test nodejs-mongodb-example-1-build 1/1 Running 0 1h 10.131.0.6 ip-172-18-5-173.ec2.internal <none>
kube-service-catalog apiserver-8jqhd 0/1 CrashLoopBackOff 10 1h 10.128.0.11 ip-172-18-1-141.ec2.internal <none>
kube-service-catalog controller-manager-zrm7w 0/1 CrashLoopBackOff 12 1h 10.128.0.14 ip-172-18-1-141.ec2.internal <none>
kube-system master-api-ip-172-18-1-141.ec2.internal 1/1 Running 1 1h 172.18.1.141 ip-172-18-1-141.ec2.internal <none>
kube-system master-controllers-ip-172-18-1-141.ec2.internal 1/1 Running 1 1h 172.18.1.141 ip-172-18-1-141.ec2.internal <none>
kube-system master-etcd-ip-172-18-1-141.ec2.internal 1/1 Running 1 1h 172.18.1.141 ip-172-18-1-141.ec2.internal <none>
lt26-07-arap edgeroute-pod 1/1 Running 0 1h 10.129.0.8 ip-172-18-6-77.ec2.internal <none>
openshift-ansible-service-broker asb-1-g7s5n 1/1 Running 4 1h 10.131.0.5 ip-172-18-5-173.ec2.internal <none>
openshift-console console-66549ff897-g8b68 1/1 Running 1 1h 10.128.0.13 ip-172-18-1-141.ec2.internal <none>
openshift-monitoring alertmanager-main-0 3/3 Running 0 1h 10.130.0.5 ip-172-18-12-157.ec2.internal <none>
openshift-monitoring alertmanager-main-1 3/3 Running 0 1h 10.131.0.4 ip-172-18-5-173.ec2.internal <none>
openshift-monitoring alertmanager-main-2 3/3 Running 0 1h 10.129.0.5 ip-172-18-6-77.ec2.internal <none>
openshift-monitoring cluster-monitoring-operator-56bb5946c4-vc967 1/1 Running 0 1h 10.129.0.2 ip-172-18-6-77.ec2.internal <none>
openshift-monitoring grafana-56f6875b69-qds8d 2/2 Running 0 1h 10.129.0.3 ip-172-18-6-77.ec2.internal <none>
openshift-monitoring kube-state-metrics-776f9667b-gnkv6 3/3 Running 0 1h 10.130.0.6 ip-172-18-12-157.ec2.internal <none>
openshift-monitoring node-exporter-b24vl 2/2 Running 2 1h 172.18.1.141 ip-172-18-1-141.ec2.internal <none>
openshift-monitoring node-exporter-knq6p 2/2 Running 0 1h 172.18.5.173 ip-172-18-5-173.ec2.internal <none>
openshift-monitoring node-exporter-rw9vs 2/2 Running 0 1h 172.18.6.77 ip-172-18-6-77.ec2.internal <none>
openshift-monitoring node-exporter-vbcb2 2/2 Running 0 1h 172.18.12.157 ip-172-18-12-157.ec2.internal <none>
openshift-monitoring prometheus-k8s-0 4/4 Running 1 1h 10.131.0.3 ip-172-18-5-173.ec2.internal <none>
openshift-monitoring prometheus-k8s-1 4/4 Running 1 1h 10.129.0.4 ip-172-18-6-77.ec2.internal <none>
openshift-monitoring prometheus-operator-7566fcccc8-s7cng 1/1 Running 0 1h 10.131.0.2 ip-172-18-5-173.ec2.internal <none>
openshift-node sync-4lbls 1/1 Running 0 1h 172.18.5.173 ip-172-18-5-173.ec2.internal <none>
openshift-node sync-6nb4s 1/1 Running 0 1h 172.18.12.157 ip-172-18-12-157.ec2.internal <none>
openshift-node sync-j6tgd 1/1 Running 1 1h 172.18.1.141 ip-172-18-1-141.ec2.internal <none>
openshift-node sync-nxrjz 1/1 Running 0 1h 172.18.6.77 ip-172-18-6-77.ec2.internal <none>
openshift-sdn ovs-76wst 1/1 Running 0 1h 172.18.6.77 ip-172-18-6-77.ec2.internal <none>
openshift-sdn ovs-mj6tx 1/1 Running 0 1h 172.18.12.157 ip-172-18-12-157.ec2.internal <none>
openshift-sdn ovs-sbswm 1/1 Running 1 1h 172.18.1.141 ip-172-18-1-141.ec2.internal <none>
openshift-sdn ovs-tg5jd 1/1 Running 0 1h 172.18.5.173 ip-172-18-5-173.ec2.internal <none>
openshift-sdn sdn-bp5m9 1/1 Running 0 1h 172.18.5.173 ip-172-18-5-173.ec2.internal <none>
openshift-sdn sdn-hr495 1/1 Running 0 1h 172.18.12.157 ip-172-18-12-157.ec2.internal <none>
openshift-sdn sdn-zg75b 1/1 Running 1 1h 172.18.1.141 ip-172-18-1-141.ec2.internal <none>
openshift-sdn sdn-zsdf6 1/1 Running 0 1h 172.18.6.77 ip-172-18-6-77.ec2.internal <none>
openshift-template-service-broker apiserver-hd7qj 0/1 CrashLoopBackOff 10 1h 10.128.0.10 ip-172-18-1-141.ec2.internal <none>
openshift-web-console webconsole-787f54c7f8-2rnv2 0/1 CrashLoopBackOff 10 1h 10.128.0.9 ip-172-18-1-141.ec2.internal <none>


[root@ip-172-18-1-141 ec2-user]# oc logs sdn-zg75b
2019/01/14 20:09:24 socat[15551] E connect(5, AF=1 "/var/run/openshift-sdn/cni-server.sock", 40): No such file or directory
User "sa" set.
Context "default/ip-172-18-1-141-ec2-internal:8443/system:admin" modified.
which: no openshift-sdn in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)
I0114 20:09:28.977823 15326 start_network.go:200] Reading node configuration from /etc/origin/node/node-config.yaml
I0114 20:09:28.984164 15326 start_network.go:207] Starting node networking ip-172-18-1-141.ec2.internal (v3.11.69)
W0114 20:09:28.984515 15326 server.go:195] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0114 20:09:28.984659 15326 feature_gate.go:230] feature gates: &{map[]}
I0114 20:09:28.988634 15326 transport.go:160] Refreshing client certificate from store
I0114 20:09:28.988735 15326 certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
I0114 20:09:29.025192 15326 node.go:147] Initializing SDN node of type "redhat/openshift-ovs-subnet" with configured hostname "ip-172-18-1-141.ec2.internal" (IP ""), iptables sync period "30s"
I0114 20:09:29.028937 15326 node.go:289] Starting openshift-sdn network plugin
I0114 20:09:29.429179 15326 sdn_controller.go:139] [SDN setup] full SDN setup required (local subnet gateway CIDR not found)
I0114 20:09:29.758548 15326 node.go:348] Starting openshift-sdn pod manager
E0114 20:09:29.767471 15326 cniserver.go:148] failed to remove old pod info socket: remove /var/run/openshift-sdn: device or resource busy
E0114 20:09:29.767561 15326 cniserver.go:151] failed to remove contents of socket directory: remove /var/run/openshift-sdn: device or resource busy
W0114 20:09:29.843992 15326 util_unix.go:75] Using "/var/run/dockershim.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/dockershim.sock".
W0114 20:09:29.920178 15326 node.go:367] will restart pod 'default/registry-console-1-n827c' due to update failure on restart: could not parse ofport "": strconv.Atoi: parsing "": invalid syntax
W0114 20:09:29.951723 15326 node.go:367] will restart pod 'kube-service-catalog/apiserver-8jqhd' due to update failure on restart: could not parse ofport "": strconv.Atoi: parsing "": invalid syntax
W0114 20:09:29.971252 15326 node.go:367] will restart pod 'kube-service-catalog/controller-manager-zrm7w' due to update failure on restart: could not parse ofport "": strconv.Atoi: parsing "": invalid syntax
W0114 20:09:30.019305 15326 node.go:367] will restart pod 'openshift-console/console-66549ff897-g8b68' due to update failure on restart: could not parse ofport "": strconv.Atoi: parsing "": invalid syntax
W0114 20:09:30.060834 15326 node.go:367] will restart pod 'openshift-template-service-broker/apiserver-hd7qj' due to update failure on restart: could not parse ofport "": strconv.Atoi: parsing "": invalid syntax
W0114 20:09:30.092802 15326 node.go:367] will restart pod 'openshift-web-console/webconsole-787f54c7f8-2rnv2' due to update failure on restart: could not parse ofport "": strconv.Atoi: parsing "": invalid syntax
I0114 20:09:30.753006 15326 node.go:392] openshift-sdn network plugin ready
I0114 20:09:30.776880 15326 network.go:95] Using iptables Proxier.
I0114 20:09:30.811589 15326 network.go:131] Tearing down userspace rules.
I0114 20:09:30.833098 15326 proxier.go:189] Setting proxy IP to 172.18.1.141 and initializing iptables
I0114 20:09:30.884559 15326 proxy.go:82] Starting multitenant SDN proxy endpoint filter
I0114 20:09:30.884763 15326 config.go:202] Starting service config controller
I0114 20:09:30.884784 15326 controller_utils.go:1025] Waiting for caches to sync for service config controller
I0114 20:09:30.906333 15326 network.go:239] Started Kubernetes Proxy on 0.0.0.0
I0114 20:09:30.906556 15326 config.go:102] Starting endpoints config controller
I0114 20:09:30.906584 15326 controller_utils.go:1025] Waiting for caches to sync for endpoints config controller
I0114 20:09:30.908469 15326 network.go:53] Starting DNS on 127.0.0.1:53
I0114 20:09:30.909399 15326 server.go:76] Monitoring dnsmasq to point cluster queries to 127.0.0.1
I0114 20:09:30.909507 15326 logs.go:49] skydns: ready for queries on cluster.local. for tcp://127.0.0.1:53 [rcache 0]
I0114 20:09:30.909525 15326 logs.go:49] skydns: ready for queries on cluster.local. for udp://127.0.0.1:53 [rcache 0]
I0114 20:09:30.931616 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-web-console/webconsole:https to [10.128.0.4:8443]
I0114 20:09:30.932082 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for default/registry-console:registry-console to [10.128.0.3:9090]
I0114 20:09:30.932138 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for default/router:1936-tcp to [172.18.12.157:1936]
I0114 20:09:30.932163 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for default/router:80-tcp to [172.18.12.157:80]
I0114 20:09:30.932183 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for default/router:443-tcp to [172.18.12.157:443]
I0114 20:09:30.932219 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for install-test/mongodb:mongodb to [10.131.0.7:27017]
I0114 20:09:30.932257 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for lt26-07-arap/portal-svc: to [10.129.0.8:8080]
I0114 20:09:30.932306 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-monitoring/grafana:https to [10.129.0.3:3000]
I0114 20:09:30.932348 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-monitoring/prometheus-k8s:web to [10.129.0.4:9091 10.131.0.3:9091]
I0114 20:09:30.932385 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-monitoring/prometheus-operator:http to [10.131.0.2:8080]
I0114 20:09:30.932431 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for default/kubernetes:dns to [172.18.1.141:8053]
I0114 20:09:30.932452 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [172.18.1.141:8443]
I0114 20:09:30.932473 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for default/kubernetes:dns-tcp to [172.18.1.141:8053]
I0114 20:09:30.932535 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-ansible-service-broker/asb:port-1338 to [10.131.0.5:1338]
I0114 20:09:30.932564 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-ansible-service-broker/asb:port-1337 to [10.131.0.5:1337]
I0114 20:09:30.932614 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-monitoring/alertmanager-operated:web to [10.129.0.5:9093 10.130.0.5:9093 10.131.0.4:9093]
I0114 20:09:30.932640 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-monitoring/alertmanager-operated:mesh to [10.129.0.5:6783 10.130.0.5:6783 10.131.0.4:6783]
I0114 20:09:30.932681 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for default/docker-registry:5000-tcp to [10.130.0.4:5000]
I0114 20:09:30.932715 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for kube-service-catalog/apiserver:secure to [10.128.0.6:6443]
I0114 20:09:30.932766 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for kube-service-catalog/controller-manager:secure to [10.128.0.7:6443]
I0114 20:09:30.932804 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-monitoring/kube-state-metrics:https-self to [10.130.0.6:9443]
I0114 20:09:30.932828 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-monitoring/kube-state-metrics:https-main to [10.130.0.6:8443]
I0114 20:09:30.932897 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-monitoring/node-exporter:https to [172.18.1.141:9100 172.18.12.157:9100 172.18.5.173:9100 172.18.6.77:9100]
I0114 20:09:30.932942 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-template-service-broker/apiserver: to [10.128.0.8:8443]
I0114 20:09:30.932979 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for kube-system/kube-controllers:http-metrics to [172.18.1.141:8444]
I0114 20:09:30.933019 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for kube-system/kubelet:http-metrics to [172.18.1.141:10255 172.18.12.157:10255 172.18.5.173:10255 172.18.6.77:10255]
I0114 20:09:30.933044 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for kube-system/kubelet:cadvisor to [172.18.1.141:4194 172.18.12.157:4194 172.18.5.173:4194 172.18.6.77:4194]
I0114 20:09:30.933073 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for kube-system/kubelet:https-metrics to [172.18.1.141:10250 172.18.12.157:10250 172.18.5.173:10250 172.18.6.77:10250]
I0114 20:09:30.933127 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.128.0.5:8443]
I0114 20:09:30.933164 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-monitoring/alertmanager-main:web to [10.129.0.5:9094 10.130.0.5:9094 10.131.0.4:9094]
I0114 20:09:30.933204 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-monitoring/cluster-monitoring-operator:http to [10.129.0.2:8080]
I0114 20:09:30.933236 15326 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-monitoring/prometheus-operated:web to [10.129.0.4:9091 10.131.0.3:9091]
I0114 20:09:30.986322 15326 controller_utils.go:1032] Caches are synced for service config controller
I0114 20:09:30.986424 15326 proxier.go:629] Not syncing iptables until Services and Endpoints have been received from master
I0114 20:09:31.006856 15326 controller_utils.go:1032] Caches are synced for endpoints config controller
I0114 20:09:31.006959 15326 service.go:314] Adding new service port "install-test/nodejs-mongodb-example:web" at 172.30.237.186:8080/TCP
I0114 20:09:31.006993 15326 service.go:314] Adding new service port "kube-service-catalog/apiserver:secure" at 172.30.23.3:443/TCP
I0114 20:09:31.007019 15326 service.go:314] Adding new service port "default/docker-registry:5000-tcp" at 172.30.244.119:5000/TCP
I0114 20:09:31.007045 15326 service.go:314] Adding new service port "openshift-monitoring/alertmanager-main:web" at 172.30.178.51:9094/TCP
I0114 20:09:31.007068 15326 service.go:314] Adding new service port "kube-service-catalog/controller-manager:secure" at 172.30.143.200:443/TCP
I0114 20:09:31.007091 15326 service.go:314] Adding new service port "default/kubernetes:https" at 172.30.0.1:443/TCP
I0114 20:09:31.007111 15326 service.go:314] Adding new service port "default/kubernetes:dns" at 172.30.0.1:53/UDP
I0114 20:09:31.007132 15326 service.go:314] Adding new service port "default/kubernetes:dns-tcp" at 172.30.0.1:53/TCP
I0114 20:09:31.007154 15326 service.go:314] Adding new service port "openshift-monitoring/grafana:https" at 172.30.40.211:3000/TCP
I0114 20:09:31.007184 15326 service.go:314] Adding new service port "lt26-07-arap/portal-svc:" at 172.30.0.45:8080/TCP
I0114 20:09:31.007208 15326 service.go:314] Adding new service port "openshift-ansible-service-broker/asb:port-1337" at 172.30.53.236:1337/TCP
I0114 20:09:31.007230 15326 service.go:314] Adding new service port "openshift-ansible-service-broker/asb:port-1338" at 172.30.53.236:1338/TCP
I0114 20:09:31.007254 15326 service.go:314] Adding new service port "openshift-monitoring/prometheus-k8s:web" at 172.30.216.69:9091/TCP
I0114 20:09:31.007276 15326 service.go:314] Adding new service port "install-test/mongodb:mongodb" at 172.30.196.2:27017/TCP
I0114 20:09:31.007304 15326 service.go:314] Adding new service port "default/router:80-tcp" at 172.30.122.57:80/TCP
I0114 20:09:31.007324 15326 service.go:314] Adding new service port "default/router:443-tcp" at 172.30.122.57:443/TCP
I0114 20:09:31.007345 15326 service.go:314] Adding new service port "default/router:1936-tcp" at 172.30.122.57:1936/TCP
I0114 20:09:31.007368 15326 service.go:314] Adding new service port "openshift-web-console/webconsole:https" at 172.30.243.142:443/TCP
I0114 20:09:31.007393 15326 service.go:314] Adding new service port "openshift-console/console:https" at 172.30.150.218:443/TCP
I0114 20:09:31.007415 15326 service.go:314] Adding new service port "openshift-template-service-broker/apiserver:" at 172.30.113.124:443/TCP
I0114 20:09:31.007436 15326 service.go:314] Adding new service port "default/registry-console:registry-console" at 172.30.192.255:9000/TCP
I0114 20:09:31.007552 15326 proxier.go:643] Stale udp service default/kubernetes:dns -> 172.30.0.1
I0114 20:09:33.081057 15326 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-template-service-broker/apiserver:
I0114 20:09:33.262473 15326 roundrobin.go:338] LoadBalancerRR: Removing endpoints for default/registry-console:registry-console
I0114 20:09:33.410090 15326 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-web-console/webconsole:https
I0114 20:09:33.410227 15326 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-console/console:https
I0114 20:09:33.934156 15326 roundrobin.go:338] LoadBalancerRR: Removing endpoints for kube-service-catalog/controller-manager:secure
I0114 20:09:33.934248 15326 roundrobin.go:338] LoadBalancerRR: Removing endpoints for kube-service-catalog/apiserver:secure
I0114 20:09:33.934343 15326 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/node-exporter:https to [172.18.12.157:9100 172.18.5.173:9100 172.18.6.77:9100]
I0114 20:09:33.934392 15326 roundrobin.go:240] Delete endpoint 172.18.1.141:9100 for service "openshift-monitoring/node-exporter:https"
I0114 20:09:56.720028 15326 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/node-exporter:https to [172.18.1.141:9100 172.18.12.157:9100 172.18.5.173:9100 172.18.6.77:9100]
I0114 20:09:56.720068 15326 roundrobin.go:240] Delete endpoint 172.18.1.141:9100 for service "openshift-monitoring/node-exporter:https"
I0114 20:10:07.427969 15326 roundrobin.go:310] LoadBalancerRR: Setting endpoints for default/registry-console:registry-console to [10.128.0.12:9090]
I0114 20:10:07.428015 15326 roundrobin.go:240] Delete endpoint 10.128.0.12:9090 for service "default/registry-console:registry-console"
I0114 20:10:11.262706 15326 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.128.0.13:8443]
I0114 20:10:11.262742 15326 roundrobin.go:240] Delete endpoint 10.128.0.13:8443 for service "openshift-console/console:https"


[root@ip-172-18-1-141 ec2-user]# oc logs pod/ovs-sbswm
Starting ovsdb-server [ OK ]
Configuring Open vSwitch system IDs [ OK ]
Inserting openvswitch module [ OK ]
Starting ovs-vswitchd [ OK ]
Enabling remote OVSDB managers [ OK ]
==> /var/log/openvswitch/ovs-vswitchd.log <==
2019-01-14T20:09:28.264Z|00043|bridge|WARN|could not open network device veth6414c1fa (No such device)
2019-01-14T20:09:28.269Z|00044|bridge|WARN|could not open network device veth8b68e98a (No such device)
2019-01-14T20:09:28.274Z|00045|bridge|WARN|could not open network device vetha204429e (No such device)
2019-01-14T20:09:28.305Z|00046|bridge|WARN|could not open network device vethfe58a44d (No such device)
2019-01-14T20:09:28.312Z|00047|bridge|WARN|could not open network device vethadd9e578 (No such device)
2019-01-14T20:09:28.316Z|00048|bridge|WARN|could not open network device veth5076c942 (No such device)
2019-01-14T20:09:28.323Z|00049|bridge|WARN|could not open network device veth6414c1fa (No such device)
2019-01-14T20:09:28.342Z|00050|bridge|WARN|could not open network device veth8b68e98a (No such device)
2019-01-14T20:09:28.373Z|00051|bridge|WARN|could not open network device vetha204429e (No such device)
2019-01-14T20:09:28.440Z|00052|bridge|WARN|could not open network device vethfe58a44d (No such device)

==> /var/log/openvswitch/ovsdb-server.log <==
2019-01-14T19:14:42.443Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2019-01-14T19:14:42.526Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.9.0
2019-01-14T19:14:52.533Z|00003|memory|INFO|2736 kB peak resident set size after 10.1 seconds
2019-01-14T19:14:52.534Z|00004|memory|INFO|cells:217 json-caches:1 monitors:1 sessions:2
2019-01-14T20:07:09.137Z|00002|daemon_unix(monitor)|INFO|pid 1317 died, exit status 0, exiting
2019-01-14T20:09:25.044Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2019-01-14T20:09:25.057Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.9.0

==> /var/log/openvswitch/ovs-vswitchd.log <==
2019-01-14T20:09:29.443Z|00053|bridge|INFO|bridge br0: deleted interface br0 on port 65534
2019-01-14T20:09:29.443Z|00054|bridge|INFO|bridge br0: deleted interface tun0 on port 2
2019-01-14T20:09:29.443Z|00055|bridge|INFO|bridge br0: deleted interface vxlan0 on port 1
2019-01-14T20:09:29.510Z|00056|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation
2019-01-14T20:09:29.510Z|00057|ofproto_dpif|INFO|system@ovs-system: VLAN header stack length probed as 2
2019-01-14T20:09:29.511Z|00058|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 1
2019-01-14T20:09:29.511Z|00059|ofproto_dpif|INFO|system@ovs-system: Datapath supports truncate action
2019-01-14T20:09:29.511Z|00060|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids
2019-01-14T20:09:29.511Z|00061|ofproto_dpif|INFO|system@ovs-system: Datapath does not support clone action
2019-01-14T20:09:29.511Z|00062|ofproto_dpif|INFO|system@ovs-system: Max sample nesting level probed as 10
2019-01-14T20:09:29.511Z|00063|ofproto_dpif|INFO|system@ovs-system: Datapath supports eventmask in conntrack action
2019-01-14T20:09:29.511Z|00064|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_clear action
2019-01-14T20:09:29.511Z|00065|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state
2019-01-14T20:09:29.511Z|00066|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_zone
2019-01-14T20:09:29.511Z|00067|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_mark
2019-01-14T20:09:29.511Z|00068|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_label
2019-01-14T20:09:29.511Z|00069|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state_nat
2019-01-14T20:09:29.511Z|00070|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_orig_tuple
2019-01-14T20:09:29.511Z|00071|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_orig_tuple6
2019-01-14T20:09:29.551Z|00072|bridge|INFO|bridge br0: added interface br0 on port 65534
2019-01-14T20:09:29.551Z|00073|bridge|INFO|bridge br0: using datapath ID 0000561cf8dcbf4e
2019-01-14T20:09:29.551Z|00074|connmgr|INFO|br0: added service controller "punix:/var/run/openvswitch/br0.mgmt"
2019-01-14T20:09:29.617Z|00075|bridge|INFO|bridge br0: added interface vxlan0 on port 1
2019-01-14T20:09:29.717Z|00076|bridge|INFO|bridge br0: added interface tun0 on port 2
2019-01-14T20:09:29.745Z|00077|connmgr|INFO|br0<->unix#2: 42 flow_mods in the last 0 s (42 adds)
2019-01-14T20:09:29.758Z|00078|connmgr|INFO|br0<->unix#4: 1 flow_mods in the last 0 s (1 adds)
2019-01-14T20:09:30.936Z|00079|connmgr|INFO|br0<->unix#9: 3 flow_mods in the last 0 s (3 adds)
2019-01-14T20:09:30.948Z|00080|connmgr|INFO|br0<->unix#12: 1 flow_mods in the last 0 s (1 adds)
2019-01-14T20:09:30.948Z|00081|connmgr|INFO|br0<->unix#13: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:30.956Z|00082|connmgr|INFO|br0<->unix#15: 3 flow_mods in the last 0 s (3 adds)
2019-01-14T20:09:30.957Z|00083|connmgr|INFO|br0<->unix#17: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:30.965Z|00084|connmgr|INFO|br0<->unix#19: 4 flow_mods in the last 0 s (4 adds)
2019-01-14T20:09:30.971Z|00085|connmgr|INFO|br0<->unix#21: 1 flow_mods in the last 0 s (1 adds)
2019-01-14T20:09:30.977Z|00086|connmgr|INFO|br0<->unix#23: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:30.989Z|00087|connmgr|INFO|br0<->unix#25: 3 flow_mods in the last 0 s (3 adds)
2019-01-14T20:09:30.994Z|00088|connmgr|INFO|br0<->unix#27: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:31.000Z|00089|connmgr|INFO|br0<->unix#29: 1 flow_mods in the last 0 s (1 adds)
2019-01-14T20:09:31.001Z|00090|connmgr|INFO|br0<->unix#31: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:31.011Z|00091|connmgr|INFO|br0<->unix#33: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:31.034Z|00092|connmgr|INFO|br0<->unix#35: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:31.050Z|00093|connmgr|INFO|br0<->unix#37: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:31.058Z|00094|connmgr|INFO|br0<->unix#39: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:31.098Z|00095|connmgr|INFO|br0<->unix#41: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:31.107Z|00096|connmgr|INFO|br0<->unix#43: 3 flow_mods in the last 0 s (3 adds)
2019-01-14T20:09:31.116Z|00097|connmgr|INFO|br0<->unix#45: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:31.130Z|00098|connmgr|INFO|br0<->unix#47: 2 flow_mods in the last 0 s (2 adds)
2019-01-14T20:09:31.145Z|00099|connmgr|INFO|br0<->unix#49: 4 flow_mods in the last 0 s (4 adds)
2019-01-14T20:09:31.159Z|00100|connmgr|INFO|br0<->unix#51: 2 flow_mods in the last 0 s (2 adds)

==> /var/log/openvswitch/ovsdb-server.log <==
2019-01-14T20:09:35.064Z|00003|memory|INFO|2688 kB peak resident set size after 10.0 seconds
2019-01-14T20:09:35.064Z|00004|memory|INFO|cells:217 json-caches:1 monitors:1 sessions:2

==> /var/log/openvswitch/ovs-vswitchd.log <==
2019-01-14T20:09:35.169Z|00101|bridge|INFO|bridge br0: added interface vethabbc9124 on port 3
2019-01-14T20:09:35.227Z|00102|connmgr|INFO|br0<->unix#53: 4 flow_mods in the last 0 s (4 adds)
2019-01-14T20:09:35.290Z|00103|connmgr|INFO|br0<->unix#55: 2 flow_mods in the last 0 s (2 deletes)
2019-01-14T20:09:36.318Z|00104|bridge|INFO|bridge br0: added interface veth5a016377 on port 4
2019-01-14T20:09:36.355Z|00105|connmgr|INFO|br0<->unix#57: 4 flow_mods in the last 0 s (4 adds)
2019-01-14T20:09:36.476Z|00106|connmgr|INFO|br0<->unix#59: 2 flow_mods in the last 0 s (2 deletes)
2019-01-14T20:09:36.558Z|00107|bridge|INFO|bridge br0: added interface veth12c4b034 on port 5
2019-01-14T20:09:36.586Z|00108|connmgr|INFO|br0<->unix#61: 4 flow_mods in the last 0 s (4 adds)
2019-01-14T20:09:36.648Z|00109|connmgr|INFO|br0<->unix#63: 2 flow_mods in the last 0 s (2 deletes)
2019-01-14T20:09:36.781Z|00110|memory|INFO|43360 kB peak resident set size after 10.0 seconds
2019-01-14T20:09:36.781Z|00111|memory|INFO|handlers:1 ports:6 revalidators:1 rules:105 udpif keys:6
2019-01-14T20:09:44.052Z|00112|connmgr|INFO|br0<->unix#65: 2 flow_mods in the last 0 s (2 deletes)
2019-01-14T20:09:44.858Z|00113|connmgr|INFO|br0<->unix#67: 2 flow_mods in the last 0 s (2 deletes)
2019-01-14T20:09:45.248Z|00114|connmgr|INFO|br0<->unix#69: 2 flow_mods in the last 0 s (2 deletes)
2019-01-14T20:09:57.371Z|00115|bridge|INFO|bridge br0: added interface veth40cc1ea4 on port 6
2019-01-14T20:09:57.445Z|00116|connmgr|INFO|br0<->unix#71: 4 flow_mods in the last 0 s (4 adds)
2019-01-14T20:09:57.513Z|00117|connmgr|INFO|br0<->unix#73: 2 flow_mods in the last 0 s (2 deletes)
2019-01-14T20:09:57.835Z|00118|bridge|INFO|bridge br0: added interface vethdb791dbf on port 7
2019-01-14T20:09:57.881Z|00119|connmgr|INFO|br0<->unix#75: 4 flow_mods in the last 0 s (4 adds)
2019-01-14T20:09:57.991Z|00120|connmgr|INFO|br0<->unix#77: 2 flow_mods in the last 0 s (2 deletes)
2019-01-14T20:09:58.692Z|00121|bridge|INFO|bridge br0: added interface veth7d487380 on port 8
2019-01-14T20:09:58.744Z|00122|connmgr|INFO|br0<->unix#79: 4 flow_mods in the last 0 s (4 adds)
2019-01-14T20:09:58.832Z|00123|connmgr|INFO|br0<->unix#81: 2 flow_mods in the last 0 s (2 deletes)

[root@ip-172-18-1-141 ec2-user]# oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
ovs-76wst 1/1 Running 0 1h 172.18.6.77 ip-172-18-6-77.ec2.internal <none>
ovs-mj6tx 1/1 Running 0 1h 172.18.12.157 ip-172-18-12-157.ec2.internal <none>
ovs-sbswm 1/1 Running 1 1h 172.18.1.141 ip-172-18-1-141.ec2.internal <none>
ovs-tg5jd 1/1 Running 0 1h 172.18.5.173 ip-172-18-5-173.ec2.internal <none>
sdn-bp5m9 1/1 Running 0 1h 172.18.5.173 ip-172-18-5-173.ec2.internal <none>
sdn-hr495 1/1 Running 0 1h 172.18.12.157 ip-172-18-12-157.ec2.internal <none>
sdn-zg75b 1/1 Running 1 1h 172.18.1.141 ip-172-18-1-141.ec2.internal <none>
sdn-zsdf6 1/1 Running 0 1h 172.18.6.77 ip-172-18-6-77.ec2.internal <none>
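Not part of the captured log: when scanning output like the listing above, the crashing pods can be isolated with a simple filter. A minimal sketch, assuming the default `oc get pods --all-namespaces -o wide` column layout shown here, where STATUS is the fourth whitespace-separated field:

```shell
# List only the pods stuck in CrashLoopBackOff.
# $4 is the STATUS column in the --all-namespaces listing.
oc get pods --all-namespaces -o wide | awk '$4 == "CrashLoopBackOff"'
```

In this capture the filter would surface the four failing pods (kube-service-catalog apiserver and controller-manager, the template service broker apiserver, and the web console), all scheduled on the rebooted node ip-172-18-1-141.ec2.internal.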