Red Hat Bugzilla – Attachment 1482927 Details for
Bug 1628436 – docker-registry pod cannot startup after running redeploy-certificates.yml playbook
History event list

Description: History event list
Filename: eventlist.txt
MIME Type: text/plain
Creator: Wenjing Zheng
Created: 2018-09-13 03:41:03 UTC
Size: 52.08 KB
># oc get events
>LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
>43m 43m 1 ip-172-18-9-194.ec2.internal.1553d0a8f30ee9ac Node Normal Starting kubelet, ip-172-18-9-194.ec2.internal Starting kubelet.
>43m 43m 1 ip-172-18-9-194.ec2.internal.1553d0a90b2abc56 Node Normal NodeAllocatableEnforced kubelet, ip-172-18-9-194.ec2.internal Updated Node Allocatable limit across pods
>43m 43m 5 ip-172-18-9-194.ec2.internal.1553d0a8feaef711 Node Normal NodeHasSufficientPID kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientPID
>43m 43m 6 ip-172-18-9-194.ec2.internal.1553d0a8feacb66c Node Normal NodeHasSufficientDisk kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientDisk
>43m 43m 6 ip-172-18-9-194.ec2.internal.1553d0a8feae6625 Node Normal NodeHasNoDiskPressure kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasNoDiskPressure
>43m 43m 6 ip-172-18-9-194.ec2.internal.1553d0a8feadd158 Node Normal NodeHasSufficientMemory kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientMemory
>40m 40m 1 ip-172-18-9-194.ec2.internal.1553d0d92b3695ed Node Normal Starting kubelet, ip-172-18-9-194.ec2.internal Starting kubelet.
>40m 40m 1 ip-172-18-9-194.ec2.internal.1553d0d93318e154 Node Normal NodeHasSufficientPID kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientPID
>40m 40m 1 ip-172-18-9-194.ec2.internal.1553d0d94758d4e4 Node Normal NodeAllocatableEnforced kubelet, ip-172-18-9-194.ec2.internal Updated Node Allocatable limit across pods
>40m 40m 1 ip-172-18-9-194.ec2.internal.1553d0d93317a4dc Node Normal NodeHasSufficientMemory kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientMemory
>40m 40m 1 ip-172-18-9-194.ec2.internal.1553d0d93316fbda Node Normal NodeHasSufficientDisk kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientDisk
>40m 40m 1 ip-172-18-9-194.ec2.internal.1553d0d933183791 Node Normal NodeHasNoDiskPressure kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasNoDiskPressure
>39m 39m 1 ip-172-18-9-194.ec2.internal.1553d0e25b660af8 Node Normal Starting openshift-sdn, ip-172-18-9-194.ec2.internal Starting openshift-sdn.
>39m 39m 1 ip-172-18-9-194.ec2.internal.1553d0e4e171f409 Node Normal NodeReady kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeReady
>39m 39m 1 ip-172-18-9-194.ec2.internal.1553d0eaccea58b7 Node Normal NodeNotReady kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeNotReady
>39m 39m 1 ip-172-18-9-194.ec2.internal.1553d0eac8075857 Node Normal NodeHasSufficientMemory kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientMemory
>39m 39m 1 ip-172-18-9-194.ec2.internal.1553d0eabcdc4fff Node Normal Starting kubelet, ip-172-18-9-194.ec2.internal Starting kubelet.
>39m 39m 1 ip-172-18-9-194.ec2.internal.1553d0eac8068716 Node Normal NodeHasSufficientDisk kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientDisk
>39m 39m 1 ip-172-18-9-194.ec2.internal.1553d0eac80a1978 Node Normal NodeHasSufficientPID kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientPID
>39m 39m 1 ip-172-18-9-194.ec2.internal.1553d0eac807f970 Node Normal NodeHasNoDiskPressure kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasNoDiskPressure
>39m 39m 1 ip-172-18-9-194.ec2.internal.1553d0eaebba470c Node Normal NodeAllocatableEnforced kubelet, ip-172-18-9-194.ec2.internal Updated Node Allocatable limit across pods
>39m 39m 1 ip-172-18-9-194.ec2.internal.1553d0ed2424dd88 Node Normal NodeReady kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeReady
>38m 38m 2 ip-172-18-6-229.ec2.internal.1553d0ee0a03872d Node Normal NodeHasSufficientPID kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasSufficientPID
>38m 38m 1 ip-172-18-6-229.ec2.internal.1553d0ee0114a658 Node Normal Starting kubelet, ip-172-18-6-229.ec2.internal Starting kubelet.
>38m 38m 1 ip-172-18-8-35.ec2.internal.1553d0ee192cd32a Node Normal Starting kubelet, ip-172-18-8-35.ec2.internal Starting kubelet.
>38m 38m 2 ip-172-18-6-229.ec2.internal.1553d0ee0a02ff8b Node Normal NodeHasNoDiskPressure kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasNoDiskPressure
>38m 38m 1 ip-172-18-6-229.ec2.internal.1553d0ee1408319e Node Normal NodeAllocatableEnforced kubelet, ip-172-18-6-229.ec2.internal Updated Node Allocatable limit across pods
>38m 38m 2 ip-172-18-6-229.ec2.internal.1553d0ee0a024156 Node Normal NodeHasSufficientMemory kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasSufficientMemory
>38m 38m 2 ip-172-18-6-229.ec2.internal.1553d0ee0a017ae3 Node Normal NodeHasSufficientDisk kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasSufficientDisk
>38m 38m 2 ip-172-18-8-35.ec2.internal.1553d0ee21bdbe61 Node Normal NodeHasSufficientPID kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasSufficientPID
>38m 38m 1 ip-172-18-8-35.ec2.internal.1553d0ee2f6243d5 Node Normal NodeAllocatableEnforced kubelet, ip-172-18-8-35.ec2.internal Updated Node Allocatable limit across pods
>38m 38m 2 ip-172-18-8-35.ec2.internal.1553d0ee21bd15a8 Node Normal NodeHasNoDiskPressure kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasNoDiskPressure
>38m 38m 2 ip-172-18-8-35.ec2.internal.1553d0ee21bb62a4 Node Normal NodeHasSufficientDisk kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasSufficientDisk
>38m 38m 2 ip-172-18-8-35.ec2.internal.1553d0ee21bc6944 Node Normal NodeHasSufficientMemory kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasSufficientMemory
>38m 38m 1 ip-172-18-8-35.ec2.internal.1553d0f06c0f5749 Node Normal Starting kubelet, ip-172-18-8-35.ec2.internal Starting kubelet.
>38m 38m 1 ip-172-18-6-229.ec2.internal.1553d0f0717c18f7 Node Normal Starting kubelet, ip-172-18-6-229.ec2.internal Starting kubelet.
>38m 38m 1 ip-172-18-6-229.ec2.internal.1553d0f08671ec41 Node Normal NodeAllocatableEnforced kubelet, ip-172-18-6-229.ec2.internal Updated Node Allocatable limit across pods
>38m 38m 1 ip-172-18-8-35.ec2.internal.1553d0f0753db3aa Node Normal NodeHasNoDiskPressure kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasNoDiskPressure
>38m 38m 1 ip-172-18-6-229.ec2.internal.1553d0f07c6140dd Node Normal NodeHasSufficientPID kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasSufficientPID
>38m 38m 1 ip-172-18-8-35.ec2.internal.1553d0f0753c241b Node Normal NodeHasSufficientMemory kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasSufficientMemory
>38m 38m 1 ip-172-18-8-35.ec2.internal.1553d0f0753ad86a Node Normal NodeHasSufficientDisk kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasSufficientDisk
>38m 38m 1 ip-172-18-8-35.ec2.internal.1553d0f0753ec181 Node Normal NodeHasSufficientPID kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasSufficientPID
>38m 38m 1 ip-172-18-6-229.ec2.internal.1553d0f07c603765 Node Normal NodeHasSufficientMemory kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasSufficientMemory
>38m 38m 1 ip-172-18-6-229.ec2.internal.1553d0f07c5f76f6 Node Normal NodeHasSufficientDisk kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasSufficientDisk
>38m 38m 1 ip-172-18-6-229.ec2.internal.1553d0f07c60b608 Node Normal NodeHasNoDiskPressure kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasNoDiskPressure
>38m 38m 1 ip-172-18-8-35.ec2.internal.1553d0f08481d50c Node Normal NodeAllocatableEnforced kubelet, ip-172-18-8-35.ec2.internal Updated Node Allocatable limit across pods
>38m 38m 1 ip-172-18-6-229.ec2.internal.1553d0f2a3100e8c Node Normal Starting openshift-sdn, ip-172-18-6-229.ec2.internal Starting openshift-sdn.
>38m 38m 1 ip-172-18-8-35.ec2.internal.1553d0f2a6e1cb1b Node Normal Starting openshift-sdn, ip-172-18-8-35.ec2.internal Starting openshift-sdn.
>38m 38m 1 ip-172-18-6-229.ec2.internal.1553d0f52a01dd04 Node Normal NodeReady kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeReady
>38m 38m 1 ip-172-18-8-35.ec2.internal.1553d0f522cff01d Node Normal NodeReady kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeReady
>37m 37m 1 router.1553d0fe916ecf7e DeploymentConfig Normal DeploymentCreated deploymentconfig-controller Created new replication controller "router-1" for version 1
>37m 37m 1 router-1-deploy.1553d0fe94f9baae Pod Normal Scheduled default-scheduler Successfully assigned default/router-1-deploy to ip-172-18-6-229.ec2.internal
>37m 37m 1 router-1-deploy.1553d0ff36bb95d4 Pod spec.containers{deployment} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Successfully pulled image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.11.0-0.32.0"
>37m 37m 1 router-1.1553d0ff563590a9 ReplicationController Normal SuccessfulCreate replication-controller Created pod: router-1-7dzwn
>37m 37m 1 router-1-deploy.1553d0ff40b1a428 Pod spec.containers{deployment} Normal Started kubelet, ip-172-18-6-229.ec2.internal Started container
>37m 37m 1 router-1-deploy.1553d0ff38c727e0 Pod spec.containers{deployment} Normal Created kubelet, ip-172-18-6-229.ec2.internal Created container
>37m 37m 1 router-1-deploy.1553d0ff23a2a25c Pod spec.containers{deployment} Normal Pulling kubelet, ip-172-18-6-229.ec2.internal pulling image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.11.0-0.32.0"
>37m 37m 1 router-1-7dzwn.1553d0ff7f4fc4cd Pod Normal Scheduled default-scheduler Successfully assigned default/router-1-7dzwn to ip-172-18-6-229.ec2.internal
>37m 37m 1 router-1-7dzwn.1553d0ffcf6f3e4a Pod spec.containers{router} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Successfully pulled image "registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router:v3.11"
>37m 37m 1 router-1-7dzwn.1553d0ff9c773303 Pod spec.containers{router} Normal Pulling kubelet, ip-172-18-6-229.ec2.internal pulling image "registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router:v3.11"
>37m 37m 1 router-1-deploy.1553d103c7305c62 Pod spec.containers{deployment} Normal Killing kubelet, ip-172-18-6-229.ec2.internal Killing container with id docker://deployment:Need to kill Pod
>37m 37m 1 docker-registry.1553d103c810135b DeploymentConfig Normal DeploymentCreated deploymentconfig-controller Created new replication controller "docker-registry-1" for version 1
>37m 37m 1 docker-registry-1-deploy.1553d103cacc22ad Pod Normal Scheduled default-scheduler Successfully assigned default/docker-registry-1-deploy to ip-172-18-6-229.ec2.internal
>37m 37m 1 docker-registry-1-deploy.1553d1046acc39bb Pod spec.containers{deployment} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Container image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.11.0-0.32.0" already present on machine
>37m 37m 1 docker-registry-1-deploy.1553d1046cf4a2a2 Pod spec.containers{deployment} Normal Created kubelet, ip-172-18-6-229.ec2.internal Created container
>37m 37m 1 docker-registry-1-deploy.1553d10475582533 Pod spec.containers{deployment} Normal Started kubelet, ip-172-18-6-229.ec2.internal Started container
>37m 37m 1 docker-registry-1.1553d1048a308f93 ReplicationController Normal SuccessfulCreate replication-controller Created pod: docker-registry-1-zpp9g
>37m 37m 1 docker-registry-1-zpp9g.1553d1048b26d6eb Pod Normal Scheduled default-scheduler Successfully assigned default/docker-registry-1-zpp9g to ip-172-18-6-229.ec2.internal
>37m 37m 1 docker-registry-1-zpp9g.1553d1050a69eb8b Pod spec.containers{registry} Normal Pulling kubelet, ip-172-18-6-229.ec2.internal pulling image "registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry:v3.11"
>37m 37m 1 registry-console.1553d1055e919059 DeploymentConfig Normal DeploymentCreated deploymentconfig-controller Created new replication controller "registry-console-1" for version 1
>37m 37m 1 docker-registry-1-zpp9g.1553d105826bae3b Pod spec.containers{registry} Normal Created kubelet, ip-172-18-6-229.ec2.internal Created container
>37m 37m 1 registry-console-1-deploy.1553d10568f66a24 Pod Normal Scheduled default-scheduler Successfully assigned default/registry-console-1-deploy to ip-172-18-9-194.ec2.internal
>37m 37m 1 docker-registry-1-zpp9g.1553d1058bd2961f Pod spec.containers{registry} Normal Started kubelet, ip-172-18-6-229.ec2.internal Started container
>37m 37m 1 docker-registry-1-zpp9g.1553d1058070bedc Pod spec.containers{registry} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Successfully pulled image "registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry:v3.11"
>37m 37m 1 registry-console-1-deploy.1553d10602afc0a6 Pod spec.containers{deployment} Normal Pulling kubelet, ip-172-18-9-194.ec2.internal pulling image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.11.0-0.32.0"
>37m 37m 1 registry-console-1-29dwz.1553d10650afe392 Pod Normal Scheduled default-scheduler Successfully assigned default/registry-console-1-29dwz to ip-172-18-9-194.ec2.internal
>37m 37m 1 registry-console-1-deploy.1553d1062d519aab Pod spec.containers{deployment} Normal Started kubelet, ip-172-18-9-194.ec2.internal Started container
>37m 37m 1 registry-console-1-deploy.1553d10623a1630f Pod spec.containers{deployment} Normal Created kubelet, ip-172-18-9-194.ec2.internal Created container
>37m 37m 1 registry-console-1-deploy.1553d1062048ff29 Pod spec.containers{deployment} Normal Pulled kubelet, ip-172-18-9-194.ec2.internal Successfully pulled image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.11.0-0.32.0"
>37m 37m 1 registry-console-1.1553d1064db040fd ReplicationController Normal SuccessfulCreate replication-controller Created pod: registry-console-1-29dwz
>37m 37m 1 registry-console-1-29dwz.1553d106ccc48e5b Pod spec.containers{registry-console} Normal Pulling kubelet, ip-172-18-9-194.ec2.internal pulling image "registry.reg-aws.openshift.com:443/openshift3/registry-console:v3.11"
>37m 37m 1 registry-console-1-29dwz.1553d1075d6cf514 Pod spec.containers{registry-console} Normal Pulled kubelet, ip-172-18-9-194.ec2.internal Successfully pulled image "registry.reg-aws.openshift.com:443/openshift3/registry-console:v3.11"
>37m 37m 1 registry-console-1-29dwz.1553d10760a5de72 Pod spec.containers{registry-console} Normal Created kubelet, ip-172-18-9-194.ec2.internal Created container
>37m 37m 1 registry-console-1-29dwz.1553d1076db94647 Pod spec.containers{registry-console} Normal Started kubelet, ip-172-18-9-194.ec2.internal Started container
>37m 37m 1 docker-registry-1-deploy.1553d1080bc8bc02 Pod spec.containers{deployment} Normal Killing kubelet, ip-172-18-6-229.ec2.internal Killing container with id docker://deployment:Need to kill Pod
>37m 37m 1 registry-console-1-deploy.1553d108ce24aae2 Pod spec.containers{deployment} Normal Killing kubelet, ip-172-18-9-194.ec2.internal Killing container with id docker://deployment:Need to kill Pod
>33m 33m 9 ansible-service-broker.1553d13b40aca2a5 ClusterServiceBroker Warning ErrorFetchingCatalog service-catalog-controller-manager Error getting broker catalog: Get https://asb.openshift-ansible-service-broker.svc:1338/osb/v2/catalog: dial tcp 172.30.126.111:1338: getsockopt: no route to host
>22m 31m 2 template-service-broker.1553d154c4014e32 ClusterServiceBroker Normal FetchedCatalog service-catalog-controller-manager Successfully fetched catalog entries from broker.
>19m 33m 3 ansible-service-broker.1553d140e5c577a6 ClusterServiceBroker Normal FetchedCatalog service-catalog-controller-manager Successfully fetched catalog entries from broker.
>13m 13m 1 ip-172-18-9-194.ec2.internal.1553d250613027f8 Node Normal Starting kubelet, ip-172-18-9-194.ec2.internal Starting kubelet.
>13m 13m 1 ip-172-18-9-194.ec2.internal.1553d250e28f0fda Node Normal NodeAllocatableEnforced kubelet, ip-172-18-9-194.ec2.internal Updated Node Allocatable limit across pods
>13m 13m 2 ip-172-18-9-194.ec2.internal.1553d25072f1978f Node Normal NodeHasNoDiskPressure kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasNoDiskPressure
>13m 13m 2 ip-172-18-9-194.ec2.internal.1553d25072f1134d Node Normal NodeHasSufficientMemory kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientMemory
>13m 13m 2 ip-172-18-9-194.ec2.internal.1553d25072f03a0e Node Normal NodeHasSufficientDisk kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientDisk
>13m 13m 2 ip-172-18-9-194.ec2.internal.1553d25072f2234e Node Normal NodeHasSufficientPID kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeHasSufficientPID
>13m 13m 1 ip-172-18-9-194.ec2.internal.1553d254b3de6920 Node Normal NodeNotReady kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeNotReady
>13m 13m 1 docker-registry-1-zpp9g.1553d25560261193 Pod Warning NetworkFailed openshift-sdn, ip-172-18-6-229.ec2.internal The pod's network interface has been lost and the pod will be stopped.
>13m 13m 1 router-1-7dzwn.1553d2554a2bc368 Pod Warning FailedCreatePodSandBox kubelet, ip-172-18-6-229.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "router-1-7dzwn": Error response from daemon: transport is closing
>13m 13m 1 ip-172-18-6-229.ec2.internal.1553d2558308dee1 Node Normal Starting openshift-sdn, ip-172-18-6-229.ec2.internal Starting openshift-sdn.
>13m 13m 1 router-1-7dzwn.1553d2557605e83b Pod Warning FailedCreatePodSandBox kubelet, ip-172-18-6-229.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "router-1-7dzwn": Error response from daemon: failed to update store for object type *libnetwork.endpoint: open : no such file or directory
>13m 13m 2 registry-console-1-29dwz.1553d253dcfa271f Pod Warning NetworkNotReady kubelet, ip-172-18-9-194.ec2.internal network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
>13m 13m 1 ip-172-18-9-194.ec2.internal.1553d2560bc2ef16 Node Normal Starting openshift-sdn, ip-172-18-9-194.ec2.internal Starting openshift-sdn.
>13m 13m 1 registry-console-1-29dwz.1553d255fe4692b1 Pod Warning NetworkFailed openshift-sdn, ip-172-18-9-194.ec2.internal The pod's network interface has been lost and the pod will be stopped.
>13m 13m 4 router-1-7dzwn.1553d254a3f1e083 Pod Normal SandboxChanged kubelet, ip-172-18-6-229.ec2.internal Pod sandbox changed, it will be killed and re-created.
>13m 13m 1 router-1-7dzwn.1553d256f606332b Pod spec.containers{router} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Container image "registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router:v3.11" already present on machine
>13m 37m 2 router-1-7dzwn.1553d0ffd1ecb061 Pod spec.containers{router} Normal Created kubelet, ip-172-18-6-229.ec2.internal Created container
>13m 13m 1 ip-172-18-9-194.ec2.internal.1553d2570c27e100 Node Normal NodeReady kubelet, ip-172-18-9-194.ec2.internal Node ip-172-18-9-194.ec2.internal status is now: NodeReady
>13m 37m 2 router-1-7dzwn.1553d0ffda093609 Pod spec.containers{router} Normal Started kubelet, ip-172-18-6-229.ec2.internal Started container
>13m 13m 1 docker-registry-1-zpp9g.1553d2581cee7940 Pod Warning FailedCreatePodSandBox kubelet, ip-172-18-6-229.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "94db32a3961e36760e0a4968db32f62439f937643fc39f50b9c77fffdfb9b74d" network for pod "docker-registry-1-zpp9g": NetworkPlugin cni failed to set up pod "docker-registry-1-zpp9g_default" network: failed to send CNI request: Post http://dummy/: dial unix /var/run/openshift-sdn/cni-server.sock: connect: connection refused, failed to clean up sandbox container "94db32a3961e36760e0a4968db32f62439f937643fc39f50b9c77fffdfb9b74d" network for pod "docker-registry-1-zpp9g": NetworkPlugin cni failed to teardown pod "docker-registry-1-zpp9g_default" network: failed to send CNI request: Post http://dummy/: dial unix /var/run/openshift-sdn/cni-server.sock: connect: connection refused]
>13m 13m 5 docker-registry-1-zpp9g.1553d254afd97312 Pod Normal SandboxChanged kubelet, ip-172-18-6-229.ec2.internal Pod sandbox changed, it will be killed and re-created.
>12m 12m 1 registry-console-1-29dwz.1553d259271f7920 Pod Normal SandboxChanged kubelet, ip-172-18-9-194.ec2.internal Pod sandbox changed, it will be killed and re-created.
>12m 12m 1 ip-172-18-6-229.ec2.internal.1553d259c3abd26a Node Normal NodeHasSufficientDisk kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasSufficientDisk
>12m 12m 1 ip-172-18-6-229.ec2.internal.1553d259b56e5852 Node Normal Starting kubelet, ip-172-18-6-229.ec2.internal Starting kubelet.
>12m 12m 1 ip-172-18-6-229.ec2.internal.1553d259c3acad00 Node Normal NodeHasSufficientMemory kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasSufficientMemory
>12m 12m 1 ip-172-18-6-229.ec2.internal.1553d259c3add6c9 Node Normal NodeHasSufficientPID kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasSufficientPID
>12m 12m 1 ip-172-18-6-229.ec2.internal.1553d259c3ad477f Node Normal NodeHasNoDiskPressure kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeHasNoDiskPressure
>12m 12m 1 ip-172-18-6-229.ec2.internal.1553d259cd76b979 Node Normal NodeNotReady kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeNotReady
>12m 12m 1 ip-172-18-6-229.ec2.internal.1553d259f05643df Node Normal NodeAllocatableEnforced kubelet, ip-172-18-6-229.ec2.internal Updated Node Allocatable limit across pods
>12m 12m 1 registry-console-1-29dwz.1553d25a97c13346 Pod spec.containers{registry-console} Normal Started kubelet, ip-172-18-9-194.ec2.internal Started container
>12m 12m 1 registry-console-1-29dwz.1553d25a8878f4ff Pod spec.containers{registry-console} Normal Pulled kubelet, ip-172-18-9-194.ec2.internal Container image "registry.reg-aws.openshift.com:443/openshift3/registry-console:v3.11" already present on machine
>12m 12m 1 registry-console-1-29dwz.1553d25a8c5f6ad7 Pod spec.containers{registry-console} Normal Created kubelet, ip-172-18-9-194.ec2.internal Created container
>12m 12m 1 docker-registry-1-zpp9g.1553d25c16cfebf6 Pod Warning NetworkFailed openshift-sdn, ip-172-18-6-229.ec2.internal The pod's network interface has been lost and the pod will be stopped.
>12m 12m 2 docker-registry-1-zpp9g.1553d25b62d063a9 Pod Normal SandboxChanged kubelet, ip-172-18-6-229.ec2.internal Pod sandbox changed, it will be killed and re-created.
>12m 12m 1 ip-172-18-6-229.ec2.internal.1553d25c25f86a13 Node Normal NodeReady kubelet, ip-172-18-6-229.ec2.internal Node ip-172-18-6-229.ec2.internal status is now: NodeReady
>12m 12m 1 ip-172-18-6-229.ec2.internal.1553d25c6afc8d9a Node Normal Starting openshift-sdn, ip-172-18-6-229.ec2.internal Starting openshift-sdn.
>12m 12m 1 docker-registry-1-zpp9g.1553d25cb610fe37 Pod spec.containers{registry} Normal Created kubelet, ip-172-18-6-229.ec2.internal Created container
>12m 12m 1 docker-registry-1-zpp9g.1553d25cbe2ad139 Pod spec.containers{registry} Normal Started kubelet, ip-172-18-6-229.ec2.internal Started container
>12m 12m 1 docker-registry-1-zpp9g.1553d25cb3808571 Pod spec.containers{registry} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Container image "registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry:v3.11" already present on machine
>12m 12m 1 ip-172-18-8-35.ec2.internal.1553d25eac08cbfb Node Normal Starting openshift-sdn, ip-172-18-8-35.ec2.internal Starting openshift-sdn.
>12m 12m 1 ip-172-18-8-35.ec2.internal.1553d262d6d77ad9 Node Normal Starting kubelet, ip-172-18-8-35.ec2.internal Starting kubelet.
>12m 12m 1 ip-172-18-8-35.ec2.internal.1553d262e435fe17 Node Normal NodeHasSufficientPID kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasSufficientPID
>12m 12m 1 ip-172-18-8-35.ec2.internal.1553d26310033e0b Node Normal NodeAllocatableEnforced kubelet, ip-172-18-8-35.ec2.internal Updated Node Allocatable limit across pods
>12m 12m 1 ip-172-18-8-35.ec2.internal.1553d262ede6d6be Node Normal NodeNotReady kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeNotReady
>12m 12m 1 ip-172-18-8-35.ec2.internal.1553d262e43462d8 Node Normal NodeHasSufficientMemory kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasSufficientMemory
>12m 12m 1 ip-172-18-8-35.ec2.internal.1553d262e43526c1 Node Normal NodeHasNoDiskPressure kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasNoDiskPressure
>12m 12m 1 ip-172-18-8-35.ec2.internal.1553d262e4336fe8 Node Normal NodeHasSufficientDisk kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeHasSufficientDisk
>12m 12m 1 ip-172-18-8-35.ec2.internal.1553d26544798a0f Node Normal NodeReady kubelet, ip-172-18-8-35.ec2.internal Node ip-172-18-8-35.ec2.internal status is now: NodeReady
>12m 12m 1 ip-172-18-8-35.ec2.internal.1553d26646a6bdcd Node Normal Starting openshift-sdn, ip-172-18-8-35.ec2.internal Starting openshift-sdn.
>11m 12m 14 ansible-service-broker.1553d25eb996d978 ClusterServiceBroker Warning ErrorFetchingCatalog service-catalog-controller-manager Error getting broker catalog: Get https://asb.openshift-ansible-service-broker.svc:1338/osb/v2/catalog: dial tcp 172.30.126.111:1338: getsockopt: no route to host
>11m 11m 1 router-2-deploy.1553d26e4dbf78d9 Pod Normal Scheduled default-scheduler Successfully assigned default/router-2-deploy to ip-172-18-6-229.ec2.internal
>11m 11m 1 router.1553d26e4b1f3eb5 DeploymentConfig Normal DeploymentCreated deploymentconfig-controller Created new replication controller "router-2" for version 2
>11m 11m 1 router-2-deploy.1553d26ed361ed13 Pod spec.containers{deployment} Normal Started kubelet, ip-172-18-6-229.ec2.internal Started container
>11m 11m 1 router-2-deploy.1553d26ec9478216 Pod spec.containers{deployment} Normal Created kubelet, ip-172-18-6-229.ec2.internal Created container
>11m 11m 1 router-2-deploy.1553d26ec6baac49 Pod spec.containers{deployment} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Container image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.11.0-0.32.0" already present on machine
>11m 11m 1 router-1.1553d26f639f1870 ReplicationController Normal SuccessfulDelete replication-controller Deleted pod: router-1-7dzwn
>11m 11m 1 router-1-7dzwn.1553d26f687ad743 Pod spec.containers{router} Normal Killing kubelet, ip-172-18-6-229.ec2.internal Killing container with id docker://router:Need to kill Pod
>11m 11m 1 router-2-z5hfw.1553d26fa173b7ce Pod Warning FailedScheduling default-scheduler 0/3 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 2 node(s) didn't match node selector.
>11m 11m 1 router-2.1553d26fa17e755a ReplicationController Normal SuccessfulCreate replication-controller Created pod: router-2-z5hfw
>11m 11m 1 router-2-z5hfw.1553d26fd2f4d834 Pod Normal Scheduled default-scheduler Successfully assigned default/router-2-z5hfw to ip-172-18-6-229.ec2.internal
>11m 11m 1 router-2-z5hfw.1553d26ffe6e76fa Pod spec.containers{router} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Successfully pulled image "registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router:v3.11.0"
>11m 11m 1 router-2-z5hfw.1553d27009db74b2 Pod spec.containers{router} Normal Started kubelet, ip-172-18-6-229.ec2.internal Started container
>11m 11m 1 router-2-z5hfw.1553d27000e1f3f6 Pod spec.containers{router} Normal Created kubelet, ip-172-18-6-229.ec2.internal Created container
>11m 11m 1 router-2-z5hfw.1553d26ff30ddadd Pod spec.containers{router} Normal Pulling kubelet, ip-172-18-6-229.ec2.internal pulling image "registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router:v3.11.0"
>11m 11m 1 docker-registry.1553d271e5d74602 DeploymentConfig Normal DeploymentCreated deploymentconfig-controller Created new replication controller "docker-registry-2" for version 2
>11m 11m 1 docker-registry-2-deploy.1553d271e852b999 Pod Normal Scheduled default-scheduler Successfully assigned default/docker-registry-2-deploy to ip-172-18-6-229.ec2.internal
>11m 11m 1 docker-registry-2-deploy.1553d2728e443ae8 Pod spec.containers{deployment} Normal Started kubelet, ip-172-18-6-229.ec2.internal Started container
>11m 11m 1 docker-registry-2-deploy.1553d27285398a1e Pod spec.containers{deployment} Normal Created kubelet, ip-172-18-6-229.ec2.internal Created container
>11m 11m 1 docker-registry-2-deploy.1553d27283114253 Pod spec.containers{deployment} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Container image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.11.0-0.32.0" already present on machine
>11m 11m 1 docker-registry-2-f6vfg.1553d272e36b08a9 Pod Normal Scheduled default-scheduler Successfully assigned default/docker-registry-2-f6vfg to ip-172-18-6-229.ec2.internal
>11m 11m 1 docker-registry-2.1553d272e2f7d1c0 ReplicationController Normal SuccessfulCreate replication-controller Created pod: docker-registry-2-f6vfg
>11m 11m 1 docker-registry-2-f6vfg.1553d27366b8e174 Pod spec.containers{registry} Normal Created kubelet, ip-172-18-6-229.ec2.internal Created container
>11m 11m 1 docker-registry-2-f6vfg.1553d27362935cf3 Pod spec.containers{registry} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Container image "registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry:v3.11" already present on machine
>11m 11m 1 docker-registry-2-f6vfg.1553d2736f557a86 Pod spec.containers{registry} Normal Started kubelet, ip-172-18-6-229.ec2.internal Started container
>10m 10m 1 docker-registry.1553d27cfb31a9b3 DeploymentConfig Normal ReplicationControllerScaled deploymentconfig-controller Scaled replication controller "docker-registry-2" from 1 to 0
>10m 10m 1 docker-registry-2-f6vfg.1553d281adb0df6f Pod spec.containers{registry} Normal Killing kubelet, ip-172-18-6-229.ec2.internal Killing container with id docker://registry:Need to kill Pod
>10m 10m 1 docker-registry-2.1553d281a7151283 ReplicationController Normal SuccessfulDelete replication-controller Deleted pod: docker-registry-2-f6vfg
>4m 11m 2 template-service-broker.1553d26d6d1c0b61 ClusterServiceBroker Normal FetchedCatalog service-catalog-controller-manager Successfully fetched catalog entries from broker.
>3m 3m 1 docker-registry.1553d2d802e0cf75 DeploymentConfig Normal DeploymentCreated deploymentconfig-controller Created new replication controller "docker-registry-3" for version 3
>3m 3m 1 docker-registry-3-deploy.1553d2d8059894e6 Pod Normal Scheduled default-scheduler Successfully assigned default/docker-registry-3-deploy to ip-172-18-6-229.ec2.internal
>3m 3m 1 docker-registry-3-deploy.1553d2d872ca9f7a Pod spec.containers{deployment} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Container image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.11.0-0.32.0" already present on machine
>3m 3m 1 docker-registry-3-deploy.1553d2d8756509ad Pod spec.containers{deployment} Normal Created kubelet, ip-172-18-6-229.ec2.internal Created container
>3m 3m 1 docker-registry-3-deploy.1553d2d87e09b958 Pod spec.containers{deployment} Normal Started kubelet, ip-172-18-6-229.ec2.internal Started container
>3m 3m 1 docker-registry-3-qlqh2.1553d2d8ce7df6fa Pod Normal Scheduled default-scheduler Successfully assigned default/docker-registry-3-qlqh2 to ip-172-18-6-229.ec2.internal
>3m 3m 1 docker-registry-3.1553d2d8cd8e442e ReplicationController Normal SuccessfulCreate replication-controller Created pod: docker-registry-3-qlqh2
>3m 3m 1 docker-registry-3-qlqh2.1553d2d961211917 Pod spec.containers{registry} Normal Created kubelet, ip-172-18-6-229.ec2.internal Created container
>3m 3m 1 docker-registry-3-qlqh2.1553d2d95e87c5ba Pod spec.containers{registry} Normal Pulled kubelet, ip-172-18-6-229.ec2.internal Container image "registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry:v3.11" already present on machine
>3m 3m 1 docker-registry-3-qlqh2.1553d2d96a1a60b6 Pod spec.containers{registry} Normal Started kubelet, ip-172-18-6-229.ec2.internal Started container
>3m 3m 1 docker-registry-1.1553d2dbdf8ba7b0 ReplicationController Normal SuccessfulDelete replication-controller Deleted pod: docker-registry-1-zpp9g
>3m 3m 1 docker-registry-1-zpp9g.1553d2dbe3eaefac Pod spec.containers{registry} Normal Killing kubelet, ip-172-18-6-229.ec2.internal Killing container with id docker://registry:Need to kill Pod
>3m 3m 1 docker-registry-3-deploy.1553d2dc83607118 Pod spec.containers{deployment} Normal Killing kubelet, ip-172-18-6-229.ec2.internal Killing container with id docker://deployment:Need to kill Pod
>25s 10m 3 ansible-service-broker.1553d27c9d252864 ClusterServiceBroker Normal FetchedCatalog service-catalog-controller-manager Successfully fetched catalog entries from broker.
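The Warning events (NetworkFailed, FailedCreatePodSandBox, NetworkNotReady, ErrorFetchingCatalog) are the rows that matter for this bug; the Normal rows are routine node and deployment churn. A minimal sketch for pulling those out, assuming the dump above has been saved locally as eventlist.txt (the attached filename) — the filter keys on the TYPE column of the `oc get events` layout:

```shell
# Extract Warning-type events from a saved `oc get events` dump.
# Assumes the attachment content is in eventlist.txt, one event per line,
# with columns: LAST SEEN, FIRST SEEN, COUNT, NAME, KIND, SUBOBJECT,
# TYPE, REASON, SOURCE, MESSAGE.
grep ' Warning ' eventlist.txt

# On a live cluster the equivalent filter can be applied server-side
# (event field selectors; availability depends on client/server version):
# oc get events -n default --field-selector type=Warning
```

Grepping on the literal token ` Warning ` (with surrounding spaces) avoids false matches inside event messages.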