Bug 1339060
| Summary: | ImagePullBackOff when deploying Elasticsearch | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Jaspreet Kaur <jkaur> |
| Component: | Logging | Assignee: | Luke Meyer <lmeyer> |
| Status: | CLOSED DUPLICATE | QA Contact: | chunchen <chunchen> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.2.0 | CC: | aos-bugs, steven, wsun |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-05-24 14:11:58 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Recently-discovered issue; I am only surprised that it took this long for a customer to report the problem. See the original bug for a workaround while we work on a technical fix. Running `oc deploy --latest` as this user did won't help, and it is not the intent that the tags be updated.

*** This bug has been marked as a duplicate of bug 1338965 ***
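A quick way to confirm the mismatch before the fix lands is to compare the tags the DeploymentConfig triggers reference against the tags actually present in the logging ImageStreams. A minimal sketch, assuming the stacks were deployed into the `logging` project and that the ImageStream names match the trigger output in the description below:

```
# Tags the DeploymentConfig triggers point at, e.g. logging-kibana:3.2.0.
oc get dc -n logging

# Tags the ImageStreams actually contain, e.g. logging-kibana:3.2.0-4.
# If the trigger tag never resolves, the pod template keeps its
# placeholder image name ("logging-kibana") and the kubelet tries to
# pull library/logging-kibana:latest, which fails as in the events below.
oc get is -n logging
oc describe is logging-kibana -n logging
```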
Description of problem:

After installing OpenShift Enterprise 3.2, we see issues when deploying the Elasticsearch, Fluentd, and Kibana pods.

Version-Release number of selected component (if applicable):

The error below is seen:

```
oc get dc
NAME                      REVISION   REPLICAS   TRIGGERED BY
logging-es-038eerlv       3          1          config,image(logging-elasticsearch:3.2.0)
logging-es-ops-o5qti2ej   0          1          config,image(logging-elasticsearch:3.2.0)
logging-es-ops-t7f10fs5   0          1          config,image(logging-elasticsearch:3.2.0)
logging-es-ops-v8hoxck4   0          1          config,image(logging-elasticsearch:3.2.0)
logging-es-tfpilm8g       0          1          config,image(logging-elasticsearch:3.2.0)
logging-es-y5uxef2o       0          1          config,image(logging-elasticsearch:3.2.0)
logging-fluentd           0          0          config,image(logging-fluentd:3.2.0)
logging-kibana            1          1          config,image(logging-auth-proxy:3.2.0),image(logging-kibana:3.2.0)
logging-kibana-ops        0          1          config,image(logging-auth-proxy:3.2.0),image(logging-kibana:3.2.0)

[root@dppuosif001 ~]# oc describe pods logging-kibana-1-x9ey0
Name:           logging-kibana-1-x9ey0
Namespace:      logging
Node:           dppuosif011.server.lan/172.31.180.21
Start Time:     Mon, 23 May 2016 08:28:46 +0200
Labels:         component=kibana,deployment=logging-kibana-1,deploymentconfig=logging-kibana,provider=openshift
Status:         Pending
IP:             10.212.2.3
Controllers:    ReplicationController/logging-kibana-1
Containers:
  kibana:
    Container ID:
    Image:          logging-kibana
    Image ID:
    Port:
    QoS Tier:
      cpu:          BestEffort
      memory:       BestEffort
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment Variables:
      ES_HOST:      logging-es
      ES_PORT:      9200
  kibana-proxy:
    Container ID:
    Image:          openshift-auth-proxy
    Image ID:
    Port:           3000/TCP
    QoS Tier:
      memory:       BestEffort
      cpu:          BestEffort
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment Variables:
      OAP_BACKEND_URL:        http://localhost:5601
      OAP_AUTH_MODE:          oauth2
      OAP_TRANSFORM:          user_header,token_header
      OAP_OAUTH_ID:           kibana-proxy
      OAP_MASTER_URL:         https://kubernetes.default.svc.cluster.local
      OAP_PUBLIC_MASTER_URL:  https://os-cluster.server.lan:8443
      OAP_LOGOUT_REDIRECT:    https://os-cluster.server.lan:8443/console/logout
      OAP_MASTER_CA_FILE:     /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      OAP_DEBUG:              false
Conditions:
  Type    Status
  Ready   False
Volumes:
  kibana:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  logging-kibana
  kibana-proxy:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  logging-kibana-proxy
  aggregated-logging-kibana-token-b59uf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  aggregated-logging-kibana-token-b59uf
Events:
  FirstSeen  LastSeen  Count  From                        SubobjectPath                  Type     Reason      Message
  ---------  --------  -----  ----                        -------------                  ----     ------      -------
  38s        38s       1      {default-scheduler }                                       Normal   Scheduled   Successfully assigned logging-kibana-1-x9ey0 to dppuosif011.server.lan
  27s        27s       1      {kubelet node.example.com}  spec.containers{kibana-proxy}  Normal   Pulling     pulling image "openshift-auth-proxy"
  27s        27s       1      {kubelet node.example.com}  spec.containers{kibana}        Warning  Failed      Failed to pull image "logging-kibana": Error: image library/logging-kibana:latest not found
  17s        17s       1      {kubelet node.example.com}  spec.containers{kibana-proxy}  Warning  Failed      Failed to pull image "openshift-auth-proxy": Error: image library/openshift-auth-proxy:latest not found
  17s        17s       1      {kubelet node.example.com}                                 Warning  FailedSync  Error syncing pod, skipping: [failed to "StartContainer" for "kibana" with ImagePullBackOff: "Back-off pulling image \"logging-kibana\"", failed to "StartContainer" for "kibana-proxy" with ImagePullBackOff: "Back-off pulling image \"openshift-auth-proxy\""]
  17s        17s       1      {kubelet node.example.com}                                 Warning  FailedSync  Error syncing pod, skipping: [failed to "StartContainer" for "kibana" with ErrImagePull: "Error: image library/logging-kibana:latest not found", failed to "StartContainer" for "kibana-proxy" with ErrImagePull: "Error: image library/openshift-auth-proxy:latest not found"]
  17s        17s       1      {kubelet node.example.com}  spec.containers{kibana}        Normal   BackOff     Back-off pulling image "logging-kibana"
  17s        17s       1      {kubelet node.example.com}  spec.containers{kibana-proxy}  Normal   BackOff     Back-off pulling image "openshift-auth-proxy"
  37s        2s        2      {kubelet node.example.com}  spec.containers{kibana}        Normal   Pulling     pulling image "logging-kibana"
```

Actual results:

Pulling the Kibana and Elasticsearch images always fails.

Expected results:

This should not happen with the default templates.

Additional info:

Changing the DeploymentConfigs to fully qualified image tags resolved the issue:

```
oc get dc
NAME                  REVISION   REPLICAS   TRIGGERED BY
logging-es-yr8bhk5x   3          1          config,image(logging-elasticsearch:3.2.0-8)
logging-fluentd       3          1          config,image(logging-fluentd:3.2.0-8)
logging-kibana        4          2          config,image(logging-auth-proxy:3.2.0-4),image(logging-kibana:3.2.0-4)
```