Bug 1873275 - kubectl example for oc new-project results in a pod stuck in ImagePullBackoff
Summary: kubectl example for oc new-project results in a pod stuck in ImagePullBackoff
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.6.0
Assignee: Maciej Szulik
QA Contact: zhou ying
Duplicates: 1879051
Depends On:
Reported: 2020-08-27 18:25 UTC by Mike Fiedler
Modified: 2020-09-15 11:21 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed:
Target Upstream Version:


System ID Priority Status Summary Last Updated
Github openshift oc pull 536 None closed Bug 1873275: Use echoserver as simple k8s application 2020-09-23 02:25:07 UTC
Github openshift oc pull 544 None closed Bug 1873275: fix kubectl example 2020-09-23 02:25:04 UTC

Description Mike Fiedler 2020-08-27 18:25:49 UTC
Description of problem:

Running the command suggested after oc new-project:

kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

Results in:

NAME                          READY   STATUS             RESTARTS   AGE
hello-node-855447ffcc-cjl8l   0/1     ImagePullBackOff   0          2m47s

86s         Normal    Pulling             pod/hello-node-855447ffcc-cjl8l    Pulling image "gcr.io/hello-minikube-zero-install/hello-node"
85s         Warning   Failed              pod/hello-node-855447ffcc-cjl8l    Failed to pull image "gcr.io/hello-minikube-zero-install/hello-node": rpc error: code = Unknown desc = Requesting bear token: invalid status code from registry 400 (Bad Request)
85s         Warning   Failed              pod/hello-node-855447ffcc-cjl8l    Error: ErrImagePull
56s         Normal    BackOff             pod/hello-node-855447ffcc-cjl8l    Back-off pulling image "gcr.io/hello-minikube-zero-install/hello-node"
70s         Warning   Failed              pod/hello-node-855447ffcc-cjl8l    Error: ImagePullBackOff

Version-Release number of selected component (if applicable): 4.6.0-0.nightly-2020-08-27-005538

How reproducible: Always

Steps to Reproduce:  See above

Expected results: The sample CLI command works and the pod reaches Running.

Additional info:
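A quick way to reproduce and inspect the failure (a sketch; the deployment created by `kubectl create deployment hello-node` carries the default app=hello-node label, so a label selector can be used instead of the exact pod name):

    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
    kubectl get pods -l app=hello-node                  # pod stuck in ImagePullBackOff
    kubectl describe pod -l app=hello-node              # "Failed to pull image ..." in Events
    kubectl get events --field-selector reason=Failed   # only the pull failures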

Comment 3 zhou ying 2020-09-01 05:45:45 UTC
Confirmed with the latest version. The image pull issue has been fixed, but the pod still can't reach Running:

[root@dhcp-140-138 ~]# ./kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
deployment.apps/hello-node created
[root@dhcp-140-138 ~]# ./kubectl get po 
NAME                          READY   STATUS             RESTARTS   AGE
hello-node-7567d9fdc9-752dz   0/1     CrashLoopBackOff   3          86s

[root@dhcp-140-138 ~]# ./kubectl describe po/hello-node-7567d9fdc9-752dz
Name:         hello-node-7567d9fdc9-752dz
Namespace:    zhouy
Priority:     0
Node:         ip-10-0-187-49.ap-northeast-2.compute.internal/
Start Time:   Tue, 01 Sep 2020 13:41:03 +0800
Labels:       app=hello-node
Annotations:  k8s.v1.cni.cncf.io/network-status:
                    [{"name": "", "interface": "eth0", "ips": [...], "default": true, "dns": {}}]
              openshift.io/scc: restricted
Status:       Running
Controlled By:  ReplicaSet/hello-node-7567d9fdc9
Containers:
  echoserver:
    Container ID:   cri-o://71622c4af6bf6e47d566a69788a4621d424359077b9fe9f525e180ab80bb0c6a
    Image:          k8s.gcr.io/echoserver:1.4
    Image ID:       k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 01 Sep 2020 13:41:28 +0800
      Finished:     Tue, 01 Sep 2020 13:41:28 +0800
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tgtfb (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-tgtfb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tgtfb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                From                                                     Message
  ----     ------          ----               ----                                                     -------
  Normal   Scheduled       <unknown>                                                                   Successfully assigned zhouy/hello-node-7567d9fdc9-752dz to ip-10-0-187-49.ap-northeast-2.compute.internal
  Normal   AddedInterface  42s                multus                                                   Add eth0 []
  Normal   Pulling         41s                kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Pulling image "k8s.gcr.io/echoserver:1.4"
  Normal   Pulled          33s                kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Successfully pulled image "k8s.gcr.io/echoserver:1.4" in 8.041931132s
  Normal   Created         18s (x3 over 33s)  kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Created container echoserver
  Normal   Started         18s (x3 over 33s)  kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Started container echoserver
  Normal   Pulled          18s (x2 over 33s)  kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Container image "k8s.gcr.io/echoserver:1.4" already present on machine
  Warning  BackOff         4s (x4 over 32s)   kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Back-off restarting failed container

[root@dhcp-140-138 ~]# ./kubectl logs -f po/hello-node-7567d9fdc9-752dz
2020/09/01 05:42:39 [emerg] 1#1: mkdir() "/var/lib/nginx/proxy" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/lib/nginx/proxy" failed (13: Permission denied)
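This looks consistent with the restricted SCC noted in the pod annotations: under that SCC the container runs as an arbitrary non-root UID, while the echoserver:1.4 image expects to run nginx as root and create /var/lib/nginx/proxy, which is what fails with the permission error above. A rough way to confirm the injected UID (a sketch, assuming the pod name from the output above):

    # the SCC admission plugin injects a runAsUser from the namespace's UID range
    kubectl get pod hello-node-7567d9fdc9-752dz -o yaml | grep runAsUser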

Comment 5 zhou ying 2020-09-09 08:45:45 UTC
The issue has been fixed:
[zhouying@dhcp-140-138 ~]$ oc new-project zhouyt1
Now using project "zhouyt1" on server "https://api.zy09096.qe.devcluster.openshift.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

[zhouying@dhcp-140-138 ~]$  kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname
deployment.apps/hello-node created
[zhouying@dhcp-140-138 ~]$ oc get po 
NAME                          READY   STATUS    RESTARTS   AGE
hello-node-6d9d95bb88-4wmp5   1/1     Running   0          11s
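For what it's worth, the serve_hostname image is expected to listen on port 9376 (an assumption, not verified in this bug), so the new example can be checked end-to-end with something like:

    # forward the container port locally and query it; it should answer with the pod name
    kubectl port-forward deployment/hello-node 9376:9376 &
    curl http://127.0.0.1:9376      # e.g. hello-node-6d9d95bb88-4wmp5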

Comment 6 Maciej Szulik 2020-09-15 11:21:21 UTC
*** Bug 1879051 has been marked as a duplicate of this bug. ***
