Bug 1819598

Summary: Console shows status "Completed" even when a pod container is in ImagePullBackOff status
Product: OpenShift Container Platform
Reporter: shahan <hasha>
Component: Management Console
Assignee: Samuel Padgett <spadgett>
Status: CLOSED ERRATA
QA Contact: Yadan Pei <yapei>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 4.5
CC: aos-bugs, jokerman, spadgett
Target Milestone: ---
Target Release: 4.5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Clones: 1823593 (view as bug list)
Environment:
Last Closed: 2020-07-13 17:24:43 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1823593

Description shahan 2020-04-01 07:12:08 UTC
Version-Release number of selected component (if applicable):
4.5.0-0.nightly-2020-03-31-203533

How reproducible:
Always


Steps to Reproduce:
1. Create a pod with the following YAML:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  restartPolicy: Never
  containers:
  - name: container1
    image: busybox_error
    imagePullPolicy: Always
    command: ["sleep", "1"]
  - name: container2
    image: busybox
    imagePullPolicy: Always
    command: ["sleep", "100"]
2. Check the pod status on the pod list page.

Actual results:
The pod status displays ImagePullBackOff/ErrImagePull at first, then changes to "Completed" after a while.

Expected results:
It should show "ImagePullBackOff", as the CLI does:
$ oc get po -n hasha-pro1
NAME                  READY   STATUS             RESTARTS   AGE
firstcontainererror   0/2     ImagePullBackOff   0          13m


Additional info:
This issue only seems to reproduce on pods with two or more containers, where one container's status is Completed.
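The behavior is consistent with status aggregation that reports a successfully terminated container before checking whether a sibling container is stuck waiting. The sketch below is a hypothetical Python model of that priority-order mistake, not the actual console code (which is TypeScript); function names and the simplified status dicts are illustrative assumptions.

```python
def naive_pod_status(container_statuses):
    """Buggy aggregation (hypothetical): reports "Completed" as soon as any
    container has terminated successfully, ignoring containers stuck waiting."""
    for cs in container_statuses:
        term = cs.get("state", {}).get("terminated")
        if term and term.get("exitCode") == 0:
            return "Completed"
    return "Pending"

def fixed_pod_status(container_statuses):
    """Aggregation matching `oc get pods`: a waiting reason such as
    ImagePullBackOff takes priority over a completed sibling container."""
    for cs in container_statuses:
        waiting = cs.get("state", {}).get("waiting")
        if waiting and waiting.get("reason"):
            return waiting["reason"]
    for cs in container_statuses:
        term = cs.get("state", {}).get("terminated")
        if term and term.get("reason"):
            return term["reason"]
    return "Pending"

# Container states as reported by `oc describe po firstcontainererror` above:
# container1 is waiting on ImagePullBackOff, container2 terminated cleanly.
statuses = [
    {"name": "container1",
     "state": {"waiting": {"reason": "ImagePullBackOff"}}},
    {"name": "container2",
     "state": {"terminated": {"reason": "Completed", "exitCode": 0}}},
]
```

With these inputs, `naive_pod_status` returns "Completed" while `fixed_pod_status` returns "ImagePullBackOff", matching the console-vs-CLI discrepancy described in this report.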

[hasha@localhost ~]$ oc describe po firstcontainererror -n hasha-pro1
Name:         firstcontainererror
Namespace:    hasha-pro1
Priority:     0
Node:         ip-10-0-169-152.ap-south-1.compute.internal/10.0.169.152
Start Time:   Wed, 01 Apr 2020 14:25:11 +0800
Labels:       <none>
Annotations:  k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.129.2.26"
                    ],
                    "dns": {},
                    "default-route": [
                        "10.129.2.1"
                    ]
                }]
              openshift.io/scc: anyuid
Status:       Pending
IP:           10.129.2.26
IPs:
  IP:  10.129.2.26
Containers:
  container1:
    Container ID:  
    Image:         busyboxerror
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      1
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7srqv (ro)
  container2:
    Container ID:  cri-o://37fdda365a524d272a38bb24e8d5d5f1025c02c1cacb3fe03522ec84241616ce
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:afe605d272837ce1732f390966166c2afff5391208ddd57de10942748694049d
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      100
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 01 Apr 2020 14:25:25 +0800
      Finished:     Wed, 01 Apr 2020 14:27:05 +0800
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7srqv (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-7srqv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7srqv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                                                  Message
  ----     ------     ----               ----                                                  -------
  Normal   Scheduled  <unknown>          default-scheduler                                     Successfully assigned hasha-pro1/firstcontainererror to ip-10-0-169-152.ap-south-1.compute.internal
  Normal   Pulling    36m                kubelet, ip-10-0-169-152.ap-south-1.compute.internal  Pulling image "busybox"
  Normal   Started    36m                kubelet, ip-10-0-169-152.ap-south-1.compute.internal  Started container container2
  Normal   Pulled     36m                kubelet, ip-10-0-169-152.ap-south-1.compute.internal  Successfully pulled image "busybox"
  Normal   Created    36m                kubelet, ip-10-0-169-152.ap-south-1.compute.internal  Created container container2
  Normal   Pulling    36m (x3 over 36m)  kubelet, ip-10-0-169-152.ap-south-1.compute.internal  Pulling image "busyboxerror"
  Warning  Failed     35m (x3 over 36m)  kubelet, ip-10-0-169-152.ap-south-1.compute.internal  Failed to pull image "busyboxerror": rpc error: code = Unknown desc = Error reading manifest latest in docker.io/library/busyboxerror: errors:
denied: requested access to the resource is denied
unauthorized: authentication required
  Warning  Failed   35m (x3 over 36m)     kubelet, ip-10-0-169-152.ap-south-1.compute.internal  Error: ErrImagePull
  Warning  Failed   35m (x6 over 36m)     kubelet, ip-10-0-169-152.ap-south-1.compute.internal  Error: ImagePullBackOff
  Normal   BackOff  113s (x150 over 36m)  kubelet, ip-10-0-169-152.ap-south-1.compute.internal  Back-off pulling image "busyboxerror"

Comment 3 shahan 2020-04-03 02:58:59 UTC
The console status is aligned with the CLI now.
4.5.0-0.ci-2020-04-03-014149

Comment 5 errata-xmlrpc 2020-07-13 17:24:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409