Bug 1508828

Summary: [CRI-O] Can't show image digests info on node
Product: OpenShift Container Platform
Reporter: DeShuai Ma <dma>
Component: Containers
Assignee: Nalin Dahyabhai <nalin>
Status: CLOSED ERRATA
QA Contact: Weinan Liu <weinliu>
Severity: medium
Priority: medium
Docs Contact:
Version: 3.7.0
CC: amurdaca, aos-bugs, jhonce, jokerman, mmccomas, wjiang
Target Milestone: ---
Target Release: 3.9.z
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-05-17 06:42:42 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description DeShuai Ma 2017-11-02 10:13:34 UTC
Description of problem:
On a node running CRI-O, the node object returned by "oc get node <name> -o yaml" does not include image digest information: the entries under status.images list only tag-form names, with no repo@sha256:... digests.
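
A quick way to check this from the command line is to look at the image names the node reports in its status (a sketch, using the node name from the steps below):

[root@ip-172-18-14-130 ~]# oc get no ip-172-18-14-130.ec2.internal -o jsonpath='{.status.images[*].names}'

On the affected node this prints only tag-form names such as registry.ops.openshift.com/openshift3/ose:v3.7; no repo@sha256:... entries appear, even though kpod shows the digests are available locally (see the steps below).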

Version-Release number of selected component (if applicable):
openshift v3.7.0-0.188.0
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8
cri-o 1.0.2

How reproducible:
Always

Steps to Reproduce:
1.
[root@ip-172-18-14-130 ~]# ./kpod images --digests
IMAGE ID               IMAGE NAME                                                                           DIGEST                                                                       CREATED AT               SIZE
5277144bc0dc           [docker.io/kubernetes/pause:latest                       ]                           sha256:11194928cd2e79ec411b3f0cc82e7937cdfbc3989fa4a8ceb006407ccf92b552      Jul 19, 2014 07:02       241 KB
dfba30225ff2           [<none>                                                  ]                           sha256:5d00a02c942eee3f4bd360fefb5000363a1d0b8d5d2a6efcf01f2d89f1eb6c24      Jul 19, 2014 07:02       241 KB
9769dda4c2b0           [registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.7.0-0.188.0]   sha256:ac7a212cc057a19b0dee12a0c082278bad9d0438a3e078564ee8b7f0074a70c4      Oct 31, 2017 07:25       253.1 MB
6c3965eec06c           [registry.ops.openshift.com/openshift3/ose:v3.7          ]                           sha256:c637f2a9ca534b88fac6770e29a43248d4cbd56382ba056516590f3d98206cde      Nov 1, 2017 06:06        1010 MB
8d43b6d7c8f9           [docker.io/ocpqe/hello-pod:latest                        ]                           sha256:289953c559120c7d2ca92d92810885887ee45c871c373a1e492e845eca575b8c      Aug 10, 2017 10:10       282.8 MB
ecff0e57fdda           [registry.access.redhat.com/rhscl/mongodb-32-rhel7:latest]                           sha256:7cd2ac01a55786ecdcfa0d11fc378ef3e22b0eb84c82df546c36941b1910dd3d      Oct 20, 2017 12:30       549.1 MB
e6919157c185           [docker.io/deshuai/hello-pod:latest                      ]                           sha256:289953c559120c7d2ca92d92810885887ee45c871c373a1e492e845eca575b8c      Aug 10, 2017 10:10       282.8 MB
253cf78fc7e1           [docker.io/openshift/oauth-proxy:v1.0.0                  ]                           sha256:df98e39697cb4d3f3dda7beae4c875ee318f217868b94400ff1e4dbeb38ab809      Oct 27, 2017 17:32       217.7 MB
6f5824692ce4           [docker.io/openshift/prometheus:v2.0.0-dev.3             ]                           sha256:61041872a61902c0dfd10ee5b3dc3fe0f465e98b6e0611f4fa5e26cb4feea4be      Oct 3, 2017 03:58        258.5 MB
39bd27b1c708           [docker.io/openshift/prometheus-alert-buffer:v0.0.2      ]                           sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba      Oct 3, 2017 04:03        191.2 MB
a50d70f4de08           [docker.io/openshift/prometheus-alertmanager:v0.9.1      ]                           sha256:9367a268442151beaab2938cfb8fd103d116c495d28c9228c86e425c0d86509f      Oct 3, 2017 04:02        210.7 MB
27b79e7a8ced           [gcr.io/google_containers/busybox:latest                 ]                           sha256:6632a6043ee0735c8592e7c617b192687ee7154e603948d409b598a4ee865d1e      Dec 31, 2014 22:23       2.326 MB
[root@ip-172-18-14-130 ~]# 
[root@ip-172-18-14-130 ~]# oc get no
NAME                            STATUS    AGE       VERSION
ip-172-18-14-130.ec2.internal   Ready     9h        v1.7.6+a08f5eeb62
[root@ip-172-18-14-130 ~]# oc get po ip-172-18-14-130.ec2.internal -o yaml
Error from server (NotFound): pods "ip-172-18-14-130.ec2.internal" not found
[root@ip-172-18-14-130 ~]# oc get no ip-172-18-14-130.ec2.internal -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: 2017-11-02T00:48:16Z
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: m3.large
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: us-east-1
    failure-domain.beta.kubernetes.io/zone: us-east-1d
    kubernetes.io/hostname: ip-172-18-14-130.ec2.internal
    openshift-infra: apiserver
    registry: "true"
    role: node
    router: "true"
  name: ip-172-18-14-130.ec2.internal
  resourceVersion: "71370"
  selfLink: /api/v1/nodes/ip-172-18-14-130.ec2.internal
  uid: 8344e835-bf67-11e7-a3f1-0e6509b58fe4
spec:
  externalID: i-02b04559372fcca6e
  providerID: aws:///us-east-1d/i-02b04559372fcca6e
  taints:
  - effect: NoSchedule
    key: role
    timeAdded: null
    value: master
status:
  addresses:
  - address: 172.18.14.130
    type: InternalIP
  - address: 54.210.143.16
    type: ExternalIP
  - address: ip-172-18-14-130.ec2.internal
    type: InternalDNS
  - address: ec2-54-210-143-16.compute-1.amazonaws.com
    type: ExternalDNS
  - address: ip-172-18-14-130.ec2.internal
    type: Hostname
  allocatable:
    cpu: "2"
    memory: 7390684Ki
    pods: "250"
  capacity:
    cpu: "2"
    memory: 7493084Ki
    pods: "250"
  conditions:
  - lastHeartbeatTime: 2017-11-02T10:06:37Z
    lastTransitionTime: 2017-11-02T09:31:53Z
    message: kubelet has sufficient disk space available
    reason: KubeletHasSufficientDisk
    status: "False"
    type: OutOfDisk
  - lastHeartbeatTime: 2017-11-02T10:06:37Z
    lastTransitionTime: 2017-11-02T09:31:53Z
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: 2017-11-02T10:06:37Z
    lastTransitionTime: 2017-11-02T09:31:53Z
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: 2017-11-02T10:06:37Z
    lastTransitionTime: 2017-11-02T09:32:03Z
    message: kubelet is posting ready status
    reason: KubeletReady
    status: "True"
    type: Ready
  daemonEndpoints:
    kubeletEndpoint:
      Port: 10250
  images:
  - names:
    - registry.ops.openshift.com/openshift3/ose:v3.7
    sizeBytes: 1058793303
  - names:
    - registry.access.redhat.com/rhscl/mongodb-32-rhel7:latest
    sizeBytes: 575801242
  - names:
    - docker.io/deshuai/hello-pod:latest
    sizeBytes: 296589274
  - names:
    - docker.io/ocpqe/hello-pod:latest
    sizeBytes: 296589274
  - names:
    - docker.io/openshift/prometheus:v2.0.0-dev.3
    sizeBytes: 271099290
  - names:
    - registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.7.0-0.188.0
    sizeBytes: 265421625
  - names:
    - docker.io/openshift/oauth-proxy:v1.0.0
    sizeBytes: 228244754
  - names:
    - docker.io/openshift/prometheus-alertmanager:v0.9.1
    sizeBytes: 220898782
  - names:
    - docker.io/openshift/prometheus-alert-buffer:v0.0.2
    sizeBytes: 200523944
  - names:
    - gcr.io/google_containers/busybox:latest
    sizeBytes: 2439416
  - names:
    - docker.io/kubernetes/pause:latest
    sizeBytes: 246793
  - names: null
    sizeBytes: 246793
  nodeInfo:
    architecture: amd64
    bootID: 0841ae2d-a94f-42e6-9e60-99fb0510d850
    containerRuntimeVersion: cri-o://1.0.2
    kernelVersion: 3.10.0-693.5.2.el7.x86_64
    kubeProxyVersion: v1.7.6+a08f5eeb62
    kubeletVersion: v1.7.6+a08f5eeb62
    machineID: 5d23ff8f285e44988da260d1e65952d2
    operatingSystem: linux
    osImage: Red Hat Enterprise Linux Server 7.4 (Maipo)
    systemUUID: EC2280E8-F28C-8D5C-DB70-EAAA473B4FD6

Actual results:
In the node YAML, each entry under status.images lists only tag-form names (one entry even has names: null); no digest-form (repo@sha256:...) names are reported, even though the digests are known on the node, as the kpod output above shows.

Expected results:
The node status should also report digest-form image names (repo@sha256:...) for each image, alongside the tags.

Additional info:
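For illustration only (this is not output captured from a fixed system): with digests reported, an entry under status.images would be expected to look roughly like the following, where the digest is the one kpod prints above for the same image:

[root@ip-172-18-14-130 ~]# oc get no ip-172-18-14-130.ec2.internal -o yaml
...
  images:
  - names:
    - registry.ops.openshift.com/openshift3/ose@sha256:c637f2a9ca534b88fac6770e29a43248d4cbd56382ba056516590f3d98206cde
    - registry.ops.openshift.com/openshift3/ose:v3.7
    sizeBytes: 1058793303
...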

Comment 1 Antonio Murdaca 2017-11-02 16:32:57 UTC
I'm sorry, can you actually provide the "Actual" and "Expected" results? It's really hard to even understand where the bug is right now. You're also using kpod, and I'm not sure how that is relevant here either.


(In reply to DeShuai Ma from comment #0)
> [comment #0 quoted in full; see the description above]

Comment 2 Antonio Murdaca 2017-11-02 16:35:08 UTC
What you posted isn't really a reproducer; could you follow the bug-reporting guidelines so it's easier for me to reproduce the issue and understand what the expected behavior is?

Comment 3 Antonio Murdaca 2017-11-14 09:21:55 UTC
Alright, I've reproduced this; the upstream issue is https://github.com/kubernetes-incubator/cri-o/issues/531

Comment 5 Antonio Murdaca 2018-01-18 15:47:16 UTC
This is going to work as expected in 3.10 (CRI-O 1.10). A backport is unlikely, as the patch is quite large (it requires a full re-vendor of c/image and c/storage).

Comment 8 weiwei jiang 2018-01-25 05:45:18 UTC
Moving back to MODIFIED, since this patch will land in 3.10 (CRI-O 1.10) and the issue can still be reproduced with openshift v3.9.0-0.23.0 (CRI-O 1.9.0).
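
For reference, a minimal way to check whether a given build reports the digests is to look for digest-form names in the node status (illustrative; the node name is taken from the original report):

[root@ip-172-18-14-130 ~]# oc get no ip-172-18-14-130.ec2.internal -o yaml | grep '@sha256:'

On CRI-O 1.9.0 this returns nothing; with the fix in CRI-O 1.10, repo@sha256:... entries are expected to show up under status.images.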

Comment 13 DeShuai Ma 2018-04-20 00:58:11 UTC
We verified the bug in comment 11.

Comment 16 errata-xmlrpc 2018-05-17 06:42:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1566