Bug 1546324
| Summary: | Manifest does not match provided manifest digest | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Seth Jennings <sjenning> |
| Component: | Containers | Assignee: | Lokesh Mandvekar <lsm5> |
| Status: | CLOSED ERRATA | QA Contact: | DeShuai Ma <dma> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.9.0 | CC: | aos-bugs, jhonce, jokerman, lsm5, mmccomas, wjiang |
| Target Milestone: | --- | | |
| Target Release: | 3.9.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | cri-o-1.9.7-2.gita98f9c9.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-06-27 18:01:32 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Seth Jennings
2018-02-16 20:19:08 UTC
Just to make sure, I tried to start the pod on a different node; same issue. Started a mongodb pod; same issue. However, if I build the sample php application that pushes and pulls from the internal registry, that works. Also, if I start a pod with an image from docker.io, it also works. It seems to be an issue with registry.access.redhat.com.

As near as I can figure (by setting --log-level=debug for cri-o and trying it myself), the registry is serving up that image's manifest with an indicated MIME type of "text/plain" instead of a more typical manifest type like "application/vnd.docker.distribution.manifest.v1+prettyjws". Because "text/plain" is not one of the formats that the containers-storage transport claims to support in its list of acceptable manifest MIME types, the image's manifest is converted on the fly to "application/vnd.docker.distribution.manifest.v2+json" in order to be acceptable. As a result, it no longer matches the digest when any subsequent attempt is made to access the image and the digest check is performed.

containers/image fix (or workaround?): https://github.com/containers/image/pull/417

New containers/image PR: https://github.com/containers/image/pull/418, and the pick and version bump for cri-o: https://github.com/kubernetes-incubator/cri-o/pull/1351

Verified on openshift v3.9.0-0.51.0 with cri-o:

    # openshift version
    openshift v3.9.0-0.51.0
    kubernetes v1.9.1+a0ce1bc657
    etcd 3.2.8
    # pwd
    /var/lib/containers/atomic/cri-o.0/rootfs/bin
    # ./crio --version
    crio version 1.9.7

Steps to verify:
1. Trigger cakephp+mysql ephemeral in the console when the env is ready.
2. Check that the mysql app is running.
    # oc get po -n dma1
    NAME                            READY     STATUS    RESTARTS   AGE
    cakephp-mysql-example-1-build   1/1       Running   0          6m
    mysql-1-bs9bn                   1/1       Running   0          6m
    # oc describe pod mysql-1-bs9bn -n dma1
    Name:           mysql-1-bs9bn
    Namespace:      dma1
    Node:           ip-172-18-14-130.ec2.internal/172.18.14.130
    Start Time:     Mon, 26 Feb 2018 03:57:13 -0500
    Labels:         deployment=mysql-1
                    deploymentconfig=mysql
                    name=mysql
    Annotations:    openshift.io/deployment-config.latest-version=1
                    openshift.io/deployment-config.name=mysql
                    openshift.io/deployment.name=mysql-1
                    openshift.io/scc=restricted
    Status:         Running
    IP:             10.128.0.11
    Controlled By:  ReplicationController/mysql-1
    Containers:
      mysql:
        Container ID:   cri-o://c9842048503ee30b9eefd89512e6a9a697682c93271d57bca8954b977046a0de
        Image:          registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:7638d886370ca1eb3cefcfecf483f8dc4dc1ca05559dd521d1585ae7c4ed668e
        Image ID:       registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:7638d886370ca1eb3cefcfecf483f8dc4dc1ca05559dd521d1585ae7c4ed668e
        Port:           3306/TCP
        State:          Running
          Started:      Mon, 26 Feb 2018 03:58:20 -0500
        Ready:          True
        Restart Count:  0
        Limits:
          memory:  512Mi
        Requests:
          memory:  512Mi
        Liveness:   tcp-socket :3306 delay=30s timeout=1s period=10s #success=1 #failure=3
        Readiness:  exec [/bin/sh -i -c MYSQL_PWD='YlOSByUhBxOYYK70' mysql -h 127.0.0.1 -u cakephp -D default -e 'SELECT 1'] delay=5s timeout=1s period=10s #success=1 #failure=3
        Environment:
          MYSQL_USER:      <set to the key 'database-user' in secret 'cakephp-mysql-example'>      Optional: false
          MYSQL_PASSWORD:  <set to the key 'database-password' in secret 'cakephp-mysql-example'>  Optional: false
          MYSQL_DATABASE:  default
        Mounts:
          /var/lib/mysql/data from data (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-kr7l4 (ro)
    Conditions:
      Type           Status
      Initialized    True
      Ready          True
      PodScheduled   True
    Volumes:
      data:
        Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
      default-token-kr7l4:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-kr7l4
        Optional:    false
    QoS Class:       Burstable
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
    Events:
      Type     Reason                 Age  From                                    Message
      ----     ------                 ---  ----                                    -------
      Normal   Scheduled              7m   default-scheduler                       Successfully assigned mysql-1-bs9bn to ip-172-18-14-130.ec2.internal
      Normal   SuccessfulMountVolume  7m   kubelet, ip-172-18-14-130.ec2.internal  MountVolume.SetUp succeeded for volume "data"
      Normal   SuccessfulMountVolume  7m   kubelet, ip-172-18-14-130.ec2.internal  MountVolume.SetUp succeeded for volume "default-token-kr7l4"
      Normal   Pulling                7m   kubelet, ip-172-18-14-130.ec2.internal  pulling image "registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:7638d886370ca1eb3cefcfecf483f8dc4dc1ca05559dd521d1585ae7c4ed668e"
      Normal   Pulled                 6m   kubelet, ip-172-18-14-130.ec2.internal  Successfully pulled image "registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:7638d886370ca1eb3cefcfecf483f8dc4dc1ca05559dd521d1585ae7c4ed668e"
      Normal   Created                6m   kubelet, ip-172-18-14-130.ec2.internal  Created container
      Normal   Started                6m   kubelet, ip-172-18-14-130.ec2.internal  Started container
      Warning  Unhealthy              5m   kubelet, ip-172-18-14-130.ec2.internal  Readiness probe failed: sh: cannot set terminal process group (-1): Inappropriate ioctl for device
    sh: no job control in this shell
    ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)

Though I have verified the bug with the latest cri-o, I still have one question: "./crio --version" only shows the version as "1.9.7". How can I make sure the cri-o on my system is cri-o-1.9.7-2.gita98f9c9.el7 and not cri-o-1.9.7-1.gita98f9c9.el7?

Currently starter-ca-central-1 is deployed with docker nodes, so we need to wait for a cri-o node to check https://trello.com/c/9DGSzqg8/164-starter-ca-central-1-cri-o-issue-pulling-images-from-red-hat-registry.
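The root cause noted in the description (an on-the-fly manifest format conversion changes the manifest's bytes, and therefore the sha256 digest that later pulls are checked against) can be sketched with a toy manifest. The file names and JSON content here are made up for illustration; the mechanism is the same:

```shell
# Toy stand-in for the manifest bytes the registry actually served.
printf '{"schemaVersion":2}' > manifest.json
orig=$(sha256sum manifest.json | awk '{print $1}')

# An on-the-fly conversion re-serializes the manifest; the JSON is
# semantically identical, but the bytes differ.
printf '{\n  "schemaVersion": 2\n}\n' > manifest-converted.json
conv=$(sha256sum manifest-converted.json | awk '{print $1}')

# Any later check of the stored manifest against the original digest fails.
if [ "$orig" != "$conv" ]; then
  echo "Manifest does not match provided manifest digest"
fi
```

Because images pulled by digest are verified byte-for-byte, even a lossless reformatting like the one above is enough to trigger the error in this bug's summary.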
(In reply to DeShuai Ma from comment #9)
> Though I have verified the bug with the latest cri-o, I still have one question:
> "./crio --version" only shows the version as "1.9.7". How can I make sure the
> cri-o on my system is cri-o-1.9.7-2.gita98f9c9.el7 and not
> cri-o-1.9.7-1.gita98f9c9.el7?

Can you try rpm -q cri0?

rpm -q cri-o, sorry.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2013
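As the reply suggests, `rpm -q cri-o` answers the question because it prints the package's full name-version-release (NVR), and the release field is what distinguishes the -1 and -2 builds. A small shell sketch of how that string decomposes, with the NVR hard-coded from this bug's "Fixed In Version" field rather than queried from a live node:

```shell
# Example NVR taken from "Fixed In Version"; on a real node this would
# come from `rpm -q cri-o`.
nvr="cri-o-1.9.7-2.gita98f9c9.el7"

release="${nvr##*-}"   # everything after the last dash: 2.gita98f9c9.el7
rest="${nvr%-*}"       # NVR with the release stripped: cri-o-1.9.7
version="${rest##*-}"  # 1.9.7 -- all that `./crio --version` reports

echo "version=$version release=$release"
```

The leading "2" in the release field is exactly the piece of information that `./crio --version` does not show.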