Bug 1794167 - buildah builds images that podman can't pull (unsupported docker v2s2 media type: "")
Summary: buildah builds images that podman can't pull (unsupported docker v2s2 media type: "")
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: ansible-role-tripleo-modify-image
Version: 15.0 (Stein)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Alex Schultz
QA Contact:
URL:
Whiteboard:
Depends On: 1793960
Blocks:
 
Reported: 2020-01-22 19:56 UTC by Alex Schultz
Modified: 2020-03-05 12:02 UTC
CC List: 9 users

Fixed In Version: ansible-role-tripleo-modify-image-1.1.1-0.20200122212319.58d7a5b.el8ost
Doc Type: Known Issue
Doc Text:
There is a known issue when building container images with buildah. The default format for the images is OCI, but podman 1.6.x enforces stricter restrictions on container format metadata. As a result, containers pushed to the undercloud registry can fail to pull if they were originally built in OCI format. The workaround is to use the `--format docker` option to build images in docker format instead of OCI format, so that the containers can be pushed to the undercloud registry and pulled successfully.
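A minimal sketch of the workaround (illustrative only; the image name and build context are placeholders, not taken from this report):
  sudo buildah bud --format docker -t <image>:<tag> <context_dir>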
Clone Of: 1793960
Environment:
Last Closed: 2020-03-05 12:01:54 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:0643 0 None None None 2020-03-05 12:02:14 UTC

Description Alex Schultz 2020-01-22 19:56:40 UTC
+++ This bug was initially created as a clone of Bug #1793960 +++

Description of problem:

running command:
tripleo container image prepare -e /home/stack/containers-prepare-parameter-copy.yaml --output-env-file /home/stack/prepare_output.yaml --log-file /var/log/tripleo-container-image-prepare.log --debug


which runs the tripleo-modify-image ansible role underneath to patch the image (using buildah bud and the attached Dockerfile) and then uses tripleo-common (https://opendev.org/openstack/tripleo-common/src/branch/master/tripleo_common/image/image_uploader.py#L1703) to upload it to the registry (and delete it from the local host)

(see attachment yamls and logs for details)

After that, our CI job tests whether the patched and pushed image is pullable from that registry, with:
podman pull undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-compute:20200110.1-hotfix

which fails with:
STDERR:

Trying to pull undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-compute:20200110.1-hotfix...
  unsupported docker v2s2 media type: ""
Error: error pulling image "undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-compute:20200110.1-hotfix": unable to pull undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-compute:20200110.1-hotfix: unable to pull image: Error initializing image from source docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-compute:20200110.1-hotfix: unsupported docker v2s2 media type: ""


Also, skopeo shows the same empty media type issue:
[stack@undercloud-0 ~]$ skopeo inspect docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-compute:20200110.1-hotfix
FATA[0000] Error parsing manifest for image: unsupported docker v2s2 media type: ""
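For reference, skopeo's --raw option dumps the manifest without parsing it, which makes the missing/empty top-level mediaType visible directly (a diagnostic sketch using the same image reference as above):
  skopeo inspect --raw docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-compute:20200110.1-hotfix | python3 -m json.tool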


Version-Release number of selected component (if applicable):
OSP16, compose: RHOS_TRUNK-16.0-RHEL-8-20200113.n.0

[stack@undercloud-0 ~]$ yum list installed | grep -Ei "tripleo|buildah|podman"
ansible-role-tripleo-modify-image.noarch          1.1.1-0.20191030113421.66a92a4.el8ost                     @rhelosp-16.0                
ansible-tripleo-ipsec.noarch                      9.2.0-0.20191022054642.ffe104c.el8ost                     @rhelosp-16.0                
buildah.x86_64                                    1.11.6-4.module+el8.1.1+5259+bcdd613a                     @rhosp-rhel-8.1-appstream    
openstack-tripleo-common.noarch                   11.3.3-0.20200107225621.47626e1.el8ost                    @rhelosp-16.0                
openstack-tripleo-common-containers.noarch        11.3.3-0.20200107225621.47626e1.el8ost                    @rhelosp-16.0                
openstack-tripleo-heat-templates.noarch           11.3.2-0.20200109050651.8f93d27.el8ost                    @rhelosp-16.0                
openstack-tripleo-image-elements.noarch           10.6.1-0.20191022065313.7338463.el8ost                    @rhelosp-16.0                
openstack-tripleo-puppet-elements.noarch          11.2.1-0.20191108131052.2ad3189.el8ost                    @rhelosp-16.0                
openstack-tripleo-validations.noarch              11.3.1-0.20191126041901.2bba53a.el8ost                    @rhelosp-16.0                
podman.x86_64                                     1.6.4-2.module+el8.1.1+5363+bf8ff1af                      @rhosp-rhel-8.1-appstream    
podman-manpages.noarch                            1.6.4-2.module+el8.1.1+5363+bf8ff1af                      @rhosp-rhel-8.1-appstream    
puppet-tripleo.noarch                             11.4.1-0.20200106153547.5946c6f.el8ost                    @rhelosp-16.0                
python3-tripleo-common.noarch                     11.3.3-0.20200107225621.47626e1.el8ost                    @rhelosp-16.0                
python3-tripleoclient.noarch                      12.3.1-0.20191230195937.585fb28.el8ost                    @rhelosp-16.0                
python3-tripleoclient-heat-installer.noarch       12.3.1-0.20191230195937.585fb28.el8ost                    @rhelosp-16.0                
tripleo-ansible.noarch                            0.4.2-0.20200110023759.ee731ba.el8ost                     @rhelosp-16.0         

How reproducible:
100%


Steps to Reproduce:
1. run the below command
tripleo container image prepare -e /home/stack/containers-prepare-parameter-copy.yaml --output-env-file /home/stack/prepare_output.yaml --log-file /var/log/tripleo-container-image-prepare.log --debug

(the yaml file attached)

2. observe the failure


Actual results:
Pulling the container image fails.


Expected results:
Pulling the container image succeeds.


Additional info:
Ping me on IRC (wznoinsk) for the fastest response, or if you need access to a machine that currently exhibits this problem.

--- Additional comment from Alex Schultz on 2020-01-22 17:19:11 UTC ---

Currently this can be reproduced by running the following on an undercloud:

Fails:

export CONT=$(sudo buildah from centos:7)
sudo buildah config --label maintainer="Alex Schultz <aschultz>" $CONT
sudo buildah run $CONT touch /foo
sudo buildah commit $CONT foo/foo:latest
sudo podman tag localhost/foo/foo undercloud.ctlplane.localdomain:8787/foo/foo:latest
sudo openstack tripleo container image push --local undercloud.ctlplane.localdomain:8787/foo/foo:latest
sudo podman rmi localhost/foo/foo
sudo podman rmi undercloud.ctlplane.localdomain:8787/foo/foo:latest
sudo podman pull undercloud.ctlplane.localdomain:8787/foo/foo:latest

Works:

export CONT=$(sudo buildah from centos:7)
sudo buildah config --label maintainer="Alex Schultz <aschultz>" $CONT
sudo buildah run $CONT touch /foo
sudo buildah commit -f docker $CONT foo/bar:latest
sudo podman tag localhost/foo/bar undercloud.ctlplane.localdomain:8787/foo/bar:latest
sudo openstack tripleo container image push --local undercloud.ctlplane.localdomain:8787/foo/bar:latest
sudo podman rmi localhost/foo/bar
sudo podman rmi undercloud.ctlplane.localdomain:8787/foo/bar:latest
sudo podman pull undercloud.ctlplane.localdomain:8787/foo/bar:latest
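Either way, the manifest format of the committed image can be checked locally before pushing. A quick sketch, assuming skopeo is available and can read the host's container storage (look at the top-level mediaType: docker-format images carry application/vnd.docker.distribution.manifest.v2+json, while OCI-format commits use the OCI media types and may omit the top-level field entirely):
  sudo skopeo inspect --raw containers-storage:localhost/foo/bar:latest | python3 -m json.tool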



The resulting metadata in the registry looks like:

Fails:

[root@undercloud foo]# cat foo/manifests/sha256\:238d95b4b77978d50ebd6abc924292a57b59b51b38934babd11da693f1120a43/index.json 
{
   "schemaVersion": 2,
   "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "digest": "sha256:e5b1caa22fefda782c14e076be8ebf406474898a692c025beaea06b139e73b2c",
      "size": 1306
   },
   "layers": [
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:e6a50b627bcb03d96996bb8e836ecb178eae7425636e3424d9e8d33a918768dd",
         "size": 75167623
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:23d0487be944221e1524d0fcf062e074632b93c341d0e21c3532d0bc1f098ee8",
         "size": 197
      }
   ]
}


Works:

[root@undercloud foo]# cat bar/manifests/sha256\:770cd2a5a5e6f38813b31da95745c1aecf11d139ae05b7bae0747aba2f122790/index.json 
{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
   "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "size": 2529,
      "digest": "sha256:2352c67ba58bc3958b747f3cf6b2101a8f72b598a2f6b5e5ee2a79656b09a4c6"
   },
   "layers": [
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 75167623,
         "digest": "sha256:e6a50b627bcb03d96996bb8e836ecb178eae7425636e3424d9e8d33a918768dd"
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 198,
         "digest": "sha256:b8c30d010e150b162e99da9818e6a1cc1024cfae3fd512193212909294475a4b"
      }
   ]
}


It's likely because we don't currently handle the OCI format correctly. Note that the failing manifest above is missing the top-level "mediaType" field ("application/vnd.docker.distribution.manifest.v2+json") that the working one carries, which is exactly what podman and skopeo reject as an unsupported empty media type.
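As a quick check on the registry host, a manifest missing that field can be flagged with jq (a sketch, assuming jq is installed; index.json is the stored manifest shown above):
  jq -r '.mediaType // "MISSING top-level mediaType"' index.json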

Additional details: 
https://github.com/cri-o/cri-o/issues/2905
https://issues.sonatype.org/browse/NEXUS-16947

Comment 3 Alex McLeod 2020-02-19 12:48:55 UTC
If this bug requires doc text for errata release, please set the 'Doc Type' and provide draft text according to the template in the 'Doc Text' field. The documentation team will review, edit, and approve the text.

If this bug does not require doc text, please set the 'requires_doc_text' flag to '-'.

Comment 5 errata-xmlrpc 2020-03-05 12:01:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0643

