Bug 1810768
| Summary: | Podman pushes a v1 manifest when pushing to the same repo but a different tag | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | Matt Prahl <mprahl> |
| Component: | podman | Assignee: | Lokesh Mandvekar <lsm5> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 32 | CC: | bbaude, debarshir, dwalsh, jnovy, lsm5, lucarval, mheon, mhofmann, mitr, rh.container.bot, santiago, tsweeney, vrothber |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | podman 3.1.2 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-05-25 15:45:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Matt Prahl
2020-03-05 20:58:16 UTC
Tom, could you PTAL?

Matt, I'm not seeing this problem with upstream, and you're a few versions behind the latest. Any chance you could upgrade Podman and try again? Also, is there a difference in behavior if you first tag the image (`podman tag iib-build:69-amd64 quay.io/temp-iib/iib:test3`) and then do `podman push quay.io/temp-iib/iib:test3`? Matt Heon, does this sound at all familiar?

Hi Tom,

The same issue is present on the latest in Fedora 30 (podman-2:1.8.0-4.fc30.x86_64). Could you provide a scratch build for Fedora 30 or RHEL 8 of the latest podman and crun? The VM I'm seeing this on runs on RHEL 8, and I can't install F31+ RPMs on it due to the switch to zstd compression for RPMs in F31.

Hi Matt,

The smoothest way would be to either git clone the Libpod repo and build Podman yourself (https://podman.io/getting-started/installation), or alternatively pull the container image from quay.io at quay.io/podman/upstream:latest, and see if either of those works. Can you try one of those routes?

Hi Tom,

I tried podman 1.8.2 from the following repository, but I get the same issue: https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_8/devel:kubic:libcontainers:stable.repo

Here is the output:

Build the Image
---------------

$ podman -v
podman version 1.8.2
$ cat Dockerfile
FROM centos:8
RUN echo hello
$ podman build -t reproducer:1 -f Dockerfile .
STEP 1: FROM centos:8
STEP 2: RUN echo hello
hello
STEP 3: COMMIT reproducer:1
--> 8dde1584cff
8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2

First Push
----------

$ podman push --authfile=./auth.json --log-level=debug reproducer:1 quay.io/temp/reproducer:1
DEBU[0000] Reading configuration file "/usr/share/containers/libpod.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /var/run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /var/run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/bin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
WARN[0000] Error initializing configured OCI runtime kata-runtime: no valid executable found for OCI runtime kata-runtime: invalid argument
WARN[0000] Error initializing configured OCI runtime kata-qemu: no valid executable found for OCI runtime kata-qemu: invalid argument
WARN[0000] Error initializing configured OCI runtime kata-fc: no valid executable found for OCI runtime kata-fc: invalid argument
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
WARN[0000] Default CNI network name podman is unchangeable
DEBU[0000] parsed reference into
"[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/reproducer:1"
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/reproducer:1" does not resolve to an image ID
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]localhost/reproducer:1"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2"
DEBU[0000] Returning credentials from ./auth.json
DEBU[0000] Using registries.d directory /etc/containers/registries.d for sigstore configuration
DEBU[0000] Using "default-docker" configuration
DEBU[0000] Using file:///var/lib/atomic/sigstore
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/quay.io
DEBU[0000] Using blob info cache at /var/lib/containers/cache/blob-info-cache-v1.boltdb
DEBU[0000] IsRunningImageAllowed for image containers-storage:[overlay@/var/lib/containers/storage]@8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2
DEBU[0000] Using default policy section
DEBU[0000] Requirement 0: allowed
DEBU[0000] Overall: allowed
Getting image source signatures
DEBU[0000] Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.docker.distribution.manifest.v1+json]
DEBU[0000] ... will first try using the original manifest unmodified
DEBU[0000] Checking /v2/temp/reproducer/blobs/sha256:43c079536031ea0173ca8f3076aebb101685ac8da3fbd6eba7cf42a558dce87c
DEBU[0000] GET https://quay.io/v2/
DEBU[0000] Checking /v2/temp/reproducer/blobs/sha256:aff9833d464191900b4a3b6e68aad912f930de639b411124e433aeb3757001cd
DEBU[0000] Ping https://quay.io/v2/ status 401
DEBU[0000] GET https://quay.io/v2/auth?account=mprahl&scope=repository%3Atemp%2Freproducer%3Apull%2Cpush&service=quay.io
DEBU[0000] GET https://quay.io/v2/auth?account=mprahl&scope=repository%3Atemp%2Freproducer%3Apull%2Cpush&service=quay.io
DEBU[0000] Increasing token expiration to: 60 seconds
DEBU[0000] HEAD https://quay.io/v2/temp/reproducer/blobs/sha256:43c079536031ea0173ca8f3076aebb101685ac8da3fbd6eba7cf42a558dce87c
DEBU[0000] Increasing token expiration to: 60 seconds
DEBU[0000] HEAD https://quay.io/v2/temp/reproducer/blobs/sha256:aff9833d464191900b4a3b6e68aad912f930de639b411124e433aeb3757001cd
DEBU[0000] ... not present
DEBU[0000] Trying to reuse cached location sha256:d71c5eab96b2ab4ce9461534cf5e4beda459d7dd3049e9ee2a4c73a13c0629b4 in quay.io/temp/reproducer
DEBU[0000] Checking /v2/temp/reproducer/blobs/sha256:d71c5eab96b2ab4ce9461534cf5e4beda459d7dd3049e9ee2a4c73a13c0629b4
DEBU[0000] GET https://quay.io/v2/auth?account=mprahl&scope=repository%3Atemp%2Freproducer%3Apull%2Cpush&scope=repository%3Atemp%2Freproducer%3Apull&service=quay.io
DEBU[0000] ... not present
DEBU[0000] exporting filesystem layer "26aae65626fe7464fbd25ceeb715dcde030c8276076f0e5db1385e9ce8cb0e6f" without compression for blob "sha256:43c079536031ea0173ca8f3076aebb101685ac8da3fbd6eba7cf42a558dce87c"
DEBU[0000] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/2EO4HLZ63OSTE6RCVMYZXSKLAC,upperdir=/var/lib/containers/storage/overlay/26aae65626fe7464fbd25ceeb715dcde030c8276076f0e5db1385e9ce8cb0e6f/diff,workdir=/var/lib/containers/storage/overlay/26aae65626fe7464fbd25ceeb715dcde030c8276076f0e5db1385e9ce8cb0e6f/work
DEBU[0000] No compression detected
DEBU[0000] Compressing blob on the fly
DEBU[0000] Uploading /v2/temp/reproducer/blobs/uploads/
DEBU[0000] POST https://quay.io/v2/temp/reproducer/blobs/uploads/
Copying blob 43c079536031 [--------------------------------------] 8.0b / 2.5KiB
Copying blob 43c079536031 done
DEBU[0000] Increasing token expiration to: 60 seconds
Copying blob 43c079536031 done
DEBU[0001] ... not present
DEBU[0001] Trying to reuse cached location sha256:d71c5eab96b2ab4ce9461534cf5e4beda459d7dd3049e9ee2a4c73a13c0629b4 in quay.io/temp/centos
DEBU[0001] Checking /v2/temp/centos/blobs/sha256:d71c5eab96b2ab4ce9461534cf5e4beda459d7dd3049e9ee2a4c73a13c0629b4
DEBU[0001] GET https://quay.io/v2/auth?account=mprahl&scope=repository%3Atemp%2Freproducer%3Apull%2Cpush&scope=repository%3Atemp%2Fcentos%3ApuCopying blob 43c079536031 done
DEBU[0001] PUT https://quay.io/v2/temp/reproducer/blobs/uploads/c8048da1-a8fd-4dab-99bb-4f8d4c2b1c76?digest=sha256%3A2d84362881e26b2da037e1a5eCopying blob 43c079536031 done
DEBU[0001] Increasing token expiration to: 60 seconds
Copying blob 43c079536031 done
DEBU[0001] ... not present
DEBU[0001] exporting filesystem layer "aff9833d464191900b4a3b6e68aad912f930de639b411124e433aeb3757001cd" without compression for blob "sha256:aff9833d464191900b4a3b6e68aad912f930de639b411124e433aeb3757001cd"
DEBU[0001] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/aff9833d464191900b4a3b6e68aad912f930de639b411124e433aeb3757001cd/empty,upperdir=/var/lib/containers/storage/overlay/aff9833d464191900b4a3b6e68aad912f930de639b411124e433aeb3757001cd/diff,workdir=/var/lib/containers/storage/overlay/aff9833d464191900b4a3b6e68aad912f930de639b411124e433aeb3757001cd/work
DEBU[0001] No compression detected
DEBU[0001] Compressing blob on the fly
DEBU[0001] Uploading /v2/temp/reproducer/blobs/uploads/
Copying blob 43c079536031 done
Copying blob 43c079536031 done
Copying blob 43c079536031 done
Copying blob aff9833d4641 done
Copying blob 43c079536031 done
Copying blob aff9833d4641 done
DEBU[0008] Upload of layer sha256:d71c5eab96b2ab4ce9461534cf5e4beda459d7dd3049e9ee2a4c73a13c0629b4 complete
DEBU[0009] exporting opaque data as blob "sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2"
DEBU[0009] No compression detected
DEBU[0009] Using original blob without modification
DEBU[0009] Checking /v2/temp/reproducer/blobs/sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2
DEBU[0009] HEAD https://quay.io/v2/temp/reproducer/blobs/sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2
DEBU[0009] ... not present
DEBU[0009] Uploading /v2/temp/reproducer/blobs/uploads/
DEBU[0009] POST https://quay.io/v2/temp/reproducer/blobs/uploads/
Copying config 8dde1584cf [--------------------------------------] 0.0b / 1.2KiB
Copying config 8dde1584cf done
DEBU[0009] PUT https://quay.io/v2/temp/reproducer/blobs/uploads/db07c76f-70de-4fd1-b9d1-3f5b1b477b90?digest=sha256%3A8dde1584cff50d2bbc2d7337eCopying config 8dde1584cf done
DEBU[0010] Upload of layer sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2 complete
Writing manifest to image destination
DEBU[0010] PUT https://quay.io/v2/temp/reproducer/manifests/1
DEBU[0010] Writing manifest using preferred type application/vnd.oci.image.manifest.v1+json failed: Error writing manifest: Error uploading manifest 1 to quay.io/temp/reproducer: manifest invalid: manifest invalid
DEBU[0010] Trying to use manifest type application/vnd.docker.distribution.manifest.v2+json…
DEBU[0010] exporting opaque data as blob "sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2"
DEBU[0010] No compression detected
DEBU[0010] Using original blob without modification
DEBU[0010] Checking /v2/temp/reproducer/blobs/sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2
DEBU[0010] HEAD https://quay.io/v2/temp/reproducer/blobs/sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2
DEBU[0010] ...
already exists
Writing manifest to image destination
DEBU[0010] PUT https://quay.io/v2/temp/reproducer/manifests/1
Storing signatures
DEBU[0010] Successfully pushed docker://quay.io/temp/reproducer:1 with digest sha256:39c44cb791119b1e405467c1323e1611513c5a07ecb7d01cd680f81c409eea8a

Second Push
-----------

$ podman push --authfile=./auth.json --log-level=debug reproducer:1 quay.io/temp/reproducer:2
DEBU[0000] Reading configuration file "/usr/share/containers/libpod.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /var/run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /var/run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend journald
WARN[0000] Error initializing configured OCI runtime kata-fc: no valid executable found for OCI runtime kata-fc: invalid argument
DEBU[0000] using runtime "/usr/bin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
WARN[0000] Error initializing configured OCI runtime kata-runtime: no valid executable found for OCI runtime kata-runtime: invalid argument
WARN[0000] Error initializing configured OCI runtime kata-qemu: no valid executable found for OCI runtime kata-qemu: invalid argument
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
WARN[0000] Default CNI network name podman is unchangeable
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/reproducer:1"
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/reproducer:1" does not resolve to an image ID
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]localhost/reproducer:1"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2"
DEBU[0000] Returning credentials from ./auth.json
DEBU[0000] Using registries.d directory /etc/containers/registries.d for sigstore configuration
DEBU[0000] Using "default-docker" configuration
DEBU[0000] Using file:///var/lib/atomic/sigstore
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/quay.io
DEBU[0000] Using blob info cache at /var/lib/containers/cache/blob-info-cache-v1.boltdb
DEBU[0000] IsRunningImageAllowed for image containers-storage:[overlay@/var/lib/containers/storage]@8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2
DEBU[0000] Using default policy section
DEBU[0000] Requirement 0: allowed
DEBU[0000] Overall: allowed
Getting image source signatures
DEBU[0000] Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.docker.distribution.manifest.v1+json]
DEBU[0000] ... will first try using the original manifest unmodified
DEBU[0000] Checking /v2/temp/reproducer/blobs/sha256:43c079536031ea0173ca8f3076aebb101685ac8da3fbd6eba7cf42a558dce87c
DEBU[0000] GET https://quay.io/v2/
DEBU[0000] Checking /v2/temp/reproducer/blobs/sha256:aff9833d464191900b4a3b6e68aad912f930de639b411124e433aeb3757001cd
DEBU[0000] Ping https://quay.io/v2/ status 401
DEBU[0000] GET https://quay.io/v2/auth?account=mprahl&scope=repository%3Atemp%2Freproducer%3Apull%2Cpush&service=quay.io
DEBU[0000] GET https://quay.io/v2/auth?account=mprahl&scope=repository%3Atemp%2Freproducer%3Apull%2Cpush&service=quay.io
DEBU[0000] Increasing token expiration to: 60 seconds
DEBU[0000] HEAD https://quay.io/v2/temp/reproducer/blobs/sha256:aff9833d464191900b4a3b6e68aad912f930de639b411124e433aeb3757001cd
DEBU[0000] Increasing token expiration to: 60 seconds
DEBU[0000] HEAD https://quay.io/v2/temp/reproducer/blobs/sha256:43c079536031ea0173ca8f3076aebb101685ac8da3fbd6eba7cf42a558dce87c
DEBU[0000] ... not present
DEBU[0000] Trying to reuse cached location sha256:2d84362881e26b2da037e1a5edc7b89003b549ac015265f12b75390eccffb982 in quay.io/temp/reproducer
DEBU[0000] Checking /v2/temp/reproducer/blobs/sha256:2d84362881e26b2da037e1a5edc7b89003b549ac015265f12b75390eccffb982
DEBU[0000] GET https://quay.io/v2/auth?account=mprahl&scope=repository%3Atemp%2Freproducer%3Apull%2Cpush&scope=repository%3Atemp%2Freproducer%3Apull&service=quay.io
DEBU[0000] ... not present
DEBU[0000] Trying to reuse cached location sha256:d71c5eab96b2ab4ce9461534cf5e4beda459d7dd3049e9ee2a4c73a13c0629b4 in quay.io/temp/reproducer
DEBU[0000] Checking /v2/temp/reproducer/blobs/sha256:d71c5eab96b2ab4ce9461534cf5e4beda459d7dd3049e9ee2a4c73a13c0629b4
DEBU[0000] GET https://quay.io/v2/auth?account=mprahl&scope=repository%3Atemp%2Freproducer%3Apull%2Cpush&scope=repository%3Atemp%2Freproducer%3Apull&service=quay.io
DEBU[0000] Increasing token expiration to: 60 seconds
DEBU[0000] HEAD https://quay.io/v2/temp/reproducer/blobs/sha256:2d84362881e26b2da037e1a5edc7b89003b549ac015265f12b75390eccffb982
DEBU[0000] Increasing token expiration to: 60 seconds
DEBU[0000] HEAD https://quay.io/v2/temp/reproducer/blobs/sha256:d71c5eab96b2ab4ce9461534cf5e4beda459d7dd3049e9ee2a4c73a13c0629b4
DEBU[0001] ... already exists
DEBU[0001] Skipping blob sha256:43c079536031ea0173ca8f3076aebb101685ac8da3fbd6eba7cf42a558dce87c (already present):
Copying blob 43c079536031 skipped: already exists
DEBU[0001] ... already exists
Copying blob 43c079536031 skipped: already exists
Copying blob aff9833d4641 [--------------------------------------] 0.0b / 0.0b
DEBU[0001] exporting opaque data as blob "sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2"
DEBU[0001] No compression detected
DEBU[0001] Using original blob without modification
DEBU[0001] Checking /v2/temp/reproducer/blobs/sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2
DEBU[0001] HEAD https://quay.io/v2/temp/reproducer/blobs/sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2
DEBU[0001] ... already exists
Writing manifest to image destination
DEBU[0001] PUT https://quay.io/v2/temp/reproducer/manifests/2
DEBU[0001] Writing manifest using preferred type application/vnd.oci.image.manifest.v1+json failed: Error writing manifest: Error uploading manifest 2 to quay.io/temp/reproducer: manifest invalid: manifest invalid
DEBU[0001] Trying to use manifest type application/vnd.docker.distribution.manifest.v2+json…
DEBU[0001] exporting opaque data as blob "sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2"
DEBU[0001] No compression detected
DEBU[0001] Using original blob without modification
DEBU[0001] Checking /v2/temp/reproducer/blobs/sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2
DEBU[0001] HEAD https://quay.io/v2/temp/reproducer/blobs/sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2
DEBU[0001] ... already exists
Writing manifest to image destination
DEBU[0001] PUT https://quay.io/v2/temp/reproducer/manifests/2
DEBU[0001] Upload of manifest type application/vnd.docker.distribution.manifest.v2+json failed: Error writing manifest: Error uploading manifest 2 to quay.io/temp/reproducer: manifest invalid: manifest invalid
DEBU[0001] Trying to use manifest type application/vnd.docker.distribution.manifest.v1+prettyjws…
DEBU[0001] exporting opaque data as blob "sha256:8dde1584cff50d2bbc2d7337e42e0e4eb86113d3dedf6ec2ea7687ad174be2f2"
DEBU[0001] Uploading empty layer during conversion to schema 1
DEBU[0001] Checking /v2/temp/reproducer/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
DEBU[0001] HEAD https://quay.io/v2/temp/reproducer/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
DEBU[0001] ...
already exists
Writing manifest to image destination
DEBU[0001] PUT https://quay.io/v2/temp/reproducer/manifests/2
Storing signatures
DEBU[0001] Successfully pushed docker://quay.io/temp/reproducer:2 with digest sha256:7e46b69403820d80b9cd0e26df4fad60992f389ee0cc53fb1e9f754ee824cd27

Miloslav, any thoughts on why a second push would change the manifest from v2 to v1?

I'm fairly sure this is https://github.com/containers/image/issues/733. Quay rejects the schema2 manifest because of non-matching MIME types, causing the push to fall back to schema1, and that succeeds because schema1 does not contain MIME types.

This message is a reminder that Fedora 30 is nearing its end of life. Fedora will stop maintaining and issuing updates for Fedora 30 on 2020-05-26. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '30'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 30 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above. Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.

Hello, is there any update on this? It'd be nice to get this fixed.

Miloslav (or Valentin), were you able to confirm that the PRs mentioned in https://github.com/containers/image/issues/733 (https://github.com/containers/image/pull/904 and https://github.com/containers/image/pull/909) will fix this? If that's the case, the fix should be in c/image 5.4.4, which should land in Podman v2.0. It doesn't look to have been included in the Podman v1.9 release branch at this time, though.

There is no PR for https://github.com/containers/image/issues/733 yet.

This message is a reminder that Fedora 31 is nearing its end of life. Fedora will stop maintaining and issuing updates for Fedora 31 on 2020-11-24, at which time this bug will be closed as EOL if it remains open with a Fedora 'version' of '31'. (The remainder of this reminder is identical to the Fedora 30 end-of-life notice above.)

I believe this is fixed in podman 3.0, at least judging by the linked issues above. I will mark this bug as modified and fixed in podman 3.0. Reopen if I am mistaken. Assigning to Lokesh as he handles Fedora.

This message is a reminder that Fedora 32 is nearing its end of life. Fedora will stop maintaining and issuing updates for Fedora 32 on 2021-05-25, at which time this bug will be closed as EOL if it remains open with a Fedora 'version' of '32'. (The remainder of this reminder is identical to the Fedora 30 end-of-life notice above.)

Fedora 32 changed to end-of-life (EOL) status on 2021-05-25. Fedora 32 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen it against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug. Thank you for reporting this bug, and we are sorry it could not be fixed.

I can no longer reproduce the issue on Fedora 33 with podman 3.1.2. Thank you!

The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 500 days.
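Editor's note: the fallback behavior diagnosed in this thread can be sketched as a toy model. This is not c/image's actual code; the candidate list is copied from the debug log, while `RECORDED_LAYER_TYPE`, `make_manifest`, and `registry_accepts` are hypothetical stand-ins for the mismatch described in containers/image#733. The point it illustrates: OCI and Docker schema2 manifests name a `mediaType` on every layer descriptor, so a registry can reject them when those types disagree with what it recorded at blob-upload time, whereas schema1's `fsLayers` carry only bare digests, so the schema1 fallback always "succeeds".

```python
# Toy model of podman's manifest-type fallback during push (illustrative
# only; names below are assumptions, not c/image internals).

# Candidate order, taken from the debug log above (truncated to the
# three types the log actually tried).
CANDIDATES = [
    "application/vnd.oci.image.manifest.v1+json",
    "application/vnd.docker.distribution.manifest.v2+json",
    "application/vnd.docker.distribution.manifest.v1+prettyjws",
]

# Hypothetical: the media type the registry recorded for the layer blob
# when it was uploaded.  In this sketch it never matches the manifest.
RECORDED_LAYER_TYPE = "application/vnd.docker.image.rootfs.diff.tar.gzip"

def make_manifest(media_type):
    """Build a minimal illustrative manifest of the given type."""
    if media_type == "application/vnd.docker.distribution.manifest.v1+prettyjws":
        # Docker schema 1: layers are bare digests in "fsLayers"; there
        # are no mediaType fields for a registry to validate at all.
        return {"schemaVersion": 1,
                "fsLayers": [{"blobSum": "sha256:43c0..."}]}
    # OCI / Docker schema 2: every layer descriptor names a mediaType.
    return {"schemaVersion": 2,
            "mediaType": media_type,
            "layers": [{"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
                        "digest": "sha256:43c0..."}]}

def registry_accepts(manifest):
    """Stand-in for Quay: reject a manifest whose layer mediaTypes
    disagree with what was recorded for the blobs ("manifest invalid").
    A schema1 manifest has no "layers" key, so the check is vacuous."""
    return all(layer["mediaType"] == RECORDED_LAYER_TYPE
               for layer in manifest.get("layers", []))

def push(candidates):
    """Try each candidate manifest type in order, as the log shows,
    and return the first one the registry accepts."""
    for media_type in candidates:
        if registry_accepts(make_manifest(media_type)):
            return media_type
    raise RuntimeError("no candidate manifest type accepted")
```

Here `push(CANDIDATES)` falls through the OCI and schema2 attempts (both rejected, matching the two "manifest invalid" PUTs in the log) and returns the schema1 type, which is why the repository ends up holding a v1 manifest. To see which schema a pushed tag actually ended up with, `skopeo inspect --raw docker://quay.io/temp/reproducer:2` prints the stored manifest; a `"schemaVersion": 1` there indicates the fallback fired.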