Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1806707

Summary: podman pull doesn't maintain manifest information on disk correctly
Product: Red Hat Enterprise Linux 8
Component: podman
Version: 8.1
Status: CLOSED NOTABUG
Severity: medium
Priority: unspecified
Reporter: Alex Schultz <aschultz>
Assignee: Matthew Heon <mheon>
QA Contact: atomic-bugs <atomic-bugs>
CC: bbaude, bdobreli, dornelas, dwalsh, jligon, jnovy, lsm5, mheon, mitr, nalin, sbaker
Target Milestone: rc
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2020-06-03 19:51:00 UTC
Bug Blocks: 1186913, 1804045
Attachments: layers.json

Description Alex Schultz 2020-02-24 19:29:14 UTC
Created attachment 1665492 [details]
layers.json

Description of problem:

When a user runs 'podman pull', the layer digest values in the image's 'manifest' file, stored under /var/lib/containers/storage/overlay-images/<image>/, may not match the layers as stored/used by podman.

Example:

sudo podman pull docker.io/osp16/trilio-datamover:osp16


[root@undercloud d4bd1e54c0c51219ee6b2bc58246c90a545ea3970e1900c01454be7ca0f8b1c0]# ls -al
total 28
drwx------.  2 root root 4096 Feb 24 12:18  .
drwx------. 40 root root 4096 Feb 24 12:18  ..
-rw-------.  1 root root 1383 Feb 24 12:18 '=bWFuaWZlc3Qtc2hhMjU2OmZkZjU4M2Q3OGVhYmExYmNiZTRmNDg2ZDJlYjk0YThkYTE4YTlkNzRmYTRhNTMwMTFkZmUwZTk5MzU3MjA0NGQ='
-rw-------.  1 root root 9032 Feb 24 12:18 '=c2hhMjU2OmQ0YmQxZTU0YzBjNTEyMTllZTZiMmJjNTgyNDZjOTBhNTQ1ZWEzOTcwZTE5MDBjMDE0NTRiZTdjYTBmOGIxYzA='
-rw-------.  1 root root    0 Feb 24 12:18 '=c2lnbmF0dXJlLWZkZjU4M2Q3OGVhYmExYmNiZTRmNDg2ZDJlYjk0YThkYTE4YTlkNzRmYTRhNTMwMTFkZmUwZTk5MzU3MjA0NGQ='
-rw-------.  1 root root 1383 Feb 24 12:18  manifest
[root@undercloud d4bd1e54c0c51219ee6b2bc58246c90a545ea3970e1900c01454be7ca0f8b1c0]# jq . manifest 
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 9032,
    "digest": "sha256:d4bd1e54c0c51219ee6b2bc58246c90a545ea3970e1900c01454be7ca0f8b1c0"
  },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "size": 76907790,
      "digest": "sha256:933041e5135b6d2775e5def32a99ce221eefab64b05ce9b300f51612b434b4b8"
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "size": 1721,
      "digest": "sha256:9f00789c7305a52e96997cf1f41d9f79ea31af8d869decbe0da096da3158cd2c"
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "size": 46645015,
      "digest": "sha256:b4ee033867ed0126766693b0e82851184757c9d8ed8a88d57d99f01c56a255ba"
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "size": 84992493,
      "digest": "sha256:a5a5eee4dd56dab6dc53ed519157bacb6e83141b0e49fbef29d13c119ba76cb7"
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "size": 105314746,
      "digest": "sha256:0349dcc885033870e9655b372259becd6bda8b6dab443c7c67132d428a4b5199"
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "size": 362189011,
      "digest": "sha256:cc3ef2dbb28d10073c97beba7cdbb6dc253991d85cc93a8d76a5f99eda1768f9"
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "size": 193371740,
      "digest": "sha256:083a043eead9e470b4e105d10fef3c66ee8c067b2e8b47b52d853f1e6083e360"
    }
  ]
}
[root@undercloud d4bd1e54c0c51219ee6b2bc58246c90a545ea3970e1900c01454be7ca0f8b1c0]# jq . '=c2hhMjU2OmQ0YmQxZTU0YzBjNTEyMTllZTZiMmJjNTgyNDZjOTBhNTQ1ZWEzOTcwZTE5MDBjMDE0NTRiZTdjYTBmOGIxYzA=' 
{
  "created": "2020-02-24T06:10:41.136756854Z",
  "container_config": {
    "Hostname": "9ce2b0f43192",
    "Domainname": "",
    "User": "nova",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "container=oci",
      "LANG=en_US.UTF-8",
      "KOLLA_BASE_DISTRO=rhel",
      "KOLLA_INSTALL_TYPE=binary",
      "KOLLA_INSTALL_METATYPE=rhos",
      "KOLLA_DISTRO_PYTHON_VERSION=3.6",
      "KOLLA_BASE_ARCH=x86_64",
      "PS1=$(tput bold)($(printenv KOLLA_SERVICE_NAME))$(tput sgr0)[$(id -un)@$(hostname -s) $(pwd)]$ "
    ],
    "Cmd": [
      "kolla_start"
    ],
    "ArgsEscaped": true,
    "Image": "1ce77652aa2bdf70177b974f1d98d57c39f28bf2ad33685a7c17c65c31be43fe",
    "Volumes": {},
    "WorkingDir": "",
    "Entrypoint": [
      "dumb-init",
      "--single-child",
      "--"
    ],
    "OnBuild": [],
    "Labels": {
      "architecture": "x86_64",
      "authoritative-source-url": "registry.access.redhat.com",
      "batch": "20200130.1",
      "build-date": "2020-01-30T06:10:45.345348",
      "com.redhat.build-host": "cpt-1002.osbs.prod.upshift.rdu2.redhat.com",
      "com.redhat.component": "openstack-nova-compute-container",
      "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements",
      "description": "Red Hat OpenStack Platform 16.0 nova-compute TrilioData Datamover",
      "distribution-scope": "public",
      "io.k8s.description": "Red Hat OpenStack Platform 16.0 nova-compute",
      "io.k8s.display-name": "Red Hat OpenStack Platform 16.0 nova-compute",
      "io.openshift.expose-services": "",
      "io.openshift.tags": "rhosp osp openstack osp-16.0",
      "maintainer": "shyam.biradar",
      "name": "rhosp16/openstack-nova-compute-triliodata-plugin",
      "release": "4.0",
      "summary": "Red Hat OpenStack Platform 16.0 nova-compute TrilioData Datamover",
      "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp16/openstack-nova-compute/images/16.0-80",
      "vcs-ref": "9429d8bb2549bd4a67cc26031ce1fbdd042b3a92",
      "vcs-type": "git",
      "vendor": "TrilioData",
      "version": "4.0.0"
    },
    "StopSignal": "SIGTERM"
  },
  "author": "TrilioData shyam.biradar",
  "config": {
    "Hostname": "9ce2b0f43192",
    "Domainname": "",
    "User": "nova",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "container=oci",
      "LANG=en_US.UTF-8",
      "KOLLA_BASE_DISTRO=rhel",
      "KOLLA_INSTALL_TYPE=binary",
      "KOLLA_INSTALL_METATYPE=rhos",
      "KOLLA_DISTRO_PYTHON_VERSION=3.6",
      "KOLLA_BASE_ARCH=x86_64",
      "PS1=$(tput bold)($(printenv KOLLA_SERVICE_NAME))$(tput sgr0)[$(id -un)@$(hostname -s) $(pwd)]$ "
    ],
    "Cmd": [
      "kolla_start"
    ],
    "ArgsEscaped": true,
    "Image": "1ce77652aa2bdf70177b974f1d98d57c39f28bf2ad33685a7c17c65c31be43fe",
    "Volumes": {},
    "WorkingDir": "",
    "Entrypoint": [
      "dumb-init",
      "--single-child",
      "--"
    ],
    "OnBuild": [],
    "Labels": {
      "architecture": "x86_64",
      "authoritative-source-url": "registry.access.redhat.com",
      "batch": "20200130.1",
      "build-date": "2020-01-30T06:10:45.345348",
      "com.redhat.build-host": "cpt-1002.osbs.prod.upshift.rdu2.redhat.com",
      "com.redhat.component": "openstack-nova-compute-container",
      "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements",
      "description": "Red Hat OpenStack Platform 16.0 nova-compute TrilioData Datamover",
      "distribution-scope": "public",
      "io.k8s.description": "Red Hat OpenStack Platform 16.0 nova-compute",
      "io.k8s.display-name": "Red Hat OpenStack Platform 16.0 nova-compute",
      "io.openshift.expose-services": "",
      "io.openshift.tags": "rhosp osp openstack osp-16.0",
      "maintainer": "shyam.biradar",
      "name": "rhosp16/openstack-nova-compute-triliodata-plugin",
      "release": "4.0",
      "summary": "Red Hat OpenStack Platform 16.0 nova-compute TrilioData Datamover",
      "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp16/openstack-nova-compute/images/16.0-80",
      "vcs-ref": "9429d8bb2549bd4a67cc26031ce1fbdd042b3a92",
      "vcs-type": "git",
      "vendor": "TrilioData",
      "version": "4.0.0"
    },
    "StopSignal": "SIGTERM"
  },
  "architecture": "amd64",
  "os": "linux",
  "parent": "sha256:cc913a1cd6a72403887bd9118aa06439dc37da7ae757fa0ca25cd5f366b60a70",
  "rootfs": {
    "type": "layers",
    "diff_ids": [
      "sha256:1295eae54c9d95bd8e2c7f83df2a90ac3923d89ec44231fd49f31e7a934f9656",
      "sha256:85f69e555a1b647e78d442738170b5a6925e7141335612ce2c58f0ea18ab828d",
      "sha256:3c566e4e5b7f1cb186bedd92884ac8ccb22c342441c5360dcd9252892d3a73a0",
      "sha256:fbe48a83e5322827e592143c77b99cf9b7560252596b7802007ff83c79834f1b",
      "sha256:fc65768eb1bd3f334e8e852d4872df31a032ad1eb4109da55c652c3fcccff15d",
      "sha256:deb6070d1de373610118ced470aa0b38c831f3367912060d3978064f17826135",
      "sha256:9a03eaa1e257553b6ba13b2bf074097572570075a2835fdcd94a422ef7a048d3"
    ]
  },
  "history": [
    {
      "created": "2020-01-29T19:40:01.667754318Z",
      "comment": "Imported from -"
    },
    {
      "created": "2020-01-29T19:40:16.841325Z"
    },
    {
      "created": "2020-01-30T04:43:26.831413Z"
    },
    {
      "created": "2020-01-30T04:57:10.501355Z"
    },
    {
      "created": "2020-01-30T05:29:46.404117Z"
    },
    {
      "created": "2020-01-30T06:13:39.516058Z"
    },
    {
      "created": "2020-02-24T01:09:23.190278491-05:00",
      "created_by": "/bin/sh -c #(nop) MAINTAINER TrilioData shyam.biradar",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:09:23.190390302-05:00",
      "created_by": "/bin/sh -c #(nop) LABEL name=\"rhosp16/openstack-nova-compute-triliodata-plugin\"       maintainer=\"shyam.biradar\"       vendor=\"TrilioData\"       version=\"4.0.0\"       release=\"4.0\"       summary=\"Red Hat OpenStack Platform 16.0 nova-compute TrilioData Datamover\"       description=\"Red Hat OpenStack Platform 16.0 nova-compute TrilioData Datamover\"",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:09:23.190405845-05:00",
      "created_by": "/bin/sh -c #(nop) USER root",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:09:23.740990127-05:00",
      "created_by": "/bin/sh -c #(nop) ADD trilio.repo /etc/yum.repos.d/",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:01.806449977-05:00",
      "created_by": "/bin/sh -c yum install python3-tvault-contego puppet-triliovault -y",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:03.094831511-05:00",
      "created_by": "/bin/sh -c mkdir -p /opt/tvault/",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:04.843539607-05:00",
      "created_by": "/bin/sh -c #(nop) ADD start_datamover_s3 start_datamover_nfs start_tvault_object_store /opt/tvault/",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:06.183209065-05:00",
      "created_by": "/bin/sh -c chown -R nova:nova /opt/tvault/",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:07.344456425-05:00",
      "created_by": "/bin/sh -c chmod 755 /opt/tvault/start_datamover_s3 /opt/tvault/start_datamover_nfs /opt/tvault/start_tvault_object_store",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:08.166166722-05:00",
      "created_by": "/bin/sh -c #(nop) ADD nova-sudoers /etc/sudoers.d/nova-sudoers",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:08.801819057-05:00",
      "created_by": "/bin/sh -c #(nop) ADD trilio.filters /usr/share/nova/rootwrap/trilio.filters",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:12.342056062-05:00",
      "created_by": "/bin/sh -c usermod -aG disk nova",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:12.742874304-05:00",
      "created_by": "/bin/sh -c #(nop) ADD tvault-contego.conf /etc/tvault-contego/tvault-contego.conf",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:14.229478444-05:00",
      "created_by": "/bin/sh -c chown nova:nova /etc/tvault-contego/tvault-contego.conf",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:15.597403097-05:00",
      "created_by": "/bin/sh -c mkdir -p /var/triliovault-mounts",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:17.688829994-05:00",
      "created_by": "/bin/sh -c chown nova:nova /var/triliovault-mounts",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:22.690918074-05:00",
      "created_by": "/bin/sh -c mkdir -p /var/triliovault",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:25.898025842-05:00",
      "created_by": "/bin/sh -c chmod 777 /var/triliovault-mounts",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:27.716255823-05:00",
      "created_by": "/bin/sh -c chown nova:nova /var/triliovault",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:30.07637046-05:00",
      "created_by": "/bin/sh -c chmod 777 /var/triliovault",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:33.070740441-05:00",
      "created_by": "/bin/sh -c mkdir -p /var/log/trilio-datamover",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:34.450281456-05:00",
      "created_by": "/bin/sh -c chown nova:nova /var/log/trilio-datamover",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:35.331578554-05:00",
      "created_by": "/bin/sh -c #(nop) ADD log-rotate-conf /etc/logrotate.d/tvault-contego",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:38.081002745-05:00",
      "created_by": "/bin/sh -c yum clean all",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:39.525081538-05:00",
      "created_by": "/bin/sh -c rm -f /etc/yum.repos.d/trilio.repo",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:40.54014173-05:00",
      "created_by": "/bin/sh -c mkdir /licenses",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T01:10:41.136197808-05:00",
      "created_by": "/bin/sh -c #(nop) COPY licensing.txt /licenses",
      "empty_layer": true
    },
    {
      "created": "2020-02-24T06:10:41.136756854Z",
      "author": "TrilioData shyam.biradar",
      "created_by": "/bin/sh -c #(nop) USER nova"
    }
  ]
}
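For reference, the '='-prefixed filenames in the listing above appear to be base64-encoded "big data" keys as stored by containers/storage (an observation from this bug's output, not a documented interface). A small sketch to decode them:

```python
import base64


def decode_storage_filename(name: str) -> str:
    """Decode a containers/storage image big-data filename.

    Keys that are not filesystem-safe appear to be stored as '=' followed by
    the standard base64 encoding of the key (observed behavior, not a stable
    interface).
    """
    if name.startswith("="):
        return base64.b64decode(name[1:]).decode("utf-8")
    return name  # e.g. the plain 'manifest' file


# The first file from the listing above decodes to the manifest-digest key:
print(decode_storage_filename(
    "=bWFuaWZlc3Qtc2hhMjU2OmZkZjU4M2Q3OGVhYmExYmNiZTRmNDg2ZDJlYjk0YThkYTE4YTlkNzRmYTRhNTMwMTFkZmUwZTk5MzU3MjA0NGQ="
))
# → manifest-sha256:fdf583d78eaba1bcbe4f486d2eb94a8da18a9d74fa4a53011dfe0e993572044d
```

The other two decode to the config-digest key and a signature key, which is why the 1383-byte file matches the plain `manifest` file byte for byte.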



Version-Release number of selected component (if applicable):
podman-1.6.4-2.module+el8.1.1+5363+bf8ff1af.x86_64


Expected results:
the layer digests referenced in the manifest should be findable in overlay-layers/layers.json

Actual result:
the manifest's layer digest references are not in overlay-layers/layers.json, but the layer IDs from the '=c2hhMjU2OmQ0YmQxZTU0YzBjNTEyMTllZTZiMmJjNTgyNDZjOTBhNTQ1ZWEzOTcwZTE5MDBjMDE0NTRiZTdjYTBmOGIxYzA=' (config) file are in overlay-layers/layers.json
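The check being performed here can be sketched as follows (a hypothetical minimal version; layers.json is an undocumented containers/storage file, and the field names below are the ones discussed in this bug, not a stable interface):

```python
def missing_manifest_layers(manifest: dict, layers: list) -> list:
    """Return manifest layer digests that appear nowhere in layers.json.

    Compares each manifest layer digest against the 'compressed-diff-digest'
    and 'diff-digest' fields of every layer record (field names as used in
    this bug report; not a documented interface).
    """
    known = set()
    for layer in layers:
        for key in ("compressed-diff-digest", "diff-digest"):
            if key in layer:
                known.add(layer[key])
    return [l["digest"] for l in manifest.get("layers", []) if l["digest"] not in known]
```

With the attached layers.json and the manifest shown above, this kind of lookup fails for digests such as sha256:933041e5... (see comment 2).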

Comment 1 Matthew Heon 2020-02-24 19:48:30 UTC
I think this may be confusion between digest and image ID? The digest is SHA256 of the layer, but it is not the identifier that Podman and c/storage use to identify the image; that's a random 256-bit integer. You can pull and refer to images by their SHA256 digest, but most output (inspect, images, etc) will refer to them by the image ID.

Comment 2 Alex Schultz 2020-02-24 20:13:05 UTC
So we're taking the manifest and looking for the digest in the layers.json and not finding it in compressed-diff-digest or diff-digest.  In this case "sha256:933041e5135b6d2775e5def32a99ce221eefab64b05ce9b300f51612b434b4b8" is not in the layers.json

Comment 3 Matthew Heon 2020-02-24 20:45:13 UTC
Can I ask why you're directly working with layers.json? I don't believe we document that file as a public interface, so I don't think we're prepared to make any stability guarantees about it and its contents. Is what you're trying to do not available from the CLI? Are there performance reasons to avoid Podman or Buildah commands here?

Comment 4 Nalin Dahyabhai 2020-02-24 20:47:21 UTC
(In reply to Matthew Heon from comment #1)
> I think this may be confusion between digest and image ID? The digest is
> SHA256 of the layer, but it is not the identifier that Podman and c/storage
> use to identify the image; that's a random 256-bit integer. You can pull and
> refer to images by their SHA256 digest, but most output (inspect, images,
> etc) will refer to them by the image ID.

For Docker v2s2 and OCI images, the image ID is computed as the digest of the image's configuration blob, and the image's digest (so far as there is such a thing) is the digest of the image's manifest.  The computations are more involved for Docker v2s1 images, but for a given image the values for each should always be consistent.
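For v2s2/OCI images, both values described above are plain digests of raw blobs; a minimal sketch (the placeholder config bytes are illustrative only):

```python
import hashlib


def blob_digest(blob: bytes) -> str:
    """Digest of a raw blob, in the 'sha256:<hex>' form used by registries."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()


# For a Docker v2s2 or OCI image:
#   image ID     = blob_digest(<config blob bytes>)
#   image digest = blob_digest(<manifest bytes, exactly as served>)
# Both are only stable if the bytes are hashed verbatim; re-serializing the
# JSON (reordering keys, changing whitespace) changes the digest.
config_bytes = b"{}"  # placeholder config blob
print(blob_digest(config_bytes))
```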

Comment 5 Alex Schultz 2020-02-24 21:10:30 UTC
We've had to implement our own container registry, which requires us to serve up layers. In order to upload the layers from a locally stored container, we're using the existing manifest on disk to find and export the layers.

Comment 6 Miloslav Trmač 2020-02-24 21:21:00 UTC
Alex,

First of all, why do you care? The data stored in /var/lib/containers/storage is not a documented ABI with any expectation of stability.

What is the end-user-visible impact of this situation (whether or not it is correct)?
- https://bugzilla.redhat.com/show_bug.cgi?id=1804045#c0 shows a docker:/… syntax that causes an incorrect upload URL to be formed (by some client that is not checking for a valid tag format, / are invalid in tags)
- https://bugzilla.redhat.com/show_bug.cgi?id=1804045#c2 shows a blob upload initiation at POST /v2/trilio/trilio-datamover/blobs/uploads/ failing with 404

The two are fairly different failures, and it’s not obvious to me how they relate at all to layer IDs.


---

Second, the assumption that layer blob (“compressed”) digests are maintained in the layer metadata so that layers can be found using that data is, as of the current implementation, plainly incorrect.

Layers are deduplicated by their IDs (~ usually the sequence of DiffIDs of the layer and all its parents), so the “same” layer can easily be pulled from two or more different sources, with two or more layer blob (“compressed”) digests; but there is only one "compressed-diff-digest" value per layer, so if any differently-compressed layer duplicate is downloaded, that duplicate different compressed digest will not be recorded anywhere.

(And we are intentionally not editing the stored manifests on deduplication, we want to preserve the originals along with signatures. The manifests don’t exist in local storage to look up local layers.)
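The ID-based deduplication described above can be illustrated with the OCI "chain ID" construction, which identifies a layer together with all of its parents (c/storage layer IDs are usually derived this way, but as the comment notes, that is an implementation detail, not a guarantee):

```python
import hashlib


def chain_id(diff_ids: list) -> str:
    """OCI chain ID over a sequence of DiffIDs (uncompressed layer digests).

    Per the OCI image spec:
        ChainID([L1])      = DiffID(L1)
        ChainID([L1..Ln])  = sha256(ChainID([L1..Ln-1]) + " " + DiffID(Ln))
    """
    acc = diff_ids[0]
    for d in diff_ids[1:]:
        acc = "sha256:" + hashlib.sha256((acc + " " + d).encode()).hexdigest()
    return acc
```

Two pulls whose layers have the same DiffID sequence produce the same chain IDs and are deduplicated, regardless of which compressed blobs were actually downloaded; hence a single stored layer can correspond to several different compressed digests, only one of which gets recorded.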

---

(In reply to Alex Schultz from comment #5)
> We've had to implement out own container registry which requires us to serve
> up layers. In order to upload the layers from a locally stored container,
> we're using the existing manifest on disk to find and export the layers.

Do I understand correctly that you have a docker/distribution server that serves the contents of a local containers-storage store?

1) Still, the only reasonable way to access the store is via the containers/storage library.  So, that implies Go, and at that point you might as well use containers/image — ideally via the top-level copy.Image pipeline to copy images as units, or if not that and you have to serve the docker/distribution protocol, doing everything that copy.Image does (in this area, note LayerInfosForCopy at least).

2) For such a registry, the compressed digests seem very useless as a concept anyway, because the c/storage store only stores uncompressed layers, and it’s not in general possible to recreate the original compressed layers from the uncompressed state (as compression implementations change over time).

Comment 8 Alex Schultz 2020-02-24 22:04:37 UTC
> What is the end-user-visible impact of this situation (whether or not it is
> correct)?
> - https://bugzilla.redhat.com/show_bug.cgi?id=1804045#c0 shows a docker:/…
> syntax that causes an incorrect upload URL to be formed (by some client that
> is not checking for a valid tag format, / are invalid in tags)

This was due to improper usage of cli parameters.

> - https://bugzilla.redhat.com/show_bug.cgi?id=1804045#c2 shows a blob upload
> initiation at POST /v2/trilio/trilio-datamover/blobs/uploads/ failing with
> 404
> 

Yes, the code checks whether it can perform an upload; if it gets a 404 it cannot, and it falls back to the local implementation, where we need to construct blobs and manage them on the local file system.

> The two are fairly different failures, and it’s not obvious to me how they
> relate at all to layer IDs.

Neither of these errors are related to this specific bug. The issue is later in that we need to be able to take a local container and publish it.

> 
> 
> ---
> 
> Second, the assumption that layer blob (“compressed”) digests are maintained
> in the layer metadata so that layers can be found using that data is, as of
> the current implementation, plainly incorrect.
> 
> Layers are deduplicated by their IDs (~ usually the sequence of DiffIDs of
> the layer and all its parents), so the “same” layer can easily be pulled
> from two or more different sources, with two or more layer blob
> (“compressed”) digests; but there is only one "compressed-diff-digest" value
> per layer, so if any differently-compressed layer duplicate is downloaded,
> that duplicate different compressed digest will not be recorded anywhere.
> 

This has been working fine for all of the other containers we build/manage when fetching from the Red Hat registry. I'm uncertain what is different about the reported container. In this instance we didn't already have any of the layers for this image, so I'm not sure why the digests would have changed.

> (And we are intentionally not editing the stored manifests on deduplication,
> we want to preserve the originals along with signatures. The manifests don’t
> exist in local storage to look up local layers.)

I'm not sure why the de-duplication wouldn't also be based on the layers themselves (thus reusing the same digest IDs). Are you folks doing further de-duplication on the data itself vs. at a layer/blob level?

> 
> ---
> 
> (In reply to Alex Schultz from comment #5)
> > We've had to implement out own container registry which requires us to serve
> > up layers. In order to upload the layers from a locally stored container,
> > we're using the existing manifest on disk to find and export the layers.
> 
> Do I understand correctly that there is have a docker/distribution server
> that serves the contents of a local containers-storage store?

No. We have implemented a registry that serves blobs up in a read-only format similar to the docker registry api. However because it's read-only (no push is allowed), we load it by managing blobs and using the manifest metadata for lookups.

> 
> 1) Still, the only reasonable way to access the store is via the
> containers/storage library.  So, that implies Go, and at that point you
> might as well use containers/image — ideally via the top-level copy.Image
> pipeline to copy images as units, or if not that and you have to serve the
> docker/distribution protocol, doing everything that copy.Image does (in this
> area, note LayerInfosForCopy at least).
> 

The project is python only.  Much of this code was written prior to the existence of many of the export abilities of buildah/podman.

> 2) For such a registry, the compressed digests seem very useless as a
> concept anyway, because the c/storage store only stores uncompressed layers,
> and it’s not in general possible to recreate the original compressed layers
> from the uncompressed state (as compression implementations change over
> time).

That's not been our experience.  

FWIW, here's our export code to write our blobs. https://github.com/openstack/tripleo-common/blob/master/tripleo_common/image/image_export.py#L80

Our code to extract the layers is https://github.com/openstack/tripleo-common/blob/bbb3d193b7bbc96f9aad77fdd5d050715d54f121/tripleo_common/image/image_uploader.py#L2000

I'm well aware this is not ideal, we need to be able to export the local images that may be built so that we can provide them with a docker api interface.

Comment 9 Nalin Dahyabhai 2020-02-24 22:13:13 UTC
(In reply to Alex Schultz from comment #8)

> FWIW, here's our export code to write our blobs.
> https://github.com/openstack/tripleo-common/blob/master/tripleo_common/image/
> image_export.py#L80
> 
> Our code to extract the layers is
> https://github.com/openstack/tripleo-common/blob/
> bbb3d193b7bbc96f9aad77fdd5d050715d54f121/tripleo_common/image/image_uploader.
> py#L2000
> 
> I'm well aware this is not ideal, we need to be able to export the local
> images that may be built so that we can provide them with a docker api
> interface.

If you can migrate this to running `skopeo copy containers-storage:image-name-or-id ...`, I would highly recommend that.  The current implementation makes some assumptions that I don't think can or should be depended on.

Comment 10 Alex Schultz 2020-02-24 22:19:11 UTC
We previously had a skopeo version and it was unusable due to slow speed.

Comment 13 Miloslav Trmač 2020-02-24 23:05:37 UTC
(In reply to Alex Schultz from comment #8)
> > Second, the assumption that layer blob (“compressed”) digests are maintained
> > in the layer metadata so that layers can be found using that data is, as of
> > the current implementation, plainly incorrect.
> > 
> > Layers are deduplicated by their IDs (~ usually the sequence of DiffIDs of
> > the layer and all its parents), so the “same” layer can easily be pulled
> > from two or more different sources, with two or more layer blob
> > (“compressed”) digests; but there is only one "compressed-diff-digest" value
> > per layer, so if any differently-compressed layer duplicate is downloaded,
> > that duplicate different compressed digest will not be recorded anywhere.
> 
> This has been working fine for all of the other containers we build/manage
> when fetching from the red hat registry.

The implementation got lucky.

---

> I'm uncertain what is different
> about the reported container.

I think the implementation just isn’t lucky any longer.

But if you want to recreate this at leisure:
> $ skopeo copy --dest-compress --dest-compress-format=gzip docker://busybox dir:t1
> $ skopeo copy --dest-compress --dest-compress-format=zstd docker://busybox dir:t2
> $ podman pull dir:t1
> $ podman pull dir:t2

In the current implementation (which is NOT a commitment and NOT a behavior to be relied upon), the layer metadata comes from t1 (gzip compression) and the manifest from t2 (Zstd compression).
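The effect being demonstrated (one uncompressed layer, multiple blob digests) does not depend on skopeo; any two compressors show it. A stdlib-only sketch, with zlib and bz2 standing in for gzip and zstd:

```python
import bz2
import hashlib
import zlib

layer = b"pretend this is a layer tarball " * 1000

# One uncompressed digest (the DiffID)...
diff_id = hashlib.sha256(layer).hexdigest()

# ...but a different blob digest per compression scheme, as if the "same"
# layer had been pulled from two sources.
blob_a = zlib.compress(layer)   # stand-in for the gzip-compressed copy
blob_b = bz2.compress(layer)    # stand-in for the zstd-compressed copy
digest_a = hashlib.sha256(blob_a).hexdigest()
digest_b = hashlib.sha256(blob_b).hexdigest()

# c/storage keeps one layer (keyed by DiffID chain) but can record only one
# compressed-diff-digest, so the other compressed digest is lost.
assert digest_a != digest_b
```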

---

> IN this instance we didn't already have any
> of the layers for this image so I'm not sure why the digests would have
> changed.

(If you want to dig into this, collecting all image manifests and the full layer database might be useful — and even that might not be sufficient if some of the layers were previously downloaded by pulling an image that was later deleted without deleting the layers.

If this really happens when the layer is not duplicated, i.e. you can reproduce this starting with an empty container storage, pulling a single image from a remote registry, and doing nothing else to the storage, that would be surprising and somewhat interesting.

But the ultimate answer is very likely going to be “don’t read those files directly” all the same.)

---

> > (And we are intentionally not editing the stored manifests on deduplication,
> > we want to preserve the originals along with signatures. The manifests don’t
> > exist in local storage to look up local layers.)
> 
> I'm not sure why the de-deplication wouldn't also be based on layers
> themselves (thus reusing the same digest ids).

I’m not sure what “based on layers themselves” means.

A single DiffID == uncompressed layer digest can correspond to many different compressed digests.

If you are suggesting that the deduplication should happen based on compressed digest values instead of uncompressed digest values,
1) that would waste space for no benefit
2) locally-built images don’t have compressed digests; those only come to exist after compression, i.e. when pushing the image to a registry.

---

> > (In reply to Alex Schultz from comment #5)
> No. We have implemented a registry that serves blobs up in a read-only
> format similar to the docker registry api. However because it's read-only
> (no push is allowed), we load it by managing blobs and using the manifest
> metadata for lookups.

As mentioned before, the manifest metadata is not maintained to be authoritative for references to image’s layers in local storage.

> > 1) Still, the only reasonable way to access the store is via the
> > containers/storage library.  So, that implies Go, and at that point you
> > might as well use containers/image — ideally via the top-level copy.Image
> > pipeline to copy images as units, or if not that and you have to serve the
> > docker/distribution protocol, doing everything that copy.Image does (in this
> > area, note LayerInfosForCopy at least).
> 
> The project is python only.  Much of this code was written prior to the
> existence of many of the export abilities of buildah/podman.

Buildah/Podman push/pull images via containers/image, so the code to copy images in/out has necessarily existed before. Sure, various CLI options and push/pull optimizations have only been added over time.

> > 2) For such a registry, the compressed digests seem very useless as a
> > concept anyway, because the c/storage store only stores uncompressed layers,
> > and it’s not in general possible to recreate the original compressed layers
> > from the uncompressed state (as compression implementations change over
> > time).
> 
> That's not been our experience.  

You’ve been lucky, and you aren’t lucky any more.
 
> FWIW, here's our export code to write our blobs.
> https://github.com/openstack/tripleo-common/blob/master/tripleo_common/image/
> image_export.py#L80
> 
> Our code to extract the layers is
> https://github.com/openstack/tripleo-common/blob/
> bbb3d193b7bbc96f9aad77fdd5d050715d54f121/tripleo_common/image/image_uploader.
> py#L2000

If I understand _copy_layer_local_to_registry, that is actually prepared for the result having a different digest (it calls _copy_stream_to_registry with verify_digest=false, and export_stream renames the created file in that case to match). It’s “just” _copy_local_to_registry that
1) makes invalid assumptions about the relationships between images/manifests/layers (which seem to happen in other places as well)
2) throws away the digest returned by _copy_layer_local_to_registry.

Looking at https://github.com/openstack/tripleo-common/blob/bbb3d193b7bbc96f9aad77fdd5d050715d54f121/tripleo_common/image/image_uploader.py#L1971 , that’s just not going to work.

It ISN’T POSSIBLE to make this work.  Compare https://github.com/containers/image/pull/157 .

Comment 14 Miloslav Trmač 2020-02-24 23:08:15 UTC
(In reply to Alex Schultz from comment #10)
> We previously had a skopeo version and it was unusable due to slow speed.

FWIW there is https://github.com/NicolasT/static-container-registry ; but if Skopeo is slow for some reason, this probably isn’t going to be any faster.

Comment 16 Miloslav Trmač 2020-02-24 23:45:01 UTC
(In reply to Alex Schultz from comment #8)
> FWIW, here's our export code to write our blobs.
> https://github.com/openstack/tripleo-common/blob/master/tripleo_common/image/
> image_export.py#L80

For the record, I can’t see anything locking the files against concurrent modification; this may well be faster, but it’s also unsafe unless you can somehow ensure that nothing else is accessing the store.