Description of problem:
podman is missing registry credentials setup.

Version-Release number of selected component (if applicable):
4.1.0-0.ci-2019-05-07-064132

Actual results:
```
podman run -it --rm --entrypoint=/usr/bin/cluster-kube-apiserver-operator "registry.svc.ci.openshift.org/ocp/4.1-2019-05-07-064132@sha256:b02337de9e86741066faaa375a81faa596a10c39bf26eef2e7752f6cbf6ef2ea" recovery-apiserver
Trying to pull registry.svc.ci.openshift.org/ocp/4.1-2019-05-07-064132@sha256:b02337de9e86741066faaa375a81faa596a10c39bf26eef2e7752f6cbf6ef2ea...Failed
unable to pull registry.svc.ci.openshift.org/ocp/4.1-2019-05-07-064132@sha256:b02337de9e86741066faaa375a81faa596a10c39bf26eef2e7752f6cbf6ef2ea: unable to pull image: Error determining manifest MIME type for docker://registry.svc.ci.openshift.org/ocp/4.1-2019-05-07-064132@sha256:b02337de9e86741066faaa375a81faa596a10c39bf26eef2e7752f6cbf6ef2ea: Error reading manifest sha256:b02337de9e86741066faaa375a81faa596a10c39bf26eef2e7752f6cbf6ef2ea in registry.svc.ci.openshift.org/ocp/4.1-2019-05-07-064132: unauthorized: authentication required
```

Expected results:
Registry auth config is placed in some standard location where the tool(s) can find it.

Additional info:
`podman run` is also missing `--authfile`, but this can be worked around by pre-pulling:
```
podman pull --authfile=/var/lib/kubelet/config.json "registry.svc.ci.openshift.org/ocp/4.1-2019-05-07-064132@sha256:b02337de9e86741066faaa375a81faa596a10c39bf26eef2e7752f6cbf6ef2ea"
```
The same issue exists for:
```
oc adm release info --registry-config='/var/lib/kubelet/config.json' registry.svc.ci.openshift.org/ocp/release:4.1.0-0.ci-2019-05-07-064132 --image-for=cluster-openshift-apiserver-operator
```
which doesn't see any default config. Burning those workarounds into recovery docs is going to look bad.
I do not believe this is a bug; I think using `--authfile` is the correct solution. Generally, people should not be running podman on nodes. I am going to move this to 4.2 and assign it to the containers team to look only at the lack of `--authfile` for `podman run` and whether they want to support that flag.
My thinking is that if we provide a tool, it should be properly configured. If we don't want people using the tool, we should not provide it, yet disaster recovery scenarios prove this is not black and white, and those tools need to be present. This also creates different paths for OKD and OCP, which may lead developers to post insufficient steps: OKD doesn't require auth, and a developer might expect OCP to be properly configured as well. I think this could almost be solved as easily as making a symlink from the default location to the kubelet file. And I'd expect `podman run` to take the `--authfile` flag, since it delegates to pulling an image.
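The symlink idea above might look roughly like the following sketch. The paths are assumptions (a root-level default auth location for podman, and the kubelet-managed pull secret from this report), and `AUTH_ROOT` is a hypothetical prefix added here only so the commands are safe to try against a scratch directory; on a real node it would be empty and this would run as root:

```shell
# Hypothetical sketch: link podman's assumed default auth location to the
# kubelet pull secret so credentials are found without extra flags.
AUTH_ROOT="${AUTH_ROOT:-$(mktemp -d)}"

# Create the assumed default directory and point auth.json at the
# kubelet's pull secret (symlink creation does not require the target
# to exist on the machine where this is tried).
mkdir -p "$AUTH_ROOT/run/containers/0"
ln -sf /var/lib/kubelet/config.json "$AUTH_ROOT/run/containers/0/auth.json"

readlink "$AUTH_ROOT/run/containers/0/auth.json"   # prints /var/lib/kubelet/config.json
```

Whether the kubelet's `config.json` format is accepted as-is by podman's auth lookup would still need to be confirmed.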
Can you add a comment to this bug describing how a user would specify the authfile if they want to do this? If you add that, we'll move this over to a doc bug for the docs team.
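For the docs team, the user-facing invocation would presumably be one of the following two forms (a sketch: the pull-secret path is the one from this report, and `REGISTRY_AUTH_FILE` is the environment override that podman's own help text mentions):

```shell
# Option 1: pass the kubelet pull secret explicitly on each invocation:
#   podman pull --authfile=/var/lib/kubelet/config.json <image>

# Option 2: export the override variable once; subsequent podman
# invocations in the same shell then pick up the same credentials.
export REGISTRY_AUTH_FILE=/var/lib/kubelet/config.json
echo "$REGISTRY_AUTH_FILE"   # prints /var/lib/kubelet/config.json
```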
A PR that makes `podman run` take the `--authfile` flag has been opened: https://github.com/containers/libpod/pull/3737
https://github.com/containers/libpod/pull/3737#event-2547380230 landed 2019-08-09T12:09 PDT [1].
From [1], the commit landed in libpod v1.6.1, v1.6.1-rc1, v1.6.0, v1.6.0-rc2, v1.6.0-rc1, v1.5.1, and v1.5.0. Checking the current machine-os-content:

```
$ oc image info -o json $(oc adm release info --image-for=machine-os-content quay.io/openshift-release-dev/ocp-release:4.2.0-rc.1) | jq -r .config.config.Labels.version
42.80.20191004.0
$ curl -s 'https://releases-rhcos-art.cloud.privileged.psi.redhat.com/storage/releases/rhcos-4.2/42.80.20191004.0/commitmeta.json' | jq '.["rpmostree.rpmdb.pkglist"][] | select(.[0] == "podman")'
[
  "podman",
  "0",
  "1.4.2",
  "5.el8",
  "x86_64"
]
```

Hrm. Maybe this needs a backport or podman bump for 4.2?

[1]: https://github.com/containers/libpod/pull/3737/commits/cfdf891552704dce8020aa313f61bf85f5a6b072
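Since the tag list above says the fix first shipped in v1.5.0, a simpler check than comparing version strings is to grep the installed podman's help output for the flag itself. A sketch (assumes only that podman, when present, lists its flags in `podman run --help`; it falls through to a hint when podman or the flag is absent):

```shell
# Probe the run subcommand's help text for --authfile support.
if podman run --help 2>/dev/null | grep -q -- '--authfile'; then
    echo "podman run supports --authfile"
else
    echo "no --authfile support; podman >= 1.5.0 (PR 3737) is needed"
fi
```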
I don't see this as a 4.2 blocker, moving to 4.3.
Verified in 42.80.20190925.2

```
[core@localhost ~]$ rpm-ostree status
State: idle
AutomaticUpdates: disabled
Deployments:
● pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:016efac63beadc65613b49820789195ee7b49d8edb8762e893199670859a2b9a
              CustomOrigin: Image generated via coreos-assembler
                   Version: 42.80.20190925.2 (2019-09-25T19:53:09Z)
$ rpm -q podman
podman-1.4.2-5.el8.x86_64
[core@localhost ~]$ podman run --help | grep authfile
      --authfile string   Path of the authentication file. Use REGISTRY_AUTH_FILE environment variable to override (default "/run/user/1000/containers/auth.json")
[core@localhost ~]$ sudo podman run --authfile /dev/null docker.io/alpine echo "hello"
Trying to pull docker.io/alpine...Getting image source signatures
Copying blob 9d48c3bd43c5 done
Copying config 9617696764 done
Writing manifest to image destination
Storing signatures
hello
```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days