Description of problem:

Running `oc adm catalog mirror` fails with the error "unauthorized: access to the requested resource is not authorized", even though logging in to "quay.io/openshift-qe-optional-operators" succeeds:

[root@preserve-olm-env data]# docker login quay.io/openshift-qe-optional-operators
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Version-Release number of selected component (if applicable):

[root@preserve-olm-env data]# oc version
Client Version: 4.6.0-0.nightly-2020-09-23-022756
Server Version: 4.6.0-0.nightly-2020-09-21-230455
Kubernetes Version: v1.19.0+f5121a6

How reproducible:
Always.

Steps to Reproduce:
1. Download the oc client.
2. Log in to the "quay.io/openshift-qe-optional-operators" repo.

[root@preserve-olm-env data]# docker login quay.io/openshift-qe-optional-operators
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

3. Run the command:
`oc adm catalog mirror quay.io/openshift-qe-optional-operators/ocp4-index:23 quay.io/openshift-qe-optional-operators --manifests-only --to-manifests=mirror`

Actual results:
The command fails with "unauthorized: access to the requested resource is not authorized". In fact, oc crashes; see the detailed logs in the additional info.

Expected results:
The oc client should pick up the correct auth from the Docker config file.
Here is the content of my docker config file:

[root@preserve-olm-env data]# cat ~/.docker/config.json
{
    "auths": {
        "quay.io": {
            "auth": "xxx"
        },
        "quay.io/openshift-qe-optional-operators": {
            "auth": "xxx"
        },
        "quay.io/openshift-release-dev": {
            "auth": "xxx",
            "email": "xxx"
        },
        "registry.redhat.io": {
            "auth": "xxx"
        },
        "registry.svc.ci.openshift.org": {
            "auth": "xxx"
        }
    },
    "HttpHeaders": {
        "User-Agent": "Docker-Client/19.03.9 (linux)"
    }
}

Additional info:

1. Detailed logs:

[root@preserve-olm-env data]# oc adm catalog mirror quay.io/openshift-qe-optional-operators/ocp4-index:23 quay.io/openshift-qe-optional-operators --manifests-only --to-manifests=mirror --loglevel=8
I0923 06:28:46.560513 3576 config.go:140] looking for config.json at /root/.docker/config.json
I0923 06:28:46.560784 3576 config.go:148] found valid config.json at /root/.docker/config.json
I0923 06:28:46.561068 3576 round_trippers.go:420] GET https://quay.io/v2/
I0923 06:28:46.561082 3576 round_trippers.go:427] Request Headers:
I0923 06:28:46.646872 3576 round_trippers.go:446] Response Status: 401 Unauthorized in 85 milliseconds
I0923 06:28:46.646959 3576 round_trippers.go:449] Response Headers:
I0923 06:28:46.646965 3576 round_trippers.go:452] Server: nginx/1.12.1
I0923 06:28:46.646970 3576 round_trippers.go:452] Date: Wed, 23 Sep 2020 10:35:48 GMT
I0923 06:28:46.646974 3576 round_trippers.go:452] Content-Type: text/html; charset=utf-8
I0923 06:28:46.646979 3576 round_trippers.go:452] Content-Length: 4
I0923 06:28:46.646983 3576 round_trippers.go:452] Docker-Distribution-Api-Version: registry/2.0
I0923 06:28:46.646988 3576 round_trippers.go:452] Www-Authenticate: Bearer realm="https://quay.io/v2/auth",service="quay.io"
I0923 06:28:46.649382 3576 credentials.go:108] Found secret to match https://quay.io/v2/auth (quay.io/auth):
I0923 06:28:46.649444 3576 round_trippers.go:420] GET https://quay.io/v2/auth?account=kuiwang&scope=repository%3Aopenshift-qe-optional-operators%2Focp4-index%3Apull&service=quay.io
I0923 06:28:46.649452 3576 round_trippers.go:427] Request Headers:
I0923 06:28:46.649459 3576 round_trippers.go:431] Authorization: Basic <masked>
I0923 06:28:46.973094 3576 round_trippers.go:446] Response Status: 200 OK in 323 milliseconds
I0923 06:28:46.973129 3576 round_trippers.go:449] Response Headers:
I0923 06:28:46.973139 3576 round_trippers.go:452] Date: Wed, 23 Sep 2020 10:35:48 GMT
I0923 06:28:46.973146 3576 round_trippers.go:452] Content-Type: application/json
I0923 06:28:46.973153 3576 round_trippers.go:452] Content-Length: 1082
I0923 06:28:46.973184 3576 round_trippers.go:452] Cache-Control: no-cache, no-store, must-revalidate
I0923 06:28:46.973197 3576 round_trippers.go:452] X-Frame-Options: DENY
I0923 06:28:46.973206 3576 round_trippers.go:452] Strict-Transport-Security: max-age=63072000; preload
I0923 06:28:46.973213 3576 round_trippers.go:452] Server: nginx/1.12.1
I0923 06:28:46.973453 3576 round_trippers.go:420] HEAD https://quay.io/v2/openshift-qe-optional-operators/ocp4-index/manifests/23
I0923 06:28:46.973464 3576 round_trippers.go:427] Request Headers:
I0923 06:28:46.973471 3576 round_trippers.go:431] Accept: application/vnd.docker.distribution.manifest.v1+prettyjws
I0923 06:28:46.973476 3576 round_trippers.go:431] Accept: application/json
I0923 06:28:46.973481 3576 round_trippers.go:431] Accept: application/vnd.docker.distribution.manifest.v2+json
I0923 06:28:46.973486 3576 round_trippers.go:431] Accept: application/vnd.docker.distribution.manifest.list.v2+json
I0923 06:28:46.973491 3576 round_trippers.go:431] Accept: application/vnd.oci.image.index.v1+json
I0923 06:28:46.973498 3576 round_trippers.go:431] Authorization: Bearer <masked>
I0923 06:28:46.995440 3576 round_trippers.go:446] Response Status: 401 Unauthorized in 21 milliseconds
I0923 06:28:46.995466 3576 round_trippers.go:449] Response Headers:
I0923 06:28:46.995472 3576 round_trippers.go:452] Content-Length: 112
I0923 06:28:46.995477 3576 round_trippers.go:452] Docker-Distribution-Api-Version: registry/2.0
I0923 06:28:46.995480 3576 round_trippers.go:452] Www-Authenticate: Bearer realm="https://quay.io/v2/auth",service="quay.io",scope="repository:openshift-qe-optional-operators/ocp4-index:pull"
I0923 06:28:46.995485 3576 round_trippers.go:452] Server: nginx/1.12.1
I0923 06:28:46.995489 3576 round_trippers.go:452] Date: Wed, 23 Sep 2020 10:35:49 GMT
I0923 06:28:46.995494 3576 round_trippers.go:452] Content-Type: application/json
I0923 06:28:46.995548 3576 round_trippers.go:420] GET https://quay.io/v2/openshift-qe-optional-operators/ocp4-index/manifests/23
I0923 06:28:46.995567 3576 round_trippers.go:427] Request Headers:
I0923 06:28:46.995573 3576 round_trippers.go:431] Accept: application/json
I0923 06:28:46.995578 3576 round_trippers.go:431] Accept: application/vnd.docker.distribution.manifest.v2+json
I0923 06:28:46.995583 3576 round_trippers.go:431] Accept: application/vnd.docker.distribution.manifest.list.v2+json
I0923 06:28:46.995588 3576 round_trippers.go:431] Accept: application/vnd.oci.image.index.v1+json
I0923 06:28:46.995593 3576 round_trippers.go:431] Accept: application/vnd.docker.distribution.manifest.v1+prettyjws
I0923 06:28:46.995599 3576 round_trippers.go:431] Authorization: Bearer <masked>
I0923 06:28:47.015261 3576 round_trippers.go:446] Response Status: 401 Unauthorized in 19 milliseconds
I0923 06:28:47.015289 3576 round_trippers.go:449] Response Headers:
I0923 06:28:47.015300 3576 round_trippers.go:452] Content-Length: 112
I0923 06:28:47.015305 3576 round_trippers.go:452] Docker-Distribution-Api-Version: registry/2.0
I0923 06:28:47.015309 3576 round_trippers.go:452] Www-Authenticate: Bearer realm="https://quay.io/v2/auth",service="quay.io",scope="repository:openshift-qe-optional-operators/ocp4-index:pull"
I0923 06:28:47.015313 3576 round_trippers.go:452] Server: nginx/1.12.1
I0923 06:28:47.015318 3576 round_trippers.go:452] Date: Wed, 23 Sep 2020 10:35:49 GMT
I0923 06:28:47.015322 3576 round_trippers.go:452] Content-Type: application/json
I0923 06:28:47.015531 3576 workqueue.go:143] about to send work queue error: unable to read image quay.io/openshift-qe-optional-operators/ocp4-index:23: unauthorized: access to the requested resource is not authorized
I0923 06:28:47.015632 3576 workqueue.go:54] worker 0 stopping
F0923 06:28:47.015654 3576 helpers.go:115] error: unable to read image quay.io/openshift-qe-optional-operators/ocp4-index:23: unauthorized: access to the requested resource is not authorized
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000010001, 0xc00108e000, 0xc2, 0x113)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:996 +0xb8
k8s.io/klog/v2.(*loggingT).output(0x5224080, 0xc000000003, 0x0, 0x0, 0xc0018be000, 0x4da3cc0, 0xa, 0x73, 0x41c400)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:945 +0x19d
k8s.io/klog/v2.(*loggingT).printDepth(0x5224080, 0x3, 0x0, 0x0, 0x2, 0xc0019559f0, 0x1, 0x1)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:718 +0x15e
k8s.io/klog/v2.FatalDepth(...)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:1449
k8s.io/kubectl/pkg/cmd/util.fatal(0xc0016dc500, 0x93, 0x1)
	/go/src/github.com/openshift/oc/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1e8
k8s.io/kubectl/pkg/cmd/util.checkErr(0x3516c80, 0xc0018c9dc0, 0x30be460)
	/go/src/github.com/openshift/oc/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x958
k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/go/src/github.com/openshift/oc/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
github.com/openshift/oc/pkg/cli/admin/catalog.NewMirrorCatalog.func1(0xc00157cb00, 0xc0002a6d70, 0x2, 0x5)
	/go/src/github.com/openshift/oc/pkg/cli/admin/catalog/mirror.go:113 +0x7f
github.com/spf13/cobra.(*Command).execute(0xc00157cb00, 0xc0002a6d20, 0x5, 0x5, 0xc00157cb00, 0xc0002a6d20)
	/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:846 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0xc000050b00, 0x2, 0xc000050b00, 0x2)
	/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
	/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:887
main.main()
	/go/src/github.com/openshift/oc/cmd/oc/oc.go:110 +0x81a
goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x5224080)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:416 +0xd6
goroutine 16 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x5223fa0)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/klog.go:411 +0xd6
goroutine 28 [select]:
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x30be388, 0x351aa00, 0xc000c69ce0, 0x1, 0xc0000c8360)
	/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x13f
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x30be388, 0x12a05f200, 0x0, 0xc000f2e001, 0xc0000c8360)
	/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/apimachinery/pkg/util/wait.Forever(0x30be388, 0x12a05f200)
	/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
created by k8s.io/component-base/logs.InitLogs
	/go/src/github.com/openshift/oc/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
goroutine 29 [select]:
io.(*pipe).Read(0xc00148a8a0, 0xc000f1e000, 0x1000, 0x1000, 0x2988800, 0x1, 0xc000f1e000)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/io/pipe.go:57 +0xe7
io.(*PipeReader).Read(0xc000114400, 0xc000f1e000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/io/pipe.go:134 +0x4c
bufio.(*Scanner).Scan(0xc00149ce00, 0x0)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/bufio/scan.go:213 +0xa4
github.com/openshift/oc/pkg/cli/admin/mustgather.newPrefixWriter.func1(0xc00149ce00, 0x351c420, 0xc000010018, 0x2ed8a0e, 0x17)
	/go/src/github.com/openshift/oc/pkg/cli/admin/mustgather/mustgather.go:413 +0x14d
created by github.com/openshift/oc/pkg/cli/admin/mustgather.newPrefixWriter
	/go/src/github.com/openshift/oc/pkg/cli/admin/mustgather/mustgather.go:412 +0x1c1
goroutine 30 [semacquire]:
sync.runtime_SemacquireMutex(0x522409c, 0x0, 0x1)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0x5224098)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/sync/mutex.go:81
k8s.io/klog/v2.(*loggingT).output(0x5224080, 0xc000000000, 0x0, 0x0, 0xc001347f80, 0x4d84feb, 0xc, 0x3c, 0x0)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:882 +0x866
k8s.io/klog/v2.(*loggingT).printf(0x5224080, 0x0, 0x0, 0x0, 0x2ecbb9c, 0x12, 0x0, 0x0, 0x0)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:733 +0x17b
k8s.io/klog/v2.Verbose.Infof(...)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:1315
github.com/openshift/oc/pkg/cli/image/workqueue.(*workQueue).run(0xc000374340, 0x1, 0xc00156f740)
	/go/src/github.com/openshift/oc/pkg/cli/image/workqueue/workqueue.go:60 +0xf6
created by github.com/openshift/oc/pkg/cli/image/workqueue.New
	/go/src/github.com/openshift/oc/pkg/cli/image/workqueue/workqueue.go:34 +0xcb
goroutine 54 [IO wait]:
internal/poll.runtime_pollWait(0x7fa2656f4e90, 0x72, 0xffffffffffffffff)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc00057c598, 0x72, 0x1000, 0x1006, 0xffffffffffffffff)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00057c580, 0xc001696000, 0x1006, 0x1006, 0x0, 0x0, 0x0)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/internal/poll/fd_unix.go:169 +0x19b
net.(*netFD).Read(0xc00057c580, 0xc001696000, 0x1006, 0x1006, 0x203000, 0x140, 0x140)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc0001148e0, 0xc001696000, 0x1006, 0x1006, 0x0, 0x0, 0x0)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/net/net.go:184 +0x8e
crypto/tls.(*atLeastReader).Read(0xc0017b5980, 0xc001696000, 0x1006, 0x1006, 0x420cd1, 0x8, 0xc0016179c8)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/crypto/tls/conn.go:760 +0x60
bytes.(*Buffer).ReadFrom(0xc0015fb3d8, 0x35169e0, 0xc0017b5980, 0x41c735, 0x2acd4e0, 0x2dc9be0)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc0015fb180, 0x351bf40, 0xc0001148e0, 0x5, 0xc0001148e0, 0x8)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/crypto/tls/conn.go:782 +0xec
crypto/tls.(*Conn).readRecordOrCCS(0xc0015fb180, 0x0, 0x0, 0xc001617d38)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/crypto/tls/conn.go:589 +0x115
crypto/tls.(*Conn).readRecord(...)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/crypto/tls/conn.go:557
crypto/tls.(*Conn).Read(0xc0015fb180, 0xc00110e000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/crypto/tls/conn.go:1233 +0x15b
bufio.(*Reader).Read(0xc000278600, 0xc000145998, 0x9, 0x9, 0xc001617d38, 0x30c1d00, 0x81c005)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/bufio/bufio.go:226 +0x24f
io.ReadAtLeast(0x3516780, 0xc000278600, 0xc000145998, 0x9, 0x9, 0x9, 0xc0000a6060, 0x0, 0x3516c80)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/io/io.go:310 +0x87
io.ReadFull(...)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/io/io.go:329
net/http.http2readFrameHeader(0xc000145998, 0x9, 0x9, 0x3516780, 0xc000278600, 0x0, 0x0, 0xc001769620, 0x0)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/net/http/h2_bundle.go:1479 +0x87
net/http.(*http2Framer).ReadFrame(0xc000145960, 0xc001769620, 0x0, 0x0, 0x0)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/net/http/h2_bundle.go:1737 +0xa1
net/http.(*http2clientConnReadLoop).run(0xc001617fa8, 0x0, 0x0)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/net/http/h2_bundle.go:8246 +0x8d
net/http.(*http2ClientConn).readLoop(0xc000544600)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/net/http/h2_bundle.go:8174 +0x6f
created by net/http.(*http2Transport).newClientConn
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/net/http/h2_bundle.go:7174 +0x64a
I0923 06:28:47.015701 3576 workqueue.go:60] work queue exiting
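The sequence in the logs (the "Found secret to match ... (quay.io/auth)" line followed by 401s on the repo-scoped manifest requests) is consistent with the credentials being resolved per registry host rather than per repository namespace. Below is a simplified model of that failure mode, not oc's actual implementation; the token names are made up for illustration:

```python
# Simplified sketch (an assumption about the failure mode, not oc's real code):
# resolve credentials by registry host only, ignoring namespace-scoped entries.
# The keys mirror the "auths" entries from the config.json in this report.
auths = {
    "quay.io": "token-for-personal-account",
    "quay.io/openshift-qe-optional-operators": "token-for-qe-account",
    "registry.redhat.io": "token-for-rh-account",
}

def lookup(image_ref):
    """Return the auth for an image by matching only the registry host."""
    host = image_ref.split("/", 1)[0]
    return auths.get(host)

# Host-only matching picks the bare "quay.io" entry, so the pull of the QE
# index would be attempted with the wrong account's token:
print(lookup("quay.io/openshift-qe-optional-operators/ocp4-index:23"))
# -> token-for-personal-account
```

Under this model, removing the bare "quay.io" entry makes the lookup fall through to nothing (or to the remaining entry, depending on the real matching rules), which would explain why the workaround below helps.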
Jian,

I'm unable to access "quay.io/openshift-qe-optional-operators/ocp4-index:23" myself, so if you could grant access, that would be helpful. I can attempt to replicate this with a private index of my own. In the meantime, could you try removing all of the other quay.io entries from your config and running the command again?

The command in question uses oc's packages to pull images (see https://github.com/openshift/oc/blob/d39e3084bc933deae8c07e0f7affc9ce42b5c222/pkg/cli/admin/catalog/mirror.go#L170), and I believe it passes the credentials along correctly (see https://github.com/openshift/oc/blob/d39e3084bc933deae8c07e0f7affc9ce42b5c222/pkg/cli/admin/catalog/mirror.go#L175), though more digging could prove otherwise.
Hi Nick,

> I'm unable to access "quay.io/openshift-qe-optional-operators/ocp4-index:23" myself, so if you could grant access, that would be helpful.

Sorry, I'm not the maintainer of this repo; I only have read-only permission. But I can ask someone to add you, so could you share your quay.io account name? Thanks! I couldn't find an `nhale` account.

> I can attempt to replicate this with a private index of my own -- in the meantime, could you attempt removing all other entries for quay.io and trying again.

Yes, it works well after removing the auth entry for `quay.io` (see the config.json below). But oc should support a Docker config.json that contains multiple accounts for one registry: as users, we have different accounts for different repos within the quay.io registry.

[root@preserve-olm-env data]# cat ~/.docker/config.json
{
    "auths": {
        "quay.io/openshift-qe-optional-operators": {
            "auth": "xxx"
        },
        "quay.io/openshift-release-dev": {
            "auth": "xxx",
            "email": "xxx"
        },
        "registry.redhat.io": {
            "auth": "xxx"
        },
        "registry.svc.ci.openshift.org": {
            "auth": "xxx"
        }
    },
    "HttpHeaders": {
        "User-Agent": "Docker-Client/19.03.9 (linux)"
    }
}

[root@preserve-olm-env data]# docker login quay.io/openshift-qe-optional-operators
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@preserve-olm-env data]# oc adm catalog mirror quay.io/openshift-qe-optional-operators/ocp4-index:23 quay.io/openshift-qe-optional-operators --manifests-only --to-manifests=mirror
src image has index label for database path: /database/index.db
using database path mapping: /database/index.db:/tmp/497557863
wrote database to /tmp/497557863
using database at: /tmp/497557863/index.db
wrote mirroring manifests to mirror
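For anyone else hitting this, the manual edit above (dropping the conflicting registry-level entry while keeping the namespace-scoped ones) can be scripted. A small sketch; the function name is my own, and it operates on the config as a string so it can be pointed at ~/.docker/config.json or a copy:

```python
import json

def drop_auth_entry(config_text, key):
    """Remove one entry from the "auths" map of a Docker config.json string."""
    config = json.loads(config_text)
    config.get("auths", {}).pop(key, None)  # no error if the key is absent
    return json.dumps(config, indent=4)

# Example with a minimal config shaped like the one in this report:
original = json.dumps({
    "auths": {
        "quay.io": {"auth": "xxx"},
        "quay.io/openshift-qe-optional-operators": {"auth": "xxx"},
    }
})
cleaned = drop_auth_entry(original, "quay.io")
print(cleaned)
```

Back up the original config.json before rewriting it, since docker login will happily append to whatever is there.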
Hi Jian,

Sorry for the late response; my quay.io username is njhale.

I was unable to track down any meaningful cause for this issue in our use of oc. I don't think we'll get to a root cause within the week -- moving to the next sprint.
Looks like this is caused by the credential-lookup code referenced above: when the config file contains multiple auth entries for the same registry, oc picks one and does not attempt the others before returning a failure. Reassigning to the oc component for further response, but my assumption is that this is a feature request and should be routed through the RFE process.
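To make the requested behavior concrete, here is a sketch of what "try all of them" could mean (hypothetical helper names, not oc's API): collect every auths entry whose key matches the image path, order them most specific first, and fall back through them instead of failing on the first 401.

```python
def candidate_auths(auths, image_ref):
    """Return auth values whose key matches the image, most specific first."""
    repo = image_ref.split(":", 1)[0]  # drop the tag (sketch; ignores ports)
    matches = [k for k in auths if repo == k or repo.startswith(k + "/")]
    return [auths[k] for k in sorted(matches, key=len, reverse=True)]

def pull_with_fallback(auths, image_ref, try_pull):
    # try_pull is a hypothetical callable: True on success, False on a 401.
    for auth in candidate_auths(auths, image_ref):
        if try_pull(image_ref, auth):
            return auth
    raise PermissionError("unauthorized: no matching credential worked")

auths = {
    "quay.io": "personal",
    "quay.io/openshift-qe-optional-operators": "qe",
}
# The namespace-scoped entry is tried first, so a 401 on the registry-level
# credential would no longer be fatal:
print(candidate_auths(auths, "quay.io/openshift-qe-optional-operators/ocp4-index:23"))
# -> ['qe', 'personal']
```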
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Keywords if you think this bug should never be marked as stale. Please consult with bug assignee before you do that.
Unfortunately, it is not possible to configure two auths for a single image registry in a single authfile. This is consistent with the behavior of the docker libraries, moby, podman, and oc. It is particularly annoying with oc commands that both push and pull images, like 'oc adm release new' and 'oc adm catalog mirror'. The workaround is to mirror to disk first, then mirror from disk to the registry with a second command; this is outlined in the command help menus. I'm closing this, as it would have to be an upstream feature request.