Bug 1894167 - OPM fails to create a pruned index image
Summary: OPM fails to create a pruned index image
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.6
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Assignee: Evan Cordell
QA Contact: Jian Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-11-03 16:57 UTC by James Force
Modified: 2024-03-25 16:54 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-09 18:15:40 UTC
Target Upstream Version:
Embargoed:


Attachments:

Description James Force 2020-11-03 16:57:38 UTC
Description of problem:

Following the documentation linked below, I'm attempting to create a custom pruned index image containing 3 operators, but the process errors out partway through. I've also tried with a single operator and still see the same issue.

https://docs.openshift.com/container-platform/4.6/operators/admin/olm-restricted-networks.html


Version-Release number of selected component (if applicable): 
4.6


How reproducible:
always

opm index prune \
    -f registry.redhat.io/redhat/redhat-operator-index:v4.6 \
    -p cluster-logging,elasticsearch-operator,metering-ocp \
    -t localhost:5000/olm/redhat-operator-index:v4.6 
INFO[0000] pruning the index                             packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0000] Pulling previous image registry.redhat.io/redhat/redhat-operator-index:v4.6 to get metadata  packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0000] running /usr/bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.6  packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0003] running /usr/bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.6  packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0006] Getting label data from previous image        packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0006] running podman inspect                        packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0006] running podman create                         packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0006] running podman cp                             packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0017] running podman rm                             packages="[cluster-logging elasticsearch-operator metering-ocp]"
Error: open index_tmp_500513138/database/index.db: no such file or directory
Usage:
  opm index prune [flags]

Flags:
  -i, --binary-image opm        container image for on-image opm command
  -c, --container-tool string   tool to interact with container images (save, build, etc.). One of: [docker, podman] (default "podman")
  -f, --from-index string       index to prune
      --generate                if enabled, just creates the dockerfile and saves it to local disk
  -h, --help                    help for prune
  -d, --out-dockerfile string   if generating the dockerfile, this flag is used to (optionally) specify a dockerfile name
  -p, --packages strings        comma separated list of packages to keep
      --permissive              allow registry load errors
  -t, --tag string              custom tag for container image being built

Global Flags:
      --skip-tls   skip TLS certificate verification for container image registries while pulling bundles or index


Steps to Reproduce:
Follow the steps documented in the section 'Pruning an index image': https://docs.openshift.com/container-platform/4.6/operators/admin/olm-restricted-networks.html


Actual results:
Error: open index_tmp_500513138/database/index.db: no such file or directory

No image is built, only the error listed above. An 'index_tmp_500513138' directory is created, but it has no 'database' subdirectory, only 'merged'.


Expected results:
A new pruned index image available locally.


Additional info:

Environment:

opm version
Version: version.Version{OpmVersion:"v1.14.3-5-gf6e5d92", GitCommit:"f6e5d9281f335472dda7110fca2c710794c97fb5", BuildDate:"2020-10-06T13:13:12Z", GoOs:"linux", GoArch:"amd64"}

oc version
Client Version: 4.6.1

podman --version
podman version 1.6.4


Let me know if you require any further details and I'm happy to provide them.

Comment 1 Robert Bohne 2020-11-05 10:56:57 UTC
A workaround I use:

podman run --name co -ti registry.redhat.io/redhat/certified-operator-index:v4.6
podman cp co:/database/index.db .
podman stop co; podman rm co
cp  index.db index.db-back
opm registry prune -p "openshiftartifactoryha-operator" -d "index.db"
cat > Containerfile  <<EOF
FROM registry.redhat.io/redhat/certified-operator-index:v4.6
COPY index.db /database/index.db
EOF
podman build -t  ${LOCAL_REGISTRY}/redhat/certified-operator-index:v4.6  .
podman run -p50051:50051 -it --rm ${LOCAL_REGISTRY}/redhat/certified-operator-index:v4.6
grpcurl -plaintext localhost:50051 api.Registry/ListPackages
{
  "name": "openshiftartifactoryha-operator"
}
podman push ${LOCAL_REGISTRY}/redhat/certified-operator-index:v4.6
oc create -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: jfrog
  namespace: openshift-marketplace
spec:
  displayName: jfrog
  image: ${LOCAL_REGISTRY}/redhat/certified-operator-index:v4.6
  publisher: jfrog
  sourceType: grpc
EOF
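
To confirm the workaround took effect in the cluster, one quick check (a sketch; the names come from the example above and assume an oc client logged in to the cluster) is:

# verify the CatalogSource exists and its pod is running in openshift-marketplace
oc get catalogsource jfrog -n openshift-marketplace
oc get pods -n openshift-marketplace | grep jfrog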

Comment 2 Robert Bohne 2020-11-05 13:15:12 UTC
Updating Podman to version 2 solved the problem.

Version:      2.1.1
API Version:  2.0.0
Go Version:   go1.13.15
Built:        Sat Oct 31 00:23:30 2020
OS/Arch:      linux/amd64


I heard from an RH colleague that Podman 1.9.3-2 works too.
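
If you are on RHEL 8 or another dnf-based system, the upgrade is typically just (a sketch; exact package streams vary by release and subscription):

# update podman itself, or pull in the whole container-tools module instead
sudo dnf update podman
podman version    # confirm 2.0.0 or later (1.9.3-2 also reported to work)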

Comment 3 James Force 2020-11-05 21:52:36 UTC
Thanks Robert, your answer was much appreciated. I have it working now.

Comment 4 Ankita Thomas 2020-11-09 11:18:49 UTC
This is due to a bug in podman, https://github.com/containers/podman/issues/6596, where a nested directory was created during cp when the destination directory already existed.

podman v2.0.0+ includes the fix for this issue.
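
For reference, the copy step can be exercised outside of opm with the same kind of podman commands shown in the logs above (a hypothetical standalone check; the container name 'idx' and directory 'index_tmp' are made up, and the exact misbehavior depends on the podman version):

podman create --name idx registry.redhat.io/redhat/redhat-operator-index:v4.6
mkdir index_tmp
# with podman < 2.0 the copy may end up nested or misplaced, so index_tmp/database/index.db never appears
podman cp idx:/database index_tmp
ls index_tmp    # expected: 'database'; the reporter instead saw only 'merged'
podman rm idx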

Comment 5 Kevin Rizza 2020-11-09 18:15:40 UTC
Closing this out. This issue appears to be directly related to the podman issue referenced above, and there is nothing on the operator-registry side that can be done to resolve this problem.

Comment 6 Jian Zhang 2020-12-03 07:59:46 UTC
I can reproduce this bug with podman 1.4.4.
[jzhang@dhcp-140-36 operator-registry]$ opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.6 -p cluster-logging,elasticsearch-operator,metering-ocp -t quay.io/olmqe/redhat-operator-index:v4.6
INFO[0000] pruning the index                             packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0000] Pulling previous image registry.redhat.io/redhat/redhat-operator-index:v4.6 to get metadata  packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0000] running /usr/bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.6  packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0051] running /usr/bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.6  packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0057] Getting label data from previous image        packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0057] running podman inspect                        packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0057] running podman create                         packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0057] running podman cp                             packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0062] running podman rm                             packages="[cluster-logging elasticsearch-operator metering-ocp]"
Error: open index_tmp_718246003/database/index.db: no such file or directory
Usage:
  opm index prune [flags]

Flags:
  -i, --binary-image opm        container image for on-image opm command
  -c, --container-tool string   tool to interact with container images (save, build, etc.). One of: [docker, podman] (default "podman")
  -f, --from-index string       index to prune
      --generate                if enabled, just creates the dockerfile and saves it to local disk
  -h, --help                    help for prune
  -d, --out-dockerfile string   if generating the dockerfile, this flag is used to (optionally) specify a dockerfile name
  -p, --packages strings        comma separated list of packages to keep
      --permissive              allow registry load errors
  -t, --tag string              custom tag for container image being built

Global Flags:
      --skip-tls   skip TLS certificate verification for container image registries while pulling bundles or index

[jzhang@dhcp-140-36 operator-registry]$ podman version
Version:            1.4.4
RemoteAPI Version:  1
Go Version:         go1.10.3
OS/Arch:            linux/amd64

After updating to 1.9.3, it works well.
[jzhang@dhcp-140-36 podman]$ opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.6 -p cluster-logging,elasticsearch-operator,metering-ocp -t quay.io/olmqe/redhat-operator-index:v4.6
INFO[0000] pruning the index                             packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0000] Pulling previous image registry.redhat.io/redhat/redhat-operator-index:v4.6 to get metadata  packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0000] running /usr/bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.6  packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0003] running /usr/bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.6  packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0006] Getting label data from previous image        packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0006] running podman inspect                        packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0006] running podman create                         packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0007] running podman cp                             packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0009] running podman rm                             packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0010] deleting packages                             pkg=3scale-operator
INFO[0010] input has been sanitized                      pkg=3scale-operator
INFO[0010] packages: [3scale-operator]                   pkg=3scale-operator
INFO[0010] deleting packages                             pkg=advanced-cluster-management
INFO[0010] input has been sanitized                      pkg=advanced-cluster-management
INFO[0010] packages: [advanced-cluster-management]       pkg=advanced-cluster-management
INFO[0010] deleting packages                             pkg=amq-broker
INFO[0010] input has been sanitized                      pkg=amq-broker
INFO[0010] packages: [amq-broker]                        pkg=amq-broker
INFO[0010] deleting packages                             pkg=amq-broker-lts
INFO[0010] input has been sanitized                      pkg=amq-broker-lts
INFO[0010] packages: [amq-broker-lts]                    pkg=amq-broker-lts
INFO[0010] deleting packages                             pkg=amq-online
INFO[0010] input has been sanitized                      pkg=amq-online
INFO[0010] packages: [amq-online]                        pkg=amq-online
INFO[0010] deleting packages                             pkg=amq-streams
INFO[0010] input has been sanitized                      pkg=amq-streams
INFO[0010] packages: [amq-streams]                       pkg=amq-streams
INFO[0010] deleting packages                             pkg=amq7-interconnect-operator
INFO[0010] input has been sanitized                      pkg=amq7-interconnect-operator
INFO[0010] packages: [amq7-interconnect-operator]        pkg=amq7-interconnect-operator
INFO[0010] deleting packages                             pkg=apicast-operator
INFO[0010] input has been sanitized                      pkg=apicast-operator
INFO[0010] packages: [apicast-operator]                  pkg=apicast-operator
INFO[0010] deleting packages                             pkg=awx-resource-operator
INFO[0010] input has been sanitized                      pkg=awx-resource-operator
INFO[0010] packages: [awx-resource-operator]             pkg=awx-resource-operator
INFO[0010] deleting packages                             pkg=businessautomation-operator
INFO[0010] input has been sanitized                      pkg=businessautomation-operator
INFO[0010] packages: [businessautomation-operator]       pkg=businessautomation-operator
INFO[0010] deleting packages                             pkg=cluster-kube-descheduler-operator
INFO[0010] input has been sanitized                      pkg=cluster-kube-descheduler-operator
INFO[0010] packages: [cluster-kube-descheduler-operator]  pkg=cluster-kube-descheduler-operator
INFO[0010] deleting packages                             pkg=clusterresourceoverride
INFO[0010] input has been sanitized                      pkg=clusterresourceoverride
INFO[0010] packages: [clusterresourceoverride]           pkg=clusterresourceoverride
INFO[0010] deleting packages                             pkg=codeready-workspaces
INFO[0010] input has been sanitized                      pkg=codeready-workspaces
INFO[0010] packages: [codeready-workspaces]              pkg=codeready-workspaces
INFO[0010] deleting packages                             pkg=compliance-operator
INFO[0010] input has been sanitized                      pkg=compliance-operator
INFO[0010] packages: [compliance-operator]               pkg=compliance-operator
INFO[0010] deleting packages                             pkg=container-security-operator
INFO[0010] input has been sanitized                      pkg=container-security-operator
INFO[0010] packages: [container-security-operator]       pkg=container-security-operator
INFO[0010] deleting packages                             pkg=datagrid
INFO[0010] input has been sanitized                      pkg=datagrid
INFO[0010] packages: [datagrid]                          pkg=datagrid
INFO[0010] deleting packages                             pkg=eap
INFO[0010] input has been sanitized                      pkg=eap
INFO[0010] packages: [eap]                               pkg=eap
INFO[0010] deleting packages                             pkg=file-integrity-operator
INFO[0010] input has been sanitized                      pkg=file-integrity-operator
INFO[0010] packages: [file-integrity-operator]           pkg=file-integrity-operator
INFO[0010] deleting packages                             pkg=fuse-apicurito
INFO[0010] input has been sanitized                      pkg=fuse-apicurito
INFO[0010] packages: [fuse-apicurito]                    pkg=fuse-apicurito
INFO[0010] deleting packages                             pkg=fuse-console
INFO[0010] input has been sanitized                      pkg=fuse-console
INFO[0010] packages: [fuse-console]                      pkg=fuse-console
INFO[0010] deleting packages                             pkg=fuse-online
INFO[0010] input has been sanitized                      pkg=fuse-online
INFO[0010] packages: [fuse-online]                       pkg=fuse-online
INFO[0010] deleting packages                             pkg=jaeger-product
INFO[0010] input has been sanitized                      pkg=jaeger-product
INFO[0010] packages: [jaeger-product]                    pkg=jaeger-product
INFO[0010] deleting packages                             pkg=kiali-ossm
INFO[0010] input has been sanitized                      pkg=kiali-ossm
INFO[0010] packages: [kiali-ossm]                        pkg=kiali-ossm
INFO[0010] deleting packages                             pkg=kubevirt-hyperconverged
INFO[0010] input has been sanitized                      pkg=kubevirt-hyperconverged
INFO[0010] packages: [kubevirt-hyperconverged]           pkg=kubevirt-hyperconverged
INFO[0010] deleting packages                             pkg=local-storage-operator
INFO[0010] input has been sanitized                      pkg=local-storage-operator
INFO[0010] packages: [local-storage-operator]            pkg=local-storage-operator
INFO[0011] deleting packages                             pkg=mtc-operator
INFO[0011] input has been sanitized                      pkg=mtc-operator
INFO[0011] packages: [mtc-operator]                      pkg=mtc-operator
INFO[0011] deleting packages                             pkg=nfd
INFO[0011] input has been sanitized                      pkg=nfd
INFO[0011] packages: [nfd]                               pkg=nfd
INFO[0011] deleting packages                             pkg=ocs-operator
INFO[0011] input has been sanitized                      pkg=ocs-operator
INFO[0011] packages: [ocs-operator]                      pkg=ocs-operator
INFO[0011] deleting packages                             pkg=openshift-pipelines-operator-rh
INFO[0011] input has been sanitized                      pkg=openshift-pipelines-operator-rh
INFO[0011] packages: [openshift-pipelines-operator-rh]   pkg=openshift-pipelines-operator-rh
INFO[0011] deleting packages                             pkg=performance-addon-operator
INFO[0011] input has been sanitized                      pkg=performance-addon-operator
INFO[0011] packages: [performance-addon-operator]        pkg=performance-addon-operator
INFO[0011] deleting packages                             pkg=ptp-operator
INFO[0011] input has been sanitized                      pkg=ptp-operator
INFO[0011] packages: [ptp-operator]                      pkg=ptp-operator
INFO[0011] deleting packages                             pkg=quay-bridge-operator
INFO[0011] input has been sanitized                      pkg=quay-bridge-operator
INFO[0011] packages: [quay-bridge-operator]              pkg=quay-bridge-operator
INFO[0011] deleting packages                             pkg=quay-operator
INFO[0011] input has been sanitized                      pkg=quay-operator
INFO[0011] packages: [quay-operator]                     pkg=quay-operator
INFO[0011] deleting packages                             pkg=red-hat-camel-k
INFO[0011] input has been sanitized                      pkg=red-hat-camel-k
INFO[0011] packages: [red-hat-camel-k]                   pkg=red-hat-camel-k
INFO[0011] deleting packages                             pkg=rh-service-binding-operator
INFO[0011] input has been sanitized                      pkg=rh-service-binding-operator
INFO[0011] packages: [rh-service-binding-operator]       pkg=rh-service-binding-operator
INFO[0011] deleting packages                             pkg=rhsso-operator
INFO[0011] input has been sanitized                      pkg=rhsso-operator
INFO[0011] packages: [rhsso-operator]                    pkg=rhsso-operator
INFO[0011] deleting packages                             pkg=serverless-operator
INFO[0011] input has been sanitized                      pkg=serverless-operator
INFO[0011] packages: [serverless-operator]               pkg=serverless-operator
INFO[0011] deleting packages                             pkg=service-registry-operator
INFO[0011] input has been sanitized                      pkg=service-registry-operator
INFO[0011] packages: [service-registry-operator]         pkg=service-registry-operator
INFO[0011] deleting packages                             pkg=servicemeshoperator
INFO[0011] input has been sanitized                      pkg=servicemeshoperator
INFO[0011] packages: [servicemeshoperator]               pkg=servicemeshoperator
INFO[0011] deleting packages                             pkg=sriov-network-operator
INFO[0011] input has been sanitized                      pkg=sriov-network-operator
INFO[0011] packages: [sriov-network-operator]            pkg=sriov-network-operator
INFO[0011] deleting packages                             pkg=vertical-pod-autoscaler
INFO[0011] input has been sanitized                      pkg=vertical-pod-autoscaler
INFO[0011] packages: [vertical-pod-autoscaler]           pkg=vertical-pod-autoscaler
INFO[0011] deleting packages                             pkg=web-terminal
INFO[0011] input has been sanitized                      pkg=web-terminal
INFO[0011] packages: [web-terminal]                      pkg=web-terminal
INFO[0011] Generating dockerfile                         packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0011] writing dockerfile: index.Dockerfile258265055  packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0011] running podman build                          packages="[cluster-logging elasticsearch-operator metering-ocp]"
INFO[0011] [podman build --format docker -f index.Dockerfile258265055 -t quay.io/olmqe/redhat-operator-index:v4.6 .]  packages="[cluster-logging elasticsearch-operator metering-ocp]"

[jzhang@dhcp-140-36 podman]$ podman version
Version:            1.9.3
RemoteAPI Version:  1
Go Version:         go1.15.0
OS/Arch:            linux/amd64
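
As a final check, the pruned image can be served locally and queried the same way as in comment 1 (a sketch; assumes grpcurl is installed and uses the tag from the command above):

podman run -d --name pruned-index -p 50051:50051 quay.io/olmqe/redhat-operator-index:v4.6
grpcurl -plaintext localhost:50051 api.Registry/ListPackages    # should list only cluster-logging, elasticsearch-operator and metering-ocp
podman rm -f pruned-index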

Comment 7 Oscar Casal Sanchez 2021-01-05 16:32:38 UTC
Hello,

On RHEL 7, no podman version newer than 1.6.4 is available; does that mean users need to upgrade to RHEL 8? The same error reported here was seen on RHEL 7 + OCP 4.6:

~~~
# ./opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.6 -p 3scale-operator,advanced-cluster-management,amq-broker,amq-broker-lts,amq-online,amq-streams,amq7-interconnect-operator,cluster-kube-descheduler-operator,cluster-logging,compliance-operator,container-security-operator,elasticsearch-operator,jaeger-product,kiali-ossm,local-storage-operator,metering-ocp,ocs-operator,servicemeshoperator,sriov-network-operator,vertical-pod-autoscaler,web-terminal -t dockersit2registry.fhlmc.com/docker-redhat-local-repo/olm/redhat-operator-index:v4.6

INFO[0000] pruning the index                             packages="[3scale-operator advanced-cluster-management amq-broker amq-broker-lts amq-online amq-streams amq7-interconnect-operator cluster-kube-descheduler-operator cluster-logging compliance-operator container-security-operator elasticsearch-operator jaeger-product kiali-ossm local-storage-operator metering-ocp ocs-operator servicemeshoperator sriov-network-operator vertical-pod-autoscaler web-terminal]"
INFO[0000] Pulling previous image registry.redhat.io/redhat/redhat-operator-index:v4.6 to get metadata  packages="[3scale-operator advanced-cluster-management amq-broker amq-broker-lts amq-online amq-streams amq7-interconnect-operator cluster-kube-descheduler-operator cluster-logging compliance-operator container-security-operator elasticsearch-operator jaeger-product kiali-ossm local-storage-operator metering-ocp ocs-operator servicemeshoperator sriov-network-operator vertical-pod-autoscaler web-terminal]"
INFO[0000] running /bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.6  packages="[3scale-operator advanced-cluster-management amq-broker amq-broker-lts amq-online amq-streams amq7-interconnect-operator cluster-kube-descheduler-operator cluster-logging compliance-operator container-security-operator elasticsearch-operator jaeger-product kiali-ossm local-storage-operator metering-ocp ocs-operator servicemeshoperator sriov-network-operator vertical-pod-autoscaler web-terminal]"
INFO[0002] running /bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.6  packages="[3scale-operator advanced-cluster-management amq-broker amq-broker-lts amq-online amq-streams amq7-interconnect-operator cluster-kube-descheduler-operator cluster-logging compliance-operator container-security-operator elasticsearch-operator jaeger-product kiali-ossm local-storage-operator metering-ocp ocs-operator servicemeshoperator sriov-network-operator vertical-pod-autoscaler web-terminal]"
INFO[0004] Getting label data from previous image        packages="[3scale-operator advanced-cluster-management amq-broker amq-broker-lts amq-online amq-streams amq7-interconnect-operator cluster-kube-descheduler-operator cluster-logging compliance-operator container-security-operator elasticsearch-operator jaeger-product kiali-ossm local-storage-operator metering-ocp ocs-operator servicemeshoperator sriov-network-operator vertical-pod-autoscaler web-terminal]"
INFO[0004] running podman inspect                        packages="[3scale-operator advanced-cluster-management amq-broker amq-broker-lts amq-online amq-streams amq7-interconnect-operator cluster-kube-descheduler-operator cluster-logging compliance-operator container-security-operator elasticsearch-operator jaeger-product kiali-ossm local-storage-operator metering-ocp ocs-operator servicemeshoperator sriov-network-operator vertical-pod-autoscaler web-terminal]"
INFO[0004] running podman create                         packages="[3scale-operator advanced-cluster-management amq-broker amq-broker-lts amq-online amq-streams amq7-interconnect-operator cluster-kube-descheduler-operator cluster-logging compliance-operator container-security-operator elasticsearch-operator jaeger-product kiali-ossm local-storage-operator metering-ocp ocs-operator servicemeshoperator sriov-network-operator vertical-pod-autoscaler web-terminal]"
INFO[0004] running podman cp                             packages="[3scale-operator advanced-cluster-management amq-broker amq-broker-lts amq-online amq-streams amq7-interconnect-operator cluster-kube-descheduler-operator cluster-logging compliance-operator container-security-operator elasticsearch-operator jaeger-product kiali-ossm local-storage-operator metering-ocp ocs-operator servicemeshoperator sriov-network-operator vertical-pod-autoscaler web-terminal]"
INFO[0339] running podman rm                             packages="[3scale-operator advanced-cluster-management amq-broker amq-broker-lts amq-online amq-streams amq7-interconnect-operator cluster-kube-descheduler-operator cluster-logging compliance-operator container-security-operator elasticsearch-operator jaeger-product kiali-ossm local-storage-operator metering-ocp ocs-operator servicemeshoperator sriov-network-operator vertical-pod-autoscaler web-terminal]"
Error: open index_tmp_413048166/database/index.db: no such file or directory
Usage:
  opm index prune [flags]
~~~

Or is it perhaps a different issue?

Regards,
Oscar

Comment 8 Red Hat Bugzilla 2023-09-15 01:31:11 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days

