Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1894382

Summary: Operator Lifecycle Manager on restricted networks(opm index prune) not working on Power(ppc64le) clusters
Product: OpenShift Container Platform
Reporter: Amit Ghatwal <aghatwal>
Component: Documentation
Assignee: Alex Dellapenta <adellape>
Status: CLOSED CURRENTRELEASE
QA Contact: Jian Zhang <jiazha>
Severity: medium
Docs Contact: Vikram Goyal <vigoyal>
Priority: medium
Version: 4.6
CC: adellape, ankithom, aos-bugs, jiazha, jokerman, mbenitez, sniemann, syurtkor, tdale, wolfgang.voesch
Target Milestone: ---
Target Release: ---
Hardware: ppc64le
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-03-02 04:53:44 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Amit Ghatwal 2020-11-04 07:52:37 UTC
Description of problem:
On a restricted-network Power cluster (OCP 4.6), I am trying to configure OLM to install and manage Operators from local sources, following the steps in https://docs.openshift.com/container-platform/4.6/operators/admin/olm-restricted-networks.html#olm-mirror-catalog_olm-restricted-networks

Version-Release number of selected component (if applicable):
# oc get clusterversion
NAME      VERSION      AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-rc.2   True        False         20d     Cluster version is 4.6.0-rc.2

# oc get nodes
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   20d   v1.19.0+d59ce34
master-1   Ready    master   21d   v1.19.0+d59ce34
master-2   Ready    master   21d   v1.19.0+d59ce34
worker-0   Ready    worker   20d   v1.19.0+d59ce34
worker-1   Ready    worker   20d   v1.19.0+d59ce34

# oc get co
NAME                                       VERSION      AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.0-rc.2   True        False         False      8m13s
cloud-credential                           4.6.0-rc.2   True        False         False      21d
cluster-autoscaler                         4.6.0-rc.2   True        False         False      20d
config-operator                            4.6.0-rc.2   True        False         False      21d
console                                    4.6.0-rc.2   True        False         False      4m44s
csi-snapshot-controller                    4.6.0-rc.2   True        False         False      17h
dns                                        4.6.0-rc.2   True        False         False      20d
etcd                                       4.6.0-rc.2   True        False         False      20d
image-registry                             4.6.0-rc.2   True        False         False      17h
ingress                                    4.6.0-rc.2   True        False         False      20d
insights                                   4.6.0-rc.2   True        False         True       21d
kube-apiserver                             4.6.0-rc.2   True        False         False      20d
kube-controller-manager                    4.6.0-rc.2   True        False         False      20d
kube-scheduler                             4.6.0-rc.2   True        False         False      20d
kube-storage-version-migrator              4.6.0-rc.2   True        False         False      17h
machine-api                                4.6.0-rc.2   True        False         False      21d
machine-approver                           4.6.0-rc.2   True        False         False      21d
machine-config                             4.6.0-rc.2   True        False         False      20d
marketplace                                4.6.0-rc.2   True        False         False      17h
monitoring                                 4.6.0-rc.2   True        False         False      17h
network                                    4.6.0-rc.2   True        False         False      21d
node-tuning                                4.6.0-rc.2   True        False         False      21d
openshift-apiserver                        4.6.0-rc.2   True        False         False      18h
openshift-controller-manager               4.6.0-rc.2   True        False         False      5d5h
openshift-samples                          4.6.0-rc.2   True        False         False      20d
operator-lifecycle-manager                 4.6.0-rc.2   True        False         False      21d
operator-lifecycle-manager-catalog         4.6.0-rc.2   True        False         False      21d
operator-lifecycle-manager-packageserver   4.6.0-rc.2   True        False         False      17h
service-ca                                 4.6.0-rc.2   True        False         False      21d
storage                                    4.6.0-rc.2   True        False         False      21d


# opm version
Version: version.Version{OpmVersion:"v1.14.3-5-gf6e5d92", GitCommit:"f6e5d9281f335472dda7110fca2c710794c97fb5", BuildDate:"2020-10-06T13:14:18Z", GoOs:"linux", GoArch:"ppc64le"}
(The opm CLI for Power was installed following the steps at https://docs.openshift.com/container-platform/4.6/cli_reference/opm-cli.html#opm-cli)

# oc get OperatorHub cluster -o json
{
    "apiVersion": "config.openshift.io/v1",
    "kind": "OperatorHub",
    "metadata": {
        "annotations": {
            "release.openshift.io/create-only": "true"
        },
        "creationTimestamp": "2020-10-14T07:22:50Z",
        "generation": 4,
        "managedFields": [
            {
                "apiVersion": "config.openshift.io/v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:metadata": {
                        "f:annotations": {
                            ".": {},
                            "f:release.openshift.io/create-only": {}
                        }
                    },
                    "f:spec": {}
                },
                "manager": "cluster-version-operator",
                "operation": "Update",
                "time": "2020-10-14T07:22:50Z"
            },
            {
                "apiVersion": "config.openshift.io/v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:spec": {
                        "f:disableAllDefaultSources": {}
                    }
                },
                "manager": "kubectl-patch",
                "operation": "Update",
                "time": "2020-11-03T09:45:14Z"
            },
            {
                "apiVersion": "config.openshift.io/v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:status": {
                        ".": {},
                        "f:sources": {}
                    }
                },
                "manager": "marketplace-operator",
                "operation": "Update",
                "time": "2020-11-04T00:08:14Z"
            }
        ],
        "name": "cluster",
        "resourceVersion": "9091033",
        "selfLink": "/apis/config.openshift.io/v1/operatorhubs/cluster",
        "uid": "3946b66c-49b0-4905-b6c2-ed634d98fe96"
    },
    "spec": {
        "disableAllDefaultSources": true
    },
    "status": {
        "sources": [
            {
                "disabled": true,
                "name": "redhat-operators",
                "status": "Success"
            },
            {
                "disabled": true,
                "name": "certified-operators",
                "status": "Success"
            },
            {
                "disabled": true,
                "name": "community-operators",
                "status": "Success"
            },
            {
                "disabled": true,
                "name": "redhat-marketplace",
                "status": "Success"
            }
        ]
    }
}


How reproducible:


Steps to Reproduce (as per https://docs.openshift.com/container-platform/4.6/operators/admin/olm-restricted-networks.html#olm-mirror-catalog_olm-restricted-networks): all commands are run from the bastion node (Power/ppc64le) of the cluster.
1. Disabled the default OperatorHub sources

2. Pruned the index image with the command below (keeping only codeready-workspaces):
# opm index prune -f brew.registry.redhat.io/rh-osbs/iib:24657 -p codeready-workspaces -t 192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6

3. # podman ps -a
CONTAINER ID  IMAGE                         COMMAND               CREATED     STATUS         PORTS                   NAMES
fb020744d505  docker.io/ppc64le/registry:2  serve /etc/docker...  4 days ago  Up 4 days ago  0.0.0.0:4000->5000/tcp  local-registry

4. Pushed this newly pruned and tagged image to the target registry (192.168.25.171:4000) hosted on the bastion node.

5. podman images | grep olm-mirror-ppc64le
192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index   v4.6     7c2b9af8d8de   About an hour ago   76.5 MB

6. podman images | grep  iib
brew.registry.redhat.io/rh-osbs/iib                            24657    97105a61cd86   9 hours ago         779 MB

7. Confirmed that this iib image is multi-arch and supports Power:
# podman run --rm -it  brew.registry.redhat.io/rh-osbs/iib:24657 version
Version: version.Version{OpmVersion:"v1.14.3-5-gf6e5d92", GitCommit:"f6e5d9281f335472dda7110fca2c710794c97fb5", BuildDate:"2020-10-31T16:21:21Z", GoOs:"linux", GoArch:"ppc64le"}

8. However, after pruning this image with `opm index prune` and pushing it to the target registry, running it gives the error below; the image appears to be amd64 now:

# podman run --rm -it  192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6 version
standard_init_linux.go:211: exec user process caused "exec format error"
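As an aside, the architecture mismatch can be caught locally before the image is ever pushed. A minimal check with standard podman flags (the image name is taken from this report; adjust for your registry):

```shell
# Print the OS/architecture of the pruned index image before pushing it.
podman image inspect --format '{{.Os}}/{{.Architecture}}' \
  192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6
# A linux/amd64 result on a ppc64le host indicates the wrong base image was used.
```

This requires a local podman installation and the image to exist locally, so it is a command sketch rather than a tested script.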

Actual results:
The pruned image produced by the "opm index prune" command should be a Power (ppc64le) image, but judging by the "exec format error" output above, it is amd64.

Expected results:
Expected the pruned and tagged index image created by the "opm index prune" command to be Power-architecture based.


Additional info:

As a result, the catalog pods (created from the index image) on the Power cluster go into CrashLoopBackOff:

# oc get pods -n openshift-marketplace
NAME                                    READY   STATUS             RESTARTS   AGE
marketplace-operator-865d576b76-npct6   1/1     Running            0          17h
my-operator-catalog-6qqwt               0/1     CrashLoopBackOff   6          6m58s

# oc describe pods my-operator-catalog-6qqwt -n openshift-marketplace
Name:         my-operator-catalog-6qqwt
Namespace:    openshift-marketplace
Priority:     0
Node:         worker-1/192.168.25.241
Start Time:   Wed, 04 Nov 2020 02:35:31 -0500
Labels:       catalogsource.operators.coreos.com/update=my-operator-catalog
              olm.catalogSource=
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.131.0.15"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.131.0.15"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: anyuid
Status:       Running
IP:           10.131.0.15
IPs:
  IP:  10.131.0.15
Containers:
  registry-server:
    Container ID:   cri-o://e22f354f1d56a6b26458620e11181bc62385d991c68741a2c0836a00f2a1fc07
    Image:          192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6
    Image ID:       192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index@sha256:f1c4964e79eda596b5c6baef9b1cd66ce68219ce60e29d9b57dd9b87a89d2a19
    Port:           50051/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 04 Nov 2020 02:41:41 -0500
      Finished:     Wed, 04 Nov 2020 02:41:41 -0500
    Ready:          False
    Restart Count:  6
    Requests:
      cpu:        10m
      memory:     50Mi
    Liveness:     exec [grpc_health_probe -addr=:50051] delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:    exec [grpc_health_probe -addr=:50051] delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-85s5q (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-85s5q:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-85s5q
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                     From               Message
  ----     ------          ----                    ----               -------
  Normal   Scheduled       7m25s                   default-scheduler  Successfully assigned openshift-marketplace/my-operator-catalog-6qqwt to worker-1
  Normal   AddedInterface  7m24s                   multus             Add eth0 [10.131.0.15/23]
  Normal   Pulled          7m23s                   kubelet            Successfully pulled image "192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6" in 10.504717ms
  Normal   Pulled          7m22s                   kubelet            Successfully pulled image "192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6" in 15.923612ms
  Normal   Pulled          6m58s                   kubelet            Successfully pulled image "192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6" in 14.400411ms
  Normal   Pulling         6m25s (x4 over 7m23s)   kubelet            Pulling image "192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6"
  Normal   Pulled          6m25s                   kubelet            Successfully pulled image "192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6" in 10.457365ms
  Normal   Created         6m24s (x4 over 7m23s)   kubelet            Created container registry-server
  Normal   Started         6m24s (x4 over 7m23s)   kubelet            Started container registry-server
  Warning  BackOff         2m19s (x27 over 7m21s)  kubelet            Back-off restarting failed container

Comment 1 Ankita Thomas 2020-11-05 21:17:42 UTC
This is an issue with the docs, https://docs.openshift.com/container-platform/4.6/operators/admin/olm-restricted-networks.html#olm-updating-index-image_olm-restricted-networks.

The default registry base image doesn't support multi-arch; `opm index add` should pass the correct version of the downstream registry base image via the --binary-image flag.
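In other words, an invocation that passes the downstream registry base image explicitly would look roughly like the following sketch (the index, package, and target names are placeholders; only the -i image is confirmed later in this thread):

```shell
opm index prune \
  -f <source-index-image> \
  -i registry.redhat.io/openshift4/ose-operator-registry:v4.6 \
  -p <package-to-keep> \
  -t <target-registry>/<repo>/redhat-operator-index:v4.6
```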

Comment 2 Amit Ghatwal 2020-11-06 05:16:50 UTC
Hi Ankita,

Here is what I had to do on my Power host, with my OCP 4.6 configuration.

# opm index prune -f brew.registry.redhat.io/rh-osbs/iib:24657 -p codeready-workspaces --generate

This created the Dockerfile below:
# cat index.Dockerfile
FROM quay.io/operator-framework/upstream-opm-builder
LABEL operators.operatorframework.io.index.database.v1=/database/index.db
ADD database/index.db /database/index.db
EXPOSE 50051
ENTRYPOINT ["/bin/opm"]
CMD ["registry", "serve", "--database", "/database/index.db"]

I built the image as-is; however, it still failed with exec errors.
# podman build -t 192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6 -f index.Dockerfile

STEP 1: FROM quay.io/operator-framework/upstream-opm-builder
STEP 2: LABEL operators.operatorframework.io.index.database.v1=/database/index.db
--> 606aad04d9b
STEP 3: ADD database/index.db /database/index.db
--> 8e33c3fd510
STEP 4: EXPOSE 50051
--> 8738fc8b0e6
STEP 5: ENTRYPOINT ["/bin/opm"]
--> ab9d6892ffc
STEP 6: CMD ["registry", "serve", "--database", "/database/index.db"]
STEP 7: COMMIT 192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6
--> ff8cc964d6a
ff8cc964d6a60871c55a1e069981965c8d010676dce02ba5cd4f24003df85a30

# podman images
REPOSITORY                                                     TAG      IMAGE ID       CREATED          SIZE
192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index   v4.6     ff8cc964d6a6   10 seconds ago   76.5 MB

The image was still giving me exec errors, as seen below:

# podman run --rm -it  192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6 version
standard_init_linux.go:211: exec user process caused "exec format error"

In the Dockerfile above, the base image "quay.io/operator-framework/upstream-opm-builder" isn't multi-arch, so I had to replace it with "registry.redhat.io/openshift4/ose-operator-registry:v4.6", which is.

With that change, I am able to build an image ("192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6") for Power, and it appears to have the correct Power binaries now:

# podman run --rm -it  192.168.25.171:4000/olm-mirror-ppc64le/redhat-operator-index:v4.6 version
Version: version.Version{OpmVersion:"v1.14.3-5-gf6e5d92", GitCommit:"f6e5d9281f335472dda7110fca2c710794c97fb5", BuildDate:"2020-10-06T13:14:18Z", GoOs:"linux", GoArch:"ppc64le"}
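For reference, the modified index.Dockerfile with the multi-arch base image swapped in would look like this; every line except FROM is unchanged from the generated file shown above:

```dockerfile
FROM registry.redhat.io/openshift4/ose-operator-registry:v4.6
LABEL operators.operatorframework.io.index.database.v1=/database/index.db
ADD database/index.db /database/index.db
EXPOSE 50051
ENTRYPOINT ["/bin/opm"]
CMD ["registry", "serve", "--database", "/database/index.db"]
```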

Thanks @Kevin Rizza (krizza) for the valuable inputs.

Dear Ankita,

Can the OpenShift docs be updated accordingly, so that the restricted-networks steps exist for Power and s390x (IBM architectures)?


Regards,
Amit

Comment 3 Amit Ghatwal 2020-11-06 05:19:02 UTC
Using this multi-arch operator registry image - https://catalog.redhat.com/software/containers/openshift4/ose-operator-registry/5cddd0bed70cc57c44b2e1f3 helped.

Comment 4 Amit Ghatwal 2021-01-18 12:11:05 UTC
Hi All,

As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1894382#c1, I have confirmed that using the --binary-image flag produces pruned images for Power:

# opm index prune -f brew.registry.redhat.io/rh-osbs/iib:39813 -i registry.redhat.io/openshift4/ose-operator-registry:v4.6 -p openshift-pipelines-operator-rh -t registry.ghatwala-addon04-70b4.161.156.139.117.nip.io:5000/openshift-pipelines-rh/redhat-operator-index:v4.6
INFO[0000] pruning the index                             packages="[openshift-pipelines-operator-rh]"
INFO[0000] Pulling previous image brew.registry.redhat.io/rh-osbs/iib:39813 to get metadata  packages="[openshift-pipelines-operator-rh]"
INFO[0000] running /usr/bin/podman pull brew.registry.redhat.io/rh-osbs/iib:39813  packages="[openshift-pipelines-operator-rh]"
INFO[0007] running /usr/bin/podman pull brew.registry.redhat.io/rh-osbs/iib:39813  packages="[openshift-pipelines-operator-rh]"
INFO[0014] Getting label data from previous image        packages="[openshift-pipelines-operator-rh]"
INFO[0014] running podman inspect                        packages="[openshift-pipelines-operator-rh]"
INFO[0014] running podman create                         packages="[openshift-pipelines-operator-rh]"
INFO[0014] running podman cp                             packages="[openshift-pipelines-operator-rh]"
INFO[0022] running podman rm                             packages="[openshift-pipelines-operator-rh]"
INFO[0023] deleting packages                             pkg=3scale-operator
INFO[0023] input has been sanitized                      pkg=3scale-operator
INFO[0023] packages: [3scale-operator]                   pkg=3scale-operator
INFO[0023] deleting packages                             pkg=advanced-cluster-management
INFO[0023] input has been sanitized                      pkg=advanced-cluster-management
INFO[0023] packages: [advanced-cluster-management]       pkg=advanced-cluster-management
INFO[0024] deleting packages                             pkg=amq-broker
INFO[0024] input has been sanitized                      pkg=amq-broker
INFO[0024] packages: [amq-broker]                        pkg=amq-broker
INFO[0024] deleting packages                             pkg=amq-broker-lts
INFO[0024] input has been sanitized                      pkg=amq-broker-lts
INFO[0024] packages: [amq-broker-lts]                    pkg=amq-broker-lts
INFO[0024] deleting packages                             pkg=amq-online
INFO[0024] input has been sanitized                      pkg=amq-online
INFO[0024] packages: [amq-online]                        pkg=amq-online
INFO[0025] deleting packages                             pkg=amq-streams
INFO[0025] input has been sanitized                      pkg=amq-streams
INFO[0025] packages: [amq-streams]                       pkg=amq-streams
INFO[0025] deleting packages                             pkg=amq7-interconnect-operator
INFO[0025] input has been sanitized                      pkg=amq7-interconnect-operator
INFO[0025] packages: [amq7-interconnect-operator]        pkg=amq7-interconnect-operator
INFO[0025] deleting packages                             pkg=apicast-operator
INFO[0025] input has been sanitized                      pkg=apicast-operator
INFO[0025] packages: [apicast-operator]                  pkg=apicast-operator
INFO[0025] deleting packages                             pkg=awx-resource-operator
INFO[0025] input has been sanitized                      pkg=awx-resource-operator
INFO[0025] packages: [awx-resource-operator]             pkg=awx-resource-operator
INFO[0026] deleting packages                             pkg=businessautomation-operator
INFO[0026] input has been sanitized                      pkg=businessautomation-operator
INFO[0026] packages: [businessautomation-operator]       pkg=businessautomation-operator
INFO[0026] deleting packages                             pkg=cluster-kube-descheduler-operator
INFO[0026] input has been sanitized                      pkg=cluster-kube-descheduler-operator
INFO[0026] packages: [cluster-kube-descheduler-operator]  pkg=cluster-kube-descheduler-operator
INFO[0026] deleting packages                             pkg=cluster-logging
INFO[0026] input has been sanitized                      pkg=cluster-logging
INFO[0026] packages: [cluster-logging]                   pkg=cluster-logging
INFO[0026] deleting packages                             pkg=clusterresourceoverride
INFO[0026] input has been sanitized                      pkg=clusterresourceoverride
INFO[0026] packages: [clusterresourceoverride]           pkg=clusterresourceoverride
INFO[0026] deleting packages                             pkg=codeready-workspaces
INFO[0026] input has been sanitized                      pkg=codeready-workspaces
INFO[0026] packages: [codeready-workspaces]              pkg=codeready-workspaces
INFO[0026] deleting packages                             pkg=compliance-operator
INFO[0026] input has been sanitized                      pkg=compliance-operator
INFO[0026] packages: [compliance-operator]               pkg=compliance-operator
INFO[0026] deleting packages                             pkg=container-security-operator
INFO[0026] input has been sanitized                      pkg=container-security-operator
INFO[0026] packages: [container-security-operator]       pkg=container-security-operator
INFO[0026] deleting packages                             pkg=datagrid
INFO[0026] input has been sanitized                      pkg=datagrid
INFO[0026] packages: [datagrid]                          pkg=datagrid
INFO[0026] deleting packages                             pkg=eap
INFO[0026] input has been sanitized                      pkg=eap
INFO[0026] packages: [eap]                               pkg=eap
INFO[0026] deleting packages                             pkg=elasticsearch-operator
INFO[0026] input has been sanitized                      pkg=elasticsearch-operator
INFO[0026] packages: [elasticsearch-operator]            pkg=elasticsearch-operator
INFO[0026] deleting packages                             pkg=file-integrity-operator
INFO[0026] input has been sanitized                      pkg=file-integrity-operator
INFO[0026] packages: [file-integrity-operator]           pkg=file-integrity-operator
INFO[0026] deleting packages                             pkg=fuse-apicurito
INFO[0026] input has been sanitized                      pkg=fuse-apicurito
INFO[0026] packages: [fuse-apicurito]                    pkg=fuse-apicurito
INFO[0026] deleting packages                             pkg=fuse-console
INFO[0026] input has been sanitized                      pkg=fuse-console
INFO[0026] packages: [fuse-console]                      pkg=fuse-console
INFO[0026] deleting packages                             pkg=fuse-online
INFO[0026] input has been sanitized                      pkg=fuse-online
INFO[0026] packages: [fuse-online]                       pkg=fuse-online
INFO[0026] deleting packages                             pkg=jaeger-product
INFO[0026] input has been sanitized                      pkg=jaeger-product
INFO[0026] packages: [jaeger-product]                    pkg=jaeger-product
INFO[0026] deleting packages                             pkg=jws1
INFO[0026] input has been sanitized                      pkg=jws1
INFO[0026] packages: [jws1]                              pkg=jws1
INFO[0026] deleting packages                             pkg=kiali-ossm
INFO[0026] input has been sanitized                      pkg=kiali-ossm
INFO[0026] packages: [kiali-ossm]                        pkg=kiali-ossm
INFO[0026] deleting packages                             pkg=kubevirt-hyperconverged
INFO[0026] input has been sanitized                      pkg=kubevirt-hyperconverged
INFO[0026] packages: [kubevirt-hyperconverged]           pkg=kubevirt-hyperconverged
INFO[0026] deleting packages                             pkg=local-storage-operator
INFO[0026] input has been sanitized                      pkg=local-storage-operator
INFO[0026] packages: [local-storage-operator]            pkg=local-storage-operator
INFO[0026] deleting packages                             pkg=metering-ocp
INFO[0026] input has been sanitized                      pkg=metering-ocp
INFO[0026] packages: [metering-ocp]                      pkg=metering-ocp
INFO[0026] deleting packages                             pkg=mtc-operator
INFO[0026] input has been sanitized                      pkg=mtc-operator
INFO[0026] packages: [mtc-operator]                      pkg=mtc-operator
INFO[0026] deleting packages                             pkg=nfd
INFO[0026] input has been sanitized                      pkg=nfd
INFO[0026] packages: [nfd]                               pkg=nfd
INFO[0026] deleting packages                             pkg=ocs-operator
INFO[0026] input has been sanitized                      pkg=ocs-operator
INFO[0026] packages: [ocs-operator]                      pkg=ocs-operator
INFO[0027] deleting packages                             pkg=openshift-jenkins-operator
INFO[0027] input has been sanitized                      pkg=openshift-jenkins-operator
INFO[0027] packages: [openshift-jenkins-operator]        pkg=openshift-jenkins-operator
INFO[0027] deleting packages                             pkg=performance-addon-operator
INFO[0027] input has been sanitized                      pkg=performance-addon-operator
INFO[0027] packages: [performance-addon-operator]        pkg=performance-addon-operator
INFO[0027] deleting packages                             pkg=ptp-operator
INFO[0027] input has been sanitized                      pkg=ptp-operator
INFO[0027] packages: [ptp-operator]                      pkg=ptp-operator
INFO[0027] deleting packages                             pkg=quay-bridge-operator
INFO[0027] input has been sanitized                      pkg=quay-bridge-operator
INFO[0027] packages: [quay-bridge-operator]              pkg=quay-bridge-operator
INFO[0027] deleting packages                             pkg=quay-operator
INFO[0027] input has been sanitized                      pkg=quay-operator
INFO[0027] packages: [quay-operator]                     pkg=quay-operator
INFO[0027] deleting packages                             pkg=red-hat-camel-k
INFO[0027] input has been sanitized                      pkg=red-hat-camel-k
INFO[0027] packages: [red-hat-camel-k]                   pkg=red-hat-camel-k
INFO[0027] deleting packages                             pkg=rh-service-binding-operator
INFO[0027] input has been sanitized                      pkg=rh-service-binding-operator
INFO[0027] packages: [rh-service-binding-operator]       pkg=rh-service-binding-operator
INFO[0027] deleting packages                             pkg=rhsso-operator
INFO[0027] input has been sanitized                      pkg=rhsso-operator
INFO[0027] packages: [rhsso-operator]                    pkg=rhsso-operator
INFO[0027] deleting packages                             pkg=serverless-operator
INFO[0027] input has been sanitized                      pkg=serverless-operator
INFO[0027] packages: [serverless-operator]               pkg=serverless-operator
INFO[0027] deleting packages                             pkg=service-registry-operator
INFO[0027] input has been sanitized                      pkg=service-registry-operator
INFO[0027] packages: [service-registry-operator]         pkg=service-registry-operator
INFO[0027] deleting packages                             pkg=servicemeshoperator
INFO[0027] input has been sanitized                      pkg=servicemeshoperator
INFO[0027] packages: [servicemeshoperator]               pkg=servicemeshoperator
INFO[0027] deleting packages                             pkg=sriov-network-operator
INFO[0027] input has been sanitized                      pkg=sriov-network-operator
INFO[0027] packages: [sriov-network-operator]            pkg=sriov-network-operator
INFO[0027] deleting packages                             pkg=vertical-pod-autoscaler
INFO[0027] input has been sanitized                      pkg=vertical-pod-autoscaler
INFO[0027] packages: [vertical-pod-autoscaler]           pkg=vertical-pod-autoscaler
INFO[0027] deleting packages                             pkg=web-terminal
INFO[0027] input has been sanitized                      pkg=web-terminal
INFO[0027] packages: [web-terminal]                      pkg=web-terminal
INFO[0027] deleting packages                             pkg=windows-machine-config-operator
INFO[0027] input has been sanitized                      pkg=windows-machine-config-operator
INFO[0027] packages: [windows-machine-config-operator]   pkg=windows-machine-config-operator
INFO[0027] Generating dockerfile                         packages="[openshift-pipelines-operator-rh]"
INFO[0027] writing dockerfile: index.Dockerfile898575371  packages="[openshift-pipelines-operator-rh]"
INFO[0027] running podman build                          packages="[openshift-pipelines-operator-rh]"
INFO[0027] [podman build --format docker -f index.Dockerfile898575371 -t registry.ghatwala-addon04-70b4.161.156.139.117.nip.io:5000/openshift-pipelines-rh/redhat-operator-index:v4.6 .]  packages="[openshift-pipelines-operator-rh]"

# podman images | grep openshift-pipelines-rh
registry.ghatwala-addon04-70b4.161.156.139.117.nip.io:5000/openshift-pipelines-rh/redhat-operator-index  v4.6                                                                        26a389b4f135  9 minutes ago   769 MB


# podman run --rm -it registry.ghatwala-addon04-70b4.161.156.139.117.nip.io:5000/openshift-pipelines-rh/redhat-operator-index:v4.6 version
Version: version.Version{OpmVersion:"v1.14.3-22-ge86c799b", GitCommit:"e86c799beecfbba0b2d679702248e3ef526ae0ee", BuildDate:"2020-12-16T07:03:34Z", GoOs:"linux", GoArch:"ppc64le"}

So it is confirmed that using the "--binary-image" flag, pruned images for Power (4.6) can be produced.

# opm index prune --help | grep binary
  -i, --binary-image opm        container image for on-image opm command

Comment 5 Tom Dale 2021-02-01 17:43:40 UTC
Thank you; `-i registry.redhat.io/openshift4/ose-operator-registry:v4.6` is also needed for `opm index prune` on s390x. Any idea when this can be added to the docs?

Comment 6 Amit Ghatwal 2021-02-02 05:23:39 UTC
Hey Tom,

The docs team has already created a PR for this: https://github.com/openshift/openshift-docs/pull/28642. Can you please comment there about the validation on Z as well, using the above in a disconnected/restricted Z environment?

Comment 7 Silke Niemann 2021-02-26 08:12:59 UTC
This BZ can be closed. The PR has been merged, and the update will be reflected in the 4.6 and later docs.

Comment 8 Alex Dellapenta 2021-03-26 19:39:29 UTC
Following up on this via https://bugzilla.redhat.com/show_bug.cgi?id=1943150.