Bug 2000098
| Summary: | ODF operator 4.9.0-120.ci fails to install on s390x due to incompatible ibm-storage-odf-plugin Docker image | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Dirk Haubenreisser <dhaubenr> |
| Component: | odf-operator | Assignee: | Nitin Goyal <nigoyal> |
| Status: | CLOSED ERRATA | QA Contact: | Raz Tamir <ratamir> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.9 | CC: | aaaggarw, badhikar, ebenahar, jrivera, madam, muagarwa, nigoyal, ocs-bugs, odf-bz-bot, pbalogh, rcyriac, rperiyas, sbalusu, svenkat |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | ODF 4.9.0 | | |
| Hardware: | s390x | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | v4.9.0-148.ci | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-12-13 17:45:30 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 2004824 | | |
| Bug Blocks: | | | |
Description
Dirk Haubenreisser
2021-09-01 11:50:57 UTC
Nitin, can we do the initial triage and decide to which component this BZ belongs?

@dhaubenr How do you know that the container is not running because it is incompatible? Can you please show us the logs of the console pod? Added needinfo on Bipul to pitch in and ask for more info if required.

@nigoyal The info you are asking for is part of the initial bug description:

```
$ oc get deployment odf-console -n openshift-storage -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                IMAGES                                                                                              SELECTOR
odf-console   0/1     1            0           23m   odf-console,ibm-console   quay.io/rhceph-dev/odf-console:4.9-7.e4545ee.master,docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0   app=odf-console
```

```
$ oc describe pod odf-console-78ffd4995c-mszv8 -n openshift-storage
...
Events:
  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Normal   Scheduled       23m                   default-scheduler  Successfully assigned openshift-storage/odf-console-78ffd4995c-mszv8 to worker-001.m4208001ocs.lnxne.boe
  Warning  FailedMount     23m (x7 over 23m)     kubelet            MountVolume.SetUp failed for volume "ibm-console-serving-cert" : secret "ibm-console-serving-cert" not found
  Warning  FailedMount     23m (x7 over 23m)     kubelet            MountVolume.SetUp failed for volume "odf-console-serving-cert" : secret "odf-console-serving-cert" not found
  Normal   AddedInterface  22m                   multus             Add eth0 [10.131.0.28/23] from openshift-sdn
  Normal   Pulling         22m                   kubelet            Pulling image "quay.io/rhceph-dev/odf-console:4.9-7.e4545ee.master"
  Normal   Pulling         21m                   kubelet            Pulling image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0"
  Normal   Pulled          21m                   kubelet            Successfully pulled image "quay.io/rhceph-dev/odf-console:4.9-7.e4545ee.master" in 48.069343604s
  Normal   Created         21m                   kubelet            Created container odf-console
  Normal   Started         21m                   kubelet            Started container odf-console
  Normal   Pulled          21m                   kubelet            Successfully pulled image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0" in 29.43909624s
  Normal   Pulled          21m                   kubelet            Container image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0" already present on machine
  Normal   Created         21m (x2 over 21m)     kubelet            Created container ibm-console
  Normal   Started         21m (x2 over 21m)     kubelet            Started container ibm-console
  Warning  BackOff         3m39s (x84 over 21m)  kubelet            Back-off restarting failed container
```

```
$ oc logs odf-console-78ffd4995c-mszv8 -c ibm-console -n openshift-storage
standard_init_linux.go:228: exec user process caused: exec format error
```

In addition to that, the other container of the odf-console pod is running just fine:

```
$ oc logs odf-console-78ffd4995c-mszv8 -c odf-console -n openshift-storage
Starting up http-server, serving ./dist through https
Available on:
  https://127.0.0.1:9001
  https://10.131.0.28:9001
Hit CTRL-C to stop the server
```

This tells me that the Docker image for container 'odf-console' (quay.io/rhceph-dev/odf-console:4.9-7.e4545ee.master) is available for s390x (and is being used by the ODF operator), whereas the Docker image for container 'ibm-console' (docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0) is not. In fact, if you look at the Docker manifest of said image you will see that it is only compatible with x86_64:

```
$ docker manifest inspect --verbose docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0 | jq .Descriptor.platform
{
  "architecture": "amd64",
  "os": "linux"
}
```

I think the main problem here is also that we rely on an upstream image located at docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0, which should not be in a downstream product build. If we depend on an image, it should be part of our build and published as a multi-platform image of its own. Just my 2 cents.
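A quick way to see every architecture a tag actually provides (not just the first descriptor, as in the command above) is to list all entries of its manifest list. This is a minimal sketch, assuming `jq` is installed and, for the second variant, that `skopeo` is available; a single-arch tag simply has no `.manifests` array and prints nothing:

```
# List all platforms in the manifest list (prints nothing for a single-arch tag).
docker manifest inspect docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0 \
  | jq '.manifests[]?.platform'

# Same check with skopeo, which does not require a local Docker daemon.
skopeo inspect --raw docker://docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0 \
  | jq '.manifests[]?.platform'
```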
On Power architecture (ppc64le), we are also facing the same issue. We are not able to deploy ODF 4.9 because docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0 is x86-only, due to which the odf-console-* pod is in CrashLoopBackOff state.

```
[root@rdr-odf49-sao01-bastion-0 ~]# oc get pods -n openshift-storage
NAME                                               READY   STATUS             RESTARTS      AGE
noobaa-operator-6cc4fbb6ff-ht7lw                   1/1     Running            0             2m53s
ocs-metrics-exporter-597f6f89d-hwjlh               1/1     Running            0             2m51s
ocs-operator-7b847f6896-x2jr6                      1/1     Running            0             2m51s
odf-console-84d85c6cd6-8675h                       1/2     CrashLoopBackOff   2 (21s ago)   2m55s
odf-operator-controller-manager-79894545f4-fl6zv   2/2     Running            0             2m55s
rook-ceph-operator-7679b98976-8988l                1/1     Running            0             2m51s
```

Events section of the odf-console pod (oc describe pod odf-console-84d85c6cd6-8675h -n openshift-storage):

```
Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       11m                    default-scheduler  Successfully assigned openshift-storage/odf-console-84d85c6cd6-8675h to rdr-odf49-sao01-worker-0
  Warning  FailedMount     10m (x6 over 11m)      kubelet            MountVolume.SetUp failed for volume "odf-console-serving-cert" : secret "odf-console-serving-cert" not found
  Warning  FailedMount     10m (x6 over 11m)      kubelet            MountVolume.SetUp failed for volume "ibm-console-serving-cert" : secret "ibm-console-serving-cert" not found
  Normal   AddedInterface  10m                    multus             Add eth0 [10.129.2.19/23] from openshift-sdn
  Normal   Pulling         10m                    kubelet            Pulling image "quay.io/rhceph-dev/odf-console:4.9-9.74a2cf5.master"
  Normal   Pulled          9m24s                  kubelet            Successfully pulled image "quay.io/rhceph-dev/odf-console:4.9-9.74a2cf5.master" in 1m1.002095761s
  Normal   Pulling         9m23s                  kubelet            Pulling image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0"
  Normal   Created         9m23s                  kubelet            Created container odf-console
  Normal   Started         9m23s                  kubelet            Started container odf-console
  Normal   Pulled          8m48s                  kubelet            Successfully pulled image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0" in 34.863022444s
  Normal   Pulled          8m47s                  kubelet            Container image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0" already present on machine
  Normal   Created         8m46s (x2 over 8m48s)  kubelet            Created container ibm-console
  Normal   Started         8m46s (x2 over 8m47s)  kubelet            Started container ibm-console
  Warning  BackOff         62s (x37 over 8m46s)   kubelet            Back-off restarting failed container
```

```
[root@rdr-odf49-sao01-bastion-0 ~]# oc logs -f pod/odf-console-84d85c6cd6-8675h -n openshift-storage -c ibm-console
standard_init_linux.go:228: exec user process caused: exec format error

[root@rdr-odf49-sao01-bastion-0 ~]# podman run docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0
standard_init_linux.go:228: exec user process caused: exec format error
```

Assigning it to Ning as he is the owner of the IBM console plugin.

No further logs required for me. I agree with the assessment made by Petr. Clearing needinfo.

The fix is in the latest image bundle: the base image was changed and the images are now built to support multiple architectures.
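For context, a multi-architecture image like the rebuilt plugin is typically produced with a cross-platform build that pushes a single manifest list. This is a minimal, hypothetical sketch, assuming `docker buildx` with QEMU emulation is set up and using a placeholder registry/repository rather than the real build pipeline:

```
# Create and select a builder capable of cross-building (one-time setup).
docker buildx create --use --name multiarch-builder

# Build for the three architectures discussed in this bug and push a
# manifest list; <your-registry> is a placeholder, not the actual repo.
docker buildx build \
  --platform linux/amd64,linux/ppc64le,linux/s390x \
  -t <your-registry>/ibm-storage-odf-plugin:0.2.1 \
  --push .
```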
The rebuilt 0.2.1 images:

```
docker.io/ibmcom/ibm-storage-odf-operator:0.2.1
docker.io/ibmcom/ibm-storage-odf-operator-bundle:0.2.1
docker.io/ibmcom/ibm-storage-odf-block-driver:0.2.1
docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1
```

I am still seeing this error for IBM System P:

```
[root@nx124-49-e519-mon01-bastion-0 ~]# oc get pods -n openshift-storage
NAME                                              READY   STATUS             RESTARTS   AGE
noobaa-operator-756bff65f7-b9nl9                  1/1     Running            0          13m
ocs-metrics-exporter-6b45c4f775-tk664             1/1     Running            0          13m
ocs-operator-6d66555f78-mgks5                     1/1     Running            0          13m
odf-console-5ff5c95d7d-8jnsk                      1/2     ImagePullBackOff   0          13m
odf-operator-controller-manager-6c7c89bc6-jh65m   2/2     Running            0          13m
rook-ceph-operator-b9b4fdc6-7pfwb                 1/1     Running            0          13m
```

```
Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       10m                    default-scheduler  Successfully assigned openshift-storage/odf-console-5ff5c95d7d-8jnsk to nx124-49-e519-mon01-worker-0
  Warning  FailedMount     10m (x6 over 10m)      kubelet            MountVolume.SetUp failed for volume "ibm-console-serving-cert" : secret "ibm-console-serving-cert" not found
  Warning  FailedMount     10m (x6 over 10m)      kubelet            MountVolume.SetUp failed for volume "odf-console-serving-cert" : secret "odf-console-serving-cert" not found
  Normal   AddedInterface  10m                    multus             Add eth0 [10.129.2.16/23] from openshift-sdn
  Normal   Pulling         9m55s                  kubelet            Pulling image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1"
  Normal   Pulled          9m46s                  kubelet            Successfully pulled image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1" in 9.152182645s
  Normal   Started         9m45s                  kubelet            Started container ibm-console
  Normal   Created         9m45s                  kubelet            Created container ibm-console
  Warning  Failed          9m44s (x2 over 9m55s)  kubelet            Error: ErrImagePull
  Warning  Failed          9m44s (x2 over 9m55s)  kubelet            Failed to pull image "quay.io/ocs-dev/odf-console:4.9-13.93f7248.master": rpc error: code = Unknown desc = reading manifest 4.9-13.93f7248.master in quay.io/ocs-dev/odf-console: manifest unknown: manifest unknown
  Normal   Pulling         9m44s (x2 over 9m55s)  kubelet            Pulling image "quay.io/ocs-dev/odf-console:4.9-13.93f7248.master"
  Warning  Failed          9m43s                  kubelet            Error: ImagePullBackOff
  Normal   BackOff         36s (x40 over 9m43s)   kubelet            Back-off pulling image "quay.io/ocs-dev/odf-console:4.9-13.93f7248.master"
```

I see that the new image docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1 was pulled successfully, but there are other issues. This time it is not able to pull odf-console; that image does not exist at all. It looks like you generated these builds yourself. May I know why you changed the odf-console image in the CSV?

```
$ docker pull quay.io/ocs-dev/odf-console:4.9-13.93f7248.master
Error response from daemon: manifest for quay.io/ocs-dev/odf-console:4.9-13.93f7248.master not found: manifest unknown: manifest unknown
```
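To confirm which images the installed CSV actually references (rather than inferring them from pod events), they can be read straight out of the ClusterServiceVersion. This is a minimal sketch; the CSV name below is just an example from this thread and will differ per build:

```
# Print container name / image pairs defined by the ODF operator CSV.
oc get csv odf-operator.v4.9.0-136.ci -n openshift-storage \
  -o jsonpath='{range .spec.install.spec.deployments[*].spec.template.spec.containers[*]}{.name}{"\t"}{.image}{"\n"}{end}'
```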
We see the same error on Z as well, with the latest 4.9 image 4.9.0-136.ci from the quay repo.

```
# oc get po -n openshift-storage
NAME                                               READY   STATUS             RESTARTS   AGE
noobaa-operator-7f8dd999cc-wc9hf                   1/1     Running            0          24m
ocs-metrics-exporter-6f75b94dc-xvrdb               1/1     Running            0          24m
ocs-operator-56765d64c6-vdr95                      1/1     Running            0          24m
odf-console-7dc8c79cbf-psh9l                       1/2     ImagePullBackOff   0          24m
odf-operator-controller-manager-6ccc6d9d5c-nhdlh   2/2     Running            0          24m
rook-ceph-operator-85b9f66b75-2sggl                1/1     Running            0          24m
```

```
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       85s                default-scheduler  Successfully assigned openshift-storage/odf-console-7dc8c79cbf-psh9l to worker-0.ocpm4202001.lnxne.boe
  Warning  FailedMount     69s (x6 over 85s)  kubelet            MountVolume.SetUp failed for volume "ibm-console-serving-cert" : secret "ibm-console-serving-cert" not found
  Warning  FailedMount     69s (x6 over 85s)  kubelet            MountVolume.SetUp failed for volume "odf-console-serving-cert" : secret "odf-console-serving-cert" not found
  Normal   AddedInterface  52s                multus             Add eth0 [10.128.2.148/23] from openshift-sdn
  Normal   Pulling         50s                kubelet            Pulling image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1"
  Normal   Pulled          39s                kubelet            Successfully pulled image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1" in 11.372020685s
  Normal   Started         38s                kubelet            Started container ibm-console
  Normal   Created         38s                kubelet            Created container ibm-console
  Normal   Pulling         37s (x2 over 52s)  kubelet            Pulling image "quay.io/ocs-dev/odf-console:4.9-13.93f7248.master"
  Warning  Failed          36s (x2 over 50s)  kubelet            Error: ErrImagePull
  Warning  Failed          36s (x2 over 50s)  kubelet            Failed to pull image "quay.io/ocs-dev/odf-console:4.9-13.93f7248.master": rpc error: code = Unknown desc = reading manifest 4.9-13.93f7248.master in quay.io/ocs-dev/odf-console: manifest unknown: manifest unknown
  Warning  Failed          35s                kubelet            Error: ImagePullBackOff
  Normal   BackOff         22s (x2 over 35s)  kubelet            Back-off pulling image "quay.io/ocs-dev/odf-console:4.9-13.93f7248.master"
```

```
# podman pull quay.io/ocs-dev/odf-console:4.9-13.93f7248.master
Trying to pull quay.io/ocs-dev/odf-console:4.9-13.93f7248.master...
  manifest unknown: manifest unknown
Error: Error initializing source docker://quay.io/ocs-dev/odf-console:4.9-13.93f7248.master: Error reading manifest 4.9-13.93f7248.master in quay.io/ocs-dev/odf-console: manifest unknown: manifest unknown
```

Nitin, we did not make any changes to the CSV. We work only on ocs-ci, and even in ocs-ci we did not make any changes recently.

Hi, the ImagePullBackOff problem was introduced by recent changes upstream, and Boris has fixed them downstream as well. It should work with the new builds.

The fix is included in the latest build, ocs-registry:4.9.0-138.ci.
Tried with the latest code; it is now failing due to certs.

```
[root@nx124-49-f131-mon01-bastion-0 ~]# oc get csv -n openshift-storage
NAME                            DISPLAY                       VERSION        REPLACES   PHASE
noobaa-operator.v4.9.0-138.ci   NooBaa Operator               4.9.0-138.ci              Succeeded
ocs-operator.v4.9.0-138.ci      OpenShift Container Storage   4.9.0-138.ci              Succeeded
odf-operator.v4.9.0-138.ci      OpenShift Data Foundation     4.9.0-138.ci              Installing
```

From oc describe pod odf-console-656c57bf57-r9lxv -n openshift-storage:

```
Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       4m42s                  default-scheduler  Successfully assigned openshift-storage/odf-console-656c57bf57-r9lxv to nx124-49-f131-mon01-worker-1
  Warning  FailedMount     4m26s (x6 over 4m42s)  kubelet            MountVolume.SetUp failed for volume "ibm-console-serving-cert" : secret "ibm-console-serving-cert" not found
  Warning  FailedMount     4m26s (x6 over 4m42s)  kubelet            MountVolume.SetUp failed for volume "odf-console-serving-cert" : secret "odf-console-serving-cert" not found
  Normal   AddedInterface  4m4s                   multus             Add eth0 [10.128.2.15/23] from openshift-sdn
  Normal   Pulling         4m4s                   kubelet            Pulling image "quay.io/rhceph-dev/odf-console@sha256:3915b99c44f75f937b94ed18e22a50d91a964afebac2bbdca2592b77a1c371de"
  Normal   Pulled          2m53s                  kubelet            Successfully pulled image "quay.io/rhceph-dev/odf-console@sha256:3915b99c44f75f937b94ed18e22a50d91a964afebac2bbdca2592b77a1c371de" in 1m10.879616487s
  Normal   Pulling         2m52s                  kubelet            Pulling image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1"
  Normal   Created         2m52s                  kubelet            Created container odf-console
  Normal   Started         2m52s                  kubelet            Started container odf-console
  Normal   Pulled          2m42s                  kubelet            Successfully pulled image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1" in 9.539791305s
  Normal   Created         2m39s (x2 over 2m41s)  kubelet            Created container ibm-console
  Normal   Started         2m38s (x2 over 2m40s)  kubelet            Started container ibm-console
  Warning  BackOff         2m37s                  kubelet            Back-off restarting failed container
  Normal   Pulled          80s (x2 over 2m40s)    kubelet            Container image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1" already present on machine
```

==> Failed QA
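The FailedMount warnings refer to serving-cert secrets that OpenShift's service-ca operator creates when a Service requests them. A minimal diagnostic sketch for narrowing down that side of the report, assuming the standard serving-cert annotation; the secret names come from the volume names in the events above:

```
# Do the serving-cert secrets exist yet?
oc get secret odf-console-serving-cert ibm-console-serving-cert -n openshift-storage

# Which Service (if any) requests each secret? An empty value means no
# Service in the namespace carries the serving-cert annotation for it.
oc get svc -n openshift-storage \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.service\.beta\.openshift\.io/serving-cert-secret-name}{"\n"}{end}'
```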
(In reply to Sridhar Venkat (IBM) from comment #19)
> Tried with the latest code; it is now failing due to certs.

I just realized that this is a new error, not the old one. @jrivera, do you have an idea what might cause this?

I guess the way they were using args is causing it. We had a PR for that (https://github.com/red-hat-storage/odf-operator/pull/93), which is no longer under consideration because we are removing the IBM console from ODF. I am even thinking of closing this bug.
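Since the suspicion above is that the container's args are the problem, one way to see exactly what the ibm-console container is started with is to read its command and args out of the deployment. A minimal sketch; the container name matches the pod events in this thread, and the rest is a plain oc jsonpath query:

```
# Print the configured command and args of the ibm-console container.
oc get deployment odf-console -n openshift-storage \
  -o jsonpath='{range .spec.template.spec.containers[?(@.name=="ibm-console")]}{.command}{"\n"}{.args}{"\n"}{end}'
```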
Seeing the same problem in 4.9.0-140 as well.

```
Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       4m28s                  default-scheduler  Successfully assigned openshift-storage/odf-console-5d7bb665f6-7k85f to rdr-sri-a2dc-mon01-worker-0
  Warning  FailedMount     4m13s (x6 over 4m28s)  kubelet            MountVolume.SetUp failed for volume "odf-console-serving-cert" : secret "odf-console-serving-cert" not found
  Warning  FailedMount     4m13s (x6 over 4m28s)  kubelet            MountVolume.SetUp failed for volume "ibm-console-serving-cert" : secret "ibm-console-serving-cert" not found
  Normal   AddedInterface  3m49s                  multus             Add eth0 [10.128.2.13/23] from openshift-sdn
  Normal   Pulling         3m49s                  kubelet            Pulling image "quay.io/rhceph-dev/odf-console@sha256:3915b99c44f75f937b94ed18e22a50d91a964afebac2bbdca2592b77a1c371de"
  Normal   Pulled          2m53s                  kubelet            Successfully pulled image "quay.io/rhceph-dev/odf-console@sha256:3915b99c44f75f937b94ed18e22a50d91a964afebac2bbdca2592b77a1c371de" in 56.006186314s
  Normal   Pulling         2m52s                  kubelet            Pulling image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1"
  Normal   Created         2m52s                  kubelet            Created container odf-console
  Normal   Started         2m52s                  kubelet            Started container odf-console
  Normal   Pulled          2m45s                  kubelet            Successfully pulled image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1" in 7.212649093s
  Normal   Pulled          2m44s                  kubelet            Container image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.1" already present on machine
  Normal   Created         2m43s (x2 over 2m44s)  kubelet            Created container ibm-console
  Normal   Started         2m43s (x2 over 2m44s)  kubelet            Started container ibm-console
  Warning  BackOff         2m42s (x2 over 2m43s)  kubelet            Back-off restarting failed container
```

```
[root@rdr-sri-a2dc-mon01-bastion-0 ~]# oc get csv -n openshift-storage
NAME                            DISPLAY                       VERSION        REPLACES   PHASE
noobaa-operator.v4.9.0-140.ci   NooBaa Operator               4.9.0-140.ci              Succeeded
ocs-operator.v4.9.0-140.ci      OpenShift Container Storage   4.9.0-140.ci              Succeeded
odf-operator.v4.9.0-140.ci      OpenShift Data Foundation     4.9.0-140.ci              Installing
```

I will attempt to deploy any future builds.

Closing the bug as we are removing ibm-console from ODF via https://github.com/red-hat-storage/odf-operator/pull/94

(In reply to Nitin Goyal from comment #24)
> Closing the bug as we are removing ibm-console from the odf via
> https://github.com/red-hat-storage/odf-operator/pull/94

I don't think we should be closing this just yet. It should be verified that this problem is gone after the above change has made it into the builds.

Moving the BZ to POST because of the above patch.

For IBM System P, we could successfully deploy OCP and ODF 4.9 with the latest code (4.9.0-148.ci) using ocs-ci.

I verified that ODF/OCS release 4.9.0-148.ci can be successfully installed on OCP@s390x. I did a manual installation via OperatorHub (including the ODF console) as well as an automated installation via ocs-ci. Both approaches worked fine. From my point of view it is OK to close this ticket as resolved and verified.

Dirk, can you please move the bug to the verified state?

@nigoyal Hi Nitin, I don't think I have the proper permissions to set the state of this ticket to 'resolved'. I am only given the option to set it to 'closed'. Can you please mark it as 'resolved' on my behalf? Thank you!

Moved to VERIFIED, based on comment 27.

Clearing my needinfo as Rejy has moved it to verified.
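Assuming the removal in PR #94 lands as described, a quick on-cluster check that the ibm-console container is really gone is to list the containers of the odf-console deployment; this is a minimal sketch, and after the fix only "odf-console" should be reported:

```
# List the container names defined by the odf-console deployment.
oc get deployment odf-console -n openshift-storage \
  -o jsonpath='{.spec.template.spec.containers[*].name}'
```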
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.9.0 enhancement, security, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:5086