Bug 2082254
| Summary: | OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constraint | | |
|---|---|---|---|
| Product: | Red Hat Advanced Cluster Management for Kubernetes | Reporter: | Constantin Vultur <cvultur> |
| Component: | Core Services / Observability | Assignee: | Subbarao Meduri <smeduri> |
| Status: | CLOSED ERRATA | QA Contact: | Xiang Yin <xiyin> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhacm-2.5.z | CC: | akandath, amagrawa, cqu, dbewley, jagray, keyoung, mbukatov, mcornea, ngangadh, njean, nmanos, smeduri, vboulos |
| Target Milestone: | --- | Flags: | cqu: qe_test_coverage-; bot-tracker-sync: rhacm-2.5.z+ |
| Target Release: | rhacm-2.5.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-09-13 20:06:21 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Constantin Vultur 2022-05-05 17:03:40 UTC
*** Bug 2083723 has been marked as a duplicate of this bug. ***

I was able to get around the management ingress issue and complete the install by granting the ingress service account the access it needs. I ran `oc adm policy add-scc-to-user privileged -z management-ingress-622b1-sa -n open-cluster-management`, where `management-ingress-622b1-sa` was the name of the service account used in the deployment (the suffix is generated and subject to change on any given install). This should help get around the problem until the team can identify a more concrete solution to dealing with OCP's security changes.

The workaround looks OK; however, it seems we have another issue preventing ACM from installing successfully: the MCH isn't ready because of ManagedClusterLeaseUpdateStopped.

```
# oc get mch multiclusterhub -n ocm -o json | jq .status
{
  "components": {
    "cluster-backup-chart-sub": {
      "lastTransitionTime": "2022-06-07T14:31:49Z",
      "reason": "InstallSuccessful",
      "status": "True",
      "type": "Deployed"
    },
    "cluster-lifecycle-sub": {
      "lastTransitionTime": "2022-06-07T14:32:52Z",
      "reason": "InstallSuccessful",
      "status": "True",
      "type": "Deployed"
    },
    "console-chart-sub": {
      "lastTransitionTime": "2022-06-07T14:31:50Z",
      "reason": "InstallSuccessful",
      "status": "True",
      "type": "Deployed"
    },
    "grc-sub": {
      "lastTransitionTime": "2022-06-07T14:31:50Z",
      "reason": "InstallSuccessful",
      "status": "True",
      "type": "Deployed"
    },
    "local-cluster": {
      "lastTransitionTime": "2022-06-07T18:02:32Z",
      "message": "Registration agent stopped updating its lease.",
      "reason": "ManagedClusterLeaseUpdateStopped",
      "status": "Unknown",
      "type": "ManagedClusterConditionAvailable"
    },
    "management-ingress-sub": {
      "lastTransitionTime": "2022-06-07T14:31:49Z",
      "reason": "InstallSuccessful",
      "status": "True",
      "type": "Deployed"
    },
    "multicluster-engine": {
      "lastTransitionTime": "2022-06-07T18:02:32Z",
      "reason": "ComponentsAvailable",
      "status": "True",
      "type": "Available"
    },
    "multicluster-engine-csv": {
      "lastTransitionTime": "2022-06-07T18:02:32Z",
      "message": "install strategy completed with no errors",
      "reason": "InstallSucceeded",
      "status": "True",
      "type": "Available"
    },
    "multicluster-engine-sub": {
      "lastTransitionTime": "2022-06-07T18:02:32Z",
      "message": "installPlanApproval: Automatic. installPlan: multicluster-engine/install-jk7tg",
      "reason": "AtLatestKnown",
      "status": "True",
      "type": "Available"
    },
    "multiclusterhub-repo": {
      "lastTransitionTime": "2022-06-07T14:30:37Z",
      "reason": "MinimumReplicasAvailable",
      "status": "True",
      "type": "Available"
    },
    "policyreport-sub": {
      "lastTransitionTime": "2022-06-07T14:31:49Z",
      "reason": "InstallSuccessful",
      "status": "True",
      "type": "Deployed"
    },
    "search-prod-sub": {
      "lastTransitionTime": "2022-06-07T14:31:50Z",
      "reason": "InstallSuccessful",
      "status": "True",
      "type": "Deployed"
    },
    "volsync-addon-controller-sub": {
      "lastTransitionTime": "2022-06-07T14:31:49Z",
      "reason": "InstallSuccessful",
      "status": "True",
      "type": "Deployed"
    }
  },
  "conditions": [
    {
      "lastTransitionTime": "2022-06-07T14:30:31Z",
      "lastUpdateTime": "2022-06-07T14:30:32Z",
      "message": "created new resource: CustomResourceDefinition managedclusteractions.action.open-cluster-management.io",
      "reason": "NewResourceCreated",
      "status": "True",
      "type": "Progressing"
    }
  ],
  "desiredVersion": "2.5.0",
  "phase": "Installing"
}
```

*** Bug 2097304 has been marked as a duplicate of this bug. ***

Verified on 2.5.2-DOWNSTREAM-2022-08-02-15-08-54 and 2.5.2-RC4 builds.

Correction: Verified on 2.5.2-DOWNSTREAM-2022-08-02-15-08-54 and 2.5.2-FC3 builds.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Advanced Cluster Management 2.5.2 security fixes and bug fixes), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6507
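The `jq .status` inspection used above can also be scripted. The following is a minimal sketch (not part of the bug report) that filters the MCH status for components not reporting `"True"`, which is how the stuck `local-cluster` component stands out; the `sample` dict is a trimmed-down copy of the status block shown in the comments, and field names follow that output.

```python
def not_ready_components(status: dict) -> dict:
    """Return MCH components whose .status field is anything other than "True"."""
    return {
        name: comp
        for name, comp in status.get("components", {}).items()
        if comp.get("status") != "True"
    }

# Trimmed-down version of the status from
#   oc get mch multiclusterhub -n ocm -o json | jq .status
sample = {
    "components": {
        "management-ingress-sub": {"reason": "InstallSuccessful", "status": "True"},
        "local-cluster": {
            "reason": "ManagedClusterLeaseUpdateStopped",
            "status": "Unknown",
        },
    }
}

for name, comp in not_ready_components(sample).items():
    print(f"{name}: {comp['reason']} ({comp['status']})")
# prints: local-cluster: ManagedClusterLeaseUpdateStopped (Unknown)
```

In a live cluster the same dict could be fed from `oc get mch ... -o json` output loaded with `json.loads`; only the `.status` sub-object is needed.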