We'd considered creating a cluster-profile for this use case, but ended up going with [1]. We should remove references to the dead profile from these manifests:

$ oc adm release extract --to manifests registry.ci.openshift.org/ocp/release:4.8.0-0.nightly-2021-04-09-222447
$ grep -lr include.release.openshift.io/single-node-production-edge manifests
manifests/0000_50_console-operator_ocs-install-tour-quickstart.yaml
manifests/0000_50_cluster-samples-operator_09-servicemonitor-rbac.yaml
manifests/0000_50_cluster-samples-operator_08-openshift-imagestreams.yaml
manifests/0000_50_cluster-samples-operator_07-clusteroperator.yaml
manifests/0000_50_cluster-samples-operator_06-servicemonitor.yaml
manifests/0000_50_cluster-samples-operator_06-operator.yaml
manifests/0000_50_cluster-samples-operator_06-metricsservice.yaml
manifests/0000_50_cluster-samples-operator_05-kube-system-rbac.yaml
manifests/0000_50_cluster-samples-operator_04-openshift-rbac.yaml
manifests/0000_50_cluster-samples-operator_03-rbac.yaml
manifests/0000_50_cluster-samples-operator_03-rbac-proxies-role.yaml
manifests/0000_50_cluster-samples-operator_03-rbac-proxies-role-binding.yaml
manifests/0000_50_cluster-samples-operator_02-sa.yaml
manifests/0000_50_cluster-samples-operator_010-prometheus-rules.yaml
manifests/0000_50_cluster-samples-operator_01-namespace.yaml
manifests/0000_50_cluster-node-tuning-operator_60-clusteroperator.yaml
manifests/0000_50_cluster-node-tuning-operator_50-operator.yaml
manifests/0000_50_cluster-node-tuning-operator_40-rbac.yaml
manifests/0000_50_cluster-node-tuning-operator_30-monitoring.yaml
manifests/0000_50_cluster-node-tuning-operator_20-crd-tuned.yaml
manifests/0000_50_cluster-node-tuning-operator_20-crd-profile.yaml
manifests/0000_50_cluster-node-tuning-operator_10-namespace.yaml

[1]: https://github.com/openshift/enhancements/pull/688
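For anyone picking this up: each hit is just the include.release.openshift.io/single-node-production-edge annotation under the manifest's metadata.annotations, and the actual fixes need to land as pull requests in the owning repos (console-operator, cluster-samples-operator, cluster-node-tuning-operator). Within each repo, dropping the line is mechanical; a rough sketch, assuming GNU sed and that the annotation sits on its own line in each manifest (run against that repo's manifest directory, not the extracted payload above):

$ sed -i '/include\.release\.openshift\.io\/single-node-production-edge/d' manifests/*.yaml
$ grep -lr include.release.openshift.io/single-node-production-edge manifests   # should print nothing afterwards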
Still one to go:

$ oc adm release extract --to manifests registry.ci.openshift.org/ocp/release:4.8.0-0.nightly-2021-04-14-211524
Extracted release payload from digest sha256:804affcde06c397c02da28032d69f08e5f18358c1cb7726d1da7329ebf4877c8 created at 2021-04-14T21:18:45Z
$ grep -lr include.release.openshift.io/single-node-production-edge manifests
manifests/0000_50_console-operator_ocs-install-tour-quickstart.yaml
Looks good to me:

$ oc adm release extract --to manifests registry.ci.openshift.org/ocp/release:4.8.0-0.nightly-2021-04-19-175100
$ grep -lr include.release.openshift.io/single-node-production-edge manifests
...no hits...
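If we want to keep the annotation from creeping back in, a guard along these lines could be added to a payload-verification job (just a sketch, not an existing check); it fails the step as soon as any extracted manifest references the dead profile again:

$ if grep -qr include.release.openshift.io/single-node-production-edge manifests; then echo 'dead single-node-production-edge profile referenced again' >&2; exit 1; fi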
@W. Trevor King Hey, can you verify the bug and change its status to VERIFIED?
I usually leave actual verification to the QA contact, but in this case, comment 4 has me happy enough that I might do so myself if they don't move this to VERIFIED in the next few days.
Verified:

Version: 4.8.0-fc.7

[root@sealusa35 ~]# oc get clusterversion
NAME      VERSION      AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-fc.7   True        False         43h     Cluster version is 4.8.0-fc.7
[root@sealusa35 ~]# oc adm release extract --to manifests registry.ci.openshift.org/ocp/release:4.8.0-fc.7
Extracted release payload from digest sha256:cdc9e7d9fbf86acaffdb5cc4fa471be84a15a5a70161cffbc3a17dda865b00b4 created at 2021-06-01T09:10:11Z
[root@sealusa35 ~]# grep -lr include.release.openshift.io/single-node-production-edge manifests
[root@sealusa35 ~]#
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438