We are seeing the following failure in PRs:

level=info msg="Credentials loaded from the \"default\" profile in file \"/etc/openshift-installer/.awscred\""
level=warning msg="Found override for release image. Please be warned, this is not advised"
level=info msg="Consuming Install Config from target directory"
level=info msg="Creating infrastructure resources..."
level=info msg="Waiting up to 30m0s for the Kubernetes API at https://api.ci-op-tbkq25p2-63f8c.origin-ci-int-aws.dev.rhcloud.com:6443..."
level=info msg="API v1.17.1 up"
level=info msg="Waiting up to 30m0s for bootstrapping to complete..."
level=info msg="Destroying the bootstrap resources..."
level=info msg="Waiting up to 30m0s for the cluster at https://api.ci-op-tbkq25p2-63f8c.origin-ci-int-aws.dev.rhcloud.com:6443 to initialize..."
level=info msg="Cluster operator insights Disabled is False with : "
level=info msg="Cluster operator monitoring Available is False with : "
level=info msg="Cluster operator monitoring Progressing is True with RollOutInProgress: Rolling out the stack."
level=error msg="Cluster operator monitoring Degraded is True with UpdatingClusterMonitoringOperatorFailed: Failed to rollout the stack. Error: running task Updating Cluster Monitoring Operator failed: reconciling Cluster Monitoring Operator Service failed: updating Service object failed: Service \"cluster-monitoring-operator\" is invalid: [spec.ipFamily: Invalid value: \"null\": field is immutable, spec.ipFamily: Required value]"
level=error msg="Cluster operator network Degraded is True with ApplyOperatorConfig: Error while updating operator configuration: could not apply (/v1, Kind=Service) openshift-multus/multus-admission-controller-monitor-service: could not update object (/v1, Kind=Service) openshift-multus/multus-admission-controller-monitor-service: Service \"multus-admission-controller-monitor-service\" is invalid: [spec.ipFamily: Invalid value: \"null\": field is immutable, spec.ipFamily: Required value]"
level=fatal msg="failed to initialize the cluster: Cluster operator monitoring is still updating"

https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/pr-logs/pull/openshift_cluster-kube-apiserver-operator/722/pull-ci-openshift-cluster-kube-apiserver-operator-master-e2e-aws/3249
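For context on the validation error above: in Kubernetes 1.17, `spec.ipFamily` was an alpha dual-stack field on Service objects that is immutable once set, so an update that submits it as null fails validation. Below is a hedged, minimal sketch of a Service manifest with the field set explicitly; the selector and port values are illustrative assumptions, not taken from the actual cluster-monitoring-operator Service.

```yaml
# Illustrative only: selector/port values are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: cluster-monitoring-operator
  namespace: openshift-monitoring
spec:
  # Alpha field in Kubernetes 1.17 (IPv6DualStack feature gate).
  # Immutable once set: an update that omits it or sends null is
  # rejected with "field is immutable" / "Required value".
  ipFamily: IPv4
  selector:
    app: cluster-monitoring-operator   # assumed label
  ports:
    - name: https                      # assumed port
      port: 8443
      targetPort: https
```

A client doing a full-object update must therefore read the live Service and carry the existing `spec.ipFamily` value forward rather than replacing the spec wholesale.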
Checked with an OCP/CI build; the issue no longer occurs.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0581