Verified on 4.4.0-0.nightly-2020-04-25-191512, which was upgraded from 4.3.17:

$ oc describe imagepruners.imageregistry.operator.openshift.io
Name:         cluster
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  imageregistry.operator.openshift.io/v1
Kind:         ImagePruner
Metadata:
  Creation Timestamp:  2020-04-26T02:19:03Z
  Generation:          1
  Resource Version:    34500
  Self Link:           /apis/imageregistry.operator.openshift.io/v1/imagepruners/cluster
  UID:                 5ff74afc-06ce-414f-8e57-0b5378f86746
Spec:
  Failed Jobs History Limit:      3
  Keep Tag Revisions:             3
  Schedule:
  Successful Jobs History Limit:  3
  Suspend:                        true
Status:
  Conditions:
    Last Transition Time:  2020-04-26T02:19:03Z
    Message:               Pruner CronJob has been created
    Reason:                Ready
    Status:                True
    Type:                  Available
    Last Transition Time:  2020-04-26T02:19:03Z
    Message:               Pruner completed successfully
    Reason:                Complete
    Status:                False
    Type:                  Failed
    Last Transition Time:  2020-04-26T02:19:03Z
    Message:               The pruner job has been suspended.
    Reason:                Suspended
    Status:                False
    Type:                  Scheduled
  Observed Generation:     1
Events:                    <none>
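The CR above was created with Suspend: true, so the pruner CronJob exists but never runs. A minimal sketch of flipping that field from the command line (the `suspend` field name comes from the spec above; the exact patch invocation is my assumption about how an admin would apply it):

$ oc patch imagepruner.imageregistry.operator.openshift.io/cluster \
    --type merge -p '{"spec":{"suspend":false}}'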
I can also see the alert, shown below:

$ oc get PrometheusRule -n openshift-image-registry image-registry-operator-alerts -o yaml
  - name: ImagePruner
    rules:
    - alert: ImagePruningDisabled
      annotations:
        message: |
          Automatic image pruning is not enabled. Regular pruning of images
          no longer referenced by ImageStreams is strongly recommended to
          ensure your cluster remains healthy. To remove this warning,
          install the image pruner by creating an
          imagepruner.imageregistry.operator.openshift.io resource with the
          name `cluster`. Ensure that the `suspend` field is set to `false`.
      expr: image_registry_operator_image_pruner_install_status < 2
      labels:
        severity: warning
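Following the alert's own remediation text, a minimal sketch of the resource it asks for. The name `cluster` and `suspend: false` are required by the alert message; the remaining values are illustrative, copied from the `oc describe` output above, and the empty `schedule` is an assumption that the operator then falls back to its default cron schedule:

apiVersion: imageregistry.operator.openshift.io/v1
kind: ImagePruner
metadata:
  name: cluster                   # the alert requires exactly this name
spec:
  suspend: false                  # must be false to clear ImagePruningDisabled
  schedule: ""                    # empty: let the operator choose its default
  keepTagRevisions: 3             # matches Keep Tag Revisions above
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3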
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581
I performed an in-place upgrade from 4.3.29 to 4.4.14 (both the latest stable releases at the time) and ran into this issue today.
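For anyone checking whether their cluster is in the same state, a quick sketch using the resource and field discussed above (standard `oc get` with a JSONPath template; "true" here corresponds to the suspended state that fires the alert):

$ oc get imagepruner.imageregistry.operator.openshift.io/cluster \
    -o jsonpath='{.spec.suspend}{"\n"}'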