The dns-operator and dns-default pods do not provide a termination message, which hinders debugging when the pods are crash-looping. At minimum, the pods' terminationMessagePolicy should be set to "FallbackToLogsOnError". See https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/#customizing-the-termination-message

Expected Results:
The termination message should appear in a pod container's .status.lastState.terminated.message field.
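For illustration, a minimal sketch of what the requested setting looks like in a container spec. The names and image below are hypothetical placeholders, not the operator's actual manifest; only the terminationMessagePath/terminationMessagePolicy fields are the point here.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-dns               # hypothetical name for illustration
spec:
  selector:
    matchLabels:
      app: example-dns
  template:
    metadata:
      labels:
        app: example-dns
    spec:
      containers:
      - name: dns
        image: example/dns:latest                      # placeholder image
        terminationMessagePath: /dev/termination-log   # the default path
        # On a non-zero exit with an empty termination-log file, the kubelet
        # falls back to the tail of the container's log as the message:
        terminationMessagePolicy: FallbackToLogsOnError
```

With this policy in place, a crash-looping container's last log output should surface in .status.containerStatuses[].lastState.terminated.message, retrievable with `oc get pod <pod> -o jsonpath=...` without digging through node-level logs.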
Moving the target release to 4.2 since this should not block the release. But if the fix lands soon, please move it back.
Fixed by https://github.com/openshift/cluster-dns-operator/pull/108
Verified with 4.2.0-0.nightly-2019-06-25-003324; the issue has been fixed.

$ oc get deployment dns-operator -o yaml -n openshift-dns-operator
spec:
  template:
    spec:
      containers:
      - command:
        - dns-operator
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError

$ oc get ds dns-default -o yaml -n openshift-dns
spec:
  template:
    spec:
      containers:
      - name: dns
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
<---snip--->
      - name: dns-node-resolver
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922