Description of problem:
Running 'oc adm drain NODE' without the --ignore-daemonsets and --delete-local-data flags results in an error:
error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): openshift-cluster-node-tuning-operator/tuned-2kls9, openshift-dns/dns-default-c675s, openshift-image-registry/node-ca-d57th, openshift-machine-config-operator/machine-config-daemon-c5kx4, openshift-monitoring/node-exporter-jf5cr, openshift-multus/multus-cxqwz, openshift-ovn-kubernetes/ovnkube-node-zbzpv, openshift-ovn-kubernetes/ovs-node-kbjgf
cannot delete Pods with local storage (use --delete-local-data to override): openshift-image-registry/image-registry-5bcb67497d-gvq6b
Version-Release number of selected component (if applicable):
Client Version: 4.5
Server Version: 4.5
How reproducible:
Always with the 4.5 client version
Steps to Reproduce:
1. Run 'oc adm drain NODE'

Actual results:
Error reported, drain aborted

Expected results:
Node drained, application pods evicted
Additional info:
1. When using client version 4.4, the command runs without these flags with no problem.
2. A similar problem was reported for CNV (see the linked bug). Our setup is bare metal without CNV.
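For completeness, the drain succeeds when both flags from the error messages are passed explicitly. A minimal sketch of the invocation (the node name "worker-0" is a placeholder, not taken from this report; it is not asserted here that omitting the flags should succeed):

```shell
# Skip DaemonSet-managed pods and allow eviction of pods that use local
# (emptyDir) storage, as the error messages suggest.
# "worker-0" is a placeholder node name.
oc adm drain worker-0 --ignore-daemonsets --delete-local-data
```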
*** Bug 1835629 has been marked as a duplicate of this bug. ***
This is not a bug; it was reported upstream in https://github.com/kubernetes/kubectl/issues/803,
and only kubectl 1.17 and the accompanying oc 4.4 are affected. If you try an older version of oc, 4.3 or 4.2,
you'll get a similar error. I'm closing this as not a bug, and I'll try to cherry-pick the fix into 4.4.
It looks like, to merge this into 4.4, it needs to go through QA, so sending to QA.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.