+++ This bug was initially created as a clone of Bug #1835628 +++
Description of problem:
Running 'oc adm drain NODE' without the --ignore-daemonsets and --delete-local-data flags results in an error:
error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): openshift-cluster-node-tuning-operator/tuned-2kls9, openshift-dns/dns-default-c675s, openshift-image-registry/node-ca-d57th, openshift-machine-config-operator/machine-config-daemon-c5kx4, openshift-monitoring/node-exporter-jf5cr, openshift-multus/multus-cxqwz, openshift-ovn-kubernetes/ovnkube-node-zbzpv, openshift-ovn-kubernetes/ovs-node-kbjgf
cannot delete Pods with local storage (use --delete-local-data to override): openshift-image-registry/image-registry-5bcb67497d-gvq6b
Version-Release number of selected component (if applicable):
Client Version: 4.5
Server Version: 4.5
How reproducible:
Always with the 4.5 client version
Steps to Reproduce:
1. Run 'oc adm drain NODE'

Actual results:
Error reported, drain aborted

Expected results:
Node drained, application pods evicted

Additional info:
1. With Client Version 4.4 there is no problem running the command without the flags.
2. A similar problem was reported for CNV (see linked bug). Our setup is bare metal, without CNV.
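The error message itself names the two flags that let the drain proceed. A minimal sketch of that invocation (NODE is a placeholder for the actual node name; running it requires a live cluster and cluster-admin credentials):

```shell
# Pass both flags the error message suggests:
#   --ignore-daemonsets  proceed despite DaemonSet-managed pods
#                        (the DaemonSet controller would recreate them anyway)
#   --delete-local-data  allow evicting pods that use emptyDir local storage
#                        (that data is lost when the pod is evicted)
oc adm drain NODE --ignore-daemonsets --delete-local-data
```

Note that --delete-local-data discards any emptyDir contents of the evicted pods, so it should only be used when that data is disposable.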
--- Additional comment from Maciej Szulik on 2020-05-14 13:40:00 CEST ---
--- Additional comment from Maciej Szulik on 2020-05-14 14:13:21 CEST ---
This is not a bug; it was reported upstream in https://github.com/kubernetes/kubectl/issues/803
and only kubectl 1.17 and the accompanying oc 4.4 are affected. If you try an older version of oc, 4.3 or 4.2,
you'll get a similar error. I'm closing this as not a bug and I'll try to cherry-pick the fix into 4.4.
Actually, it's the opposite: we need to pick the https://github.com/kubernetes/kubernetes/pull/87361 fix so that
oc adm drain warns about daemonsets and local storage.
PR waiting in queue https://github.com/openshift/oc/pull/420
Confirmed with the latest oc: draining a node with DaemonSet-managed pods or pods with local storage attached gives the warning and aborts the drain.
[root@dhcp-140-138 ~]# oc version -o yaml
[root@dhcp-140-138 ~]# oc adm drain node/ip-10-0-187-6.us-east-2.compute.internal
error: unable to drain node "ip-10-0-187-6.us-east-2.compute.internal", aborting command...
There are pending nodes to be drained:
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): openshift-cluster-node-tuning-operator/tuned-m4rdr, openshift-dns/dns-default-6lhrh, openshift-image-registry/node-ca-6lkn2, openshift-machine-config-operator/machine-config-daemon-hzg4r, openshift-monitoring/node-exporter-n4k45, openshift-multus/multus-647pc, openshift-sdn/ovs-mzkjd, openshift-sdn/sdn-tnd4z
cannot delete Pods with local storage (use --delete-local-data to override): openshift-monitoring/alertmanager-main-2, openshift-monitoring/kube-state-metrics-5595b5958b-bzcpj
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.