Description of problem:

'oc get nodemaintenances' should print a more meaningful name for the node that is in maintenance, i.e. the API should generate a more meaningful object name when a node is put into maintenance. This is not a problem from the UI perspective, but a CLI user has to run an additional 'oc describe nodemaintenance <name>' to determine which node is under maintenance, because the output of 'oc get nodemaintenances' does not show the node name.

Steps to reproduce:
1. Deploy a cluster with CNV (which includes the node-maintenance-operator).
2. Compute -> Nodes -> three-dot menu on the right of any node -> Start Maintenance.

Note: the UI easily shows the node as "Under maintenance"; from the CLI, however, the object appears under a name like "nm-xxxxx".

Here is the API creation, captured with Chrome's Inspect tool. The console creates the object via:

https://console-openshift-console.apps.ocp-edge-cluster-rdu2-0.qe.lab.redhat.com/api/kubernetes/apis/kubevirt.io/v1alpha1/nodemaintenances

and the resulting object, as returned by the list endpoint, is:

{
  "apiVersion": "kubevirt.io/v1alpha1",
  "items": [
    {
      "apiVersion": "kubevirt.io/v1alpha1",
      "kind": "NodeMaintenance",
      "metadata": {
        "creationTimestamp": "2020-04-21T17:40:29Z",
        "finalizers": ["foregroundDeleteNodeMaintenance"],
        "generateName": "nm-",
        "generation": 1,
        "name": "nm-qrmbd",
        "resourceVersion": "85387",
        "selfLink": "/apis/kubevirt.io/v1alpha1/nodemaintenances/nm-qrmbd",
        "uid": "3c1e99ef-60a0-4a55-b661-b32347f5ac92"
      },
      "spec": {
        "nodeName": "master-0-0",
        "reason": "replace server"
      },
      "status": {
        "evictionPods": 2,
        "phase": "Succeeded",
        "totalpods": 27
      }
    }
  ],
  "kind": "NodeMaintenanceList",
  "metadata": {
    "continue": "",
    "resourceVersion": "87977",
    "selfLink": "/apis/kubevirt.io/v1alpha1/nodemaintenances"
  }
}

The CLI user has to go another step further to reveal that it is the master-0-0 node (in this case):

# view from CLI
[root@sealusa9 ~]# oc get nodemaintenances
NAME       AGE
nm-qrmbd   5m54s

[root@sealusa9 ~]# oc describe nodemaintenance nm-qrmbd
Name:         nm-qrmbd
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  kubevirt.io/v1alpha1
Kind:         NodeMaintenance
Metadata:
  Creation Timestamp:  2020-04-21T17:40:29Z
  Finalizers:
    foregroundDeleteNodeMaintenance
  Generate Name:     nm-
  Generation:        1
  Resource Version:  85387
  Self Link:         /apis/kubevirt.io/v1alpha1/nodemaintenances/nm-qrmbd
  UID:               3c1e99ef-60a0-4a55-b661-b32347f5ac92
Spec:
  Node Name:  master-0-0
  Reason:     replace server
Status:
  Eviction Pods:  2
  Phase:          Succeeded
  Totalpods:      27
Events:           <none>
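As a workaround until the name itself carries the node, the mapping can be recovered programmatically from spec.nodeName. A minimal sketch in Python, parsing a NodeMaintenanceList payload like the one above (the JSON here is abbreviated sample data from the capture, not a live API call):

```python
import json

# Abbreviated NodeMaintenanceList payload, as returned by
# GET /apis/kubevirt.io/v1alpha1/nodemaintenances (sample data from the capture above).
payload = """
{
  "kind": "NodeMaintenanceList",
  "items": [
    {
      "metadata": {"name": "nm-qrmbd"},
      "spec": {"nodeName": "master-0-0", "reason": "replace server"}
    }
  ]
}
"""

def maintenance_to_node(list_obj):
    """Map each NodeMaintenance object name to the node it targets."""
    return {item["metadata"]["name"]: item["spec"]["nodeName"]
            for item in list_obj.get("items", [])}

print(maintenance_to_node(json.loads(payload)))
# {'nm-qrmbd': 'master-0-0'}
```

The same mapping is also available in one step from the CLI via oc's custom-columns or jsonpath output formats, but the point of this bug is that the default 'oc get' column hides it.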
I believe the name is generated from the generateName prefix that was set when the object was created. Did you use the UI or the CLI here?
Michael, can you have a chat with Jiri Tomasek and see how they are creating the NMO objects? We'd want them to use the node name as a prefix (at least; possibly with no suffix at all).
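For reference, metadata.generateName works by taking the client-supplied prefix and having the apiserver append a short random suffix, so setting generateName to "<nodeName>-" instead of "nm-" yields names like "worker-0-3-8b9hx". A rough illustration of that behavior (the suffix alphabet and length here mimic the apiserver's, but this is an approximation for illustration, not the upstream implementation):

```python
import random

# Character set similar to the one the Kubernetes apiserver uses for
# generateName suffixes (vowels and ambiguous characters omitted).
# This mimics the behavior; it is not the upstream code.
SUFFIX_ALPHABET = "bcdfghjklmnpqrstvwxz2456789"
SUFFIX_LEN = 5
MAX_NAME_LEN = 63  # DNS-1123 label limit on object names

def generate_name(prefix: str) -> str:
    """Mimic metadata.generateName: truncate the prefix if needed,
    then append a random 5-character suffix."""
    prefix = prefix[:MAX_NAME_LEN - SUFFIX_LEN]
    return prefix + "".join(random.choices(SUFFIX_ALPHABET, k=SUFFIX_LEN))

# With the node name as the prefix, the object name is self-describing:
print(generate_name("worker-0-3-"))  # e.g. worker-0-3-8b9hx
```

With "nm-" as the prefix the suffix is all the name contains, which is exactly the usability problem described above.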
[root@sealusa6 ~]# oc get clusterversion
NAME      VERSION                        AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.ci-2020-07-21-114552   True        False         15h     Cluster version is 4.6.0-0.ci-2020-07-21-114552

I put worker-0-3 into node maintenance and we now have a meaningful name from the CLI!

[root@sealusa6 ~]# oc get nodemaintenances
NAME               AGE
worker-0-3-8b9hx   3m39s

[root@sealusa6 ~]# oc describe nodemaintenance worker-0-3-8b9hx
Name:         worker-0-3-8b9hx
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  nodemaintenance.kubevirt.io/v1beta1
Kind:         NodeMaintenance
Metadata:
  Creation Timestamp:  2020-07-28T13:15:15Z
  Finalizers:
    foregroundDeleteNodeMaintenance
  Generate Name:  worker-0-3-
  Generation:     1
  Managed Fields:
    API Version:  nodemaintenance.kubevirt.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName:
      f:spec:
        .:
        f:nodeName:
        f:reason:
    Manager:      Mozilla
    Operation:    Update
    Time:         2020-07-28T13:15:15Z
    API Version:  nodemaintenance.kubevirt.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"foregroundDeleteNodeMaintenance":
      f:status:
        .:
        f:evictionPods:
        f:phase:
        f:totalpods:
    Manager:           node-maintenance-operator
    Operation:         Update
    Time:              2020-07-28T13:16:08Z
  Resource Version:  905700
  Self Link:         /apis/nodemaintenance.kubevirt.io/v1beta1/nodemaintenances/worker-0-3-8b9hx
  UID:               63f8bef3-ae6c-4857-a418-ee4de7e034d8
Spec:
  Node Name:  worker-0-3
  Reason:     test
Status:
  Eviction Pods:  4
  Phase:          Succeeded
  Totalpods:      16
Events:           <none>
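With the fix, the node can be read straight off the object name. A small sanity check of the new naming convention, using the names from the output above (a hypothetical helper, not part of the operator):

```python
def name_carries_node(name: str, node_name: str) -> bool:
    """True if a NodeMaintenance name is the node name plus a '-<suffix>'."""
    return name.startswith(node_name + "-") and len(name) > len(node_name) + 1

# Old scheme: the name reveals nothing about the node.
print(name_carries_node("nm-qrmbd", "master-0-0"))          # False
# New scheme, as verified above.
print(name_carries_node("worker-0-3-8b9hx", "worker-0-3"))  # True
```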
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196