Bug 1826457 - oc get nodemaintenances should print more meaningful name of node in maintenance for CLI. openshift-api should generate more meaningful name when putting node in maintenance
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Console Metal3 Plugin
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.6.0
Assignee: Jiri Tomasek
QA Contact: Yanping Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-04-21 17:58 UTC by mlammon
Modified: 2020-10-27 15:58 UTC (History)
8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 15:58:26 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift console pull 5436 (closed): Bug 1826457: Include node name in node maintenance CR name by default. Last updated 2020-09-30 19:08:44 UTC
Red Hat Product Errata RHBA-2020:4196. Last updated 2020-10-27 15:58:41 UTC

Description mlammon 2020-04-21 17:58:20 UTC
Description of problem:
oc get nodemaintenances should print more meaningful name of node in maintenance for CLI.  openshift-api should generate more meaningful name when putting node in maintenance

The issue is not with the UI but with the CLI: `oc get nodemaintenances` shows only an opaque generated name, so the user must run an additional 'oc describe nodemaintenance <name>' to determine which node is under maintenance.


Steps to reproduce:
1. Deploy cluster with CNV (which has the node-maintenance-operator)
2. Compute->Nodes -> 3 dots on right of any node, Start Maintenance

Note: The UI clearly shows the node as "Under maintenance"; from the CLI, however,
the resource appears under a generated name like "nm-xxxxx".

Here is the API request the console makes (captured with Chrome DevTools - Inspect):
https://console-openshift-console.apps.ocp-edge-cluster-rdu2-0.qe.lab.redhat.com/api/kubernetes/apis/kubevirt.io/v1alpha1/nodemaintenances

{"apiVersion":"kubevirt.io/v1alpha1","items":[{"apiVersion":"kubevirt.io/v1alpha1","kind":"NodeMaintenance","metadata":{"creationTimestamp":"2020-04-21T17:40:29Z","finalizers":["foregroundDeleteNodeMaintenance"],"generateName":"nm-","generation":1,"name":"nm-qrmbd","resourceVersion":"85387","selfLink":"/apis/kubevirt.io/v1alpha1/nodemaintenances/nm-qrmbd","uid":"3c1e99ef-60a0-4a55-b661-b32347f5ac92"},"spec":{"nodeName":"master-0-0","reason":"replace server"},"status":{"evictionPods":2,"phase":"Succeeded","totalpods":27}}],"kind":"NodeMaintenanceList","metadata":{"continue":"","resourceVersion":"87977","selfLink":"/apis/kubevirt.io/v1alpha1/nodemaintenances"}}

The CLI user has to take an extra step to discover that it is the master-0-0 node (in this case):


# view from CLI
[root@sealusa9 ~]# oc get nodemaintenances
NAME       AGE
nm-qrmbd   5m54s
[root@sealusa9 ~]# oc describe nodemaintenance nm-qrmbd
Name:         nm-qrmbd
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  kubevirt.io/v1alpha1
Kind:         NodeMaintenance
Metadata:
  Creation Timestamp:  2020-04-21T17:40:29Z
  Finalizers:
    foregroundDeleteNodeMaintenance
  Generate Name:     nm-
  Generation:        1
  Resource Version:  85387
  Self Link:         /apis/kubevirt.io/v1alpha1/nodemaintenances/nm-qrmbd
  UID:               3c1e99ef-60a0-4a55-b661-b32347f5ac92
Spec:
  Node Name:  master-0-0
  Reason:     replace server
Status:
  Eviction Pods:  2
  Phase:          Succeeded
  Totalpods:      27
Events:           <none>
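As a hypothetical interim workaround (field paths taken from the describe output above, not from the bug), the extra 'oc describe' step can be avoided by asking oc to print spec.nodeName as an additional column:

```shell
#!/bin/sh
# Hypothetical workaround: surface spec.nodeName next to the generated CR
# name so a second 'oc describe' command is not needed. Field paths are
# taken from the describe output shown above.
COLUMNS='NAME:.metadata.name,NODE:.spec.nodeName,PHASE:.status.phase'
# Guarded so the snippet is a no-op on machines without the oc client.
if command -v oc >/dev/null 2>&1; then
  oc get nodemaintenances -o custom-columns="$COLUMNS"
fi
```

`-o custom-columns` is a standard kubectl/oc output option, so this works against any cluster version; it only changes how the list is printed, not what is stored.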

Comment 3 Andrew Beekhof 2020-04-23 00:21:39 UTC
I believe the name is generated based on what it is created with.
Did you use the UI or CLI here?

Comment 4 Andrew Beekhof 2020-05-13 12:47:27 UTC
Michael, can you have a chat with Jiri Tomasek and see how they are creating the NMO objects.
We'd want them to use the node name as a prefix (at least, possibly with no suffix)
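For context, a minimal sketch of what changing the prefix buys: when a client sets metadata.generateName instead of metadata.name, the API server appends a short random suffix to the prefix. The sketch below models that behavior; the exact alphabet and suffix length are assumptions based on Kubernetes' apimachinery name generator, which matches the suffixes seen in this bug ("nm-qrmbd", "worker-0-3-8b9hx").

```python
import random

# Assumed alphabet/length, modeled on k8s.io/apimachinery/pkg/util/rand:
# lowercase consonants plus digits that are hard to confuse with letters.
ALPHABET = "bcdfghjklmnpqrstvwxz2456789"
SUFFIX_LEN = 5

def generate_name(prefix: str) -> str:
    """Mimic the API server: append a random suffix to generateName."""
    suffix = "".join(random.choice(ALPHABET) for _ in range(SUFFIX_LEN))
    return prefix + suffix

# With the fix, the console sends generateName '<nodeName>-' instead of
# the fixed 'nm-', so the owning node is visible in 'oc get' output.
print(generate_name("worker-0-3-"))  # e.g. worker-0-3-8b9hx
```

The suffix keeps names unique if the same node is put into maintenance more than once over time, which is why dropping it entirely (as comment 4 suggests as an option) is a trade-off.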

Comment 7 mlammon 2020-07-28 13:21:32 UTC

[root@sealusa6 ~]# oc get clusterversion
NAME      VERSION                        AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.ci-2020-07-21-114552   True        False         15h     Cluster version is 4.6.0-0.ci-2020-07-21-114552


I put worker-0-3 into node maintenance and we now have a meaningful name from CLI!!

[root@sealusa6 ~]# oc get nodemaintenances
NAME               AGE
worker-0-3-8b9hx   3m39s


[root@sealusa6 ~]# oc describe nodemaintenance worker-0-3-8b9hx
Name:         worker-0-3-8b9hx
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  nodemaintenance.kubevirt.io/v1beta1
Kind:         NodeMaintenance
Metadata:
  Creation Timestamp:  2020-07-28T13:15:15Z
  Finalizers:
    foregroundDeleteNodeMaintenance
  Generate Name:  worker-0-3-
  Generation:     1
  Managed Fields:
    API Version:  nodemaintenance.kubevirt.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName:
      f:spec:
        .:
        f:nodeName:
        f:reason:
    Manager:      Mozilla
    Operation:    Update
    Time:         2020-07-28T13:15:15Z
    API Version:  nodemaintenance.kubevirt.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"foregroundDeleteNodeMaintenance":
      f:status:
        .:
        f:evictionPods:
        f:phase:
        f:totalpods:
    Manager:         node-maintenance-operator
    Operation:       Update
    Time:            2020-07-28T13:16:08Z
  Resource Version:  905700
  Self Link:         /apis/nodemaintenance.kubevirt.io/v1beta1/nodemaintenances/worker-0-3-8b9hx
  UID:               63f8bef3-ae6c-4857-a418-ee4de7e034d8
Spec:
  Node Name:  worker-0-3
  Reason:     test
Status:
  Eviction Pods:  4
  Phase:          Succeeded
  Totalpods:      16
Events:           <none>

Comment 9 errata-xmlrpc 2020-10-27 15:58:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

