Bug 1862555 - New machineset not applying node labels correctly
Summary: New machineset not applying node labels correctly
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.7.0
Assignee: Gal Zaidman
QA Contact: Lucie Leistnerova
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-31 17:18 UTC by John Berninger
Modified: 2023-02-08 08:58 UTC (History)
9 users (show)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-15 14:41:28 UTC
Target Upstream Version:
Embargoed:
Flags: lmartinh: needinfo-


Attachments (Terms of Use)
oc logs -c machine-controller machine-api-controllers-8555b46f47-ppwrx -n openshift-machine-api (30.16 KB, text/plain)
2020-08-17 13:52 UTC, John Berninger

Description John Berninger 2020-07-31 17:18:28 UTC
Description of problem:
When I create a new machineset following the instructions at https://docs.openshift.com/container-platform/4.5/machine_management/creating-infrastructure-machinesets.html, the nodes that are created do not have the labels contained in the MachineSet definition.

Version-Release number of selected component (if applicable):
OCP 4.5

How reproducible:
Always

Steps to Reproduce:
1. Install a 4.5 IPI cluster on RHV
2. Create a new MachineSet with the definition below
3. Wait for the node to spin up, check its labels, and notice the desired label is not present
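
The reproduction steps above can be sketched as a few CLI commands; the file name and node name are placeholders based on the definition below, and this assumes cluster-admin access:

```shell
# Create the MachineSet (hypothetical file name for the definition below).
oc apply -f ocp4-69ftl-infra-machineset.yaml

# Watch for the new Machine and its backing Node.
oc get machines -n openshift-machine-api \
  -l machine.openshift.io/cluster-api-machineset=ocp4-69ftl-infra -w

# Once the Node is Ready, check whether the infra role label was applied.
oc get node <new-node-name> --show-labels | grep node-role.kubernetes.io/infra \
  || echo "infra label missing"
```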

Actual results:
New node:
```
$ oc get node <name> --show-labels
NAME                     STATUS   ROLES    AGE   VERSION           LABELS
ocp4-69ftl-infra-crb2h   Ready    worker   24h   v1.18.3+b74c5ed   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=ocp4-69ftl-infra-crb2h,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
```

Expected results:
Expect to see "node-role.kubernetes.io/infra=" in `oc get node` output

Additional info:
Machineset definition:
```
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  generation: 1
  labels:
    machine.openshift.io/cluster-api-cluster: ocp4-69ftl
  name: ocp4-69ftl-infra
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: ocp4-69ftl
      machine.openshift.io/cluster-api-machineset: ocp4-69ftl-infra
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: ocp4-69ftl
        machine.openshift.io/cluster-api-machine-role: infra
        machine.openshift.io/cluster-api-machine-type: infra
        machine.openshift.io/cluster-api-machineset: ocp4-69ftl-infra
    spec:
      metadata: 
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1
          cluster_id: aa612074-813c-4cbf-a83d-4d4fbd533ad0
          cpu:
            cores: 4
            sockets: 1
            threads: 1
          credentialsSecret:
            name: ovirt-credentials
          id: ""
          kind: OvirtMachineProviderSpec
          memory_mb: 8192
          metadata:
            creationTimestamp: null
          name: ""
          os_disk:
            size_gb: 120
          template_name: ocp4-69ftl-rhcos
          type: server
          userDataSecret:
            name: worker-user-data
          tags:
          - ocp4-69ftl-infra
```
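
To narrow down where the label is being dropped, it can help to compare the Machine object with its Node: if the Machine carries the `spec.metadata.labels` entry but the Node does not, the machine-to-node label sync is the failing step; if the Machine itself lacks the label, MachineSet-to-Machine propagation is at fault. A minimal sketch (the machine/node name is a placeholder):

```shell
# Labels the MachineSet asked the controller to put on the node.
oc get machine -n openshift-machine-api ocp4-69ftl-infra-crb2h \
  -o jsonpath='{.spec.metadata.labels}{"\n"}'

# Labels actually present on the node.
oc get node ocp4-69ftl-infra-crb2h \
  -o jsonpath='{.metadata.labels}{"\n"}'
```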

Comment 1 Luis Martinho 2020-08-12 15:25:09 UTC
Hi,

I have a customer with time constraints trying to go live in a couple of weeks, and they also want to create infra nodes on IPI RHV.

In OpenShift 4.5.5 RHV IPI, I also tried to create a new MachineSet for infra nodes, configured as in [1], and the nodes created still show the worker role label instead of infra.

Was the newly created MachineSet configured correctly for IPI RHV? (The configuration I used was similar to the one in the description.) Or do you suggest configuring it differently?

Can this problem also be analyzed for OCP 4.5? Since this customer has time constraints, it would be important to know what to expect from this bug in terms of time and complexity to fix, and finally to get an ETA for a fix.

Thank you
Luis


[1] spec.template.metadata.labels:
        machine.openshift.io/cluster-api-machine-role: infra
        machine.openshift.io/cluster-api-machine-type: infra

    spec.template.spec.metadata.labels:
        node-role.kubernetes.io/infra: ""

Comment 2 Luis Martinho 2020-08-12 15:38:23 UTC
Adding to the previous questions, what data will you need to troubleshoot this?

Comment 8 John Berninger 2020-08-17 13:52:32 UTC
Created attachment 1711619 [details]
oc logs -c machine-controller machine-api-controllers-8555b46f47-ppwrx -n openshift-machine-api

Comment 9 John Berninger 2020-08-17 13:55:20 UTC
The process described in KCS 5297401 (https://access.redhat.com/solutions/5297401) did resolve the issue and apply the 'infra' label correctly, so I can confirm there is a workaround available.
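
The KCS article itself is behind the customer portal; as a stopgap that may or may not match its steps, the role label can always be applied to an existing node by hand. Note that a manual label does not persist across node replacement, so the MachineSet still needs to propagate it correctly for scale-out:

```shell
# Hypothetical node name; manual labeling is a stopgap only.
oc label node ocp4-69ftl-infra-crb2h node-role.kubernetes.io/infra=""
```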

Comment 16 Gal Zaidman 2020-11-15 14:41:28 UTC
Closing this as not a bug because:
- Verified this is working by creating the same MachineSet and seeing that the machines are running with the correct labels (verification was done on OCP 4.6)
- There has been no reply from the customer for ~2 months

Feel free to reopen if you feel this is still needed.

