Current state: cluster-api-provider has no logic for handling affinity groups.

What we want: the Cluster API provider should expose logic for adding a created machine to existing affinity groups by adding an affinity-groups field to the machine spec. This means that when you create a MachineSet, you can decide that all of its machines will join an *existing affinity group* in the cluster.

What is not supported:
1. Adding the machine to an affinity group that is not in the cluster: if the affinity group doesn't exist when the machine is created, the machine will not be added to the group.
2. Updating the field on an existing machine: if you update the field, nothing will happen.
3. Creating/deleting affinity groups from the cluster.

How to test?

Positive flow:
1. Create 2 affinity groups on the RHV cluster (can be done via the UI), for example "test-affinity-group-1" and "test-affinity-group-2".
2. Create a MachineSet with the affinity groups, for example:
```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: ovirt11-CLUSTERNAME-worker-1  # required field; inferred from the machineset labels below
  labels:
    machine.openshift.io/cluster-api-cluster: ovirt11-CLUSTERNAME
    machine.openshift.io/cluster-api-machine-role: worker
    machine.openshift.io/cluster-api-machine-type: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: ovirt11-CLUSTERNAME
      machine.openshift.io/cluster-api-machineset: ovirt11-CLUSTERNAME-worker-1
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: ovirt11-CLUSTERNAME
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: ovirt11-CLUSTERNAME-worker-1
    spec:
      providerSpec:
        value:
          affinity_groups_names:
            - test-affinity-group-1
            - test-affinity-group-2
          apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1
          cluster_id: CLUSTERID
          cpu:
            cores: 8
            sockets: 1
            threads: 1
          credentialsSecret:
            name: ovirt-credentials
          kind: OvirtMachineProviderSpec
          memory_mb: 16000
          os_disk:
            size_gb: 31
          template_name: ovirt11-CLUSTERNAME-rhcos
          type: server
          userDataSecret:
            name: worker-user-data
```
3. Verify that 1 machine is created and added to the affinity groups.
4. Scale to 2 replicas and make sure that another machine is created and added to the affinity groups.
5. Scale down and make sure the machine is deleted and removed from the affinity groups.

Negative flow suggestions (a sketch of the name-resolution check follows this list):
- Try an affinity group that doesn't exist in the RHV cluster -> the machine should fail to create; the machine is in a Down state on RHV and the logs show errors.
- Try an affinity group that exists on a different RHV cluster -> the machine should fail to create; the machine is in a Down state on RHV and the logs show errors.
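As a rough illustration of the negative flow above, here is a minimal Go sketch of how names from `affinity_groups_names` could be resolved against the RHV cluster before VM creation, failing fast when a group is missing. Everything here (the `ovirtClient` interface, `AffinityGroup` type, and `resolveAffinityGroups` helper) is a hypothetical stand-in, not the actual cluster-api-provider API.

```go
package main

import (
	"fmt"
	"log"
)

// AffinityGroup is a minimal stand-in for an RHV affinity group.
type AffinityGroup struct {
	ID   string
	Name string
}

// ovirtClient is a hypothetical client abstraction; the real provider talks
// to the RHV engine through the oVirt SDK.
type ovirtClient interface {
	// ListAffinityGroups returns the affinity groups defined on one RHV cluster.
	ListAffinityGroups(clusterID string) ([]AffinityGroup, error)
}

// resolveAffinityGroups maps the names from affinity_groups_names to group IDs
// and fails fast when a name is missing, mirroring the documented behavior:
// a machine that references a missing group fails to create.
func resolveAffinityGroups(c ovirtClient, clusterID string, names []string) ([]string, error) {
	groups, err := c.ListAffinityGroups(clusterID)
	if err != nil {
		return nil, err
	}
	byName := make(map[string]string, len(groups))
	for _, g := range groups {
		byName[g.Name] = g.ID
	}
	ids := make([]string, 0, len(names))
	for _, n := range names {
		id, ok := byName[n]
		if !ok {
			// Matches the error wording observed during verification below.
			return nil, fmt.Errorf("affinity group %s was not found on cluster %s", n, clusterID)
		}
		ids = append(ids, id)
	}
	return ids, nil
}

// fakeClient lets the sketch run without a live RHV engine.
type fakeClient struct{ groups []AffinityGroup }

func (f fakeClient) ListAffinityGroups(string) ([]AffinityGroup, error) { return f.groups, nil }

func main() {
	c := fakeClient{groups: []AffinityGroup{
		{ID: "ag-1", Name: "test-affinity-group-1"},
		{ID: "ag-2", Name: "test-affinity-group-2"},
	}}
	// Positive flow: both names resolve to IDs.
	if ids, err := resolveAffinityGroups(c, "CLUSTERID", []string{"test-affinity-group-1", "test-affinity-group-2"}); err == nil {
		fmt.Println("resolved:", ids)
	}
	// Negative flow: a missing name fails the machine creation.
	if _, err := resolveAffinityGroups(c, "CLUSTERID", []string{"test122"}); err != nil {
		log.Println("expected failure:", err)
	}
}
```

Per the positive flow above, the provider also removes a machine from its affinity groups when the machine is deleted on scale-down.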
I created a MachineSet after adding an affinity group to the environment - the machine was created without the affinity group.

OCP version: 4.8.0-0.nightly-2021-04-02-002210
OCP version: 4.8.0-0.nightly-2021-04-09-222447
RHV version: 4.4.5.10-0.1

Steps:
1. Check creation of a MachineSet with an existing affinity group - the machine was created successfully with the correct affinity group.
2. Check scale-up - added 2 machines to replicas (oc edit machineset primary-m6qrv-worker-34) - another 2 machines were created on RHV with the correct affinity group.
3. Check scale-down - changed replicas from 3 to 1 - 2 VMs were deleted.
4. Check creation of a MachineSet with a nonexistent affinity group - the correct error message appeared: affinity group test122 was not found on cluster 502babf8-9c05-4738-b5c9-2ac8c33a9648.
5. Check creation of a MachineSet with an affinity group that exists on a different cluster - the correct error message appeared: affinity group test2 was not found on cluster 502babf8-9c05-4738-b5c9-2ac8c33a9648.

Result: the affinity group appears on the relevant machines (a sketch of the membership check behind steps 2-3 follows).
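For the scale-up/scale-down steps, a minimal Go sketch of the membership check. The `checkMembership` helper and the `members` map are hypothetical illustrations; in the actual verification, membership was confirmed through the RHV engine/UI.

```go
package main

import "fmt"

// checkMembership verifies the scale steps: members is the set of VM names
// currently in the affinity group (however it was fetched from the engine),
// vmNames are the MachineSet's VMs, and expectPresent flips between the
// scale-up check (all present) and the scale-down check (all removed).
func checkMembership(members map[string]bool, vmNames []string, expectPresent bool) error {
	for _, vm := range vmNames {
		if members[vm] != expectPresent {
			return fmt.Errorf("vm %s: in group = %v, want %v", vm, members[vm], expectPresent)
		}
	}
	return nil
}

func main() {
	// After scale-up to 3 replicas, all three VMs should be in the group...
	members := map[string]bool{"worker-0": true, "worker-1": true, "worker-2": true}
	fmt.Println(checkMembership(members, []string{"worker-0", "worker-1", "worker-2"}, true))
	// ...and after scale-down to 1 replica, the deleted VMs should be gone.
	delete(members, "worker-1")
	delete(members, "worker-2")
	fmt.Println(checkMembership(members, []string{"worker-1", "worker-2"}, false))
}
```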
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438