Bug 1833256 - [vSphere] Default resource pool resolves to multiple instances
Summary: [vSphere] Default resource pool resolves to multiple instances
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.5.0
Assignee: Alexander Demicev
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks: 1826017
 
Reported: 2020-05-08 08:43 UTC by Jianwei Hou
Modified: 2020-07-13 17:36 UTC
CC: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:36:10 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift machine-api-operator pull 585 0 None closed Bug 1833256: [vSphere] Fail machine if multiple resource pools found 2020-12-10 12:57:27 UTC
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:36:28 UTC

Comment 1 Alexander Demicev 2020-05-11 10:29:32 UTC
It should be possible to specify a resource pool in the provider spec:

providerSpec:
   workspace:
     resourcePool: ""
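
For example (a minimal sketch only; every value below is a placeholder, not taken from this cluster, and the resourcePool path must point at a pool that actually exists in the target vCenter):

```
providerSpec:
  value:
    workspace:
      datacenter: <datacenter>
      datastore: <datastore>
      folder: /<datacenter>/vm/<cluster-id>
      # hypothetical inventory path; replace with an existing pool in the target vCenter
      resourcePool: /<datacenter>/host/<cluster>/Resources/<pool-name>
      server: <vcenter-server>
```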

Comment 2 Jianwei Hou 2020-05-11 14:25:17 UTC
Thank you very much Alexander.

My original report was inaccurate. The actual problem is that the issue can be reproduced at installation time when there is no resource pool.

I created a resource pool and updated the machineset following comment 1; the machines were created and successfully became worker nodes.

Comment 3 Jianwei Hou 2020-05-11 14:35:58 UTC
Still, this looks like a problem when vSphere is configured differently.

A machineset without a resourcePool works on vcsa-qe.vmware.devcluster.openshift.com:
```
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  creationTimestamp: "2020-05-11T09:01:50Z"
  generation: 2
  labels:
    machine.openshift.io/cluster-api-cluster: jima-ipishared-5ddjd
  name: jima-ipishared-5ddjd-worker-new
  namespace: openshift-machine-api
  resourceVersion: "234618"
  selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/jima-ipishared-5ddjd-worker-new
  uid: c1d9e6ac-10ed-479f-bd07-3aba0be782af
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: jima-ipishared-5ddjd
      machine.openshift.io/cluster-api-machineset: jima-ipishared-5ddjd-worker
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: jima-ipishared-5ddjd
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: jima-ipishared-5ddjd-worker
    spec:
      metadata: {}
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          credentialsSecret:
            name: vsphere-cloud-credentials
          diskGiB: 50
          kind: VSphereMachineProviderSpec
          memoryMiB: 8192
          metadata:
            creationTimestamp: null
          network:
            devices:
            - networkName: VM Network
          numCPUs: 4
          numCoresPerSocket: 1
          snapshot: ""
          template: jima-ipishared-5ddjd-rhcos
          userDataSecret:
            name: worker-user-data
          workspace:
            datacenter: dc1
            datastore: nvme-ds1
            folder: /dc1/vm/jima-ipishared-5ddjd
            server: vcsa-qe.vmware.devcluster.openshift.com
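            # no resourcePool is set under workspace; the default pool lookup still works on this vCenter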
status:
  availableReplicas: 2
  fullyLabeledReplicas: 2
  observedGeneration: 2
  readyReplicas: 2
  replicas: 2
```

A machineset without a resourcePool does not work on dhcp-8-30-198.lab.eng.rdu2.redhat.com until a resourcePool is added to its providerSpec:

```
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  creationTimestamp: "2020-05-11T12:54:54Z"
  generation: 7
  labels:
    machine.openshift.io/cluster-api-cluster: cloud-s2bh4
  name: cloud-s2bh4-worker
  namespace: openshift-machine-api
  resourceVersion: "38442"
  selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/cloud-s2bh4-worker
  uid: 27412538-cd50-4f5b-a52b-0977dff83eb4
spec:
  replicas: 3
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: cloud-s2bh4
      machine.openshift.io/cluster-api-machineset: cloud-s2bh4-worker
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: cloud-s2bh4
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: cloud-s2bh4-worker
    spec:
      metadata: {}
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          credentialsSecret:
            name: vsphere-cloud-credentials
          diskGiB: 120
          kind: VSphereMachineProviderSpec
          memoryMiB: 8192
          metadata:
            creationTimestamp: null
          network:
            devices:
            - networkName: VM Network
          numCPUs: 2
          numCoresPerSocket: 1
          snapshot: ""
          template: cloud-s2bh4-rhcos
          userDataSecret:
            name: worker-user-data
          workspace:
            datacenter: Datacenter
            datastore: datastore1
            folder: /Datacenter/vm/cloud-s2bh4
            resourcepool: cloud
            server: dhcp-8-30-198.lab.eng.rdu2.redhat.com
status:
  availableReplicas: 2
  fullyLabeledReplicas: 3
  observedGeneration: 7
  readyReplicas: 2
  replicas: 3
```

Comment 8 Jianwei Hou 2020-05-25 06:23:42 UTC
Tested in 4.5.0-0.nightly-2020-05-24-223848. With two resource pools of the same name, the machine goes into the Failed phase with "multiple resource pools found, specify one in config".

The original issue can be worked around by setting a resourcePool in the machineset's providerSpec.
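
For reference, a minimal sketch of that workaround, using the workspace values already shown for the second cluster in comment 3 (any other pool that exists in that vCenter would work as well):

```
providerSpec:
  value:
    workspace:
      datacenter: Datacenter
      datastore: datastore1
      folder: /Datacenter/vm/cloud-s2bh4
      # pool name (or full inventory path) that exists in the target vCenter
      resourcepool: cloud
      server: dhcp-8-30-198.lab.eng.rdu2.redhat.com
```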

Comment 9 errata-xmlrpc 2020-07-13 17:36:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

