Description of problem:
When the OCP installer creates master nodes, they are numbered sequentially from 0. Example (six is the name of my OCP cluster):

ocp46ipi-pfrz6-master-0   Ready   master   3d23h   v1.19.0-rc.2+514f31a
ocp46ipi-pfrz6-master-1   Ready   master   3d23h   v1.19.0-rc.2+514f31a
ocp46ipi-pfrz6-master-2   Ready   master   3d23h   v1.19.0-rc.2+514f31a

However, the worker nodes created by the installer all carry the same suffix 0:

ocp46ipi-pfrz6-worker-0-4v8qp   Ready   worker   3d22h   v1.19.0-rc.2+514f31a
ocp46ipi-pfrz6-worker-0-bxhlf   Ready   worker   3d22h   v1.19.0-rc.2+514f31a
ocp46ipi-pfrz6-worker-0-xwhws   Ready   worker   3d22h   v1.19.0-rc.2+514f31a

I understand this happens because the Machine Set is named "ocp46ipi-pfrz6-worker-0" and the individual worker node names are just variations of that Machine Set's name. Still, it would be better to achieve the same sequential numbering that the masters have. Failing that, I suggest removing the number 0 from the default worker Machine Set name altogether: it conveys no useful information and is only confusing, because it *looks like* the nodes are ordered when in fact they are not.

Version-Release number of the following components:
- openshift-install 4.6.0-fc.4
- Red Hat Virtualization 4.4.x

How reproducible:
100 %

Steps to Reproduce:
1. Run openshift-install

Actual results:
# oc get nodes --config=kubeconfig
Flag --config has been deprecated, use --kubeconfig instead
NAME                            STATUS   ROLES    AGE     VERSION
ocp46ipi-pfrz6-master-0         Ready    master   3d23h   v1.19.0-rc.2+514f31a
ocp46ipi-pfrz6-master-1         Ready    master   3d23h   v1.19.0-rc.2+514f31a
ocp46ipi-pfrz6-master-2         Ready    master   3d23h   v1.19.0-rc.2+514f31a
ocp46ipi-pfrz6-worker-0-4v8qp   Ready    worker   3d22h   v1.19.0-rc.2+514f31a
ocp46ipi-pfrz6-worker-0-bxhlf   Ready    worker   3d22h   v1.19.0-rc.2+514f31a
ocp46ipi-pfrz6-worker-0-xwhws   Ready    worker   3d22h   v1.19.0-rc.2+514f31a

Expected results:
Worker nodes should either be numbered sequentially, or the number should be removed from the Machine Set name.
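For illustration, a minimal sketch of where the non-sequential suffixes come from: Machines created from a MachineSet get a random suffix appended to the MachineSet name (Kubernetes-style generateName), so there is no ordering to recover. The alphabet and suffix length below approximate Kubernetes behavior and are assumptions for this sketch, not the actual controller code:

```python
import random

# Assumed suffix alphabet: lowercase alphanumerics with vowels and
# look-alike characters removed, approximating Kubernetes' generateName.
SUFFIX_ALPHABET = "bcdfghjklmnpqrstvwxz2456789"
SUFFIX_LENGTH = 5  # assumed length, matching suffixes like "4v8qp"

def machine_name(machineset_name: str) -> str:
    """Sketch of how a Machine name derives from its MachineSet name:
    the MachineSet name plus a random suffix, nothing sequential."""
    suffix = "".join(random.choice(SUFFIX_ALPHABET) for _ in range(SUFFIX_LENGTH))
    return f"{machineset_name}-{suffix}"

# e.g. "ocp46ipi-pfrz6-worker-0-4v8qp" (suffix varies per run)
print(machine_name("ocp46ipi-pfrz6-worker-0"))
```

Because the suffix is random, the trailing "-0" in the MachineSet name is the only fixed number in the worker node names, which is why every worker appears to be "number 0".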
Closing as duplicate of bug #1817954.

*** This bug has been marked as a duplicate of bug 1817954 ***