Description of problem:

Currently, we support mixing VMware [1] or KVM [2] cluster nodes with bare-metal cluster nodes. We explicitly do **not** support mixing RHV with bare-metal [3] or mixing cluster nodes across different hypervisor platforms [4]. We make no statement at all (AFAICT) about mixing other types of virtualized cluster nodes (including cloud) with bare-metal cluster nodes, or about mixing different bare-metal platforms. So this RFE starts out covering:

- Case 1: Generalized virtual + bare-metal
- Case 2: Generalized virtual + other virtual
- Case 3: Generalized bare-metal + other bare-metal

If no technical limitation prevents it, it would be nice to offer general support for mixed platforms, provided that each platform is itself supported for RHEL HA. Otherwise, it would be good to document why VMware + bare-metal and KVM + bare-metal are special cases. We can do this in a private note on the support policy articles, for the support team's reference.

I'm setting this to low priority due to a lack of documented demand. This is more of a proactive "nice to have," in case we can avoid unnecessary obstacles in the future.

It would be fine to split the above cases out into their own BZs; I'm starting them in a single one for discussion purposes. Apologies if this has already been discussed or if other BZs are already open.

[1] Support Policies for RHEL High Availability Clusters - VMware Virtual Machines as Cluster Members (https://access.redhat.com/articles/3131271)
[2] Support Policies for RHEL High Availability Clusters - RHEL libvirt/KVM Virtual Machines as Cluster Members (https://access.redhat.com/articles/3131301)
[3] Support Policies for RHEL High Availability Clusters - RHV Virtual Machines as Cluster Members (https://access.redhat.com/articles/3131291)
[4] Support Policies for RHEL High Availability Clusters - General Conditions with Virtualized Cluster Members (https://access.redhat.com/articles/3131111)
After evaluating this issue, there are no plans to address it further or to fix it in an upcoming release; it is therefore being closed. If plans change and this issue is targeted for an upcoming release, the bug can be reopened.