Description of problem: Upgrading a cluster is a task that requires proper planning with respect to workload distribution and management. New VMs being spawned by different users adds complexity to this process, and we would like a way to prevent those operations while otherwise allowing normal operation of the cluster. We would like a new cluster scheduling policy with the following criteria:
- block new VMs from running
- allow HA VM failover
- allow live migration
- ability to switch to this policy on a live cluster
- API/SDK/Ansible support for this change of policy
The output of a failed VM run should indicate that a cluster maintenance policy is in place and is preventing new VMs from running. A rough SDK sketch of the policy switch follows below.
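To illustrate the API/SDK requirement, here is a minimal sketch of switching a cluster to such a policy with the Python SDK (ovirtsdk4). The connection details, the cluster name "Default", and the policy name "cluster_maintenance" are assumptions for the example, not a confirmed interface:

# Sketch only: switch a cluster's scheduling policy via the Python SDK.
# URL, credentials, and names below are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

system = connection.system_service()

# Look up the assumed predefined cluster_maintenance scheduling policy by name.
policy = next(
    p for p in system.scheduling_policies_service().list()
    if p.name == 'cluster_maintenance'
)

# Apply it to the cluster that is about to be upgraded.
cluster = system.clusters_service().list(search='name=Default')[0]
system.clusters_service().cluster_service(cluster.id).update(
    types.Cluster(scheduling_policy=types.SchedulingPolicy(id=policy.id))
)

connection.close()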
Please finally remove the InClusterUpgrade policy. It was supposed to be part of rmohr's effort when it was introduced in 3.6.z, but it still exists. It has been useless and misleading ever since 4.0.
OK, so we discussed this with Moran and we are fine with the following limitations of scope, but they have to be explicitly documented:
- manually starting a new HA VM is not covered by the policy (it will be allowed)
- any user can create an HA VM
Moving to 4.1.4, as it seems to have missed 4.1.3.
*** Bug 1145764 has been marked as a duplicate of this bug. ***
- Still missing a patch to hide/remove the InClusterUpgrade policy, whose name is confusingly close to what this RFE is trying to address (upgrading clusters).
- Need to update the documentation to remove InClusterUpgrade from the admin guide as well, to prevent the implied need to use it for "older" to "newer" OS updates.
Verified on rhevm-4.1.4.1-0.1.el7.noarch

Changed the cluster scheduling policy to cluster_maintenance.

Scenario 1:
===========
1) Start a non-HA VM - FAILED

Scenario 2:
===========
1) Migrate a non-HA VM (the VM was started before the policy was applied) - SUCCEEDED

Scenario 3:
===========
1) Update a non-HA VM to be HA and start it - SUCCEEDED

Scenario 4:
===========
1) Kill an HA VM; the engine must restart it - SUCCEEDED

Scenario 5:
===========
1) Put a host with HA and non-HA VMs into maintenance - SUCCEEDED
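For reference, scenario 1 can also be reproduced from the Python SDK along these lines. This is only a sketch: the connection details and the VM name are assumptions, and the exact error text depends on the engine version:

# Sketch only: with the cluster_maintenance policy in place, starting a
# non-HA VM is expected to fail. Connection details and VM name are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=non_ha_vm')[0]

try:
    vms_service.vm_service(vm.id).start()
except sdk.Error as e:
    # The failure reason should mention that the cluster maintenance
    # policy is preventing new VMs from running.
    print('Start failed as expected:', e)

connection.close()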
Created attachment 1351574 [details] ClusterInUpgrade filter

The ClusterInUpgrade filter still appears in the New Scheduling Policy window. The Cluster_Maintenance filter has not been added.
This is expected. We removed the predefined in_cluster_upgrade policy and added a predefined cluster_maintenance one, but we kept the policy units intact. Both the ClusterInMaintenance and InClusterUpgrade pairs (filter + weight) should still be visible in the drag-and-drop boxes.
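To double-check this distinction between policies and policy units from the API, something along these lines should work with the Python SDK. The names printed depend on the installed engine, so this is only a sketch with placeholder connection details:

# Sketch only: list the predefined scheduling policies and the available
# policy units, to show that removing the in_cluster_upgrade policy
# does not remove its filter/weight units.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

system = connection.system_service()

print('Scheduling policies:')
for policy in system.scheduling_policies_service().list():
    print(' -', policy.name)

print('Scheduling policy units:')
for unit in system.scheduling_policy_units_service().list():
    print(' -', unit.name, unit.type)

connection.close()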