====================== RFE TEMPLATE =========================

1. Proposed title of this feature request

RFE for allowing the migration parameter values to be changed globally through the GUI for all RHEV-H hosts

3. What is the nature and description of the request?

> The customer wants to open a feature request for the ability to push configs to all RHEV-H machines. Especially in a large environment, changing the config on every RHEV-H machine would be a fairly daunting task. Alternatively, allow the settings to be changed globally in the GUI, which would be even better.
> The customer wants a feature added to RHEV-M to change the values of the following parameters from the RHEV-M GUI: "migration_max_bandwidth", "max_outgoing_migrations", and "DefaultMaximumMigrationDowntime".

4. Why does the customer need this? (List the business requirements here)

> Customer reply: A single location to manage all hypervisors makes deployments in large environments much easier; if you have to edit every RHEV-H machine one by one every time you do an update, errors become much more likely. Changing RHEV-H manually also means it is no longer the "clean room" hypervisor that it is after a pure install.

5. How would the customer like to achieve this? (List the functional requirements here)

> Customer reply: The real issue is that migrations can time out when the VM has high usage; we were able to work around the issue by changing the timeout on the migration while also changing the max bandwidth. As I assume there are technical reasons for those defaults, being able to change them would make it possible to migrate higher-usage VMs.
> For example, we have some very busy VMs and use max_bandwidth 5000, max outgoing 1, and max downtime 1 second so that live migrations work.

6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?

> Yes, it was closed: Bug#1058562. It was requested by the same customer; we updated him on why it was closed, but the customer still wants these features added to RHEV-M.

8. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?

> Customer reply: The sooner the better :)

9. Is the sales team involved in this request and do they have any additional input?

10. List any affected packages or components.

11. Would the customer be able to assist in testing this functionality if implemented?

> Yes.
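For context, the per-host workaround the customer describes amounts to editing the migration keys in /etc/vdsm/vdsm.conf on each hypervisor. A minimal sketch follows, using the customer's example values; the exact key names are taken from the report, and the scratch-file path is only for illustration (on a real host this would be /etc/vdsm/vdsm.conf followed by a vdsmd restart):

```shell
# Sketch of the per-host change made on each RHEV-H machine.
# Key names come from the report; values are the customer's working example.
# Written to a scratch file here instead of /etc/vdsm/vdsm.conf.
cat > /tmp/vdsm.conf.example <<'EOF'
[vars]
migration_max_bandwidth = 5000
max_outgoing_migrations = 1
EOF

# Sanity check: both migration keys are present.
grep '^migration_max_bandwidth\|^max_outgoing_migrations' /tmp/vdsm.conf.example
```

The third parameter, "DefaultMaximumMigrationDowntime", lives on the engine side rather than on the hosts; it is the kind of value set with `engine-config -s` on the RHEV-M machine (units and defaults depend on the engine version).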
(In reply to Gajanan from comment #0)
> The customer wants to open a feature request for the ability to push configs to all RHEV-H machines. Especially in a large environment, changing the config on every RHEV-H machine would be a fairly daunting task. Alternatively, allow the settings to be changed globally in the GUI, which would be even better.

Deferring to Fabian to respond, but if this is only about migration parameters then please see below.

> The customer wants a feature added to RHEV-M to change the values of the following parameters from the RHEV-M GUI: "migration_max_bandwidth", "max_outgoing_migrations", and "DefaultMaximumMigrationDowntime".
...
> Customer reply: The real issue is that migrations can time out when the VM has high usage; we were able to work around the issue by changing the timeout on the migration while also changing the max bandwidth. As I assume there are technical reasons for those defaults, being able to change them would make it possible to migrate higher-usage VMs.

RFE bug 1252426 is supposed to improve convergence of migrations and also the ability to use different settings/policies per VM.
For RHEV-H 3 we do not plan to introduce such a feature. In RHEV-H 4, the same tooling that is used for RHEL can very likely also be used for RHEV-H, e.g. configuration management tools or something like cluster ssh. Would such tools be sufficient for the customer?
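Until a GUI feature exists, the fan-out that configuration management or cluster ssh would provide can be sketched with a plain loop. Everything below (hostnames, the sed target, the restart command, and the DRY_RUN guard) is illustrative, not a supported procedure:

```shell
# Illustrative fan-out of one vdsm.conf change to many hosts.
# Hostnames, paths, and the restart step are assumptions about the environment.
printf '%s\n' rhevh1.example.com rhevh2.example.com > /tmp/rhevh-hosts.txt

cmd="sed -i 's/^migration_max_bandwidth.*/migration_max_bandwidth = 5000/' /etc/vdsm/vdsm.conf && systemctl restart vdsmd"

while read -r host; do
  # DRY_RUN (default on here) prints what would run; set DRY_RUN=0 to
  # actually execute the command over ssh.
  if [ "${DRY_RUN:-1}" = 1 ]; then
    echo "ssh root@$host \"$cmd\""
  else
    ssh "root@$host" "$cmd"
  fi
done < /tmp/rhevh-hosts.txt
```

Tools like pssh, clush, or an Ansible playbook would do the same thing with better error handling; the point is only that the change is scriptable without per-host manual edits.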
*** Bug 1323940 has been marked as a duplicate of this bug. ***
The scope of this RFE is to ensure that changes to configuration files in /etc will be kept across updates and reboots.
To QE: we have to clarify this RFE bug before qa_ack. Is this RFE split into two RFEs in 4.0? One is in the Engine (bug 1252426 - (migration_improvements) [RFE] Migration improvements (convergence, bandwidth utilization)), and this bug is the Node RFE, right? If so, according to comment 9, what changes will be made on the Node? Does QE only need to add commands to a %post script in the kickstart file to set the configuration items, and then check the values before and after updates? How does QE fully verify this feature enhancement? Thanks.
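If QE does go the kickstart route, the %post section being asked about might look roughly like this. This is only a sketch: the key name comes from the customer's report, and the path and sed expression are assumptions about the installed image.

```shell
%post
# Hypothetical: bake the migration setting into the host at install time,
# so the persistence check only has to compare the value before and after
# an update and reboot.
sed -i 's/^migration_max_bandwidth.*/migration_max_bandwidth = 5000/' /etc/vdsm/vdsm.conf
%end
```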
This feature can be verified by:

1. Install NGN
2. Configure it
3. Add it to Engine
4. Spawn a VM with storage etc.
5. Update
6. Reboot
7. Spawn the VM again

After step 7:
- All configuration from steps 2-4 should be the same
- Everything should work as in step 4
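The "all configuration should be the same" part of the steps above can be checked mechanically, e.g. by checksumming the config tree before the update and diffing afterwards. A sketch, demonstrated on a scratch directory so it can run anywhere; on a real host the tree would be /etc and the update/reboot would happen between the two snapshots:

```shell
# Persistence check sketch: record checksums before the update, compare after.
# /tmp/etc-demo stands in for /etc so the sketch is self-contained.
mkdir -p /tmp/etc-demo
echo 'migration_max_bandwidth = 5000' > /tmp/etc-demo/vdsm.conf

( cd /tmp/etc-demo && find . -type f -exec sha256sum {} + | sort ) > /tmp/etc-before.sum

# ... on a real host, "yum update" and a reboot would happen here ...

( cd /tmp/etc-demo && find . -type f -exec sha256sum {} + | sort ) > /tmp/etc-after.sum

# An empty diff means every tracked config file survived unchanged.
diff /tmp/etc-before.sum /tmp/etc-after.sum && echo 'configuration preserved'
```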
According to comment 11, qa_ack+ on this bug.
According to comment 11, the test steps include an upgrade, but due to Bug 1340378 and Bug 1340382 we cannot upgrade via Jenkins, and downstream builds do not support upgrade yet. So I will verify this bug after Bug 1340378 is verified or a downstream build supports upgrade.
Douglas, please check the ovirt-4.0-snapshot builds; if updates are working with them, then please move this bug to ON_QA.
I tested this issue on ovirt-node-ng-installer-ovirt-4.0-2016062004.iso. After a successful upgrade via "yum update" on the host, reboot, and login, the host on the RHEV-M side is not up; it is "Non responsive".

Test version:
1. Before update:
imgbased-0.7.0-0.201606081307gitfb92e93.el7.centos.noarch
ovirt-node-ng-image-update-placeholder-4.0.0-1.el7.noarch
kernel-3.10.0-327.18.2.el7.x86_64
ovirt-release-host-node-4.0.0-1.el7.noarch
2. After update:
imgbased-0.7.0-0.201606081307gitfb92e93.el7.centos.noarch
ovirt-node-ng-image-update-placeholder-4.0.0-5.201606240219.el7.noarch
kernel-3.10.0-327.22.2.el7.x86_64
ovirt-release-host-node-4.0.0-5.el7.noarch

Test steps:
1. Install NGN
2. Configure it
3. Add it to Engine 4.0
4. Spawn a VM with NFS storage
5. Update NGN via "yum update"
6. Reboot NGN
7. Spawn the VM again

Actual results:
1. After step 7, NGN status on the engine side is not up; it is "Non responsive", so the VM cannot come up.

Expected results:
1. After step 7, NGN status on the engine side is up, and the VM comes up normally.

I will continue to test this issue with another build tomorrow to confirm whether it is fixed.
Test version:

Before update:
redhat-virtualization-host-4.0-20160714.3
imgbased-0.7.2-0.1.el7ev.noarch
redhat-release-virtualization-host-4.0-0.20.el7.x86_64
kernel-3.10.0-327.22.2.el7.x86_64

After update:
redhat-virtualization-host-4.0-20160714.5
imgbased-0.7.2-0.1.el7ev.noarch
redhat-release-virtualization-host-4.0-0.20.el7.x86_64
kernel-3.10.0-327.22.2.el7.x86_64
redhat-virtualization-host-image-update-4.0-20160714.5.el7.noarch

Test steps:
1. Install RHVH redhat-virtualization-host-4.0-20160714.3
2. Reboot and log in to RHVH, register RHVH to RHSM, enable repos
3. Add it to rhevm 4.0, spawn a VM with NFS storage
4. Update RHVH via "yum update"
5. Reboot RHVH
6. Check the RHVH and VM status on the rhevm side

Test results:
1. After step 4, the RHVH update succeeded
2. After step 6, RHVH and VM status is up on the rhevm side

So this issue is fixed on redhat-virtualization-host-4.0-20160714.3; changing the status to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2016-1743.html