Bug 1323941
| Summary: | [RFE] Keep changes to configuration files between reboots and updates | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Gajanan <gchakkar> |
| Component: | RFEs | Assignee: | Fabian Deutsch <fdeutsch> |
| Status: | CLOSED ERRATA | QA Contact: | Huijuan Zhao <huzhao> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.3.0 | CC: | dfediuck, dougsland, fdeutsch, gchakkar, gklein, lbopf, lsurette, melewis, mgoldboi, michal.skrivanek, rbalakri, srevivo, tlitovsk, ycui, ykaul |
| Target Milestone: | ovirt-4.0.0-rc3 | Keywords: | FutureFeature |
| Target Release: | 4.0.0 | Flags: | huzhao: testing_plan_complete+ |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ovirt-node-ng-installer-ovirt-4.0-snapshot-2016062108.iso | Doc Type: | Enhancement |
| Doc Text: | With this update, a regular file system has been added for /etc on Red Hat Virtualization Hosts. This allows better alignment with Red Hat Enterprise Linux hosts and means the same set of tools can be used for both Red Hat Virtualization Hosts and Red Hat Enterprise Linux hosts. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-08-23 20:34:27 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Node | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1340378, 1340382 | | |
| Bug Blocks: | | | |
Description

Gajanan
2016-04-05 06:46:36 UTC

(In reply to Gajanan from comment #0)

> Customer wants to open a feature request on being able to push configs to all RHEV-H machines. Especially in a large environment, changing the config on every RHEV-H machine would be a fairly daunting task. That, or allow changing the settings globally in the GUI, which would be even better.

Deferring to Fabian to respond, but if this is only about migration parameters then please see below.

> Customer wants a feature to be added in RHEV-M to change the values of the following parameters from the RHEV-M GUI: "migration_max_bandwidth" and "max_outgoing_migrations" or "DefaultMaximumMigrationDowntime". ...
>
> Customer Reply: The real issue is that migrations can time out when the VM has high usage. We were able to work around the issue by changing the timeout on the migration while also changing the max bandwidth. As I assume there are technical reasons for those defaults, being able to change them would make it possible to migrate higher-usage VMs.

RFE bug 1252426 is supposed to improve convergence of migrations and also add the ability to use different settings/policies per VM.

For RHEV-H 3 we do not plan to introduce such a feature. In RHEV-H 4 the same tooling that is used for RHEL can very likely also be used for RHEV-H, i.e. configuration management tools or something like cluster ssh. Would such tools be sufficient for the customer?

*** Bug 1323940 has been marked as a duplicate of this bug. ***

The scope of this RFE is to ensure that changes to configuration files in /etc are kept across updates and reboots.

To QE: we have to clarify this RFE before qa_ack. Is this RFE split into two RFEs in 4.0? One is in Engine (bug 1252426, migration_improvements: [RFE] Migration improvements (convergence, bandwidth utilization)) and this bug is the Node RFE, right? If so, according to comment 9, what changes will be on Node?
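The kickstart-based configuration mentioned in the QE discussion could look like the sketch below. This is a minimal illustration, not the customer's actual settings: the parameter values are invented, and the file path assumes VDSM's usual config location (/etc/vdsm/vdsm.conf with a [vars] section).

```shell
# Kickstart %post sketch (illustrative values only).
# Appends the migration settings discussed above to VDSM's config so that
# the persistence of /etc changes can be checked after an update and reboot.
%post
cat >> /etc/vdsm/vdsm.conf <<'EOF'
[vars]
migration_max_bandwidth = 64
max_outgoing_migrations = 2
EOF
%end
```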
Does QE only need to add commands in a kickstart %post script to set a configuration item, and then check the value before and after the update? How does QE fully verify this feature enhancement? Thanks.

This feature can be verified by:

1. Install NGN.
2. Configure it.
3. Add it to Engine.
4. Spawn a VM with storage etc.
5. Update.
6. Reboot.
7. Spawn the VM again.

After step 7:
- All configuration from steps 2-4 should be the same.
- Everything should work as in step 4.

According to comment 11, qa_ack+ on this bug.

According to comment 11, the test steps include an upgrade, but due to Bug 1340378 and Bug 1340382 we cannot upgrade via Jenkins, and the downstream builds do not support upgrade yet. So I will verify this bug after Bug 1340378 is verified or a downstream build supports upgrade.

Douglas, please check the ovirt-4.0-snapshot builds; if updates are working with them, please move this bug to ON_QA.

I tested this issue on ovirt-node-ng-installer-ovirt-4.0-2016062004.iso. After a successful upgrade via "yum update" on the host, reboot, and login, the host on the RHEV-M side is not up; it is "Non responsive".

Test version:

1. Before update:
   imgbased-0.7.0-0.201606081307gitfb92e93.el7.centos.noarch
   ovirt-node-ng-image-update-placeholder-4.0.0-1.el7.noarch
   kernel-3.10.0-327.18.2.el7.x86_64
   ovirt-release-host-node-4.0.0-1.el7.noarch
2. After update:
   imgbased-0.7.0-0.201606081307gitfb92e93.el7.centos.noarch
   ovirt-node-ng-image-update-placeholder-4.0.0-5.201606240219.el7.noarch
   kernel-3.10.0-327.22.2.el7.x86_64
   ovirt-release-host-node-4.0.0-5.el7.noarch

Test steps:

1. Install NGN.
2. Configure it.
3. Add it to Engine 4.0.
4. Spawn a VM with NFS storage.
5. Update NGN via "yum update".
6. Reboot NGN.
7. Spawn the VM again.

Actual results:

1. After step 7, NGN status on the engine side is not up; it is "Non responsive", so the VM cannot be started.

Expected results:

1. After step 7, NGN status on the engine side is up, and the VM can start normally.
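The before/after configuration check in the verification steps above can be automated with a small checksum manifest. This is a sketch of my own, not part of any RHV or imgbased tooling; it assumes GNU coreutils (sha256sum) and diffutils are available on the host.

```shell
# Minimal sketch for verifying that config edits survive an update/reboot.
# snapshot_save records a sha256 checksum for every file under a directory
# (e.g. /etc) before the update; snapshot_check re-computes the checksums
# afterwards and exits non-zero if any file changed or disappeared.

snapshot_save() {
    # $1 = directory to snapshot, $2 = manifest file to write
    (cd "$1" && find . -type f -exec sha256sum {} +) | sort > "$2"
}

snapshot_check() {
    # $1 = directory, $2 = previously saved manifest
    # diff prints any drift and returns non-zero when files differ.
    (cd "$1" && find . -type f -exec sha256sum {} +) | sort | diff -u "$2" -
}
```

For example, run `snapshot_save /etc /root/etc.manifest` before the update step, then `snapshot_check /etc /root/etc.manifest` after the reboot; an empty diff means the /etc changes persisted.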
I will continue to test this issue with another build tomorrow to confirm whether it is fixed.

Test version:

Before update:
redhat-virtualization-host-4.0-20160714.3
imgbased-0.7.2-0.1.el7ev.noarch
redhat-release-virtualization-host-4.0-0.20.el7.x86_64
kernel-3.10.0-327.22.2.el7.x86_64

After update:
redhat-virtualization-host-4.0-20160714.5
imgbased-0.7.2-0.1.el7ev.noarch
redhat-release-virtualization-host-4.0-0.20.el7.x86_64
kernel-3.10.0-327.22.2.el7.x86_64
redhat-virtualization-host-image-update-4.0-20160714.5.el7.noarch

Test steps:

1. Install RHVH redhat-virtualization-host-4.0-20160714.3.
2. Reboot and log in to RHVH, register RHVH to RHSM, enable repos.
3. Add it to RHEV-M 4.0, spawn a VM with NFS storage.
4. Update RHVH via "yum update".
5. Reboot RHVH.
6. Check the RHVH and VM status on the RHEV-M side.

Test results:

1. After step 4, the RHVH update succeeded.
2. After step 6, RHVH and VM status are up on the RHEV-M side.

So this issue is fixed on redhat-virtualization-host-4.0-20160714.3; changing the status to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-1743.html