Bug 1323941 - [RFE] Keep changes to configuration files between reboots and updates
Summary: [RFE] Keep changes to configuration files between reboots and updates
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: RFEs
Version: 3.3.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ovirt-4.0.0-rc3
Target Release: 4.0.0
Assignee: Fabian Deutsch
QA Contact: Huijuan Zhao
URL:
Whiteboard:
Duplicates: 1323940 (view as bug list)
Depends On: 1340378 1340382
Blocks:
 
Reported: 2016-04-05 06:46 UTC by Gajanan
Modified: 2019-10-10 11:50 UTC
CC List: 15 users

Fixed In Version: ovirt-node-ng-installer-ovirt-4.0-snapshot-2016062108.iso
Doc Type: Enhancement
Doc Text:
With this update, a regular file system has been added for /etc on Red Hat Virtualization Hosts. This allows better alignment with Red Hat Enterprise Linux hosts and means the same set of tools can be used for both Red Hat Virtualization Hosts and Red Hat Enterprise Linux hosts.
Clone Of:
Environment:
Last Closed: 2016-08-23 20:34:27 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:
huzhao: testing_plan_complete+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2016:1743 0 normal SHIPPED_LIVE Red Hat Virtualization Manager 4.0 GA Enhancement (ovirt-engine) 2016-09-02 21:54:01 UTC

Description Gajanan 2016-04-05 06:46:36 UTC
====================== RFE TEMPLATE =========================

1. Proposed title of this feature request

RFE: allow changing migration parameter values globally through the GUI for all RHEV-H hosts

3. What is the nature and description of the request?

> The customer wants to open a feature request for being able to push configs to all RHEV-H machines. Especially in a large environment, changing the config on every RHEV-H machine would be a fairly daunting task. Alternatively, allowing the settings to be changed globally in the GUI would be even better.

> The customer wants a feature added in RHEV-M to change the values of the following parameters from the RHEV-M GUI: "migration_max_bandwidth", "max_outgoing_migrations", and "DefaultMaximumMigrationDowntime".
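
For context: at the time, the first two were host-side VDSM settings and the third an engine-side default. Below is a hedged sketch of where they lived, using the customer's values from item 5 of this report; the file path, section name, and unit are my assumptions from the VDSM of that era, so verify against your version:

    # /etc/vdsm/vdsm.conf on a single host (illustrative only)
    [vars]
    # assumed unit: MiB/s
    migration_max_bandwidth = 5000
    # concurrent outgoing live migrations
    max_outgoing_migrations = 1

    # Engine-side default downtime (milliseconds), set on the RHEV-M machine:
    #   engine-config -s DefaultMaximumMigrationDowntime=1000
    #   service ovirt-engine restart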


4. Why does the customer need this? (List the business requirements here)

> Customer Reply: A single location to manage all hypervisors makes deployments in large environments much easier; if you have to edit every RHEV-H machine one by one every time you do an update, errors become much more likely. Also, changing RHEV-H manually means it is no longer the "clean room" hypervisor that it is on a pure install.


5. How would the customer like to achieve this? (List the functional requirements here)

> Customer Reply: The real issue is that migrations can time out when the VM has high usage; we were able to work around the issue by changing the timeout on the migration while also changing the max bandwidth. While I assume there are technical reasons for those defaults, being able to change them would make it possible to migrate higher-usage VMs.

> For example, we have some very busy VMs and use max_bandwidth 5000, max outgoing 1, and max downtime 1 second so that the live migrations work.

6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?

> Yes, it was closed: Bug#1058562. It was requested by the same customer; we explained to him why it was closed, but he still wants these features added to RHEV-M.

8. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?

> Customer Reply: The sooner the better :)

9. Is the sales team involved in this request and do they have any additional input?

10. List any affected packages or components.

11. Would the customer be able to assist in testing this functionality if implemented?

> Yes.

Comment 2 Michal Skrivanek 2016-04-05 10:17:43 UTC
(In reply to Gajanan from comment #0)
> > The customer wants to open a feature request for being able to push configs to all RHEV-H machines. Especially in a large environment, changing the config on every RHEV-H machine would be a fairly daunting task. Alternatively, allowing the settings to be changed globally in the GUI would be even better.

Deferring to Fabian to respond, but if this is only about migration parameters, please see below.

 
> > The customer wants a feature added in RHEV-M to change the values of the following parameters from the RHEV-M GUI: "migration_max_bandwidth", "max_outgoing_migrations", and "DefaultMaximumMigrationDowntime".
...
> > Customer Reply: The real issue is that migrations can time out when the VM has high usage; we were able to work around the issue by changing the timeout on the migration while also changing the max bandwidth. While I assume there are technical reasons for those defaults, being able to change them would make it possible to migrate higher-usage VMs.

RFE bug 1252426 is supposed to improve the convergence of migrations and also add the ability to use different settings/policies per VM.

Comment 3 Fabian Deutsch 2016-04-05 10:52:33 UTC
For RHEV-H 3 we do not plan to introduce such a feature.

In RHEV-H 4 the same tooling that is used for RHEL can very likely also be used for RHEV-H, e.g. configuration management tools or something like cluster ssh.
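
To illustrate the kind of tooling meant here, a minimal sketch of pushing one config file to several hosts over ssh; the host names and file path are hypothetical, not from this bug:

    for h in rhevh01 rhevh02 rhevh03; do
        scp vdsm.conf "root@${h}:/etc/vdsm/vdsm.conf"    # push the shared config
        ssh "root@${h}" systemctl restart vdsmd          # restart VDSM to pick it up
    done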

Would such tools be sufficient for the customer?

Comment 4 Michal Skrivanek 2016-04-06 05:58:21 UTC
*** Bug 1323940 has been marked as a duplicate of this bug. ***

Comment 7 Fabian Deutsch 2016-04-12 13:56:42 UTC
The scope of this RFE is to ensure that changes to configuration files in /etc will be kept across updates and reboots.

Comment 10 Ying Cui 2016-04-28 08:22:27 UTC
To QE: we need to clarify this RFE bug before qa_ack. Is this RFE split into two RFEs in 4.0? One is in Engine (bug 1252426 - (migration_improvements) [RFE] Migration improvements (convergence, bandwidth utilization)), and this bug is the Node RFE, right?
If so, according to comment 9, what changes will there be on Node? Does QE only need a %post script in the kickstart file that adds commands to set a configuration item, and then check the value before and after the update? How does QE fully verify this feature enhancement? Thanks. (A sketch of such a %post fragment follows below.)
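
A hedged sketch of the kind of kickstart %post fragment meant above; the marker file name is made up for illustration:

    %post
    # Drop a marker into /etc at install time; if /etc persists, the same
    # line must still be present after the update and the reboot.
    echo "qe-persistence-marker $(date +%s)" >> /etc/qe-marker.conf
    %end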

Comment 11 Fabian Deutsch 2016-04-28 09:05:33 UTC
This feature can be verified by:

1. Install NGN
2. Configure it
3. Add it to Engine
4. Spawn a VM with storage, etc.
5. Update
6. Reboot
7. Spawn the VM again

After step 7:
- All configuration from steps 2-4 should be the same
- Everything should work as in step 4

One way to script this check is sketched below.
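
A minimal sketch of the persistence check itself, assuming an imgbased NGN host; the file names are illustrative, and `imgbase layout` is my assumption of the tooling available on NGN:

    # Before the update: record checksums of the files configured in step 2
    md5sum /etc/vdsm/vdsm.conf /etc/hosts > /root/etc-before.md5
    yum update -y && reboot

    # After the reboot into the new image:
    md5sum -c /root/etc-before.md5   # every line should report OK
    imgbase layout                   # old and new image layers should both be listed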

Comment 12 Ying Cui 2016-04-29 02:55:31 UTC
According to comment 11, qa_ack+ on this bug.

Comment 14 Huijuan Zhao 2016-06-02 06:27:26 UTC
According to comment 11, the test steps include an upgrade, but due to Bug 1340378 and Bug 1340382 we cannot upgrade via Jenkins.
Downstream builds do not support upgrade yet either.
So I will verify this bug after Bug 1340378 is verified or a downstream build supports upgrade.

Comment 15 Fabian Deutsch 2016-06-03 18:18:33 UTC
Douglas, please check the ovirt-4.0-snapshot builds; if updates are working with them, then please move this bug to ON_QA.

Comment 17 Huijuan Zhao 2016-07-14 10:54:56 UTC
I tested this issue on ovirt-node-ng-installer-ovirt-4.0-2016062004.iso. After a successful upgrade via "yum update" on the host, followed by a reboot and login, the host is not up on the RHEV-M side; it is "Non Responsive".

Test version:
1. Before update:
imgbased-0.7.0-0.201606081307gitfb92e93.el7.centos.noarch
ovirt-node-ng-image-update-placeholder-4.0.0-1.el7.noarch
kernel-3.10.0-327.18.2.el7.x86_64
ovirt-release-host-node-4.0.0-1.el7.noarch
2. After update:
imgbased-0.7.0-0.201606081307gitfb92e93.el7.centos.noarch
ovirt-node-ng-image-update-placeholder-4.0.0-5.201606240219.el7.noarch
kernel-3.10.0-327.22.2.el7.x86_64
ovirt-release-host-node-4.0.0-5.el7.noarch

Test steps:
1. Install NGN
2. Configure it
3. Add it to Engine 4.0
4. Spawn a VM with NFS storage
5. Update NGN via "yum update"
6. Reboot NGN
7. Spawn the VM again

Actual results:
1. After step 7, the NGN status on the engine side is not up; it is "Non Responsive", so the VM cannot come up.

Expected results:
1. After step 7, the NGN status on the engine side is up, and the VM comes up normally.

I will continue testing this issue with another build tomorrow to confirm whether it is fixed.
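
As a general note, first-round triage on the host for a "Non Responsive" status after reboot usually looks something like this (standard commands, illustrative only, since the root cause was not yet known here):

    systemctl status vdsmd                            # is VDSM running at all?
    journalctl -u vdsmd -b --no-pager | tail -n 50    # VDSM errors from this boot
    ping <engine-fqdn>                                # connectivity back to the engine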

Comment 18 Huijuan Zhao 2016-07-15 10:45:39 UTC
Test version:
Before update:
redhat-virtualization-host-4.0-20160714.3
imgbased-0.7.2-0.1.el7ev.noarch
redhat-release-virtualization-host-4.0-0.20.el7.x86_64
kernel-3.10.0-327.22.2.el7.x86_64
After update:
redhat-virtualization-host-4.0-20160714.5
imgbased-0.7.2-0.1.el7ev.noarch
redhat-release-virtualization-host-4.0-0.20.el7.x86_64
kernel-3.10.0-327.22.2.el7.x86_64
redhat-virtualization-host-image-update-4.0-20160714.5.el7.noarch


Test steps:
1. Install RHVH redhat-virtualization-host-4.0-20160714.3
2. Reboot and log in to RHVH, register RHVH to RHSM, enable repos (see the sketch below)
3. Add it to RHEV-M 4.0, spawn a VM with NFS storage
4. Update RHVH via "yum update"
5. Reboot RHVH
6. Check the RHVH and VM status on the RHEV-M side

Test results:
1. After step 4, the RHVH update succeeded
2. After step 6, the RHVH and VM status is up on the RHEV-M side

So this issue is fixed in redhat-virtualization-host-4.0-20160714.3; changing the status to VERIFIED.
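
For reference, step 2's registration and repo setup on RHVH 4.0 typically looks like this; the repo id is my assumption for RHVH 4 on RHEL 7, so verify it against current documentation:

    subscription-manager register --username <user> --password <pass>
    subscription-manager attach --pool=<pool-id>
    subscription-manager repos --enable=rhel-7-server-rhvh-4-rpms   # repo id assumed
    yum update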

Comment 20 errata-xmlrpc 2016-08-23 20:34:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-1743.html

