Bug 1421098 - Support for el7 rhv-h-ng with rhv-3.6's vdsm-v4.17.z
Summary: Support for el7 rhv-h-ng with rhv-3.6's vdsm-v4.17.z
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node-ng
Version: 3.6.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ovirt-3.6.11
Assignee: Ryan Barry
QA Contact: Huijuan Zhao
URL:
Whiteboard:
Duplicates: 1290340
Depends On: 1417534 1425194 1425372 1425378 1425502 1425660 1435887
Blocks: 1430513
 
Reported: 2017-02-10 10:49 UTC by Dan Kenigsberg
Modified: 2021-08-30 13:35 UTC
CC List: 16 users

Fixed In Version: redhat-virtualization-host-3.6-20170404.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-05-09 17:04:46 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1403846 0 unspecified CLOSED keep 3.6 in the supportedEngines reported by VDSM 2021-02-22 00:41:40 UTC
Red Hat Issue Tracker RHV-43271 0 None None None 2021-08-30 12:49:58 UTC
Red Hat Knowledge Base (Solution) 2978401 0 None None None 2017-03-22 19:40:01 UTC
Red Hat Product Errata RHEA-2017:1212 0 normal SHIPPED_LIVE redhat-virtualization-host bug fix and enhancement update for RHV 3.6.11 2017-05-09 21:03:29 UTC

Internal Links: 1403846

Description Dan Kenigsberg 2017-02-10 10:49:02 UTC
Description of problem:

Many customers are still on el6-rhv3.5-vintage.

Currently, to get to a modern host, they need to do two consecutive
reinstalls: el6-3.5-vintage -> (reinstall) el7-3.6-vintage ->
(reinstall) el7-4.y-ngn

The intermediate stage is required, because 3.6 is the only version
that is supported by engine 3.y and 4.y.

We could replace one of the reinstalls with an in-place upgrade, if
el7-3.6-ngn becomes supported:

el6-3.5-vintage -> (reinstall) el7-3.6-ngn -> (upgrade) el7-4.y-ngn


Please generate el7-3.6-ngn (based on the most stable el7-3.7-ngn code) with the latest v4.17.z and its oVirt dependencies (mom, imageio) taken from their ovirt-3.6 branches.

Comment 1 Dan Kenigsberg 2017-02-10 10:54:45 UTC
The main purpose of this image is to serve as a stepping stone for an upgrade from el6-rhv-3.5 to el7-rhv-4.y.

QE testing should match this purpose by exercising the image thoroughly with both 3.5 and 4.1 engines.

Comment 2 Huijuan Zhao 2017-02-15 05:57:22 UTC
Douglas, for the first step: el6-3.5-vintage (+ engine 3.6) -> (reinstall) el7-3.6-ngn (+ engine 3.6), QE would like to confirm the detailed steps to migrate/import VMs on NFS/iSCSI/FC/local storage before and after reinstallation.

Are these steps the same as RFEs [VERIFIED][4.1]Bug 1376454 and 1320556?
Thanks!

Comment 3 Douglas Schilling Landgraf 2017-02-15 21:09:07 UTC
(In reply to Huijuan Zhao from comment #2)
> Douglas, for the first step:  el6-3.5-vintage(+ engine 3.6) -> (reinstall)
> el7-3.6-ngn(+ engine 3.6), QE would like to confirm detail steps to
> migrate/import VM on NFS/iSCSI/FC/Local storage before and after
> reinstallation. 
> 
> Are these steps the same as RFEs [VERIFIED][4.1]Bug 1376454 and 1320556?
> Thanks!

If you have two hosts, the first step might be to migrate the virtual machines to the new server (make sure the cluster allows it). On the other hand, if you want to reinstall the same server, I believe the documentation you shared is good.

Comment 8 Huijuan Zhao 2017-02-21 06:23:54 UTC
Encountered Bug 1425194 during testing, so moving this bug back to ASSIGNED.

Comment 9 Huijuan Zhao 2017-02-22 05:49:47 UTC
I tested with the steps below and encountered some issues.
Please review whether the steps are correct.

Test version:
1. RHEVH/RHVH:
Build 1:
el6-3.5-vintage, rhev-hypervisor6-6.8-20160707.3.iso
Build 2:
el7-3.6-ngn, RHVH-3.6-20170217.5-RHVH-x86_64-dvd1.iso
Build 3:
el7-4.y-ngn, redhat-virtualization-host-4.1-20170208.0
2. RHEVM 3.6
3.6.9.2-0.1.el6

Test steps:
**NFS storage
=====================================
1. Install el6-3.5-vintage (rhev-hypervisor6-6.8-20160707.3.iso), add it to engine 3.6 (3.5 cluster), set up NFS storage for the host, and create a VM on the host; the VM can get a DHCP IP.
Below are detailed steps:
Original env: el6-3.5-vintage + engine 3.6 (3.5 cluster)
================
  1.1 Install rhev-hypervisor6-6.8-20160707.3.iso
  1.2 Register host into RHEVM 3.6
  1.3 Add data storage (NFS)
  1.4 Add ISO storage (NFS)
  1.5 Create disk
  1.6 Create virtual machine (disk attached), set network as dhcp mode, can get dhcp IP
  1.7 Before the migration, change the interval of the OVF update task to make sure all VMs end up in the storage domain. This task is usually executed every 60 minutes.
      # engine-config -s OvfUpdateIntervalInMinutes=1
      # service ovirt-engine restart (Wait at least 1 minute after restart)

2. Put the el6-3.5-vintage host into maintenance in engine 3.6, then remove the host (and the related DC and cluster) from the engine 3.6 side.

3. Reinstall the same host with RHVH-3.6-20170217.5-RHVH-x86_64-dvd1.iso, add it to engine 3.6 (3.6 cluster), and migrate the VM to RHVH 3.6.
Below are detailed steps:
On the el7-3.6-ngn + engine 3.6 (3.6 cluster) side:
===============
  3.1 Initial Setup for RHVH and RHEVM:
      - Install el7-3.6-ngn
      - Register host into RHEVM 3.6
      - Enable a different NFS storage as a data domain in the data center,
        just to bring it UP.

  3.2 Now import the NFS Storage from RHEVM:
     - Storage tab
        -> Import Domain
              -> Provide the NFS Storage from step 1.3

  3.3 In the Datacenter tab, select the storage and activate the imported domain

  3.4 Import the VM/Disk:
     - Storage tab
        -> Select the domain imported
            -> Select VM Import subtab
                -> Select VM and Click import
  3.5 After importing the VM, run the VM (Bug 1425378)

4. Log in to RHVH 3.6, set up local repos, and upgrade to RHVH 4.1 from the RHVH side (Bug 1425660); see the local-repo sketch at the end of this comment
   # yum update

5. After the upgrade, reboot RHVH and log in to RHVH 4.1 (Bug 1425372, Bug 1425194)


Test results:
In steps 3.5, 4, and 5, encountered Bug 1425378, Bug 1425660, Bug 1425372, and Bug 1425194.


Also encountered these bugs when testing with local disk data storage.
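
Note on the "set up local repos" part of step 4: a minimal sketch is below. The repo id and baseurl are placeholders I made up for illustration only; point baseurl at wherever the RHVH 4.1 packages are actually staged.

    # cat > /etc/yum.repos.d/rhvh-4.1-local.repo <<'EOF'
    [rhvh-4.1-local]
    name=RHVH 4.1 local update repo (example)
    baseurl=file:///mnt/rhvh-4.1-repo
    enabled=1
    gpgcheck=0
    EOF
    # yum clean all
    # yum update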

Comment 12 Huijuan Zhao 2017-02-23 06:13:15 UTC
Updating step 1.2 from comment 9:

=====================================
1. Install el6-3.5-vintage (rhev-hypervisor6-6.8-20160707.3.iso), add it to engine 3.6 (3.5 cluster), set up NFS storage for the host, and create a VM on the host; the VM can get a DHCP IP.
Below are detailed steps:
Original env: el6-3.5-vintage + engine 3.6 (3.5 cluster)
================
  1.1 Install rhev-hypervisor6-6.8-20160707.3.iso
  1.2 Register host into RHEVM 3.6
    - Create Data Center with 3.5 compatibility version 
    - In the Networks tab, create a new network named "rhevm"
    - Create a cluster (3.5 compatibility version), using "rhevm" as the management network
    - Add host to this cluster

Comment 15 Huijuan Zhao 2017-03-09 07:00:51 UTC
Thanks to Dan Kenigsberg, Douglas, jiawu, cshao, and ycui for clarifying the testing steps. Below are the detailed testing steps, please review.
If anything is incorrect, please let me know.

Test version:
1. RHEVH/RHVH:
Build 1:
el6-3.5-vintage, rhev-hypervisor6-6.8-20160707.3.iso
Build 2:
el7-3.6-ngn, rhvh-3.6-0.20170307.0
Build 3:
el7-4.y-ngn
2. RHEVM 3.6
3.6.10.2-0.2.el6

Test steps:
**NFS storage

1. Install el6-3.5-vintage(rhev-hypervisor6-6.8-20160707.3.iso) on host 1
  
2. In RHEVM 3.6, add host 1 (el6-3.5-vintage) to RHEVM 3.6 (3.5 cluster), add NFS data/ISO storage, and create a VM on host 1. Below are the detailed steps:
    -2.1 Create Data Center with 3.5 compatibility version
    -2.2 In the Networks tab, create a new network named "rhevm"
    -2.3 Create cluster35 (3.5 compatibility version), using "rhevm" as the management network
     Note: Select the Management Network as "rhevm"
    -2.4 Add host 1 (rhev-hypervisor6-6.8-20160707.3) to this cluster
    -2.5 Add Data Storage to Data Center in step 2.1 (NFS storage)
    -2.6 Add ISO storage (NFS storage)
    -2.7 Create virtual machine (Create disk, disk attached), set network as dhcp mode, VM can get dhcp IP

3. Install el7-3.6-ngn(rhvh-3.6-0.20170307.0) on another host (host 2)

4. Add host 2 (rhvh-3.6-0.20170307.0) to RHEVM 3.6 with the same cluster35(created in step 2.3)

5. Enable the InClusterUpgrade option in RHEVM 3.6 (a small verification sketch follows at the end of this comment):
    # engine-config -s CheckMixedRhelVersions=false --cver=3.5
    # service ovirt-engine restart
    In RHEVM 3.6 UI:
     -> Click the Clusters tab.
        -> Select the cluster35(created in step 2.3) and click Edit.
        -> Click the Scheduling Policy tab.
        -> Select "InClusterUpgrade" from the Select Policy drop-down list.

6. Migrate the virtual machine from host 1 to host 2 in RHEVM 3.6 UI:
     -> In the RHEVM 3.6 Virtual Machine tab
        -> Select the VM(created in step 2.7), click "Migrate"
           -> Select Destination host
              -> Select host 2 -> click "OK"

7. Put host 1 (el6-3.5-vintage) into maintenance in the RHEVM 3.6 UI, then remove host 1 from RHEVM 3.6

8. Reinstall host 1 with el7-3.6-ngn (rhvh-3.6-0.20170307.0), add host 1 back to cluster35 (created in step 2.3)

9. In the RHEVM 3.6 UI, change the cluster to 3.6:
   - Edit cluster35 (created in step 2.3) from 3.5 to 3.6 compatibility version
   - Edit the Data Center (created in step 2.1) from 3.5 to 3.6 compatibility version

10. On host 2, set up local repos and upgrade to el7-4.y-ngn from the host side:
    # yum update

11. Reboot host 2; it should boot into the new el7-4.y-ngn build.
    - Check that the host 2 status is Up in RHEVM 3.6 (3.6 cluster)
    - Run/stop the VM in RHEVM 3.6

12. Upgrade host 1 to el7-4.y-ngn following steps 10 and 11.
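
A hedged aside on step 5 (my own suggestion, not part of the agreed steps): after setting the option and before restarting the engine, the value can be read back with engine-config to confirm it took effect for the 3.5 compatibility level:

    # engine-config -g CheckMixedRhelVersions --cver=3.5
    # service ovirt-engine restart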

Comment 17 Douglas Schilling Landgraf 2017-03-09 21:22:04 UTC
Hi Yaniv,

Could you please review comment#15 ?

Comment 18 Yaniv Lavi 2017-03-12 13:18:33 UTC
(In reply to Douglas Schilling Landgraf from comment #17)
> Hi Yaniv,
> 
> Could you please review comment#15 ?

Done. Looks good to me.

Comment 19 Huijuan Zhao 2017-03-13 05:18:19 UTC
(In reply to Yaniv Dary from comment #18)
> (In reply to Douglas Schilling Landgraf from comment #17)
> > Hi Yaniv,
> > 
> > Could you please review comment#15 ?
> 
> Done. Looks good to me.

Thanks Yaniv and Douglas.
I will verify this bug later according to comment 15, due to many other higher-priority testing tasks.

Comment 20 Huijuan Zhao 2017-03-27 09:50:46 UTC
Test version:
1. RHEVH/RHVH:
Build 1:
el6-3.5-vintage, rhev-hypervisor6-6.8-20160707.3.iso
Build 2:
el7-3.6-ngn, rhvh-3.6-0.20170307.0
Build 3:
el7-4.y-ngn, rhvh-4.1-0.20170323.0

2. RHEVM 3.6
3.6.11-0.1.el6

Test steps:
Same as comment 15.

Test results:
In step 11 and step 12, encountered Bug 1425502 and Bug 1435887.

So this bug cannot be verified now; I will verify it after Bug 1425502 and Bug 1435887 are resolved.

Comment 21 Huijuan Zhao 2017-04-10 08:09:48 UTC
One concern:

According to comment 15, these testing steps only cover shared storage (NFS/iSCSI/FC); they cannot cover local storage (as local storage is not available for migrating between two hosts).

So QE will only cover shared storage for this build's testing.

Comment 23 Sandro Bonazzola 2017-04-11 07:35:14 UTC
*** Bug 1290340 has been marked as a duplicate of this bug. ***

Comment 25 Huijuan Zhao 2017-04-27 08:26:35 UTC
Tested on NFS/iSCSI/FC machines with redhat-virtualization-host-3.6-0.20170424.0; below are the detailed test results.


Test version:
1. RHEVH/RHVH:
Build 1:
el6-3.5-vintage, rhev-hypervisor6-6.8-20160707.3.iso
Build 2:
el7-3.6-ngn, redhat-virtualization-host-3.6-0.20170424.0
Build 3:
rhvh-4.1, redhat-virtualization-host-4.1-20170421.0

2. RHEVM 3.6
3.6.11-0.1.el6



** NFS/iSCSI storage **

Test steps:
Same as comment 15

Test results:
After step 11 and step 12, host 1 and host 2 both upgraded to RHVH-4.1 successfully, and the VM runs successfully.



**FC storage**
Only one FC machine was available, so testing could not follow comment 15 exactly; the steps below were used instead.

Test steps:
1. Install el7-3.6-ngn(rhvh-3.6-0.20170424.0) on host

2. Enable InClusterUpgrade option in RHEVM 3.6 with the cluster35(3.5 compatibility version):
    # engine-config -s CheckMixedRhelVersions=false --cver=3.5
    # service ovirt-engine restart
    In RHEVM 3.6 UI:
     -> Click the Clusters tab.
        -> Select the cluster35 and click Edit.
        -> Click the Scheduling Policy tab.
        -> Select "InClusterUpgrade" from the Select Policy drop-down list.
Note: The management network name is "rhevm"

3. Add the host (rhvh-3.6-0.20170424.0) to RHEVM 3.6 with cluster35 (3.5 compatibility version); the management network name is "rhevm".
    -3.1 Add host (rhvh-3.6-0.20170424.0) to this cluster
    -3.2 Add Data Storage (FC storage)
    -3.3 Add ISO storage (NFS storage)
    -3.4 Create virtual machine (Create disk, disk attached), set network as dhcp mode, VM can get dhcp IP

4. On the host, set up local repos and upgrade to RHVH-4.1 from the host side:
    # yum update

5. In the RHEVM 3.6 UI, change the cluster to 3.6:
   - Edit the cluster from 3.5 to 3.6 compatibility version
   - Edit the Data Center from 3.5 to 3.6 compatibility version

6. Reboot the host; it should boot into the new RHVH-4.1 build (see the verification sketch after the test results below).
    - Check that the host status is Up in RHEVM 3.6 (3.6 cluster)
    - Run/stop the VM in RHEVM 3.6


Test results:
After step 6, the host upgraded to RHVH-4.1 successfully, and the VM could be run/stopped successfully.
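
A hedged verification sketch for the "boot into the new build" check in step 6 (and in step 11 of comment 15). This assumes the imgbased/nodectl tooling shipped on RHVH 4.1 NGN images is present; it is only a suggestion, not part of the agreed scenario:

    # cat /etc/os-release     (should report the RHVH 4.1 image)
    # imgbase w               (shows the currently booted image layer)
    # nodectl info            (lists installed layers and the default boot entry)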

Comment 26 Huijuan Zhao 2017-04-27 08:29:24 UTC
Douglas, could you please review Comment 25? 
According to the testing scenarios in Comment 25, could I verify this bug?
Thanks!

Comment 27 Douglas Schilling Landgraf 2017-04-27 18:20:41 UTC
(In reply to Huijuan Zhao from comment #25)
> Tested in NFS/iSCSI/FC machines with
> redhat-virtualization-host-3.6-0.20170424.0, below are detailed test results.
> 
> 
> Test version:
> 1. RHEVH/RHVH:
> Build 1:
> el6-3.5-vintage, rhev-hypervisor6-6.8-20160707.3.iso
> Build 2:
> el7-3.6-ngn, redhat-virtualization-host-3.6-0.20170424.0
> Build 3:
> rhvh-4.1, redhat-virtualization-host-4.1-20170421.0
> 
> 2. RHEVM 3.6
> 3.6.11-0.1.el6
> 
> 
> 
> ** NFS/iSCSI storage **
> 
> Test steps:
> Same with Comment 15
> 
> Test results:
> After step 11 and step 12, host1 and host2 both upgrade to RHVH-4.1
> successfully, vm can run successfully.
> 
> 
> 
> **FC storage**
> Only have 1 FC machine, can not test according to Comment 15, so tested as
> below steps.
> 
> Test steps:
> 1. Install el7-3.6-ngn(rhvh-3.6-0.20170424.0) on host
> 
> 2. Enable InClusterUpgrade option in RHEVM 3.6 with the cluster35(3.5
> compatibility version):
>     # engine-config -s CheckMixedRhelVersions=false --cver=3.5
>     # service ovirt-engine restart
>     In RHEVM 3.6 UI:
>      -> Click the Clusters tab.
>         -> Select the cluster35 and click Edit.
>         -> Click the Scheduling Policy tab.
>         -> Select "InClusterUpgrade" from the Select Policy drop-down list.
> Note: The management network name is "rhevm"
> 
> 3. Add host (rhvh-3.6-0.20170424.0) to RHEVM 3.6 with the cluster35(3.5
> compatibility version), the management network name is "rhevm".
>     -3.1 Add host (rhvh-3.6-0.20170424.0) to this cluster
>     -3.2 Add Data Storage (FC storage)
>     -3.3 Add ISO storage (NFS storage)
>     -3.4 Create virtual machine (Create disk, disk attached), set network as
> dhcp mode, VM can get dhcp IP
> 
> 4. In host, setup local repos and upgrade to RHVH-4.1 from host side:
>     # yum update

Just a note: for update/upgrade operations, it is good practice to put the host into maintenance first (a sketch follows at the end of this comment).

> 
> 5. In RHEVM 3.6 UI, change cluster to 3.6:
>    - Edit cluster from 3.5 to 3.6 Compatibility Version
>    - Edit Date Center from 3.5 to 3.6 Compatibility Version
> 
> 6. Reboot host, enter to new build RHVH-4.1
>     - Check the host status should be up in RHEVM 3.6(3.6 cluster)
>     - Run/Stop the VM in RHEVM 3.6
> 
> 
> Test results:
> After step 6, host upgrade to RHVH-4.1 successfully, run/stop VM
> successfully.

(In reply to Huijuan Zhao from comment #26)
> Douglas, could you please review Comment 25? 
> According to the testing scenarios in Comment 25, could I verify this bug?
> Thanks!

Looks good to me. Dan/Jiri, any other comment/scenario to mention?

Thanks!
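
To expand on the maintenance note above with a hedged sketch (not part of the verified scenario): besides the UI (select the host -> Maintenance), the host can be put into maintenance through the engine's REST API. ENGINE_FQDN, PASSWORD and HOST_UUID below are placeholders:

    # curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
          -d '<action/>' https://ENGINE_FQDN/ovirt-engine/api/hosts/HOST_UUID/deactivate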

Comment 28 Jiri Belka 2017-05-02 07:06:47 UTC
> Looks good to me. Dan/Jiri, any other comment/scenario to mention?

ack, no objections from my side.

Comment 29 Huijuan Zhao 2017-05-03 06:22:06 UTC
According to comment 25, comment 27, and comment 28, changing the status to VERIFIED.

Comment 31 errata-xmlrpc 2017-05-09 17:04:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1212

