Bug 1376454 - [RFE] Document RHEV-H 3.6 (7.3) to RHVH 4.0 (7.3) migration
Summary: [RFE] Document RHEV-H 3.6 (7.3) to RHVH 4.0 (7.3) migration
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: RFEs
Version: 4.0.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.1.0-beta
Target Release: ---
Assignee: Douglas Schilling Landgraf
QA Contact: Huijuan Zhao
URL:
Whiteboard:
Depends On:
Blocks: 1415666 1421437
 
Reported: 2016-09-15 13:10 UTC by Fabian Deutsch
Modified: 2019-04-28 13:48 UTC
CC List: 15 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
A reinstall-and-restore workflow was tested and confirmed for moving from version 3.6 Red Hat Enterprise Virtualization Hypervisor hosts to the new implementation, Red Hat Virtualization Host, in 4.0 or 4.1.
Clone Of:
Environment:
Last Closed: 2017-04-25 01:02:29 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:
cshao: testing_plan_complete+


Attachments
screenshot of import iscsi domain failed (195.13 KB, image/png)
2017-01-20 10:24 UTC, Huijuan Zhao
login ISCSI (119.89 KB, image/png)
2017-01-20 17:35 UTC, Douglas Schilling Landgraf


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:0997 0 normal SHIPPED_LIVE Red Hat Virtualization Manager (ovirt-engine) 4.1 GA 2017-04-18 20:11:26 UTC

Description Fabian Deutsch 2016-09-15 13:10:55 UTC
Description of problem:
There is no automated upgrade path from RHEV-H 3.6 to RHVH 4.0.

However, bug 1290340 documents how to migrate from el6 to el7.
A very similar approach should work for the 3.6 to 4.0 migration too.


The only difference should be that the contents of /config (from 3.6) should be restored into / (on 4.0).
Thus, a path like /config/etc/passwd will be restored into /etc/passwd.
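
As a rough sketch of that restore step (the archive name and layout below are illustrative, not from this bug):

    # tar -tf config-backup.tar | grep etc/passwd
      (entries in the archive look like config/etc/passwd)
    # tar -C / --strip-components=1 -xpf config-backup.tar
      (so config/etc/passwd is restored to /etc/passwd)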

This RFE is about testing and documenting these steps.

Comment 10 Douglas Schilling Landgraf 2017-01-19 01:02:14 UTC
Thanks Maor, I have used 'engine-config -s OvfUpdateIntervalInMinutes=1' and restarted the engine. It worked nicely.

Just two comments:

#1 The log, at least for me as a 'user', is not so clear about whether the task was completed or not:

"""
2017-01-18 18:23:11,465 INFO  [org.ovirt.engine.core.bll.OvfDataUpdater] (DefaultQuartzScheduler_Worker-34) [] Attempting to update VMs/Templates Ovf.
2017-01-18 18:23:11,467 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-34) [4b00efcf] Running command: ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected :  ID: 00000001-0001-0001-0001-00000000021e Type: StoragePool
2017-01-18 18:23:11,484 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-34) [4b00efcf] Lock freed to object 'EngineLock:{exclusiveLocks='[00000001-0001-0001-0001-00000000021e=<OVF_UPDATE, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}
"""

I see: 
   - "Attempting to update.."
   - "Running command..."
   - 'EngineLock:... <OVF_UPDATE, ACTION_TYPE_FAILED_OBJECT_LOCKED>'

   ACTION_TYPE_FAILED_OBJECT_LOCKED ? FAILED?
      
*if possible* I would suggest having a different logging schema, like:
 
   - "Attempting to update.."
   - "Running command..."
   - "'EngineLock:...'
   - "'OVF_UPDATE completed successfully, updated OVF with the following information/vms..."

#2 - During the import of the data domain I saw (in 4.0 RHVM):

"This Data center compatibility version does not support importing a data domain with its entities (VMs and Templates). The imported domain will be imported without them."

As a user I was worried when I read this message, but in the end I was able to import VMs via 'VM Import'. It would be worth the effort to add to the message: 

"....VMs and Templates). The imported domain will be imported without them. Please import VMS and Templates manually via VM Import"  

I have noticed that when the DC is in 3.6 compat. mode, I don't see such a message.
Finally, I can open RFEs if required.

Thanks again!

Comment 16 Maor 2017-01-19 19:47:31 UTC
(In reply to Douglas Schilling Landgraf from comment #10)
> Thanks Maor, I have used 'engine-config -s OvfUpdateIntervalInMinutes=1' and
> restarted engine. It worked nicely.
> 
> Just two comments:
> 
> #1 The log, at least for me as a 'user', is not so clear about whether the
> task was completed or not:
> 
> """
> 2017-01-18 18:23:11,465 INFO  [org.ovirt.engine.core.bll.OvfDataUpdater]
> (DefaultQuartzScheduler_Worker-34) [] Attempting to update VMs/Templates Ovf.
> 2017-01-18 18:23:11,467 INFO 
> [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand]
> (DefaultQuartzScheduler_Worker-34) [4b00efcf] Running command:
> ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected : 
> ID: 00000001-0001-0001-0001-00000000021e Type: StoragePool
> 2017-01-18 18:23:11,484 INFO 
> [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand]
> (DefaultQuartzScheduler_Worker-34) [4b00efcf] Lock freed to object
> 'EngineLock:{exclusiveLocks='[00000001-0001-0001-0001-
> 00000000021e=<OVF_UPDATE, ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
> sharedLocks='null'}
> """
> 
> I see: 
>    - "Attempting to update.."
>    - "Running command..."
>    - 'EngineLock:... <OVF_UPDATE, ACTION_TYPE_FAILED_OBJECT_LOCKED>'
> 
>    ACTION_TYPE_FAILED_OBJECT_LOCKED ? FAILED?

This is part of the lock mechanism; it doesn't indicate a failure, only the message that will be presented to the user in case there is a failure.

>       
> *if possible* I would suggest having a different logging schema, like:
>  
>    - "Attempting to update.."
>    - "Running command..."
>    - "'EngineLock:...'
>    - "'OVF_UPDATE completed successfully, updated OVF with the following
> information/vms..."
> 

I agree that we are missing a log indicating that the update process finished successfully at the end.

The update process of the OVF contains several steps: it first gathers the OVFs into a tar file and uploads it as a stream of bytes to VDSM.
We have many logs in the process, but none indicates that it finished with success.
Let's start with a log indicating the process finished successfully and add what we can in the process.

> #2 - During the import of the data domain I saw (in 4.0 RHVM):
> 
> "This Data center compatibility version does not support importing a data
> domain with its entities (VMs and Templates). The imported domain will be
> imported without them."
> 
> As a user I was worried when I read this message, but in the end I was able
> to import VMs via 'VM Import'. It would be worth the effort to add to the
> message: 
> 
> "....VMs and Templates). The imported domain will be imported without them.
> Please import VMS and Templates manually via VM Import"  
> 
> I have noticed that when the DC is in 3.6 compat. mode, I don't see such a
> message.
> Finally, I can open RFEs if required.

You are right, it is a confusing message, and I removed it recently because of that in the following patch: https://gerrit.ovirt.org/#/c/67601/

> 
> Thanks again!

Comment 17 Douglas Schilling Landgraf 2017-01-19 19:54:59 UTC
(In reply to Maor from comment #16)
> I agree that we are missing a log indicating that the update process
> finished successfully at the end.
> [...]
> Let's start with a log indicating the process finished successfully and add
> what we can in the process.

+1 

> You are right, it is a confusing message, and I removed it recently because
> of that in the following patch: https://gerrit.ovirt.org/#/c/67601/

Great!

Thanks!

Comment 18 Douglas Schilling Landgraf 2017-01-20 05:51:05 UTC
Steps for migrating VMs from RHEVM 3.6/RHEV-H 3.6 to RHVM 4.x/RHVH 4.x.

** ISCSI STORAGE
=================
From: rhevm-3.6.10 to rhevm-4.0.4
RHEVH: rhev-hypervisor7-7.3-20170118.0.iso
RHVH: redhat-virtualization-host-4.0-20170104.1.x86_64.liveimg.squashfs
Storage: ISCSI

Original env:
================
  - Installed RHEV-H 7.2-20170118.0
  - Registered host into RHEVM 3.6
  - Added data storage (ISCSI)
  - Added ISO storage (NFS)
  - Created disk
  - Created virtual machine (disk attached)
  - Before the migration, change the interval of the OVF update task to
    make sure all VMs will be in the storage. This task is usually
    executed every 60 minutes.

    # engine-config -s OvfUpdateIntervalInMinutes=1 (Updated to 1 minute)
    # service ovirt-engine restart (Wait at least 1 minute after restart)
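
    A quick illustrative way to confirm the update ran, assuming the
    standard engine log location:

    # grep OvfDataUpdater /var/log/ovirt-engine/engine.log
      (look for "Attempting to update VMs/Templates Ovf." entries)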

On 4.x side:
===============
  1) Initial Setup for RHVH and RHVM:
      - Installed RHVH-4.0-20160822.8-RHVH-x86_64-dvd1.iso
      - Registered host into RHEVM 4.0 
      - Enable a different iSCSI storage domain as the data domain in the
        datacenter, just to bring it UP.

  2) Now import the ISCSI Storage from RHEVM:
     - Storage tab
        -> Import Domain 
              -> Provide the ISCSI Storage from 3.6
                  -> Log in to the storage and import

  3) In the Datacenter tab, select the storage and activate the imported domain.

  4) Import the VM/Disk:
     - Storage tab
        -> Select the imported domain 
            -> Select the VM Import subtab
                -> Select the VM and click Import

  After that, users will be able to start/stop the VMs and disks from the 
  previous environment.

  5) Users can follow the same process for the ISO domain storage.


** NFS STORAGE 
=================

System data
=================
From: rhevm-3.6.10 to rhevm-4.0.4
RHEVH: rhev-hypervisor7-7.3-20170118.0.iso
RHVH: redhat-virtualization-host-4.0-20170104.1.x86_64.liveimg.squashfs
Storage: NFS

Original env:
================
  - Installed RHEV-H 7.2-20170118.0
  - Registered host into RHEVM 3.6
  - Added nfs storage (data and ISO)
  - Created disk
  - Created virtual machine (disk attached)
  - Before the migration, change the interval of the OVF update task to
    make sure all VMs will be in the storage. This task is usually
    executed every 60 minutes.

    # engine-config -s OvfUpdateIntervalInMinutes=1 (Updated to 1 minute)
    # service ovirt-engine restart (Wait at least 1 minute after restart)

On 4.x side:
===============
  1) Initial Setup for RHVH and RHVM:
      - Installed RHVH-4.0-20160822.8-RHVH-x86_64-dvd1.iso
      - Registered host into RHEVM 4.0 
      - Enable a different NFS storage domain as the data domain in the
        datacenter, just to bring it UP.

  2) Now import the NFS Storage from RHEVM:
     - Storage tab
        -> Import Domain 
              -> Provide the NFS Storage from 3.6 

  3) In the Datacenter tab, select the storage and activate the imported domain.

  4) Import the VM/Disk:
     - Storage tab
        -> Select the imported domain 
            -> Select the VM Import subtab
                -> Select the VM and click Import

  After that, users will be able to start/stop the VMs and disks from the 
  previous environment.

  5) Users can follow the same process for the ISO domain storage. A quick
     command-line check of the result is sketched below.
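
  An illustrative way to double-check the imported domains and registered
  VMs afterwards is the REST API (host name and credentials below are
  placeholders):

    # curl -k -u admin@internal:password https://rhvm.example.com/ovirt-engine/api/storagedomains
    # curl -k -u admin@internal:password https://rhvm.example.com/ovirt-engine/api/vms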

Comment 19 Douglas Schilling Landgraf 2017-01-20 05:52:10 UTC
Hi Huijuan Zhao,

Could you please validate the steps in comment #18?

Thanks!

Comment 20 Douglas Schilling Landgraf 2017-01-20 05:54:26 UTC
Hi Emma,

I believe it would be nice to include these steps in our documentation after verification.

Thanks!

Comment 22 Huijuan Zhao 2017-01-20 10:24:06 UTC
Created attachment 1242704 [details]
screenshot of import iscsi domain failed

Comment 23 Douglas Schilling Landgraf 2017-01-20 16:52:22 UTC
Moving to ASSIGNED as more discussion is going on about this topic.

Comment 25 Douglas Schilling Landgraf 2017-01-20 17:35:40 UTC
Created attachment 1242950 [details]
login ISCSI

Comment 26 Huijuan Zhao 2017-01-22 02:49:34 UTC
Thanks Douglas.

1. So for NFS storage, the comment 18 steps work well.

2. For iSCSI storage, the comment 18 steps almost work well, but when importing the domain in RHVM 4.0, you must log in to the iSCSI storage manually, as in the steps below:

** ISCSI STORAGE
=================
From: rhevm-3.6.10 to rhevm-4.0.6
RHEVH: rhev-hypervisor7-7.3-20170118.0.iso
RHVH: redhat-virtualization-host-4.0-20170104.1.x86_64.liveimg.squashfs
Storage: ISCSI

Original env:
================
  - Installed RHEV-H 7.2-20170118.0
  - Registered host into RHEVM 3.6
  - Added data storage (ISCSI)
  - Added ISO storage (NFS)
  - Created disk
  - Created virtual machine (disk attached)
  - Before the migration, change the interval of the OVF update task to
    make sure all VMs will be in the storage. This task is usually
    executed every 60 minutes.

    # engine-config -s OvfUpdateIntervalInMinutes=1 (Updated to 1 minute)
    # service ovirt-engine restart (Wait at least 1 minute after restart)

On 4.x side:
===============
  1) Initial Setup for RHVH and RHVM:
      - Installed RHVH-4.0-20160104.0-RHVH-x86_64-dvd1.iso
      - Registered host into RHEVM 4.0 
      - Enable a different iSCSI storage domain as the data domain in the
        datacenter, just to bring it UP.

  2) Now import the ISCSI Storage from RHEVM:
     - Storage tab
        -> Import Domain 
              -> Provide the iSCSI Storage from 3.6 (click "Discover Targets",
                 type the storage IP into "Address", click "Discover Targets",
                 then click "Login"; see the command-line sketch after these
                 steps)
                  -> Log in to the storage and import

  3) In the Datacenter tab, select the storage and activate the imported domain.

  4) Import the VM/Disk:
     - Storage tab
        -> Select the imported domain 
            -> Select the VM Import subtab
                -> Select the VM and click Import

  After that, users will be able to start/stop the VMs and disks from the 
  previous environment.
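
  If the UI login does not work, the equivalent manual iSCSI login can also
  be done from the host's command line (portal IP and target IQN below are
  placeholders):

    # iscsiadm -m discovery -t sendtargets -p 10.0.0.1
    # iscsiadm -m node -T iqn.2016-01.com.example:target1 -p 10.0.0.1 --login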

Comment 28 Sandro Bonazzola 2017-01-24 09:37:04 UTC
We're missing the Fibre Channel steps here; they should be pretty much like the iSCSI steps.
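
A quick illustrative pre-check on the host, using standard multipath tooling, is to confirm that the 3.6 data domain LUN is visible before importing:

    # multipath -ll    (the 3.6 FC data domain LUN should be listed)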

Comment 31 Yaniv Lavi 2017-01-31 09:00:16 UTC
Is this in 4.0 or 4.1? We need to know for the docs.

Comment 32 Douglas Schilling Landgraf 2017-01-31 14:11:17 UTC
(In reply to Yaniv Dary from comment #31)
> Is this in 4.0 or 4.1? We need to know for the docs.

I believe QE provided feedback only on 4.0, but it should work in both.

Comment 33 Huijuan Zhao 2017-02-04 08:44:18 UTC
(In reply to Douglas Schilling Landgraf from comment #30)
> Indeed, I didn't test in fiber channel env. If we have such environment
> would be awesome.

I tested in a Fibre Channel environment, and it works well.


** FC STORAGE
=================
From: rhevm-3.6.10 to rhevm-4.1
RHEVH: rhev-hypervisor7-7.3-20170118.0.iso
RHVH: redhat-virtualization-host-4.1-20170202.0.x86_64.liveimg.squashfs
Storage: FC

Original env:
================
  - Installed RHEV-H 7.2-20170118.0
  - Registered host into RHEVM 3.6
  - Added data storage (FC)
  - Added ISO storage (NFS)
  - Created disk
  - Created virtual machine (disk attached)
  - Before the migration, change the interval of the OVF update task to
    make sure all VMs will be in the storage. This task is usually
    executed every 60 minutes.

    # engine-config -s OvfUpdateIntervalInMinutes=1 (Updated to 1 minute)
    # service ovirt-engine restart (Wait at least 1 minute after restart)

On 4.1 side:
===============
  1) Initial Setup for RHVH and RHVM:
      - Installed RHVH-4.1-20170203.1-RHVH-x86_64-dvd1.iso
      - Registered host into RHVM 4.1(4.1.0-0.3.beta2.el7)
      - Enable a different FC storage domain as the data domain in the
        datacenter, just to bring it UP.

  2) Now import the FC Storage from RHVM:
     - Storage tab
        -> Import Domain 
              -> Provide the FC Storage from 3.6

  3) In the Datacenter tab, select the storage and activate the imported domain.

  4) Import the VM/Disk:
     - Storage tab
        -> Select the imported domain 
            -> Select the VM Import subtab
                -> Select the VM and click Import

  After that, users will be able to start/stop the VMs and disks from the 
  previous environment.

Note: After importing a VM and before running it, we must edit the VM's "Maximum memory" so that it is larger than "Memory Size"; then we can run the VM successfully.
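
An illustrative sketch of making that change through the REST API instead of the UI (the VM id "123", host name, credentials, and 2 GiB value are placeholders; memory_policy/max must be at least the VM's memory size):

    # curl -k -u admin@internal:password -X PUT \
        -H "Content-Type: application/xml" \
        -d "<vm><memory_policy><max>2147483648</max></memory_policy></vm>" \
        https://rhvm.example.com/ovirt-engine/api/vms/123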

According to comment 18 and comment 26, this bug is fixed both in rhvh_4.0 (redhat-virtualization-host-4.0-20170104.1.x86_64.liveimg.squashfs) and rhvh_4.1 (redhat-virtualization-host-4.1-20170202.0.x86_64.liveimg.squashfs), and supports NFS/iSCSI/FC storage, so I am changing the status to VERIFIED.

Comment 34 Emma Heftman 2017-02-16 09:46:29 UTC
Douglas, can you please clarify whether this bug provides the actual documentation, or whether it simply tests the workflow?

Comment 35 Douglas Schilling Landgraf 2017-02-22 19:12:18 UTC
(In reply to Emma Heftman from comment #34)
> Douglas, can you please clarify whether this bug provides the actual
> documentation, or whether it simply tests the workflow?

To my knowledge, at this moment, it is a reinstall-and-restore documentation flow.

