Description of problem: There is no automated upgrade path from RHEV-H 3.6 to RHVH 4.0, but bug 1290340 documents how to migrate from el6 to el7. A very similar approach should work for the 3.6 to 4.0 migration too. The only difference should be that the contents of /config (from 3.6) are restored into / (on 4.0); thus, a path like /config/etc/passwd will be restored into /etc/passwd. This RFE is about testing and documenting these steps.
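The /config restore described above can be sketched as a small shell helper. This is only an illustration of the idea; `restore_config` and its parameters are hypothetical names, not part of any documented tool:

```shell
# restore_config: copy the contents of a RHEV-H 3.6 /config tree into the
# root of a RHVH 4.0 installation, so that e.g. <src>/etc/passwd ends up at
# <root>/etc/passwd.  The function name and parameters are hypothetical.
restore_config() {
    local config_src="$1"   # e.g. /config from the 3.6 host
    local target_root="$2"  # e.g. / on the 4.0 host
    # cp -a preserves ownership, permissions and symlinks; the /. suffix
    # copies the directory contents rather than the directory itself.
    cp -a "${config_src}/." "${target_root}/"
}
```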
Thanks Maor, I have used 'engine-config -s OvfUpdateIntervalInMinutes=1' and restarted the engine. It worked nicely.

Just two comments:

#1 The log, at least for me as a user, does not make it clear whether the task completed or not:

"""
2017-01-18 18:23:11,465 INFO [org.ovirt.engine.core.bll.OvfDataUpdater]
(DefaultQuartzScheduler_Worker-34) [] Attempting to update VMs/Templates Ovf.
2017-01-18 18:23:11,467 INFO
[org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand]
(DefaultQuartzScheduler_Worker-34) [4b00efcf] Running command:
ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected :
ID: 00000001-0001-0001-0001-00000000021e Type: StoragePool
2017-01-18 18:23:11,484 INFO
[org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand]
(DefaultQuartzScheduler_Worker-34) [4b00efcf] Lock freed to object
'EngineLock:{exclusiveLocks='[00000001-0001-0001-0001-00000000021e=<OVF_UPDATE, ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}
"""

I see:
- "Attempting to update.."
- "Running command..."
- 'EngineLock:... <OVF_UPDATE, ACTION_TYPE_FAILED_OBJECT_LOCKED>'

ACTION_TYPE_FAILED_OBJECT_LOCKED? FAILED?

*If possible* I would suggest a different logging scheme, like:

- "Attempting to update.."
- "Running command..."
- "EngineLock:..."
- "OVF_UPDATE completed successfully, updated OVF with the following information/vms..."

#2 During the import of the data domain I saw (in the 4.0 RHVM):

"This Data center compatibility version does not support importing a data domain with its entities (VMs and Templates). The imported domain will be imported without them."

As a user I was worried when I saw this message, but in the end I was able to import the VMs via 'VM Import'. It would be worth the effort to extend the message: "...VMs and Templates). The imported domain will be imported without them. Please import VMs and Templates manually via VM Import."

I have noticed that when the DC is in 3.6 compatibility mode I don't see this message.

Finally, I could open RFEs if required.
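As a stopgap until a clearer completion message exists, one can grep engine.log for the lines quoted above to tell whether an update cycle at least started and released its lock. A hedged sketch (the function name is illustrative; the patterns are taken from the log excerpt):

```shell
# check_ovf_update: scan an engine.log excerpt for the OVF update messages
# quoted above.  There is currently no explicit "completed successfully"
# line, so "Lock freed" is the closest available signal that the cycle ended.
# The function name is illustrative.
check_ovf_update() {
    local log="$1"
    if ! grep -q "Attempting to update VMs/Templates Ovf" "$log"; then
        echo "no OVF update cycle started"
        return 1
    fi
    if grep -q "Lock freed to object 'EngineLock" "$log"; then
        echo "OVF update cycle ended (lock freed)"
    else
        echo "OVF update cycle started but lock not yet freed"
    fi
}
```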
Thanks again!
(In reply to Douglas Schilling Landgraf from comment #10)
> Thanks Maor, I have used 'engine-config -s OvfUpdateIntervalInMinutes=1' and
> restarted engine. It worked nicely.
>
> Just two comments:
>
> #1 The log at least for, me as 'user', it's not so clear if the task was
> completed or not:
>
> """
> 2017-01-18 18:23:11,465 INFO [org.ovirt.engine.core.bll.OvfDataUpdater]
> (DefaultQuartzScheduler_Worker-34) [] Attempting to update VMs/Templates Ovf.
> 2017-01-18 18:23:11,467 INFO
> [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand]
> (DefaultQuartzScheduler_Worker-34) [4b00efcf] Running command:
> ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected :
> ID: 00000001-0001-0001-0001-00000000021e Type: StoragePool
> 2017-01-18 18:23:11,484 INFO
> [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand]
> (DefaultQuartzScheduler_Worker-34) [4b00efcf] Lock freed to object
> 'EngineLock:{exclusiveLocks='[00000001-0001-0001-0001-
> 00000000021e=<OVF_UPDATE, ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
> sharedLocks='null'}
> """
>
> I see:
> - "Attempting to update.."
> - "Running command..."
> - 'EngineLock:... <OVF_UPDATE, ACTION_TYPE_FAILED_OBJECT_LOCKED>'
>
> ACTION_TYPE_FAILED_OBJECT_LOCKED ? FAILED?

This is part of the lock mechanism; it does not indicate a failure, only the message that would be presented to the user in case of a failure.

> *if possible* I would suggest to have a different logging schema like:
>
> - "Attempting to update.."
> - "Running command..."
> - "'EngineLock:...'
> - "'OVF_UPDATE completed successfully, updated OVF with the following
> information/vms..."

I agree that we are missing a log line indicating that the update process finished successfully. The OVF update process contains several steps: it first gathers the OVFs into a tar file and uploads it as a stream of bytes to VDSM. We have many log messages along the way, but none indicates that the process finished successfully.
Let's start with a log line indicating that the process finished successfully, and add what we can along the way.

> #2 - During the import of data domain I saw (in 4.0 RHVM):
>
> "This Data center compatibility version does not support importing a data
> domain with its entities (VMs and Templates). The imported domain will be
> imported without them."
>
> As user I was worried checking this message but in the end I was able to
> import vms via 'Vm Import'. Would worth the effort to add in the message:
>
> "....VMs and Templates). The imported domain will be imported without them.
> Please import VMS and Templates manually via VM Import"
>
> I have noticed that when the DC is in 3.6 compat. mode I don't see such
> message.
> Finally, I could open RFEs if required.

You are right, it is a confusing message, and for that reason I recently removed it in the following patch: https://gerrit.ovirt.org/#/c/67601/

> Thanks again!
(In reply to Maor from comment #16)
> (In reply to Douglas Schilling Landgraf from comment #10)
> > Thanks Maor, I have used 'engine-config -s OvfUpdateIntervalInMinutes=1' and
> > restarted engine. It worked nicely.
> >
> > Just two comments:
> >
> > #1 The log at least for, me as 'user', it's not so clear if the task was
> > completed or not:
> >
> > """
> > 2017-01-18 18:23:11,465 INFO [org.ovirt.engine.core.bll.OvfDataUpdater]
> > (DefaultQuartzScheduler_Worker-34) [] Attempting to update VMs/Templates Ovf.
> > 2017-01-18 18:23:11,467 INFO
> > [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand]
> > (DefaultQuartzScheduler_Worker-34) [4b00efcf] Running command:
> > ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected :
> > ID: 00000001-0001-0001-0001-00000000021e Type: StoragePool
> > 2017-01-18 18:23:11,484 INFO
> > [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand]
> > (DefaultQuartzScheduler_Worker-34) [4b00efcf] Lock freed to object
> > 'EngineLock:{exclusiveLocks='[00000001-0001-0001-0001-
> > 00000000021e=<OVF_UPDATE, ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
> > sharedLocks='null'}
> > """
> >
> > I see:
> > - "Attempting to update.."
> > - "Running command..."
> > - 'EngineLock:... <OVF_UPDATE, ACTION_TYPE_FAILED_OBJECT_LOCKED>'
> >
> > ACTION_TYPE_FAILED_OBJECT_LOCKED ? FAILED?
>
> This is part of the lock mechanism, it doesn't seem to indicate failure only
> then message that will be presented to the user in case there will be a
> failure.
>
> > *if possible* I would suggest to have a different logging schema like:
> >
> > - "Attempting to update.."
> > - "Running command..."
> > - "'EngineLock:...'
> > - "'OVF_UPDATE completed successfully, updated OVF with the following
> > information/vms..."
>
> I agree that we miss a log indicating the update process finished
> successfully at the end.
> The update process of the OVF contains several steps, it first gather the
> OVFs use a tar file and upload it as stream of bytes to the VDSM.
> We have many logs in the process but no one indicate once it is finished
> with success.
> Let's start first with a log indicating the process finished successfully
> and add it what we can in the process.

+1

> > #2 - During the import of data domain I saw (in 4.0 RHVM):
> >
> > "This Data center compatibility version does not support importing a data
> > domain with its entities (VMs and Templates). The imported domain will be
> > imported without them."
> >
> > As user I was worried checking this message but in the end I was able to
> > import vms via 'Vm Import'. Would worth the effort to add in the message:
> >
> > "....VMs and Templates). The imported domain will be imported without them.
> > Please import VMS and Templates manually via VM Import"
> >
> > I have noticed that when the DC is in 3.6 compat. mode I don't see such
> > message.
> > Finally, I could open RFEs if required.
>
> You are right, it is a confusion message and I removed it recently because
> of that in the following patch https://gerrit.ovirt.org/#/c/67601/

Great! Thanks!
Steps for migrating VMs from RHEVM 3.6/RHEV-H 3.6 to RHVM 4.x/RHVH 4.x.

** ISCSI STORAGE
=================
From: rhevm-3.6.10 to rhevm-4.0.4
RHEVH: rhev-hypervisor7-7.3-20170118.0.iso
RHVH: redhat-virtualization-host-4.0-20170104.1.x86_64.liveimg.squashfs
Storage: ISCSI

Original env:
================
- Installed RHEV-H 7.2-20170118.0
- Registered host into RHEVM 3.6
- Added data storage (ISCSI)
- Added ISO storage (NFS)
- Created disk
- Created virtual machine (disk attached)
- Before the migration, change the interval of the OVF update task to make sure all VMs will be in the storage. This task usually runs every 60 minutes.
  # engine-config -s OvfUpdateIntervalInMinutes=1 (Updated to 1 minute)
  # service ovirt-engine restart (Wait at least 1 minute after restart)

On 4.x side:
===============
1) Initial setup for RHVH and RHVM:
- Installed RHVH-4.0-20160822.8-RHVH-x86_64-dvd1.iso
- Registered host into RHEVM 4.0
- Enabled a different ISCSI storage as data domain in the datacenter, just to bring it UP.

2) Now import the ISCSI storage from RHEVM:
- Storage tab -> Import Domain -> Provide the ISCSI storage from 3.6 -> Log in to the storage and import

3) In the Datacenter tab, select the storage and activate the imported domain

4) Import the VM/Disk:
- Storage tab -> Select the imported domain -> Select the VM Import subtab -> Select the VM and click Import

After that, users will be able to start/stop the VMs and disks from the previous environment.

5) Users can follow the same process for the ISO domain storage.
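The OVF-interval preparation above can be collected into one helper. This assumes `engine-config` and the `ovirt-engine` service on the RHEV-M machine, exactly as used in the steps; the function name and the sleep margin are my own:

```shell
# shorten_ovf_interval: set the OVF update interval to 1 minute, restart the
# engine, and wait long enough for at least one update cycle to run, per the
# steps above.  Assumes engine-config and the ovirt-engine service are
# available on the RHEV-M machine; the function name and the extra sleep
# margin are illustrative.
shorten_ovf_interval() {
    engine-config -s OvfUpdateIntervalInMinutes=1
    engine-config -g OvfUpdateIntervalInMinutes   # confirm the new value took
    service ovirt-engine restart
    sleep 90   # one 1-minute interval plus a margin after the restart
}
```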
** NFS STORAGE
=================
System data
=================
From: rhevm-3.6.10 to rhevm-4.0.4
RHEVH: rhev-hypervisor7-7.3-20170118.0.iso
RHVH: redhat-virtualization-host-4.0-20170104.1.x86_64.liveimg.squashfs
Storage: NFS

Original env:
================
- Installed RHEV-H 7.2-20170118.0
- Registered host into RHEVM 3.6
- Added NFS storage (data and ISO)
- Created disk
- Created virtual machine (disk attached)
- Before the migration, change the interval of the OVF update task to make sure all VMs will be in the storage. This task usually runs every 60 minutes.
  # engine-config -s OvfUpdateIntervalInMinutes=1 (Updated to 1 minute)
  # service ovirt-engine restart (Wait at least 1 minute after restart)

On 4.x side:
===============
1) Initial setup for RHVH and RHVM:
- Installed RHVH-4.0-20160822.8-RHVH-x86_64-dvd1.iso
- Registered host into RHEVM 4.0
- Enabled a different NFS storage as data domain in the datacenter, just to bring it UP.

2) Now import the NFS storage from RHEVM:
- Storage tab -> Import Domain -> Provide the NFS storage from 3.6

3) In the Datacenter tab, select the storage and activate the imported domain

4) Import the VM/Disk:
- Storage tab -> Select the imported domain -> Select the VM Import subtab -> Select the VM and click Import

After that, users will be able to start/stop the VMs and disks from the previous environment.

5) Users can follow the same process for the ISO domain storage.
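Before step 2 it can be reassuring to mount the export and confirm it really holds a storage domain. A file-based oVirt domain is a UUID-named directory containing dom_md/metadata; the helper below and its mount-point argument are illustrative:

```shell
# list_file_domains: look for oVirt file storage domains under a mounted NFS
# export.  A file domain is a UUID-named directory containing dom_md/metadata;
# the function name is illustrative.
list_file_domains() {
    local mountpoint="$1"   # where the 3.6 NFS export is mounted
    local d
    for d in "$mountpoint"/*; do
        # a valid file domain carries its metadata under dom_md/
        if [ -f "$d/dom_md/metadata" ]; then
            echo "storage domain: $(basename "$d")"
        fi
    done
}
```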
Hi Huijuan Zhao,

Could you please validate the steps in comment #18? Thanks!
Hi Emma, I believe it would be nice to include these steps in our documentation after verification. Thanks!
Created attachment 1242704 [details] screenshot of import iscsi domain failed
Moving back to ASSIGNED as more discussion is ongoing about this topic.
Created attachment 1242950 [details] login ISCSI
Thanks Douglas.

1. For NFS storage, the comment 18 steps work well.
2. For ISCSI storage, the comment 18 steps almost work well, but when importing the domain in RHVM 4.0 you must log in to the ISCSI storage manually, as in the steps below:

** ISCSI STORAGE
=================
From: rhevm-3.6.10 to rhevm-4.0.6
RHEVH: rhev-hypervisor7-7.3-20170118.0.iso
RHVH: redhat-virtualization-host-4.0-20170104.1.x86_64.liveimg.squashfs
Storage: ISCSI

Original env:
================
- Installed RHEV-H 7.2-20170118.0
- Registered host into RHEVM 3.6
- Added data storage (ISCSI)
- Added ISO storage (NFS)
- Created disk
- Created virtual machine (disk attached)
- Before the migration, change the interval of the OVF update task to make sure all VMs will be in the storage. This task usually runs every 60 minutes.
  # engine-config -s OvfUpdateIntervalInMinutes=1 (Updated to 1 minute)
  # service ovirt-engine restart (Wait at least 1 minute after restart)

On 4.x side:
===============
1) Initial setup for RHVH and RHVM:
- Installed RHVH-4.0-20160104.0-RHVH-x86_64-dvd1.iso
- Registered host into RHEVM 4.0
- Enabled a different ISCSI storage as data domain in the datacenter, just to bring it UP.

2) Now import the ISCSI storage from RHEVM:
- Storage tab -> Import Domain -> Provide the ISCSI storage from 3.6 (click "Discover Targets", type the storage IP into "Address", click "Discover Targets", then click "Login") -> Log in to the storage and import

3) In the Datacenter tab, select the storage and activate the imported domain

4) Import the VM/Disk:
- Storage tab -> Select the imported domain -> Select the VM Import subtab -> Select the VM and click Import

After that, users will be able to start/stop the VMs and disks from the previous environment.
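The manual "Discover Targets"/"Login" clicks in step 2 correspond roughly to the following `iscsiadm` calls (real open-iscsi commands; the portal and IQN are placeholders, and the wrapper function with its RUN=echo dry-run knob is my own):

```shell
# iscsi_discover_login: CLI counterpart of the "Discover Targets" and "Login"
# buttons used in step 2.  iscsiadm is the standard open-iscsi tool; the
# wrapper function and the RUN=echo dry-run mechanism are illustrative.
iscsi_discover_login() {
    local portal="$1"   # e.g. 10.0.0.5:3260 (the 3.6 storage address)
    local target="$2"   # e.g. iqn.2017-01.com.example:rhev36 (from discovery)
    # set RUN=echo to only print the commands instead of running them
    ${RUN:-} iscsiadm -m discovery -t sendtargets -p "$portal"
    ${RUN:-} iscsiadm -m node -T "$target" -p "$portal" --login
}
```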
We're missing the Fibre Channel steps here; they should be pretty much like the iSCSI steps.
Is this in 4.0 or 4.1? We need to know for the docs.
(In reply to Yaniv Dary from comment #31)
> Is this in 4.0 or 4.1? We need to know for the docs.

I believe QE provided feedback only on 4.0, but it should work in both.
(In reply to Douglas Schilling Landgraf from comment #30)
> Indeed, I didn't test in fiber channel env. If we have such environment
> would be awesome.

I tested in a Fibre Channel env; it works well.

** FC STORAGE
=================
From: rhevm-3.6.10 to rhevm-4.1
RHEVH: rhev-hypervisor7-7.3-20170118.0.iso
RHVH: redhat-virtualization-host-4.1-20170202.0.x86_64.liveimg.squashfs
Storage: FC

Original env:
================
- Installed RHEV-H 7.2-20170118.0
- Registered host into RHEVM 3.6
- Added data storage (FC)
- Added ISO storage (NFS)
- Created disk
- Created virtual machine (disk attached)
- Before the migration, change the interval of the OVF update task to make sure all VMs will be in the storage. This task usually runs every 60 minutes.
  # engine-config -s OvfUpdateIntervalInMinutes=1 (Updated to 1 minute)
  # service ovirt-engine restart (Wait at least 1 minute after restart)

On 4.1 side:
===============
1) Initial setup for RHVH and RHVM:
- Installed RHVH-4.1-20170203.1-RHVH-x86_64-dvd1.iso
- Registered host into RHVM 4.1 (4.1.0-0.3.beta2.el7)
- Enabled a different FC storage as data domain in the datacenter, just to bring it UP.

2) Now import the FC storage from RHVM:
- Storage tab -> Import Domain -> Provide the FC storage from 3.6

3) In the Datacenter tab, select the storage and activate the imported domain

4) Import the VM/Disk:
- Storage tab -> Select the imported domain -> Select the VM Import subtab -> Select the VM and click Import

After that, users will be able to start/stop the VMs and disks from the previous environment.

Note: Before running a VM after import, we must edit the VM's "Maximum memory" so that it is larger than "Memory Size"; then the VM runs successfully.

According to comment 18 and comment 26, this bug is fixed both in rhvh_4.0 (redhat-virtualization-host-4.0-20170104.1.x86_64.liveimg.squashfs) and rhvh_4.1 (redhat-virtualization-host-4.1-20170202.0.x86_64.liveimg.squashfs), and it supports NFS/ISCSI/FC storage, so I am changing the status to VERIFIED.
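On the FC side there is no login step, but the host may need a SCSI bus rescan before the 3.6 LUNs become visible. Writing the '- - -' wildcard to /sys/class/scsi_host/hostN/scan is the standard kernel interface for that; the wrapper function, and the sysfs root parameter used only to make it exercisable outside a real host, are my own:

```shell
# fc_rescan: ask every SCSI host to rescan its bus so newly zoned FC LUNs
# become visible.  '- - -' means wildcard channel/target/LUN; the wrapper
# function and the parameterized sysfs root are illustrative.
fc_rescan() {
    local sysfs="${1:-/sys}"
    local host
    for host in "$sysfs"/class/scsi_host/host*; do
        [ -e "$host/scan" ] || continue
        echo '- - -' > "$host/scan"
        echo "rescanned $(basename "$host")"
    done
}
```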
Douglas, can you please clarify whether this bug provides the actual documentation, or whether it is simply testing the workflow?
(In reply to emma heftman from comment #34)
> Douglas, can you please clarify whether this bug provides the actual
> documentation, or whether it is simply testing the workflow.

To my knowledge, at this moment it is the reinstall-and-restore documentation flow.