Bug 1700905

| Summary: | [DOCS] Document the new process for migrating VMs -- VMMGuide 6.13 | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | John Call <jcall> |
| Component: | Documentation | Assignee: | Steve Goodman <sgoodman> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Petr Kubica <pkubica> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.2.8-3 | CC: | bzlotnik, cminkema, derez, fsun, kowen, lleistne, lsurette, michal.skrivanek, mkalinin, mtessun, pkubica, rdlugyhe, sgoodman, srevivo |
| Target Milestone: | ovirt-4.3.11 | Keywords: | Documentation, Improvement |
| Target Release: | --- | Flags: | rdlugyhe: needinfo-, jcall: needinfo-, pkubica: needinfo- |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-06-04 14:55:14 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Virt | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description — John Call, 2019-04-17 15:49:17 UTC
Here is an abbreviated set of instructions that my customer and I worked through this week: how to migrate your VMs from the old RHV environment to the new RHV environment. The easiest way to do this is to use a "transitional" storage domain.

#0 - RHV has three types of storage domains: Data, ISO, and Export. The ISO and Export domains should not be used because they are deprecated. A Data-type storage domain can only be connected to one environment (data center) at a time.

#1 - Create a "transitional" storage domain (type=data) that is large enough to hold a few of the VMs that need to be moved. We'll plan on moving just a few VMs at a time.

#2 - Shut down a few of the VMs in the old environment.

#3 - Move the **disks** of the powered-off VMs to the "transitional" storage domain.

#4 - Disconnect the "transitional" storage domain from the old environment (put it into maintenance mode first).

#5 - Import the "transitional" storage domain into the new environment.

#6 - Move the **disks** of the powered-off VMs to their new/permanent/final storage domain.

#7 - Start the VMs!

#8 - Disconnect the "transitional" storage domain and re-connect it to the old environment.

n.b. I emphasized the **disks** being moved because, in my mind, simply moving the disk would be insufficient. The "magic" of this process is that the VM's configuration/metadata (vCPU, vRAM, timezone, MAC addresses, etc.) is silently moved onto the transitional storage domain as well --as long as the OVF gets updated--

p.s. The method described above may result in a longer-than-desired amount of downtime for the VMs. The customer can easily use live storage migration to minimize the amount of time their VMs would be offline. In this case, the amount of downtime would be the short amount of time needed to shut down the VM, detach the "transitional" storage domain from the old environment, import it into the new environment, discover/import the VMs, and re-start the VMs while their disks still reside on the "transitional" storage domain. In other words, about ~5 minutes.

(In reply to John Call from comment #1)
> #3 - Move the **disks** of the powered off VMs to the "transitional" Storage Domain.

Shouldn't we add an "Update OVF" step as well, to ensure everything is copied?
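For anyone scripting steps #2 and #3, here is a minimal sketch using the oVirt Python SDK (ovirtsdk4). The engine URL, credentials, and the names "myvm" and "transitional" are placeholder assumptions, not values from this bug:

```python
# Sketch of steps #2 and #3: shut a VM down, then move its disks to the
# transitional storage domain. All names and credentials are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://old-engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='REDACTED',
    ca_file='old-engine-ca.pem',
)
system = connection.system_service()

# Look up the transitional storage domain and the VM to migrate.
target_sd = system.storage_domains_service().list(search='name=transitional')[0]
vms_service = system.vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

# Step #2: ask the guest to shut down cleanly.
vm_service.shutdown()

# Step #3: move every disk of the powered-off VM to the transitional domain.
disks_service = system.disks_service()
for attachment in vm_service.disk_attachments_service().list():
    disks_service.disk_service(attachment.disk.id).move(
        storage_domain=types.StorageDomain(id=target_sd.id),
    )

connection.close()
```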
(In reply to Marina Kalinin from comment #3)
> Shouldn't we add an "Update OVF" step as well, to ensure everything is copied?

I chose to ignore this as a documented step because the "maintenance" operation on the storage domain should update the OVF (unless the customer checks the box to ignore updating the OVF). I'm not the expert here, so I'll defer to you and the others on whether we should explicitly document this (hidden in some window behind the three-dots icon).

Rolfe, also see the merge request in bug 1548850. It discusses using backup storage domains for tasks including migration. Backup storage domains are simply a data domain with a "backup domain" checkbox enabled. The new topics it introduces are:

Administration Guide:
13.3. Backing Up and Restoring Virtual Machines Using a Backup Storage Domain, including the topics
13.3.2. Setting a storage domain to be a backup domain
13.3.3. Backing up a Virtual Machine or Snapshot Using a Backup Domain

Virtual Machine Management Guide:
6.13.2. Exporting a Virtual Machine to a Data Domain

John, Marina,

We have something called a backup domain, which you can use for migration. A backup domain differs from a non-backup domain in that all virtual machines on a backup domain are in a powered-down state. A virtual machine cannot run on a backup domain. See [1]. I wrote a draft of this procedure [2] based on comment 3. Please review it.

[1] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/administration_guide/index#sect-Backing_Up_and_Restoring_Virtual_Machines_Using_a_Backup_Domain
[2] https://docs.google.com/document/d/1HgHsTL1NUwmBjvW2T19uoFwhYGbfQ7YmswFZJ4ljq9g/edit?usp=sharing
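For reference, the "backup domain" checkbox mentioned above can also be set through the SDK. A minimal sketch, assuming ovirtsdk4 and a placeholder domain name "backup_domain":

```python
# Flag an existing data domain as a backup domain -- the programmatic
# equivalent of enabling the "backup domain" checkbox in the UI.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='REDACTED',
    ca_file='engine-ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
sd = sds_service.list(search='name=backup_domain')[0]

# backup=True marks the domain as a backup domain; VMs on it cannot run.
sds_service.storage_domain_service(sd.id).update(
    types.StorageDomain(backup=True),
)

connection.close()
```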
Aloha Steve! I wasn't aware of the "backup" checkbox that subtly alters the behavior of a regular data storage domain. I can only see one difference between your Backup Domain procedure and what I proposed in comment 2: the procedure in comment 2 can minimize the amount of downtime required when moving VMs, because the VMs can be live-migrated to/from the non-backup/"transitional" storage domain.

Hi John,

(In reply to John Call from comment #2)
> The customer can easily use storage live migration to minimize the amount of time their VMs would be offline. In this case, the amount of downtime would be the short amount of time needed to shut down the VM, detach the "transitional" storage domain from the old environment, import it into the new environment, discover/import the VMs, and re-start the VMs while their disks still reside on the "transitional" storage domain. In other words, about ~5 minutes.

I'm confused. This is what I understand from what you wrote above:

1. You can use live migration to migrate VMs *without shutting them down*.
2. *You have to shut down the VM*, detach the "transitional" storage domain from the old environment, import it into the new environment, discover/import the VMs, and *re-start the VMs* while their disks still reside on the "transitional" storage domain.

Am I misunderstanding what you refer to with the word "this", where you write "in this case"? (I'm sorry if it seems like I'm being pedantic; I'm not an expert on this subject matter and I want to be certain I understand what you mean.)

As I understand it, the only difference between using live migration and not using live migration is that with live migration, you don't shut down the VM before moving the virtual disk. [1] Am I correct about that? You don't have to do any additional steps?

[1] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/administration_guide/index#Moving_a_Virtual_Disk

(In reply to Steve Goodman from comment #12)
> As I understand it, the only difference between using live migration and not using live migration is that with live migration, you don't shut down the VM before moving the virtual disk. Am I correct about that? You don't have to do any additional steps?

Yes, you understood my intentions correctly. I advocated for this method with my customer because they needed to migrate several VMs with large virtual disks (e.g. 1 TB or more). With their servers, it would take several hours (up to one whole day) to move the 1 TB virtual disk of each VM. They needed a solution that minimized the amount of VM downtime. Is there any additional confusion?

(In reply to John Call from comment #13)
> They needed a solution that minimized the amount of VM downtime.

So how much downtime was there using live migration to move the 1 TB virtual disk of each VM?

(In reply to Steve Goodman from comment #14)
> So how much downtime was there using live migration to move the 1 TB virtual disk of each VM?

Using my live migration process resulted in <5 minutes of downtime for each VM.

(In reply to John Call from comment #15)
> Using my live migration process resulted in <5 minutes of downtime for each VM.

Just keep in mind, the copy still continues in the background.

This is a great effort to rewrite the procedure. Thanks for working on it. Before I read your proposal, Steve, I would like to make sure you are aware of this RFE, bug 1485271 - [RFE] Provide easier way to move/copy the entire VMs between the SDs - which should include documentation for the same matter as this request.

Also, let me put this KCS here: https://access.redhat.com/solutions/3172561. I'm not sure why the SBR didn't refer you to it, John. Isn't it talking about the same thing?
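Since the copy continues in the background, a script driving this process should wait for the disk to unlock before shutting the VM down and detaching the domain. A minimal sketch, assuming ovirtsdk4 and placeholder names ("myvm_Disk1", "transitional"):

```python
# Start a disk move (a live storage migration when the VM is running)
# and wait for the background copy to finish before doing anything else.
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://old-engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='REDACTED',
    ca_file='old-engine-ca.pem',
)
system = connection.system_service()

target_sd = system.storage_domains_service().list(search='name=transitional')[0]
disks_service = system.disks_service()
disk = disks_service.list(search='name=myvm_Disk1')[0]
disk_service = disks_service.disk_service(disk.id)

# Kick off the move; with a running VM the engine performs the copy live.
disk_service.move(storage_domain=types.StorageDomain(id=target_sd.id))

# The disk stays LOCKED while the background copy runs; only once it
# returns to OK is it safe to shut the VM down and detach the domain.
while disk_service.get().status == types.DiskStatus.LOCKED:
    time.sleep(10)

connection.close()
```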
(In reply to Marina Kalinin from comment #16)
> Just keep in mind, the copy still continues in the background.

Yes, the copy still continues in the background, but the VMs and their services are back online and available to their users.

(In reply to Marina Kalinin from comment #17)
> Also, let me put this KCS here: https://access.redhat.com/solutions/3172561. I'm not sure why the SBR didn't refer you to it, John. Isn't it talking about the same thing?

Yes, it looks like that KCS article is effectively the same as the procedure I described in comment #1 (migrated VMs will be offline for hours or days, instead of using live migration to minimize the downtime).

(In reply to Marina Kalinin from comment #17)
> Before I read your proposal, Steve, I would like to make sure you are aware of this RFE, bug 1485271, which should include documentation for the same matter as this request.

The current bug (1700905) is targeting 4.3.10, and bug 1485271 targets 4.4.

Just to be clear, you've already reviewed the text, but I want you to make sure that the procedures that I linked to are the correct ones.

Kevin, please do a peer review. See comment 20 for the URL to the merge request. Updated preview: https://cee-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/CCS/job/ccs-mr-preview/7817/artifact/doc-Virtual_Machine_Management_Guide/preview/index.html#proc_Migrating_VMs_between_virt_environments_vm_guide_administrative_tasks

Peer review is complete. Lucie, please assign someone to verify this procedure.

Petr, would you please verify that the procedure works as documented? This targets 4.3.10. See comment 20 for the merge request. A preview is available there [1]. If the link is expired, enter "rebuild" as a comment.

[1] https://cee-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/CCS/job/ccs-mr-preview/7817/artifact/doc-Virtual_Machine_Management_Guide/preview/index.html#proc_Migrating_VMs_between_virt_environments_vm_guide_administrative_tasks

Sorry for the late reply; I was on PTO. This does not make sense to me:

> 5. If the virtual machine is powered off, start the virtual machines.

The VMs would always be powered off, since you imported the whole domain to the new environment.

(In reply to Marina Kalinin from comment #25)
> The VMs would always be powered off, since you imported the whole domain to the new environment.

Marina, yes, you're right; I misunderstood (and missed some of the text in) comment 2. I'll fix it. Thanks for calling that out.

Marina, John, I made some edits based on my corrected understanding of using live storage migration. Please review: https://cee-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/CCS/job/ccs-mr-preview/8631/artifact/doc-Virtual_Machine_Management_Guide/preview/index.html#proc_Migrating_VMs_between_virt_environments_vm_guide_administrative_tasks

Thanks, Steve.

~~~
Step 6: If you need to use the migration data domain to continue transferring virtual machines, use live storage migration to move the virtual disks from the migration data domain to another data domain that is attached to the new environment.
~~~

That's not right. You may use live migration instead of shutting down VMs to reduce downtime, but you have to completely disconnect the data domain from one environment before attaching it to the other, so all VMs on it MUST be down; there is no live migration between different environments.

Also, we should probably add a note that when a VM is exported from environment A to environment B, it is not available in environment A anymore. It will be possible to import it back once the SD is imported back into environment A.
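The detach/attach cycle Marina describes (the whole domain moves, with all its VMs powered down) can be sketched with ovirtsdk4 as follows. The names "migration" and "OldDC" are placeholders, and importing the domain into the new engine is assumed to happen separately, per the documented Import Domain flow:

```python
# Detach the migration domain from the old data center. All VMs on the
# domain must already be powered off; deactivating (maintenance mode)
# triggers the OVF update. Attaching the domain in the new environment
# is done via Storage -> Import Domain and is not shown here.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://old-engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='REDACTED',
    ca_file='old-engine-ca.pem',
)
system = connection.system_service()

sd = system.storage_domains_service().list(search='name=migration')[0]
dc = system.data_centers_service().list(search='name=OldDC')[0]

attached_sds = system.data_centers_service() \
    .data_center_service(dc.id) \
    .storage_domains_service()
attached_sd = attached_sds.storage_domain_service(sd.id)

attached_sd.deactivate()  # maintenance mode; updates the OVF
attached_sd.remove()      # detach the domain from the old data center

connection.close()
```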
(In reply to Marina Kalinin from comment #28)
> You may use live migration instead of shutting down VMs to reduce downtime, but you have to completely disconnect the data domain from one environment before attaching it to the other, so all VMs on it MUST be down; there is no live migration between different environments.

Marina, John, I am trying to take into account John's comment 2. I understand that the VMs must be *powered off* when migrating from the old environment to the new one, but once the migration SD is attached to the new environment, you can power on the VMs, then live-migrate them to another SD in the new environment. At that point, they are no longer on the migration SD, and you can detach the migration SD, move it back to the old environment, and start over with some other VMs. Did I capture that correctly, and are you both in agreement regarding this method? Please add a suggestion for the correct text to the merge request or in this bug.

> Also, we should probably add a note that when a VM is exported from environment A to environment B, it is not available in environment A anymore.

OK.

> It will be possible to import it back once the SD is imported back into environment A.

I don't know if it's necessary to say this. Once it's in another environment, the situation is identical to the one at the beginning of the procedure, and then you can just start over. No?

I see. I think it is a bit too complicated to process. I might be wrong. We can probably skip this recommendation from our steps. Customers can figure this out themselves, but having it here seems to create confusion and overcomplexity. Let's concentrate on the actual steps that must happen.

(In reply to Marina Kalinin from comment #30)
> Let's concentrate on the actual steps that must happen.

I changed this to a note, making it optional. I hope that this takes into account your concerns while still addressing John's recommendation. Please see the merge request.

> Also, we should probably add a note that when a VM is exported from environment A to environment B, it is not available in environment A anymore.

I added some info to step 2. Please see the merge request. [1] The preview should be there at the bottom. Enter the comment "rebuild" if necessary, to rebuild the preview. I assume that I have addressed your concerns, so I'm moving this to ON_QA.

[1] https://gitlab.cee.redhat.com/rhci-documentation/docs-Red_Hat_Enterprise_Virtualization/merge_requests/1629

Thanks for making these improvements!
Hi, I have a few points that could be fixed.

> 1. Shut down the virtual machines that you want to migrate.
> 2. Export the virtual machines to the migration data domain. See Moving a Virtual Disk in the Administration Guide.

- Why not use live migration? The user doesn't have to shut down all the virtual machines. Just migrate the disks to the storage domain that will be used for migration, then shut down all the VMs with disks on that storage domain.

> Important: Do not check the ignore updating the OVF checkbox. The maintenance operation on the storage domain should update the OVF.

- Thank you for this warning :) From my own experience, it is really good advice. But I think it is necessary to have this warning (also) for putting the storage into maintenance here: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/administration_guide/index#Migrating_SD_between_DC_different_env

> The virtual machines are no longer available on the old storage domain.

- At this moment all the machines are still there; they will disappear after the storage is detached from the RHV Manager (in the next steps).

> 3. Migrate the migration data domain from the old data center to the new one. See Migrating Storage Domains between Data Centers in Different Environments in the Administration Guide.
> 4. Import the virtual machines from the migration data domain. See Importing Virtual Machines from Imported Data Storage Domains in the Administration Guide.
> 5. The virtual machines are now imported to the new RHV environment. If you don't need to use the migration data domain to continue migrating virtual machines from the old environment, you can start the virtual machines and skip the following steps.
> 6. If you need to use the migration data domain to continue transferring virtual machines, move the virtual disks from the migration data domain to another data domain that is attached to the new environment. See Moving a Virtual Disk in the Administration Guide.

- I think steps 3-6 are correct.

> Note: You can minimize downtime by using live storage migration when importing the virtual machines to the new environment. Disconnect the migration data domain from the new environment and reconnect the domain to the old environment to repeat the process until all virtual machines are migrated.

- This note is also missing from steps 1-2 (migrating the VMs to the migration data domain).

A few things that aren't answered in the guide and that I think are important:

Q1: Should the user migrate between the same versions of RHV? If not, the newer hosts and engine on the destination could update the storage domain to a newer format version (see the warning after step 20 in https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/administration_guide/index#Migrating_SD_between_DC_different_env). What should the user do in that case? (For example, format the storage domain and create a new data storage domain (with the same path) in the old engine.)

Q2: The user could have a file-based or block-based storage domain in the old engine. They will get warnings about different raw/qcow - thin/preallocated formats and a warning about converting the images, which could end with more allocated space for the disks (a lot more). I'm not an expert on which case is the crucial one, but the user can definitely hit this between block-based and file-based storage. Maybe add advice that the user should use the same type of storage for the migration data domain (iSCSI - FC, GlusterFS - NFS)? Or is this handled some other way, e.g. the image should allocate the same size on NFS or iSCSI? (I know that in previous versions there was a problem with this.)

Q3: What about VMs with more than one disk? I just checked, and there is no warning at all (which is a bug) when one disk is on the migration data domain and a second disk is still on the old domain. The storage domain was successfully detached without a warning that I had forgotten to migrate one disk.

- I will create a bug for it, to show a proper warning or to block detaching the storage from the engine.
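Step 4 (importing the virtual machines from the migration data domain) can also be scripted. A minimal sketch, assuming ovirtsdk4 and placeholder names ("migration", "Default"):

```python
# Register every unregistered VM found on the imported data domain
# into a cluster of the new environment.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://new-engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='REDACTED',
    ca_file='new-engine-ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
sd = sds_service.list(search='name=migration')[0]
sd_vms_service = sds_service.storage_domain_service(sd.id).vms_service()

# unregistered=True lists the VMs that exist on the domain but are not
# yet part of this environment; register() imports each into a cluster.
for unregistered_vm in sd_vms_service.list(unregistered=True):
    sd_vms_service.vm_service(unregistered_vm.id).register(
        cluster=types.Cluster(name='Default'),
        allow_partial_import=True,
    )

connection.close()
```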
(In reply to Petr Kubica from comment #34)
> Q1: Should the user migrate between the same versions of RHV? If not, the newer hosts and engine on the destination could update the storage domain to a newer format version. What should the user do in that case?

Michal, can you help answer this question?

> Q2: The user could have a file-based or block-based storage domain in the old engine. [...] Maybe add advice that the user should use the same type of storage for the migration data domain?

Daniel, what do you think?

> Q3: What about VMs with more than one disk? I just checked, and there is no warning at all (which is a bug) when one disk is on the migration data domain and a second disk is still on the old domain.

Michal, Daniel? What do you think?

(In reply to Marina Kalinin from comment #30)
> Let's concentrate on the actual steps that must happen.

I think using live migration by default, everywhere it is possible, would be the better approach. My point of view:

- First, it is the supported way to migrate disks.
- More importantly, it significantly reduces the maintenance window. Every customer wants the shortest maintenance window possible. If you shut the VMs down before any disk migration (from the old storage to the migration storage), it could take hours to migrate all the disks (depending on many things: networks, hosts, storage, number of disks, and utilization of the infrastructure), whereas with live migration, all the VMs keep running and providing their services. After all the disk migrations are finished, it is only necessary to shut down the VMs and detach the migration storage domain. Detaching and attaching the migration domain should take a few minutes (depending on the size of the storage, the number of VMs, etc.), so the maintenance window could be really short - a few minutes.
- Third, I don't see much more complexity in "migrate disks -> shut down VMs -> detach domain" instead of "shut down VMs -> migrate disks -> detach domain".

If live migration is not suitable for someone, I'm in favor of adding a note that the user can shut down all the VMs intended for migration and then do all the steps in offline mode.
(In reply to Steve Goodman from comment #36)
> > Q1: Should the user migrate between the same versions of RHV? If not, the newer hosts and engine on the destination could update the storage domain to a newer format version. What should the user do in that case?
> Michal, can you help answer this question?

I'm not sure it's getting upgraded automatically. Is it, Daniel/Benny?

> > Q2: [...] Maybe add advice that the user should use the same type of storage for the migration data domain?
> Daniel, what do you think?

A warning saying ^^ would be good to have.

> > Q3: What about VMs with more than one disk? [...] The storage domain was successfully detached without a warning that I had forgotten to migrate one disk.
> Michal, Daniel? What do you think?

Worth another warning.
(In reply to Michal Skrivanek from comment #38)
> I'm not sure it's getting upgraded automatically. Is it, Daniel/Benny?

https://bugzilla.redhat.com/show_bug.cgi?id=1733031 -> importing data domains into a newer DC may trigger an SD format upgrade
https://bugzilla.redhat.com/show_bug.cgi?id=1740978 -> warn or block importing VMs/Templates from unsupported compatibility levels

(In reply to John Call from comment #5)
> I chose to ignore this as a documented step because the "maintenance" operation on the storage domain should update the OVF (unless the customer checks the box to ignore updating the OVF).

John, are you referring to the "Ignore OVF update failure" checkbox?

(In reply to Steve Goodman from comment #40)
> John, are you referring to the "Ignore OVF update failure" checkbox?

Yes, that's what I'm referring to. A popup with this message appears in the RHV Manager GUI when you attempt to put a storage domain into maintenance mode. And just to be clear, the OVF update is critical for this migration process. Any failure to update the OVF should **not** be ignored.

Petr, can you review my changes and, if you approve, move this to VERIFIED?

Hi Steve, it looks great now! :) Thanks

Merged into 4.3.

Published to the 4.3 Administration Guide and Virtual Machine Management Guide:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/administration_guide/index?lb_target=production#Migrating_SD_between_DC_different_env
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/virtual_machine_management_guide/index#proc_Migrating_VMs_between_virt_environments_vm_guide_administrative_tasks