Bug 1733031
| Summary: | [RFE] Add warning when importing data domains to newer DC that may trigger SD format upgrade | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Germano Veit Michel <gveitmic> |
| Component: | ovirt-engine | Assignee: | shani <sleviim> |
| Status: | CLOSED ERRATA | QA Contact: | Ilan Zuckerman <izuckerm> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.3.4 | CC: | aefrat, emarcus, fgarciad, lsvaty, michal.skrivanek, mkalinin, pelauter, sleviim, tnisan |
| Target Milestone: | ovirt-4.4.0 | Keywords: | FutureFeature |
| Target Release: | --- | Flags: | lsvaty: testing_plan_complete- |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | rhv-4.4.0-29 | Doc Type: | Enhancement |
| Doc Text: | To transfer virtual machines between data centers, you use data storage domains because export domains were deprecated. However, moving a data storage domain to a data center that has a higher compatibility level (DC level) can upgrade its storage format version, for example, from V3 to V5. This higher format version can prevent you from reattaching the data storage domain to the original data center and transferring additional virtual machines. In the current release, if you encounter this situation, the Administration Portal asks you to confirm that you want to update the storage domain format, for example, from 'V3' to 'V5'. It also warns that you will not be able to attach it back to an older data center with a lower DC level. To work around this issue, you can create a destination data center that has the same compatibility level as the source data center. When you finish transferring the virtual machines, you can increase the DC level. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-08-04 13:19:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1547336, 1547768 | | |
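For reference, the storage format of a floating domain can be checked before it is attached, so the upgrade described in the Doc Text is not triggered by accident. A minimal sketch using the oVirt Python SDK (ovirtsdk4); the engine URL, credentials, and the domain name are placeholders:

```python
# A minimal sketch, assuming ovirtsdk4 and a reachable engine.
# The URL, credentials, and domain name 'test' are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='password',  # placeholder
    ca_file='ca.pem',
)

try:
    sds_service = connection.system_service().storage_domains_service()
    # Look up the floating data domain by name.
    sd = sds_service.list(search='name=test')[0]
    # StorageDomain.storage_format reports the on-disk metadata version
    # (v1..v5); attaching the domain to a newer DC may raise this value.
    print('Storage domain %s is currently format %s' % (sd.name, sd.storage_format))
finally:
    connection.close()
```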
Note there is already a similar problem with the Cluster Level. And another one where importing a VM to a newer CL does not correctly update the VM.
Verified on:
ovirt-engine-4.4.0-0.29.master.el8ev.noarch
vdsm-4.40.9-1.el8ev.x86_64

Followed these steps:
1. Created a 4.2 cluster and DC.
2. Created an iSCSI domain on this DC, so it would be V4.
3. Detached the newly created SD from the V4 DC.
4. Attached it to an existing V5 DC.

Actual result (as expected), a warning message appeared: "Are you sure you want to update the storage domain? Data Center 'golden_env_mixed' has a newer version than Storage Domain 'test'. Approving this operation will upgrade the Storage Domain format from 'V4' to 'V5'. Note that you will not be able to attach it back to an older Data Center."

5. Confirming the action leads to an upgraded SD with version V5.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247
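The verification flow above can also be scripted against the REST API. A minimal sketch with ovirtsdk4, reusing the names from the comment ('test', 'golden_env_mixed'); the engine URL and credentials are placeholders, and the assumption that a plain API attach applies the format upgrade directly (the confirmation dialog is an Administration Portal flow) is mine, not stated in this bug:

```python
# A minimal sketch, assuming ovirtsdk4; URL and credentials are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

try:
    system = connection.system_service()
    sd = system.storage_domains_service().list(search='name=test')[0]
    dc = system.data_centers_service().list(search='name=golden_env_mixed')[0]
    print('Format before attach: %s' % sd.storage_format)  # e.g. v4

    # Attach the detached domain to the newer DC; this is the step that the
    # Administration Portal now guards with the confirmation dialog.
    attached_sds = system.data_centers_service() \
        .data_center_service(dc.id) \
        .storage_domains_service()
    attached_sds.add(types.StorageDomain(id=sd.id))

    # Re-read the domain; after attach the format is upgraded, e.g. v4 -> v5.
    sd = system.storage_domains_service().storage_domain_service(sd.id).get()
    print('Format after attach: %s' % sd.storage_format)
finally:
    connection.close()
```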
Description of problem:

When transferring VMs between Data Centers, customers are advised to use Data Domains, since the Export Domain is deprecated. However, when doing the transfer between different DC levels, the floating Data Domain may get upgraded (e.g. V3 to V5), which can prevent it from being attached back to the source DC to transfer more VMs, making the floating Data SD one way only.

For example:
Source: RHV/DC 4.0 with V3 SD
Destination: RHV/DC 4.2
-> Once the Data Domain is attached to the Destination, it is upgraded to V4 and cannot be attached back to the source.

So please add a warning asking the user to confirm that the import will upgrade the Storage Domain to version X, explaining that it may not be possible to attach it back to the older RHV DC, and that if that is required, the user is encouraged to create a new data center in the destination with the same level as the source, and then bump the DC level once the transfers are complete.
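The workaround requested in the last paragraph can be performed through the SDK as well. A minimal sketch with ovirtsdk4; the DC name 'transfer_dc' is hypothetical and the version numbers are placeholders matching the 4.0 -> 4.2 example above:

```python
# A minimal sketch of the suggested workaround, assuming ovirtsdk4;
# 'transfer_dc' and the version numbers are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

try:
    dcs_service = connection.system_service().data_centers_service()

    # Create the destination DC at the *source* compatibility level (4.0),
    # so the transferred data domain keeps its V3 format and can still be
    # attached back to the source DC.
    dc = dcs_service.add(
        types.DataCenter(
            name='transfer_dc',
            local=False,
            version=types.Version(major=4, minor=0),
        ),
    )

    # ... attach the floating data domain and import the VMs here ...

    # Once all transfers are complete, bump the DC level; attaching the
    # domain after this point upgrades its storage format.
    dcs_service.data_center_service(dc.id).update(
        types.DataCenter(version=types.Version(major=4, minor=2)),
    )
finally:
    connection.close()
```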