Bug 1214408
| Summary: | Importing storage domains into an uninitialized datacenter leads to duplicate OVF_STORE disks being created, and can cause catastrophic loss of VM configuration data |
|---|---|
| Product: | Red Hat Enterprise Virtualization Manager |
| Component: | ovirt-engine |
| Version: | 3.5.0 |
| Status: | CLOSED ERRATA |
| Severity: | medium |
| Priority: | medium |
| Reporter: | James W. Mills <jamills> |
| Assignee: | Maor <mlipchuk> |
| QA Contact: | lkuchlan <lkuchlan> |
| CC: | acanan, amureini, bcholler, eedri, gklein, gwatson, jamills, juwu, lpeer, lsurette, mkalinin, mlipchuk, phil.coleman, pzhukov, rbalakri, Rhev-m-bugs, tnisan, yeylon, ykaul, ylavi |
| Keywords: | ZStream |
| Flags: | ylavi: Triaged+ |
| Target Milestone: | ovirt-3.6.0-rc |
| Target Release: | 3.6.0 |
| Hardware: | All |
| OS: | Linux |
| Doc Type: | Bug Fix |
| Doc Text: | Previously, when importing an existing, clean storage domain that contains OVF_STORE disks from an old setup to an uninitialized data center, the OVF_STORE disks did not get registered after the data center was initialized, and all virtual machine information was lost. With this update, when importing clean storage domains to an uninitialized data center, the OVF_STORE disks are registered correctly, and new unregistered entities are available in the Administration Portal under the Storage tab. In addition, storage domains with dirty metadata cannot be imported to uninitialized data centers. |
| Cloned to: | 1217339 (view as bug list) |
| Type: | Bug |
| Last Closed: | 2016-03-09 21:05:04 UTC |
| oVirt Team: | Storage |
| Bug Blocks: | 902971, 1217339, 1218733 |
**Description** (James W. Mills, 2015-04-22 16:21:06 UTC)
Maor - please take a look. This looks very familiar to something you already solved for 3.5.1, no?

This is indeed a duplicate of bug https://bugzilla.redhat.com/1138114, which was fixed in version org.ovirt.engine-root-3.5.0-13.

(In reply to Maor from comment #3)
> This is indeed a duplicate of bug https://bugzilla.redhat.com/1138114
> which was fixed in version org.ovirt.engine-root-3.5.0-13

I don't understand how this is possible. Bug 1138114 was solved in 3.5.0 vt4, and the customer is using the GA release. Is it possible we have a regression on our hands?

(In reply to Allon Mureinik from comment #5)
> I don't understand how this is possible.
> Bug 1138114 was solved in 3.5.0 vt4, and the customer is using the GA
> release.
>
> Is it possible we have a regression on our hands?

No, or at least it doesn't look like that on my setup. Maybe the problem is something different; maybe the OVF_STORE disks are not valid to read from. James, can you please attach the engine and VDSM logs?

Maor,
As you and I discussed via email, I can easily reproduce this, even on a single RHEVM instance. The key is to detach/remove the SD (cleanly), and then import it into a RHEVM and attach it to an *uninitialized* DC. I have recreated this scenario many times at this point, using two RHEVM instances and also using just a single RHEVM instance. Here is the scenario I used:
* Create a single VM/disk on the SD
Created "TESTVM1" with a single 1GB drive
* Put the SD into maintenance
  I can verify the OVF_STORE disks have been created at this point, directly in the FS:

```
# grep OVF */*meta
44e10311-f0f2-4cae-8dca-5e7a70e68684/d5f91f24-1cc0-41cc-a1d2-fe4add91dabf.meta:DESCRIPTION={"Updated":true,"Disk Description":"OVF_STORE","Storage Domains":[{"uuid":"1012643c-8407-4670-9bae-cd99e9fcd5ab"}],"Last Updated":"Mon Apr 27 09:31:25 CDT 2015","Size":10240}
57ecb640-c8c6-4c09-aec6-3e990cb2fcf6/677436b3-3f70-4f26-99e7-20d4f807fcaf.meta:DESCRIPTION={"Updated":true,"Disk Description":"OVF_STORE","Storage Domains":[{"uuid":"1012643c-8407-4670-9bae-cd99e9fcd5ab"}],"Last Updated":"Mon Apr 27 09:31:25 CDT 2015","Size":10240}

# strings 44e10311-f0f2-4cae-8dca-5e7a70e68684/d5f91f24-1cc0-41cc-a1d2-fe4add91dabf
...
<?xml version='1.0' encoding='UTF-8'?><ovf:Envelope
...
```
* Detach and Remove the SD (without formatting)
  At this point I still see the OVF_STORE disks; there are still only two of them, and the XML information is still intact.
* Create a new DC in the same RHEVM instance with no attached SD
* Import the SD back into RHEVM
* Attach the imported SD to the uninitialized DC
At this point, the result is exactly the same as before: the original OVF_STORE disks were ignored, and two new ones were created.
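The `strings` check above works because an OVF_STORE image is, under the hood, a tar archive whose entries are the per-VM OVF documents. A minimal Python sketch of a cleaner version of that check (the tar layout is an assumption about the OVF_STORE format, not something stated in this report; the demo builds a mock archive rather than touching a real domain, where you would point at the OVF_STORE volume file from the grep output above):

```python
import io
import tarfile

def list_ovf_entries(ovf_store_path):
    """List the per-VM OVF documents inside an OVF_STORE image.
    Assumes the image is a tar archive of .ovf files, which matches
    the ovf:Envelope XML visible via `strings` in the scenario above."""
    with tarfile.open(ovf_store_path) as tar:
        return [m.name for m in tar.getmembers() if m.name.endswith(".ovf")]

# Mock demonstration: build a tiny stand-in archive and list its entries.
payload = b"<?xml version='1.0' encoding='UTF-8'?><Envelope />"
with tarfile.open("ovf_store_demo.tar", "w") as tar:
    info = tarfile.TarInfo("testvm1.ovf")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

print(list_ovf_entries("ovf_store_demo.tar"))  # ['testvm1.ovf']
```

If the listing comes back empty (or the archive is unreadable) after an import, the VM configuration data on that OVF_STORE is gone, which is the loss mode this bug describes.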
I've attached logs from engine/vdsm for the above scenario.
Just to be extra thorough, and to verify that the original OVF_STORE images were still perfectly readable, I added a new SD to the DC, detached/removed the SD carrying the duplicate OVF_STORE disks, *removed* the newly created OVF_STORE images on the filesystem, and then re-imported the SD into the DC (which is now initialized, as opposed to uninitialized).
The "TESTVM1" VM and disk were available for import.
To be perfectly clear, the problem appears to be with importing an SD into an uninitialized DC.
Thanks!
~james
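The duplicate symptom James describes can be spotted mechanically rather than by eyeballing grep output, since the per-volume `.meta` files carry a `DESCRIPTION=` JSON payload (format quoted in the scenario above). A small Python sketch; the `mock_domain` layout and file names here are illustrative, not taken from the report:

```python
import json
import re
from pathlib import Path

def find_ovf_store_volumes(domain_root):
    """Scan per-volume .meta files (<image-uuid>/<volume-uuid>.meta layout,
    as in the grep output above) and return the paths of volumes whose
    DESCRIPTION JSON marks them as OVF_STORE disks."""
    hits = []
    for meta in Path(domain_root).glob("*/*.meta"):
        for line in meta.read_text().splitlines():
            m = re.match(r"DESCRIPTION=(\{.*\})", line)
            if m and json.loads(m.group(1)).get("Disk Description") == "OVF_STORE":
                hits.append(str(meta))
    return sorted(hits)

# Mock demonstration using the DESCRIPTION shape quoted in this report.
root = Path("mock_domain")
(root / "img1").mkdir(parents=True, exist_ok=True)
(root / "img2").mkdir(parents=True, exist_ok=True)
(root / "img1" / "vol1.meta").write_text(
    'DESCRIPTION={"Updated":true,"Disk Description":"OVF_STORE","Size":10240}\n')
(root / "img2" / "vol2.meta").write_text(
    'DESCRIPTION={"Updated":true,"Disk Description":"TESTVM1_Disk1"}\n')

print(len(find_ovf_store_volumes(root)))  # 1
```

A healthy domain carries exactly two OVF_STORE disks; finding more than two after an import is the duplication described in this bug.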
Created attachment 1019358 [details]
engine.log capturing new OVF_STORE creation
Created attachment 1019359 [details]
vdsm log well before and during duplicate OVF_STORE creation
Thanks for the logs, James.

It looks like the problem is that the customer attached the imported Storage Domain to an uninitialized Storage Pool (see [1]), although the engine did not block this operation. I'm working on a fix so that the user will be able to attach a "detached" Storage Domain with existing OVF_STORE disks to an uninitialized Storage Pool.

[1] http://www.ovirt.org/Features/ImportStorageDomain#Restrictions
"Attaching an imported Storage Domain can only be applied with an initialized Data Center."

(In reply to Maor from comment #11)
> It looks like the problem is that the customer attached the imported Storage
> Domain to an uninitialized Storage Pool (see [1]), although the engine did
> not block this operation.

Note that in RHEV-M 3.5.1, this operation will be blocked with a user-friendly message (see bug 1178646), so the customer will get a clear indication of what they are doing wrong, eliminating the risk of potential data loss. Reducing priority based on this analysis. PM/GSS stakeholders, please chime in if you disagree with this move.

Tested using:
ovirt-engine-3.6.0-0.0.master.20150519172219.git9a2e2b3.el6.noarch
vdsm-4.17.0-749.git649f00a.el7.x86_64

Verification instructions:
1. Cleanly detach the SD from the original RHEVM
2. Import it into an uninitialized DC on a new RHEVM
3. Wait until the OVF update occurs
4. Check for the "VM Import" tab - it should exist

Results: The import succeeds and the VMs are available for import.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-0376.html
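Step 4 can also be scripted against the engine's REST API instead of clicking through the Administration Portal. A hedged sketch: the `vms;unregistered` matrix-parameter endpoint and the XML shape below are assumptions based on the oVirt 3.x Import Storage Domain feature, and `rhevm.example.com` plus the SD id are placeholders, not values from this report:

```python
import xml.etree.ElementTree as ET

ENGINE = "https://rhevm.example.com"  # placeholder engine address

def unregistered_vms_url(sd_id):
    # Assumed oVirt 3.x-style endpoint listing VMs whose OVF data lives on
    # the imported domain but which are not yet registered in the engine.
    return f"{ENGINE}/api/storagedomains/{sd_id}/vms;unregistered"

def vm_names(vms_xml):
    """Pull VM names out of a <vms> collection document, e.g. the body a
    GET on the URL above would return (shape assumed, not verified here)."""
    return [vm.findtext("name") for vm in ET.fromstring(vms_xml).findall("vm")]

# Offline demonstration with a hand-written response body; after the OVF
# update, the original VM (TESTVM1 in the scenario above) should appear.
sample = "<vms><vm><name>TESTVM1</name></vm></vms>"
print(vm_names(sample))  # ['TESTVM1']
```

An empty list after the OVF update would reproduce the pre-fix behavior this bug was filed about.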