Bug 1430865
Summary: ERROR: duplicate key value violates unique constraint "pk_unregistered_disks_to_vms"

Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Version: 4.0.6
Hardware: All
OS: Linux
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Keywords: ZStream
Target Milestone: ovirt-4.2.0
Target Release: 4.2.0
Reporter: Sam Yangsao <syangsao>
Assignee: Maor <mlipchuk>
QA Contact: Raz Tamir <ratamir>
CC: apinnick, lsurette, mkalinin, mlipchuk, pstehlik, ratamir, rbalakri, Rhev-m-bugs, srevivo, syangsao, tnisan, ykaul, ylavi
Flags: ylavi: testing_plan_complete?
oVirt Team: Storage
Type: Bug
Doc Type: Bug Fix
Doc Text: Previously, if a snapshot of a disk attached to a virtual machine was deleted and the user tried to attach the storage domain containing this virtual machine before the OVF_STORE had been updated with the change, the attachment operation would fail. Because the OVF indicated the presence of a disk with a snapshot, this disk was fetched as a potential disk to register, even though it was already part of a virtual machine. In the current release, the disks are counted only once and the storage domain can be attached.
Clones: 1446920 (view as bug list)
Bug Blocks: 1446920
Last Closed: 2018-05-15 17:41:09 UTC
Description (Sam Yangsao, 2017-03-09 18:15:35 UTC)
This constraint was introduced in RHV 4.0.1 as part of the fix for bug 1302780. Maor, can you take a look please?

Hi Sam,

Just wanted to clarify something. You wrote that you have one DC with 2 clusters, Dell-cluster and HP-cluster. By clusters, did you mean Linux clusters used to provide high availability for RHEV-M?

(In reply to Maor from comment #5)
> By clusters, did you mean Linux clusters used to provide high availability for RHEV-M?

Hey Maor,

It's 1 data center with 2 clusters. No HA for the RHV-M :) Thanks!

Hi, I will take a look at it first thing tomorrow morning. Thank you for the info.

Hi Sam,

I think that I found the issue. Thank you very much for your help and the access to your environment; that was very helpful and reduced the time needed to find the issue.

It looks like there were VMs with disks and snapshots, and some of the snapshots were deleted before the OVF_STORE disk was updated. At that point in time the OVF of the VM indicated that the disks contained snapshots, while those disks actually had no snapshots. Once the storage domain was attached, those disks were fetched as potential disks to register, even though they were already part of VMs.

There seems to be a bug in the OVF XML parser that adds those disks to the VMs they are attached to. Since the XML was not updated after the snapshots were removed, each such disk was initialized with VM entries that were actually the same VM, and that caused the SQL exception.

Steps to reproduce:
1. Create a VM with disks and a snapshot.
2. Delete the snapshot.
3. Force-remove the storage domain (do not deactivate it, since deactivating would update the OVF_STORE).
4. Try to attach the storage domain back to the data center.

I will post a patch that fixes it.
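The double counting described above can be sketched in miniature. This is a hypothetical Python illustration, not the actual ovirt-engine code (which is Java); the function and field names below are invented for the example.

```python
# Hypothetical sketch of the root cause and fix. A stale OVF (not
# refreshed after a snapshot was deleted) can list the same disk both
# as a snapshot-chain member and as the VM's active disk.

def collect_unregistered_disks(ovf_disk_entries):
    """Collect disks to register, counting each disk only once.

    Keying by disk id keeps one entry per disk, so a later insert of
    (disk_id, vm_id) rows cannot hit the same composite primary key
    twice.
    """
    by_id = {}
    for entry in ovf_disk_entries:
        # Before the fix, an entry flagged as having snapshots was
        # added again even when the disk already appeared as a plain
        # VM disk, producing a duplicate (disk, VM) pair.
        by_id.setdefault(entry["disk_id"], entry)
    return list(by_id.values())

# Stale OVF: the snapshot was deleted, but OVF_STORE was not updated.
stale_ovf = [
    {"disk_id": "a1b2", "vm_id": "vm-1", "has_snapshots": True},
    {"disk_id": "a1b2", "vm_id": "vm-1", "has_snapshots": False},
]
print(len(collect_unregistered_disks(stale_ovf)))  # 1
```

With the duplicate collapsed, only one row per disk reaches the database, which matches the doc text's description of the fix ("the disks are counted only once").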
Thank you again for your help.

(In reply to Maor from comment #13)

Awesome, thanks for your hard work in finding this, Maor.

WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [FOUND CLONE FLAGS: ['rhevm-4.1.z', 'rhevm-4.2-ga'], ] For more info please contact: rhv-devops

Verified with our automation on ovirt-4.2.0-0.0.master.20170519193842.gitf4353fb6.el7.centos. No failures on importing and attaching a storage domain with unregistered disks.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1488
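For reference, the duplicate-key failure in the summary can be reproduced in miniature, with Python's sqlite3 standing in for the engine's PostgreSQL database. The table below is illustrative, modeled only on the constraint name in the summary; the real engine schema may differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Illustrative table modeled on the pk_unregistered_disks_to_vms
# constraint: a composite primary key over (disk, VM).
conn.execute("""
    CREATE TABLE unregistered_disks_to_vms (
        disk_id   TEXT NOT NULL,
        entity_id TEXT NOT NULL,
        PRIMARY KEY (disk_id, entity_id)
    )
""")

# A stale OVF yields the same (disk, VM) pair twice.
pairs = [("a1b2", "vm-1"), ("a1b2", "vm-1")]

failed = False
try:
    for p in pairs:
        conn.execute("INSERT INTO unregistered_disks_to_vms VALUES (?, ?)", p)
except sqlite3.IntegrityError:
    # Same class of error as PostgreSQL's unique-constraint violation.
    failed = True
print("duplicate insert rejected:", failed)

# Counting each pair only once, as the fix does, inserts cleanly.
conn.execute("DELETE FROM unregistered_disks_to_vms")
for p in set(pairs):
    conn.execute("INSERT INTO unregistered_disks_to_vms VALUES (?, ?)", p)
rows = conn.execute(
    "SELECT COUNT(*) FROM unregistered_disks_to_vms").fetchone()[0]
print("rows after dedup:", rows)  # 1
```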