Bug 1208021
| Summary: | Live Merge: Active layer merge is not properly synchronized with vdsm | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Jan Kurik <jkurik> |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 7.1 | CC: | acanan, alitke, amureini, bazulay, dyuan, eblake, ecohen, gklein, jdenemar, kgoldbla, lpeer, lsurette, mzhan, pkrempa, pm-eus, pstehlik, rbalakri, sherold, shyu, xuzhang, yeylon |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-1.2.8-16.el7_1.3 | Doc Type: | Bug Fix |
Doc Text:

Cause: A recent bug fix introduced a regression in finishing block jobs via the synchronous block job API: the code that updates the disk's backing chain after a block job finishes was moved into a separate thread and was therefore allowed to race with other threads.

Consequence: If one thread executes the synchronous abort/pivot operation while a second thread is already waiting to access the domain definition to read the backing chain, the update operation is queued behind the query thread, so the query thread does not see the updated backing chain data.

Fix: When a synchronous abort/pivot operation is requested, the backing chain update is performed in the same thread that waits for the job to complete, which ensures the backing chain information is updated in time.

Result: Other threads obtain correct backing chain data as soon as a synchronous blockJobAbort returns. (A client-side sketch of this guarantee follows the metadata tables below.)
| Story Points: | --- | | |
|---|---|---|---|
| Clone Of: | 1206365 | Environment: | |
| Last Closed: | 2015-05-12 17:54:56 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1206365 | | |
| Bug Blocks: | | | |
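To make the guarantee from the Doc Text concrete, here is a minimal client-side sketch of the synchronous pivot path. The connection URI, the domain name "guest", and the disk target "vda" are illustrative assumptions, not values from this report; real management code such as vdsm drives this flow via events rather than polling.

```c
/* Minimal sketch of the synchronous pivot path described in the Doc
 * Text. Assumptions (not from this report): a running domain named
 * "guest" with a disk target "vda".
 * Build: gcc pivot.c $(pkg-config --cflags --libs libvirt)
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

int main(void)
{
    int ret = EXIT_FAILURE;
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return ret;

    virDomainPtr dom = virDomainLookupByName(conn, "guest");
    if (!dom)
        goto cleanup;

    /* Start an active-layer commit: merge the top (active) image back
     * into its backing file, i.e. the live merge case from this bug. */
    if (virDomainBlockCommit(dom, "vda", NULL, NULL, 0,
                             VIR_DOMAIN_BLOCK_COMMIT_ACTIVE) < 0)
        goto cleanup;

    /* Simplified readiness check; production code should wait for the
     * BLOCK_JOB_READY event instead of polling. */
    virDomainBlockJobInfo info;
    do {
        if (virDomainGetBlockJobInfo(dom, "vda", &info, 0) < 0)
            goto cleanup;
        usleep(100 * 1000);
    } while (info.end == 0 || info.cur < info.end);

    /* Synchronous pivot: without VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC the
     * call returns only after the job has finished. The fix in
     * libvirt-1.2.8-16.el7_1.3 guarantees the backing chain in the
     * domain definition is already updated when this call returns. */
    if (virDomainBlockJobAbort(dom, "vda",
                               VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT) < 0)
        goto cleanup;

    /* A query issued now, even from another thread, sees the
     * shortened backing chain. */
    char *xml = virDomainGetXMLDesc(dom, 0);
    if (xml) {
        puts(xml);
        free(xml);
        ret = EXIT_SUCCESS;
    }

cleanup:
    if (dom)
        virDomainFree(dom);
    virConnectClose(conn);
    return ret;
}
```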
Description
Jan Kurik 2015-04-01 08:31:08 UTC
libvirt-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-client-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-config-network-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-driver-lxc-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-daemon-lxc-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-debuginfo-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-devel-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-docs-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64.rpm
libvirt-login-shell-1.2.8-16.el7_1.3.x86_64.rpm

Steps to Reproduce:
1. Create a VM with 3 BLOCK disks (1 thin and 2 preallocated)
2. Start the VM
3. Create a snapshot
4. Delete the snapshot - Works fine

(A hedged sketch of how steps 3 and 4 map to libvirt API calls appears at the end of this report.)

Thanks Kevin. Move the bug to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0973.html
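As referenced in the steps above, here is a hedged sketch of how snapshot creation and deletion map to libvirt API calls in the live merge flow. The domain name and snapshot XML are illustrative assumptions, not values from this report; the merge performed on deletion is the commit-and-pivot sequence shown after the metadata tables.

```c
/* Sketch of steps 3-4 at the libvirt API level, as driven by the
 * management layer. The domain name "guest" and the snapshot XML are
 * illustrative assumptions.
 * Build: gcc snap.c $(pkg-config --cflags --libs libvirt)
 */
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return EXIT_FAILURE;

    virDomainPtr dom = virDomainLookupByName(conn, "guest");
    if (!dom) {
        virConnectClose(conn);
        return EXIT_FAILURE;
    }

    /* Step 3: external, disk-only snapshot. Each disk gets a new
     * qcow2 top layer; the old images become read-only backing files. */
    const char *xml =
        "<domainsnapshot><name>snap1</name></domainsnapshot>";
    virDomainSnapshotPtr snap =
        virDomainSnapshotCreateXML(dom, xml,
                                   VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY);

    /* Step 4: deleting the snapshot in oVirt triggers a live merge:
     * the active layer is committed back with virDomainBlockCommit()
     * and pivoted away with a synchronous virDomainBlockJobAbort(), as
     * in the earlier sketch. Afterwards only the snapshot metadata is
     * removed: */
    if (snap) {
        virDomainSnapshotDelete(snap,
                                VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY);
        virDomainSnapshotFree(snap);
    }

    virDomainFree(dom);
    virConnectClose(conn);
    return EXIT_SUCCESS;
}
```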