Bug 1206365
Summary: Live Merge: Active layer merge is not properly synchronized with vdsm

Product: Red Hat Enterprise Linux 7
Component: libvirt
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: urgent
Priority: urgent
Reporter: Adam Litke <alitke>
Assignee: Peter Krempa <pkrempa>
QA Contact: Virtualization Bugs <virt-bugs>
CC: acanan, amureini, bazulay, dyuan, eblake, ecohen, gklein, jdenemar, lpeer, lsurette, mzhan, pkrempa, rbalakri, sherold, shyu, xuzhang, yanyang, yeylon
Target Milestone: rc
Target Release: 7.1
Keywords: ZStream
Whiteboard: storage
Fixed In Version: libvirt-1.2.14-1.el7
Doc Type: Bug Fix
Doc Text:
  Cause: A recent bug fix introduced a regression when a block job is finished via the synchronous block job API. The code that updates the disk's backing chain after the block job finishes was extracted into a separate thread and thus allowed to race with other threads.
  Consequence: If one thread is executing the synchronous abort/pivot operation while a second thread is already waiting to access the domain definition to read the backing chain, the update operation is queued after the query thread, so the query thread does not get the updated backing chain data.
  Fix: When a synchronous abort/pivot operation is requested, the backing chain update is done in the same thread that is waiting for completion of the job. This ensures that the backing chain info is updated in time.
  Result: Different threads are able to get correct backing chain data after a synchronous blockJobAbort is issued.
Clone Of: 1206355
Clones: 1208021 (view as bug list)
Last Closed: 2015-11-19 06:25:02 UTC
Type: Bug
Bug Blocks: 1155583, 1206355, 1206722, 1207808, 1208021
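The race described in the Doc Text above can be illustrated with a minimal Python sketch. All names here (Domain, block_job_abort_buggy, get_xml_chain, worker_queue) are hypothetical stand-ins for libvirt's C internals, not its actual API; the point is only to show why handing the backing-chain update to a worker lets a query observe stale data:

```python
import threading

class Domain:
    def __init__(self):
        self.lock = threading.Lock()
        self.backing_chain = ["base", "snap"]   # chain before the pivot

def update_backing_chain(dom):
    # In the buggy code this ran later, in a worker-pool thread.
    with dom.lock:
        dom.backing_chain = ["base"]            # chain after the pivot

def block_job_abort_buggy(dom, worker_queue):
    # Buggy pattern: queue the backing-chain update to a worker instead of
    # performing it before returning, so Abort returns "too early".
    worker_queue.append(lambda: update_backing_chain(dom))

def get_xml_chain(dom):
    # Stand-in for the caller reading the domain XML right after Abort.
    with dom.lock:
        return list(dom.backing_chain)

dom, worker_queue = Domain(), []
block_job_abort_buggy(dom, worker_queue)
stale = get_xml_chain(dom)       # query runs before the worker does
for job in worker_queue:         # the queued update lands only afterwards
    job()
print(stale)                     # ['base', 'snap'] -- stale, pre-pivot chain
```

The ordering is forced here for clarity; in the real bug it depended on which thread won the race for the domain lock.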
Description (Adam Litke, 2015-03-26 21:26:31 UTC)
This is likely fallout from Bug 1202719, and the working theory is that there is a new race window: a virDomainBlockCommit involving the active layer is completed (pivoted) by a call to virDomainBlockJobAbort. The Abort returns and the caller immediately retrieves the domain XML, but that call returns the XML before libvirt has updated it to represent the new backing chain (post pivot). This fix is required to support Live Merge in RHEVM. I don't have the permissions to raise flags on this BZ, but I'd like to propose it for RHEL 7.1.z.

I can reproduce it (though not 100% of the time) in a RHEV-M 3.5 environment with these packages:
  rhevm-3.5.0-0.31.el6ev.noarch
  libvirt-1.2.8-16.el7_1.2.x86_64
  vdsm-4.16.12.1-3.el7ev.x86_64
  qemu-kvm-rhev-2.1.2-23.el7_1.1.x86_64

Patches fixing the regression have been proposed upstream. Note that the broken code only appeared in the release candidate for the upcoming release, so there should not be any upstream release with the flaw once this is fixed.
https://www.redhat.com/archives/libvir-list/2015-March/msg01524.html

Fixed upstream:

commit 630ee5ac6cf4e3be3f3e986897a289865dd2604b
Author: Peter Krempa <pkrempa>
Date:   Mon Mar 30 11:26:20 2015 +0200

    qemu: blockjob: Synchronously update backing chain in XML on ABORT/PIVOT

    When the synchronous pivot option is selected, libvirt would not update
    the backing chain until the job was exited. Some applications then
    received invalid data as their job serialized first.

    This patch removes polling to wait for the ABORT/PIVOT job completion
    and replaces it with a condition. If a synchronous operation is
    requested, the update of the XML is executed in the job of the caller
    of the synchronous request. Otherwise the monitor event callback uses a
    separate worker to update the backing chain with a new job.
    This is a regression since 1a92c719101e5bfa6fe2b78006ad04c7f075ea28.

    When the ABORT job is finished synchronously you get the following
    call stack:
     #0 qemuBlockJobEventProcess
     #1 qemuDomainBlockJobImpl
     #2 qemuDomainBlockJobAbort
     #3 virDomainBlockJobAbort

    While previously, or while using the _ASYNC flag, you'd get:
     #0 qemuBlockJobEventProcess
     #1 processBlockJobEvent
     #2 qemuProcessEventHandler
     #3 virThreadPoolWorker

commit 0c4474df4e10d27e27dbcda80b1f9cc14f4bdd8a
Author: Peter Krempa <pkrempa>
Date:   Mon Mar 30 11:26:19 2015 +0200

    qemu: Extract internals of processBlockJobEvent into a helper

    Later on I'll be adding a condition that will allow synchronising a
    SYNC block job abort. The approach will require this code to be called
    from two different places, so it has to be extracted into a helper.

commit 6b6c4ab8a6d2096bd5f50d2ae9b0a929fbaaf076
Author: Peter Krempa <pkrempa>
Date:   Mon Mar 30 11:26:18 2015 +0200

    qemu: processBlockJob: Don't unlock @vm twice

    Commit 1a92c719 moved the code that handles block job events to a
    different function that is executed in a separate thread. The caller
    of processBlockJob handles locking and unlocking of @vm, so we should
    not do it in the function itself.

v1.2.14-rc2

I can register a host with the latest libvirt and vdsm to RHEV-M, so this one is not blocked by bug 1232665.

I can reproduce it with libvirt-1.2.13-1.el7.x86_64 and vdsm-4.16.21-1.el7ev.x86_64.

Verified with libvirt-1.2.17-1.el7 and vdsm-4.16.21-1.el7ev.x86_64.

Steps:
1. Prepare a running guest with 4 disks (2 thin and 2 preallocated).
2. Create 4 snapshots: s4-->s3-->s2-->s1-->base
3. Delete 3 snapshots: s4, s3, s1
4. Create 2 snapshots: ss2-->ss1-->s2-->base
5. Delete 1 snapshot: s2
6. Create 2 snapshots: ss4-->ss3-->ss2-->ss1-->base
7. Delete 4 snapshots: ss4, ss2, ss3, ss1

All operations work well, so moving to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
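The condition-based approach from the upstream fix can be sketched in Python as well. Again, SyncBlockJob, monitor_event_handler, and block_job_abort_sync are hypothetical names for illustration, not libvirt's real C functions: the synchronous caller waits on a condition variable (instead of polling) and performs the backing-chain update itself before Abort returns:

```python
import threading

class SyncBlockJob:
    def __init__(self):
        self.cond = threading.Condition()
        self.completed = False

def monitor_event_handler(job):
    # QEMU reports job completion: signal the waiting caller instead of
    # dispatching the backing-chain update to a separate worker thread.
    with job.cond:
        job.completed = True
        job.cond.notify_all()

def block_job_abort_sync(dom, job):
    # The caller waits on the condition (no polling), then updates the
    # backing chain in its own job, before Abort returns.
    with job.cond:
        while not job.completed:
            job.cond.wait()
    dom["backing_chain"] = ["base"]

dom = {"backing_chain": ["base", "snap"]}
job = SyncBlockJob()
t = threading.Thread(target=monitor_event_handler, args=(job,))
t.start()
block_job_abort_sync(dom, job)   # returns only after the chain is updated
t.join()
print(dom["backing_chain"])      # ['base'] -- no stale window for queries
```

Because the update happens in the waiting thread, any query thread that grabs the domain lock after Abort returns necessarily sees the post-pivot chain.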
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHBA-2015-2202.html