Bug 599590 - [vdsm] [libvirt intg] libvirtd hangs during concurrent bi-directional migration
Summary: [vdsm] [libvirt intg] libvirtd hangs during concurrent bi-directional migration
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.0
Hardware: All
OS: Linux
Target Milestone: rc
: ---
Assignee: Chris Lalancette
QA Contact: Virtualization Bugs
Whiteboard: vdsm & libvirt integration
Keywords: TestBlocker
Depends On:
Blocks: 581275 Rhel6.0LibvirtTier2 630614
Reported: 2010-06-03 14:48 UTC by Haim
Modified: 2014-01-13 00:46 UTC
13 users

Fixed In Version: libvirt-0.8.1-17.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 630614 (view as bug list)
Last Closed: 2010-11-11 14:48:53 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
gdb logs of both servers during the hang (40.22 KB, application/zip)
2010-06-03 14:48 UTC, Haim
no flags
Backport of libvirt upstream patch to fix concurrent p2p migrations (4.26 KB, patch)
2010-07-20 20:43 UTC, Chris Lalancette
no flags

Description Haim 2010-06-03 14:48:50 UTC
Created attachment 419407 [details]
gdb logs of both servers during the hang

Description of problem:

Trying out concurrent bi-directional migration causes libvirtd to hang. 
This is reproducible and has happened 3 times so far. 
The scenario goes as follows: 

- 2 running hosts 
- 2 running vms 
   == vm 1 is on host 1 
   == vm 2 is on host 2 
- using RHEV-M, perform a concurrent migration, meaning: 
   == vm 1 which runs on host 1 is destined to run on host 2 
   == vm 2 which runs on host 2 is destined to run on host 1 

attached information: 

1) libvirtd.log on both servers (white-vdse and pink-nehalem2)
2) gdb trace information (attached to the hung process)


Comment 5 Jiri Denemark 2010-06-03 15:45:55 UTC
> attached information: 
> 1) libvirtd.log on both servers (white-vdse and pink-nehalem2)
> 2) gdb trace information (attached to hang process

Seems like you wanted to attach the libvirtd log but forgot to do so in the end. Could you please provide it?

Comment 6 Daniel Berrange 2010-06-03 16:09:07 UTC
The problem is this..

With PEER2PEER migration, the source libvirtd makes API calls into the destination libvirtd. Unfortunately, it holds the qemu driver lock and the virDomainObj lock while doing this. If the other libvirtd is also trying to call back into this libvirtd, it too holds the qemu driver and domain object locks. Deadlock is ensured.

As we do when interacting with the QEMU monitor, we need to add calls to BeginJob/EndJob and also an equivalent to EnterMonitor/LeaveMonitor  around every API call to the remote libvirtd. This ensures we release all locks while doing API calls.

This impacts the methods doPeer2PeerMigrate, doNonTunnelMigrate, doTunnelMigrate and doTunnelSendAll.

Comment 7 Chris Lalancette 2010-07-13 19:59:15 UTC
Looking at the code, I think we already have the BeginJob/EndJob calls (though it's not exactly trivial to follow). When we come into qemudDomainMigratePerform(), we lock the driver, then lock the virDomainObj (via virDomainFindByUUID()), then call qemuDomainObjBeginJobWithDriver(). After that we call doPeer2PeerMigrate(), so we already have the job "started".

However, qemuDomainObjBeginJobWithDriver() just starts the job; the caller is still holding the driver lock and the domain obj lock after it returns. What we need is exactly what Dan mentioned in the second part of his comment: an equivalent to EnterMonitor/LeaveMonitor, called something like "EnterRemoteLibvirt/LeaveRemoteLibvirt", that will take an additional reference (to keep things safe) and drop both the driver lock and the obj lock.
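The proposed Enter/Leave pairing can be sketched as follows. This is a simplified Python model of the idea, not libvirt's actual C implementation: the function and class names mirror the hypothetical "EnterRemoteLibvirt/LeaveRemoteLibvirt" naming from the comment above, and the reference counting is reduced to a plain integer. The key point is that the extra reference keeps the domain object alive while both locks are dropped for the duration of the remote call, so the peer daemon is free to call back in:

```python
import threading

class DomainObj:
    """Toy stand-in for virDomainObj: a lock plus a reference count."""
    def __init__(self):
        self.lock = threading.Lock()
        self.refs = 1

driver_lock = threading.Lock()   # toy stand-in for the qemu driver lock

def enter_remote_libvirt(vm):
    vm.refs += 1           # extra reference: object stays alive while unlocked
    vm.lock.release()      # drop the domain object lock
    driver_lock.release()  # drop the driver lock

def leave_remote_libvirt(vm):
    driver_lock.acquire()  # reacquire in the original order
    vm.lock.acquire()
    vm.refs -= 1           # drop the extra reference

def do_peer2peer_migrate(vm, remote_call):
    # The caller already holds driver_lock and vm.lock, and the job has
    # been begun via the BeginJob equivalent.
    enter_remote_libvirt(vm)
    try:
        result = remote_call()   # no locks held: the peer can call back in
    finally:
        leave_remote_libvirt(vm)
    return result

vm = DomainObj()
driver_lock.acquire()
vm.lock.acquire()
out = do_peer2peer_migrate(vm, lambda: "migrated")
vm.lock.release()
driver_lock.release()
print(out)
```

Because neither lock is held during the remote call, two daemons performing bi-directional migration no longer wait on each other's driver locks, which breaks the deadlock described in comment 6.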

Chris Lalancette

Comment 8 Chris Lalancette 2010-07-16 13:41:49 UTC
Patch posted upstream:


Chris Lalancette

Comment 9 Chris Lalancette 2010-07-20 17:38:15 UTC
Committed to upstream libvirt as:


I'm working on a RHEL-6 backport now.

Chris Lalancette

Comment 11 Chris Lalancette 2010-07-20 20:43:54 UTC
Created attachment 433259 [details]
Backport of libvirt upstream patch to fix concurrent p2p migrations

Comment 12 Dave Allan 2010-07-21 15:47:04 UTC
libvirt-0.8.1-17.el6 has been built in RHEL-6-candidate with the fix.


Comment 14 Haim 2010-07-30 08:05:00 UTC
Verified running with the following versions: 


Executed several concurrent bi-directional migrations; all VMs were successfully migrated to their destination hosts, and virsh didn't hang. 


Comment 15 Haim 2010-07-30 08:06:38 UTC
and of course libvirt version: libvirt-0.8.1-19.el6.x86_64

Comment 16 releng-rhel@redhat.com 2010-11-11 14:48:53 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.
