Created attachment 419407 [details]
gdb logs of both servers during the hang
Description of problem:
trying out concurrent bi-directional migration causes libvirt to hang.
this is reproducible and has happened 3 times so far.
the scenario goes as follows:
- 2 running hosts
- 2 running vms
== vm 1 is on host 1
== vm 2 is on host 2
- using rhev-m - perform concurrent migration meaning:
== vm 1 which runs on host 1 is destined to run on host 2
== vm 2 which runs on host 2 is destined to run on host 1
attached information:
1) libvirtd.log on both servers (white-vdse and pink-nehalem2)
2) gdb trace information (attached to the hung process)
> attached information:
> 1) libvirtd.log on both servers (white-vdse and pink-nehalem2)
> 2) gdb trace information (attached to the hung process)
Seems like you wanted to attach the libvirtd log but forgot to do so in the end. Could you please provide it?
The problem is this:

With PEER2PEER migration, the source libvirtd makes API calls into the destination libvirtd. Unfortunately it is holding the qemu driver lock and the virDomainObj lock while doing this. If the other libvirtd is also trying to call back into this libvirtd, it too holds the qemu driver + domain obj locks. Deadlock is ensured.
As we do when interacting with the QEMU monitor, we need to add calls to BeginJob/EndJob and also an equivalent to EnterMonitor/LeaveMonitor around every API call to the remote libvirtd. This ensures we release all locks while doing API calls.
This impacts the methods doPeer2PeerMigrate, doNonTunnelMigrate, doTunnelMigrate and doTunnelSendAll.
Looking at the code, I think we already have the BeginJob/EndJob calls (though it's not exactly trivial to follow). When we come into qemudDomainMigratePerform(), we lock the driver, then lock the virDomainObj (via virDomainFindByUUID()), then call qemuDomainObjBeginJobWithDriver(). After that we call doPeer2PeerMigrate(), so we already have the job "started". However, qemuDomainObjBeginJobWithDriver() just starts the job; the caller is still holding the driver lock and the domain obj lock after it returns. What we need is exactly what Dan mentioned in the second part of his comment: an equivalent to EnterMonitor/LeaveMonitor, called something like "EnterRemoteLibvirt/LeaveRemoteLibvirt", that will take an additional reference (to keep things safe) and drop both the driver lock and the obj lock.
Patch posted upstream:
Committed to upstream libvirt as:
I'm working on a RHEL-6 backport now.
Created attachment 433259 [details]
Backport of libvirt upstream patch to fix concurrent p2p migrations
libvirt-0_8_1-17_el6 has been built in RHEL-6-candidate with the fix.
verified running with the following versions:
executed several concurrent bi-directional migrations; all vms were successfully migrated to their destination hosts and virsh didn't hang.
and of course libvirt version: libvirt-0.8.1-19.el6.x86_64
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.