Bug 599590 - [vdsm] [libvirt intg] libvirtd hangs during concurrent bi-directional migration
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.0
Hardware/OS: All Linux
Priority: low
Severity: high
Target Milestone: rc
Assigned To: Chris Lalancette
QA Contact: Virtualization Bugs
Whiteboard: vdsm & libvirt integration
Keywords: TestBlocker
Blocks: 581275 Rhel6.0LibvirtTier2 630614
Reported: 2010-06-03 10:48 EDT by Haim
Modified: 2014-01-12 19:46 EST
Fixed In Version: libvirt-0_8_1-17_el6
Doc Type: Bug Fix
Cloned As: 630614
Last Closed: 2010-11-11 09:48:53 EST
Attachments

gdb logs of both servers during the hang (40.22 KB, application/zip)
2010-06-03 10:48 EDT, Haim

Backport of libvirt upstream patch to fix concurrent p2p migrations (4.26 KB, patch)
2010-07-20 16:43 EDT, Chris Lalancette
Description Haim 2010-06-03 10:48:50 EDT
Created attachment 419407
gdb logs of both servers during the hang

Description of problem:

Trying out concurrent bi-directional migration causes libvirtd to hang.
This is reproducible and has happened 3 times so far.
The scenario goes as follows (a libvirt-API sketch of the same scenario appears after this list):

- 2 running hosts
- 2 running VMs
   == vm 1 is on host 1
   == vm 2 is on host 2
- using RHEV-M, perform a concurrent migration, meaning:
   == vm 1, which runs on host 1, is destined to run on host 2
   == vm 2, which runs on host 2, is destined to run on host 1
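
The same scenario can also be driven directly through the libvirt C API rather than through RHEV-M. Below is a minimal sketch of one direction; host and domain names are placeholders, and running the mirror image concurrently on the other host gives the bi-directional case. The PEER2PEER flag selects the migration mode implicated in this hang.

/* Sketch: peer-to-peer live migration of "vm1" to host2, one direction
 * of the scenario above. Names are placeholders. Build: gcc repro.c -lvirt */
#include <libvirt/libvirt.h>
#include <stdio.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn) {
        fprintf(stderr, "failed to connect to local libvirtd\n");
        return 1;
    }

    virDomainPtr dom = virDomainLookupByName(conn, "vm1");
    if (!dom) {
        fprintf(stderr, "domain vm1 not found\n");
        virConnectClose(conn);
        return 1;
    }

    /* VIR_MIGRATE_PEER2PEER makes the source libvirtd call into the
     * destination libvirtd itself, the interaction that deadlocked. */
    if (virDomainMigrateToURI(dom, "qemu+tcp://host2/system",
                              VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER,
                              NULL, 0) < 0)
        fprintf(stderr, "migration of vm1 failed\n");

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}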

Attached information:

1) libvirtd.log from both servers (white-vdse and pink-nehalem2)
2) gdb trace information (attached to the hung process)

Versions:
vdsm-4.9-7.el6.x86_64
libvirt-0.8.1-7.el6.x86_64
qemu-kvm-0.12.1.2-2.68.el6.x86_64
kernel-2.6.32-31.el6.x86_64
Comment 5 Jiri Denemark 2010-06-03 11:45:55 EDT
> Attached information:
>
> 1) libvirtd.log from both servers (white-vdse and pink-nehalem2)
> 2) gdb trace information (attached to the hung process)

It seems you meant to attach the libvirtd log but forgot to do so in the end. Could you please provide it?
Comment 6 Daniel Berrange 2010-06-03 12:09:07 EDT
The problem is this:

With PEER2PEER migration, the source libvirtd makes API calls into the destination libvirtd. Unfortunately, it holds the qemu driver lock and the virDomainObj lock while doing this. If the other libvirtd is simultaneously trying to call back into this libvirtd, it too holds its qemu driver and domain obj locks. Deadlock is ensured.

As we do when interacting with the QEMU monitor, we need to add calls to BeginJob/EndJob, plus an equivalent of EnterMonitor/LeaveMonitor, around every API call to the remote libvirtd. This ensures we release all locks while making those API calls.

This impacts the methods doPeer2PeerMigrate, doNonTunnelMigrate, doTunnelMigrate, and doTunnelSendAll.
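
A self-contained illustration of the lock-up just described, using plain pthreads rather than libvirt code (all names here are invented for the example). Each thread stands in for one libvirtd that holds its driver lock across a blocking call needing the peer's lock; the short sleep only makes the fatal interleaving deterministic:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t driver_lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t driver_lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the remote libvirtd handling our call: it must take its
 * own driver lock before it can do any work. */
static void remote_api_call(pthread_mutex_t *remote_lock)
{
    pthread_mutex_lock(remote_lock);     /* blocks forever: the peer holds it */
    pthread_mutex_unlock(remote_lock);
}

static void *migrate_a_to_b(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&driver_lock_a);  /* source holds its driver lock... */
    usleep(100000);                      /* let the peer grab its lock too */
    remote_api_call(&driver_lock_b);     /* ...while calling into the peer */
    pthread_mutex_unlock(&driver_lock_a);
    return NULL;
}

static void *migrate_b_to_a(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&driver_lock_b);
    usleep(100000);
    remote_api_call(&driver_lock_a);
    pthread_mutex_unlock(&driver_lock_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, migrate_a_to_b, NULL);
    pthread_create(&t2, NULL, migrate_b_to_a, NULL);
    pthread_join(t1, NULL);              /* never returns: classic AB-BA deadlock */
    pthread_join(t2, NULL);
    puts("unreachable");
    return 0;
}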
Comment 7 Chris Lalancette 2010-07-13 15:59:15 EDT
Looking at the code, I think we already have the BeginJob/EndJob calls (though it's not exactly trivial to follow them). When we come into qemudDomainMigratePerform(), we lock the driver, then lock the virDomainObj (via virDomainFindByUUID()), then call qemuDomainObjBeginJobWithDriver(). After that we call doPeer2PeerMigrate(), so we already have the job "started". However, qemuDomainObjBeginJobWithDriver() just starts the job; the caller is still holding the driver lock and the domain obj lock after it returns. What we need is exactly what Dan mentioned in the second part of his comment: an equivalent of EnterMonitor/LeaveMonitor, called something like "EnterRemoteLibvirt/LeaveRemoteLibvirt", that takes an additional reference (to keep things safe) and drops both the driver lock and the obj lock (a sketch of this pattern follows below).

Chris Lalancette
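
A hedged sketch of the Enter/Leave idea from the comment above; the types and function names are invented for illustration and are not libvirt's actual internals. The point is the order of operations: take an extra reference so the domain object cannot be freed, drop both locks for the duration of the remote call, then reacquire them in the canonical driver-then-domain order:

#include <pthread.h>

struct qemu_driver {
    pthread_mutex_t lock;
};

struct dom_obj {
    pthread_mutex_t lock;
    int refs;                 /* keeps the object alive while unlocked */
};

/* Caller holds both drv->lock and obj->lock when entering. */
void enter_remote_libvirt(struct qemu_driver *drv, struct dom_obj *obj)
{
    obj->refs++;                         /* extra ref so obj can't vanish    */
    pthread_mutex_unlock(&obj->lock);    /* release everything before the    */
    pthread_mutex_unlock(&drv->lock);    /* potentially blocking remote call */
}

/* Reacquire in the canonical driver-then-domain order, then drop the ref. */
void leave_remote_libvirt(struct qemu_driver *drv, struct dom_obj *obj)
{
    pthread_mutex_lock(&drv->lock);
    pthread_mutex_lock(&obj->lock);
    obj->refs--;
}

/* Hypothetical call site inside the migration path:
 *
 *     enter_remote_libvirt(driver, vm);
 *     ret = call_destination_libvirtd(...);   // no local locks held here
 *     leave_remote_libvirt(driver, vm);
 */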
Comment 8 Chris Lalancette 2010-07-16 09:41:49 EDT
Patch posted upstream:

https://www.redhat.com/archives/libvir-list/2010-July/msg00346.html

Chris Lalancette
Comment 9 Chris Lalancette 2010-07-20 13:38:15 EDT
Committed to upstream libvirt as:

f0c8e1cb3774d6f09e2681ca1988bf235a343007

I'm working on a RHEL-6 backport now.

Chris Lalancette
Comment 11 Chris Lalancette 2010-07-20 16:43:54 EDT
Created attachment 433259
Backport of libvirt upstream patch to fix concurrent p2p migrations
Comment 12 Dave Allan 2010-07-21 11:47:04 EDT
libvirt-0_8_1-17_el6 has been built in RHEL-6-candidate with the fix.

Dave
Comment 14 Haim 2010-07-30 04:05:00 EDT
Verified running with the following versions:

qemu-kvm-0.12.1.2-2.97.el6.x86_64
vdsm-4.9-11.el6.x86_64
kernel-2.6.32-52.el6.x86_64

Executed several concurrent bi-directional migrations; all VMs were successfully migrated to their destination hosts, and virsh didn't hang.

Fixed.
Comment 15 Haim 2010-07-30 04:06:38 EDT
And of course the libvirt version: libvirt-0.8.1-19.el6.x86_64
Comment 16 releng-rhel@redhat.com 2010-11-11 09:48:53 EST
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.
