Bug 599590

Summary: [vdsm] [libvirt intg] libvirtd hangs during concurrent bi-directional migration
Product: Red Hat Enterprise Linux 6
Reporter: Haim <hateya>
Component: libvirt
Assignee: Chris Lalancette <clalance>
Status: CLOSED CURRENTRELEASE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: high
Docs Contact:
Priority: low
Version: 6.0
CC: bazulay, berrange, dallan, danken, hateya, hbrock, iheim, jdenemar, mgoldboi, mjenner, xen-maint, yeylon, ykaul
Target Milestone: rc
Keywords: TestBlocker
Target Release: ---
Hardware: All
OS: Linux
Whiteboard: vdsm & libvirt integration
Fixed In Version: libvirt-0.8.1-17.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 630614 (view as bug list)
Environment:
Last Closed: 2010-11-11 14:48:53 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 581275, 609432, 630614
Attachments:
- gdb logs of both servers during the hang (no flags)
- Backport of libvirt upstream patch to fix concurrent p2p migrations (no flags)

Description Haim 2010-06-03 14:48:50 UTC
Created attachment 419407 [details]
gdb logs of both servers during the hang

Description of problem:

Trying out concurrent bi-directional migration causes libvirtd to hang.
This is reproducible and has happened 3 times so far.
The scenario goes as follows:

- 2 running hosts
- 2 running VMs
   == VM 1 is on host 1
   == VM 2 is on host 2
- using RHEV-M, perform a concurrent migration, meaning:
   == VM 1, which runs on host 1, is destined to run on host 2
   == VM 2, which runs on host 2, is destined to run on host 1

Attached information:

1) libvirtd.log from both servers (white-vdse and pink-nehalem2)
2) gdb trace information (attached to the hung process)

vdsm-4.9-7.el6.x86_64
libvirt-0.8.1-7.el6.x86_64
qemu-kvm-0.12.1.2-2.68.el6.x86_64
2.6.32-31.el6.x86_64

Comment 5 Jiri Denemark 2010-06-03 15:45:55 UTC
> attached information: 
> 
> 1) libvirtd.log on both servers (white-vdse and pink-nehalem2)
> 2) gdb trace information (attached to hang process

It seems you wanted to attach the libvirtd log but forgot to do so in the end. Could you please provide it?

Comment 6 Daniel Berrangé 2010-06-03 16:09:07 UTC
The problem is this:

With PEER2PEER migration, the source libvirtd makes API calls into the destination libvirtd. Unfortunately, it holds the qemu driver lock and the virDomainObj lock while doing this. If the other libvirtd is simultaneously trying to call back into this libvirtd, it also holds its own qemu driver and domain object locks. Deadlock is ensured.

As we do when interacting with the QEMU monitor, we need to add BeginJob/EndJob calls, plus an equivalent of EnterMonitor/LeaveMonitor, around every API call to the remote libvirtd. This ensures we release all locks while making those calls.

This impacts the methods doPeer2PeerMigrate, doNonTunnelMigrate, doTunnelMigrate and doTunnelSendAll.

Comment 7 Chris Lalancette 2010-07-13 19:59:15 UTC
Looking at the code, I think we already have the BeginJob/EndJob calls (though following them is not exactly trivial). When we come into qemudDomainMigratePerform(), we lock the driver, then lock the virDomainObj (via virDomainFindByUUID()), then call qemuDomainObjBeginJobWithDriver(). After that we call doPeer2PeerMigrate(), so the job is already "started". However, qemuDomainObjBeginJobWithDriver() just starts the job; we are still holding the driver lock and the domain object lock after it exits. What we need is exactly what Dan mentioned in the second part of his comment: an equivalent of EnterMonitor/LeaveMonitor, called something like "EnterRemoteLibvirt/LeaveRemoteLibvirt", that takes an additional reference (to keep things safe) and drops both the driver lock and the object lock.

Chris Lalancette

Comment 8 Chris Lalancette 2010-07-16 13:41:49 UTC
Patch posted upstream:

https://www.redhat.com/archives/libvir-list/2010-July/msg00346.html

Chris Lalancette

Comment 9 Chris Lalancette 2010-07-20 17:38:15 UTC
Committed to upstream libvirt as:

f0c8e1cb3774d6f09e2681ca1988bf235a343007

I'm working on a RHEL-6 backport now.

Chris Lalancette

Comment 11 Chris Lalancette 2010-07-20 20:43:54 UTC
Created attachment 433259 [details]
Backport of libvirt upstream patch to fix concurrent p2p migrations

Comment 12 Dave Allan 2010-07-21 15:47:04 UTC
libvirt-0.8.1-17.el6 has been built in RHEL-6-candidate with the fix.

Dave

Comment 14 Haim 2010-07-30 08:05:00 UTC
Verified with the following versions:

qemu-kvm-0.12.1.2-2.97.el6.x86_64
vdsm-4.9-11.el6.x86_64
2.6.32-52.el6.x86_64

Executed several concurrent bi-directional migrations; all VMs were successfully migrated to their destination hosts, and virsh didn't hang.

Fixed.

Comment 15 Haim 2010-07-30 08:06:38 UTC
And, of course, the libvirt version: libvirt-0.8.1-19.el6.x86_64

Comment 16 releng-rhel@redhat.com 2010-11-11 14:48:53 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.