Bug 1486028

Summary: After a KVM VM is migrated to a hypervisor, a restart of libvirtd on that hypervisor will result in the VM being shut down.
Product: Red Hat Enterprise Linux 6
Reporter: Allie DeVolder <adevolder>
Component: libvirt
Assignee: Michal Privoznik <mprivozn>
Status: CLOSED NOTABUG
QA Contact: zhe peng <zpeng>
Severity: urgent
Docs Contact:
Priority: urgent
Version: 6.9
CC: adevolder, bhaubeck, dyuan, jdenemar, mkalinin, rbalakri, xuzhang, yalzhang, zpeng
Target Milestone: rc
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-09-04 08:20:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Allie DeVolder 2017-08-28 19:05:54 UTC
Description of problem:
When a VM is migrated to a hypervisor, and later libvirtd is restarted on that hypervisor, the migrated VM will stop approximately 2 minutes later.

Version-Release number of selected component (if applicable):
qemu-kvm-0.12.1.2-2.503.el6_9.3.x86_64
libvirt-0.10.2-62.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Migrate VM to hypervisor
2. Restart libvirtd on hypervisor
3. Wait about 2 minutes

Actual results:
VM is stopped

Expected results:
VM continues running

Additional info:

Comment 10 Marina Kalinin 2017-08-30 21:30:45 UTC
It is standalone KVM.

Comment 12 Michal Privoznik 2017-08-31 15:36:53 UTC
This is a very strange bug indeed. However, I suspect that libvirt is innocent here. I think the qemu process has gotten into some weird state (maybe it is stuck in a system call?). Anyway, we can take libvirt out of the picture if the following succeeds:

1) service libvirtd stop
2) socat unix:/var/lib/libvirt/qemu/domain-$ID-$DOMNAME/monitor.sock stdio

where $ID is the domain ID (a number) and $DOMNAME is the domain name. For instance, if the domain in question was "MyDomain" and had ID 4, the path would look like this:

/var/lib/libvirt/qemu/domain-4-MyDomain/monitor.sock

And when socat is run, it connects to the qemu monitor and qemu should greet us with something like this:

{"QMP": {"version": {"qemu": {"micro": 94, "minor": 9, "major": 2}, "package": " (v2.10.0-rc4)"}, "capabilities": []}}


(the actual numbers are going to be different, but that doesn't matter now). However, if socat just hangs the way libvirt does and the greeting message is not printed, then qemu is stuck on something. Can you please check which of the two cases you are hitting?
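
For reference, a minimal Python sketch of the same check, assuming the example socket path above and an arbitrary 5-second timeout; a responsive qemu prints the greeting, a stuck one times out:

#!/usr/bin/env python
# Connect to the qemu QMP monitor socket and report whether the greeting
# banner arrives, which tells a healthy qemu apart from a hung process.
import socket

# Hypothetical path -- substitute the real domain ID and name.
SOCK_PATH = "/var/lib/libvirt/qemu/domain-4-MyDomain/monitor.sock"

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.settimeout(5.0)  # assumed timeout; a healthy qemu answers immediately
try:
    s.connect(SOCK_PATH)
    greeting = s.recv(4096)
    if greeting:
        print("qemu responded: %s" % greeting)
    else:
        print("connection closed without a greeting")
except socket.timeout:
    print("no greeting within 5 seconds -- qemu appears stuck")
except socket.error as e:
    print("cannot connect: %s" % e)
finally:
    s.close()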

Comment 13 Michal Privoznik 2017-08-31 15:47:39 UTC
Also, from the sosreport it looks like /var/lib/libvirt is on an NFS mount. If something happens to the mount point, a hiccup of some sort, qemu is doomed to hang since its monitor is there, regardless of the domain configuration (I can see that the disks for their domains are purely local). I wonder if this is actually the case and neither libvirt nor qemu is buggy here.
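
A minimal sketch of how that could be confirmed on the host, assuming a standard Linux /proc/mounts layout (the path is the one suspected above):

#!/usr/bin/env python
# Report the mount point and filesystem type backing /var/lib/libvirt,
# to confirm whether the monitor sockets live on NFS.
PATH = "/var/lib/libvirt"

best = ("", "", "")  # (mount point, device, fstype) of the longest matching mount
with open("/proc/mounts") as f:
    for line in f:
        device, mnt, fstype = line.split()[:3]
        if PATH == mnt or PATH.startswith(mnt.rstrip("/") + "/"):
            if len(mnt) > len(best[0]):
                best = (mnt, device, fstype)

print("%s is on %s (%s), type %s" % (PATH, best[0], best[1], best[2]))
if best[2].startswith("nfs"):
    print("WARNING: the qemu monitor sockets are on NFS")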

Comment 15 ben haubeck 2017-08-31 19:58:19 UTC
Regarding the NFS storage:

The other domains are also running from the same filer mount and are not affected by the restart of libvirtd. Only the VMs that are live-migrated onto that host get killed.

ben

Comment 19 Jiri Denemark 2017-09-04 08:20:27 UTC
Well, having /var/lib/libvirt on a shared filesystem is not going to work with migrations. The libvirtd daemons and qemu-kvm processes on both hosts will be fighting with each other when creating or deleting files in /var/lib/libvirt. This is how migration works:

1. a domain is running on the source host, monitor socket is there and libvirt is connected to it
2. a user initiates migration to another host
3. a new qemu process is started (with virtual CPUs stopped) on the target host and creates its own monitor socket (overwriting the original one), and libvirtd connects to it
4. runtime state and memory pages are transferred from the source qemu process to the destination one (once this step finishes, the virtual CPUs on the source host are not running anymore)
5. virtual CPUs of the domain on the target host are started
6. the qemu process on the source host is killed and its monitor socket is removed (which actually removes the socket created by the destination host)

Once libvirtd on the destination host is restarted, it cannot connect to the monitor sockets of any previously migrated domains because they were all removed. Thus the domains will get killed.
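
A minimal sketch of how one could spot the problem before restarting libvirtd, assuming the per-domain directory layout quoted in comment 12 (/var/lib/libvirt/qemu/domain-$ID-$DOMNAME/monitor.sock); any running domain whose socket is already gone will be killed on restart:

#!/usr/bin/env python
# List per-domain directories under /var/lib/libvirt/qemu whose monitor.sock
# is missing -- libvirtd cannot reconnect to those domains after a restart.
import os

QEMU_DIR = "/var/lib/libvirt/qemu"

for entry in sorted(os.listdir(QEMU_DIR)):
    domain_dir = os.path.join(QEMU_DIR, entry)
    if not (entry.startswith("domain-") and os.path.isdir(domain_dir)):
        continue
    sock = os.path.join(domain_dir, "monitor.sock")
    status = "ok" if os.path.exists(sock) else "MISSING (domain will be killed on restart)"
    print("%-40s monitor.sock %s" % (entry, status))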

BTW, similar fights may also happen with libvirt's virtual networks, dnsmasq configurations, and possibly other stuff which is stored in /var/lib/libvirt. Only specific directories, such as /var/lib/libvirt/images, may be shared between hosts.

That said, this is not a bug, but a configuration error. It's possible that our Virtualization Deployment Guide may need to be updated in case it is unclear or misleading.