Bug 848343 - KVM migration works once, fails second time
Summary: KVM migration works once, fails second time
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: fuse
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Nagaprasad Sathyanarayana
QA Contact: Gowrishankar Rajaiyan
URL:
Whiteboard:
Depends On: GLUSTER-3852
Blocks:
 
Reported: 2012-08-15 09:59 UTC by Vidya Sakar
Modified: 2016-02-18 00:19 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.3.0.5rhs-36, glusterfs-3.3.0virt1-8
Doc Type: Bug Fix
Doc Text:
Clone Of: GLUSTER-3852
Environment:
Last Closed: 2015-08-10 07:47:57 UTC
Embargoed:


Attachments

Description Vidya Sakar 2012-08-15 09:59:10 UTC
+++ This bug was initially created as a clone of Bug #765584 +++

Forgot to mention:

Running CentOS 6 with all current updates and Gluster 3.3 beta 2. The replicate volume bricks are on hyper1 and hyper2 (the KVM hosts). Migration works fine on a GFS2 volume on the same hosts.

--- Additional comment from stephan.ellis on 2011-12-07 09:20:10 EST ---

Created attachment 724

--- Additional comment from stephan.ellis on 2011-12-07 09:20:39 EST ---

Created attachment 725

--- Additional comment from stephan.ellis on 2011-12-07 09:21:30 EST ---

Using the same volume mounted via NFS, migration works flawlessly.
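
For comparison, the two mount variants would look roughly like this (a sketch; the volume name gvol is an assumption, as it is not given in this report):

# Native FUSE client mount (the failing case)
mount -t glusterfs hyper1:/gvol /mnt/gstor

# NFS mount of the same volume (the working case); Gluster 3.3 exports NFSv3
mount -t nfs -o vers=3,tcp hyper1:/gvol /mnt/gstor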

--- Additional comment from stephan.ellis on 2011-12-07 11:54:43 EST ---

I have two libvirt-based KVM hosts. I've set up a Gluster replicate volume between them, with XFS as the underlying filesystem for the bricks. The volume is mounted on both hosts using the native FUSE client. Migrating a VM from hyper1 to hyper2 works, but the subsequent migration from hyper2 back to hyper1 fails. The same holds in reverse: if the first migration is hyper2 -> hyper1, the second migration hyper1 -> hyper2 fails. A rough sketch of the setup and failing sequence follows.
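
A sketch of the reproduction, with assumed volume and brick names (only hyper1, hyper2, the ar-lab guest, and the /mnt/gstor mount point appear in the report):

# On hyper1: create and start a two-way replicate volume on XFS bricks
gluster volume create gvol replica 2 hyper1:/bricks/b1 hyper2:/bricks/b1
gluster volume start gvol

# On both hosts: mount the volume with the native FUSE client
mount -t glusterfs localhost:/gvol /mnt/gstor

# The first live migration works; the reverse direction then fails
virsh migrate --live ar-lab qemu+ssh://hyper2/system    # succeeds
virsh migrate --live ar-lab qemu+ssh://hyper1/system    # fails: guest unexpectedly quit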

Relevant log lines from hyper1 when migrating a second time:

Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.063: info : qemudDispatchServer:1398 : Turn off polkit auth for privileged client 18310
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.094: info : qemuSecurityDACSetOwnership:40 : Setting DAC user and group on '/mnt/gstor/ar-lab.img' to '107:107'
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.114: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.220: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.225: info : brProbeVnetHdr:449 : Enabling IFF_VNET_HDR
Dec  7 10:53:00 hyper1 kernel: device vnet1 entered promiscuous mode
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.229: info : brProbeVnetHdr:449 : Enabling IFF_VNET_HDR
Dec  7 10:53:00 hyper1 kernel: vsCore: port 2(vnet1) entering forwarding state
Dec  7 10:53:00 hyper1 kernel: device vnet2 entered promiscuous mode
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.233: info : brProbeVnetHdr:449 : Enabling IFF_VNET_HDR
Dec  7 10:53:00 hyper1 kernel: vsPrivate: port 4(vnet2) entering forwarding state
Dec  7 10:53:00 hyper1 kernel: device vnet5 entered promiscuous mode
Dec  7 10:53:00 hyper1 kernel: vsCluster: port 2(vnet5) entering forwarding state
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.254: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.310: info : udevGetDeviceProperty:116 : udev reports device 'vnet2' does not have property 'DRIVER'
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.310: info : udevGetDeviceProperty:116 : udev reports device 'vnet2' does not have property 'PCI_CLASS'
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.317: info : udevGetDeviceProperty:116 : udev reports device 'vnet1' does not have property 'DRIVER'
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.317: info : udevGetDeviceProperty:116 : udev reports device 'vnet1' does not have property 'PCI_CLASS'
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.322: info : udevGetDeviceProperty:116 : udev reports device 'vnet5' does not have property 'DRIVER'
Dec  7 10:53:00 hyper1 libvirtd: 10:53:00.322: info : udevGetDeviceProperty:116 : udev reports device 'vnet5' does not have property 'PCI_CLASS'
Dec  7 10:53:02 hyper1 libvirtd: 10:53:02.091: info : qemuSecurityDACRestoreSecurityFileLabel:80 : Restoring DAC user and group on '/mnt/gstor/ar-lab.img'
Dec  7 10:53:02 hyper1 libvirtd: 10:53:02.091: info : qemuSecurityDACSetOwnership:40 : Setting DAC user and group on '/mnt/gstor/ar-lab.img' to '0:0'
Dec  7 10:53:02 hyper1 kernel: vsCore: port 2(vnet1) entering disabled state
Dec  7 10:53:02 hyper1 kernel: device vnet1 left promiscuous mode
Dec  7 10:53:02 hyper1 kernel: vsCore: port 2(vnet1) entering disabled state
Dec  7 10:53:02 hyper1 kernel: vsPrivate: port 4(vnet2) entering disabled state
Dec  7 10:53:02 hyper1 kernel: device vnet2 left promiscuous mode
Dec  7 10:53:02 hyper1 kernel: vsPrivate: port 4(vnet2) entering disabled state
Dec  7 10:53:02 hyper1 kernel: vsCluster: port 2(vnet5) entering disabled state
Dec  7 10:53:02 hyper1 kernel: device vnet5 left promiscuous mode
Dec  7 10:53:02 hyper1 kernel: vsCluster: port 2(vnet5) entering disabled state
Dec  7 10:53:02 hyper1 libvirtd: 10:53:02.346: error : qemudDomainMigrateFinish2:11763 : internal error guest unexpectedly quit

--- Additional comment from amarts on 2011-12-07 19:28:31 EST ---

I suspect it to be an issue with O_DIRECT in open(). Can we get 'strace -f -v' output?
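
A quick way to test the O_DIRECT theory without a full trace (a sketch; it assumes the /mnt/gstor FUSE mount from the logs above, and that qemu opens the image with cache=none, which implies O_DIRECT):

# On a FUSE mount without O_DIRECT support, the open fails with EINVAL
dd if=/dev/zero of=/mnt/gstor/odirect_test bs=4096 count=1 oflag=direct
# expected failure: dd: opening `/mnt/gstor/odirect_test': Invalid argument
rm -f /mnt/gstor/odirect_test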

--- Additional comment from stephan.ellis on 2011-12-08 07:24:06 EST ---

Created attachment 727


The command was:

strace -f -v -o migtrace.txt virsh migrate --live ar-lab qemu+ssh://hyper1/system
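
To pull the interesting syscalls out of a trace this size, filtering on the flag and the image path is usually enough (migtrace.txt and ar-lab.img as in the command above):

grep O_DIRECT migtrace.txt
grep ar-lab.img migtrace.txt | grep -E 'open|EINVAL'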

Comment 2 Amar Tumballi 2012-08-23 06:45:28 UTC
This bug is not seen in the current master branch (which will be branched as RHS 2.1.0 soon). To consider it for fixing, we want to make sure this bug still exists on RHS servers. If it cannot be reproduced, we would like to close this.

Comment 3 shishir gowda 2012-09-13 07:33:44 UTC
With RHS servers and RHEV-M (latest QA build), migration back and forth succeeds without any errors. Please re-open if the bug is hit again.

Comment 4 Amar Tumballi 2012-11-27 10:54:57 UTC
Shanks, can you please run one round of tests and confirm it works, so we can also close the upstream bug?

Comment 5 Anush Shetty 2012-11-27 12:25:27 UTC
Verified with glusterfs-3.3.0virt1-8

