Bug 1641798

Summary: [RHHI] Hosted Engine migration fails in gluster storage domain [rhel-7.6.z]
Product: Red Hat Enterprise Linux 7    Reporter: Oneata Mircea Teodor <toneata>
Component: libvirt    Assignee: Michal Privoznik <mprivozn>
Status: CLOSED ERRATA    QA Contact: Han Han <hhan>
Severity: urgent    Docs Contact:
Priority: urgent
Version: 7.6    CC: bshetty, fjin, jdenemar, jherrman, jsuchane, lsurette, mprivozn, mtessun, pbalogh, rcyriac, sabose, salmy, sasundar, stirabos, xuzhang, yalzhang
Target Milestone: rc    Keywords: Regression, Upstream, ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: libvirt-4.5.0-10.el7_6.3    Doc Type: Bug Fix
Doc Text:
Prior to this update, migrating a virtual machine (VM) failed when the VM contained a symbolic link to a GlusterFS storage. With this update, the libvirt service establishes disk paths correctly, and VMs with symlinked GlusterFS storage can be migrated as expected.
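The failing condition above can be sketched locally: the guest disk path goes through a symlink, so resolving it yields a different (mount-side) path than the one recorded in the domain XML. A minimal shell sketch of that layout, using hypothetical `/tmp` paths in place of the real glusterfs mount:

```shell
# Sketch of the symlinked-storage layout from this bug
# (hypothetical local dirs; in the real setup /mnt is a glusterfs mount).
mkdir -p /tmp/bz1641798/mnt/glusterfs
ln -sfn /tmp/bz1641798/mnt/glusterfs /tmp/bz1641798/images

# The domain XML references the image via the symlinked path...
disk=/tmp/bz1641798/images/A.qcow2
touch "$disk"

# ...but canonicalizing it resolves to the mount-side path, which is
# the mismatch the fixed libvirt now handles during migration checks.
realpath "$disk"
```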
Story Points: ---
Clone Of: 1640465    Environment:
Last Closed: 2018-11-27 01:22:40 UTC    Type: ---
Regression: ---    Mount Type: ---
Documentation: ---    CRM:
Verified Versions:    Category: ---
oVirt Team: ---    RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---    Target Upstream Version:
Embargoed:
Bug Depends On: 1640465    
Bug Blocks: 1640467    

Description Oneata Mircea Teodor 2018-10-22 19:17:22 UTC
This bug has been copied from bug #1640465 and has been proposed to be backported to 7.6 z-stream (EUS).

Comment 8 Gobinda Das 2018-11-06 08:58:43 UTC
*** Bug 1633517 has been marked as a duplicate of this bug. ***

Comment 10 Han Han 2018-11-12 09:36:29 UTC
Verified on:
libvirt-4.5.0-10.virtcov.el7_6.3.x86_64
glusterfs-3.12.2-19.el7rhgs.x86_64
qemu-kvm-rhev-2.12.0-19.el7_6.2.x86_64


SC1: Test VM migration on a symlink of a shared glusterfs mount
1. Mount glusterfs on dst and src host:
# mount -t glusterfs 10.66.4.183:/gv0 /mnt/

2. Make symlinks on dst and src hosts:
# ln -s /mnt/glusterfs /var/lib/libvirt/images/glusterfs

3. Prepare a running VM whose image is in the symlinked dir:
# virsh domblklist rhel7                                                                                                                         
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/glusterfs/A.qcow2

# virsh -k0 -K0 migrate rhel7 qemu+ssh://root@fjin-5-190/system --verbose                   
Migration: [100 %]

Migrate back:
# virsh -k0 -K0 migrate rhel7 qemu+ssh://root.me/system --verbose
root.me's password: 
Migration: [100 %]

Check R/W on VM:
# echo xx>xx
# cat xx
xx
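Since mismatched symlink resolution across hosts was the failure mode here, a pre-migration sanity check could confirm that a path resolves to the same target on both hosts. A hedged sketch (hypothetical helper, not part of the original verification; run with identical arguments on source and destination):

```shell
# check_resolves: succeed iff path $1 canonicalizes to the same file as $2.
# Hypothetical pre-check helper; -m lets realpath work on not-yet-created paths.
check_resolves() {
  [ "$(realpath -m "$1")" = "$(realpath -m "$2")" ]
}

# Example with the SC1 paths, as they would look on each host:
# check_resolves /var/lib/libvirt/images/glusterfs/A.qcow2 /mnt/glusterfs/A.qcow2
```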


SC2: Test VM migration on the glusterfs mount directly
1. Mount glusterfs on dst and src host:
# mount -t glusterfs 10.66.4.183:/gv0 /mnt/

2. Prepare a running VM whose image is in the mounted dir:
# virsh domblklist rhel7                                                                                                                         
Target     Source
------------------------------------------------
vda         /mnt/glusterfs/A.qcow2

# virsh -k0 -K0 migrate rhel7 qemu+ssh://root@fjin-5-190/system --verbose                   
Migration: [100 %]

Migrate back:
# virsh -k0 -K0 migrate rhel7 qemu+ssh://root.me/system --verbose
root.me's password: 
Migration: [100 %]

Check R/W on VM:
# echo xx>xx
# cat xx
xx

Comment 11 Han Han 2018-11-12 09:42:04 UTC
Could you please try this new libvirt package on RHHI? Thanks

Comment 13 errata-xmlrpc 2018-11-27 01:22:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3669

Comment 14 SATHEESARAN 2018-11-27 06:19:24 UTC
(In reply to Han Han from comment #11)
> Could you please try this new libvirt package on RHHI? Thanks

This time, when I tried to migrate VMs from one host to another using RHV Manager, I hit a totally different issue. I will raise a new bug[1] for that issue.

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1653556