Bug 927252
| Summary: | Pre-create storage on Live Migration | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Michal Privoznik <mprivozn> |
| Component: | libvirt | Assignee: | Michal Privoznik <mprivozn> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.0 | CC: | cwei, dyuan, eblake, fdinitto, juzhang, knoel, lmiksik, michen, mprivozn, mzhan, pbonzini, rbalakri, weizhan, ydu, zpeng |
| Target Milestone: | rc | Keywords: | FutureFeature, Upstream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-1.2.13-1.el7 | Doc Type: | Enhancement |
| Doc Text: |
Feature:
Automatically precreate storage on live migration
Reason:
When migrating a domain to a distant host, both qemu and libvirt try to preserve the guest's internal state as much as possible. For instance, all the disks that the guest sees on the source should be visible on the destination too.
For historical reasons, libvirt cared only about wrapping qemu's internal state migration to the destination and left storage copying as an exercise for the user.
However, as our storage drivers developed into more usable code, they gained new features too. We can use those features to pre-create the storage on the migration destination.
Result:
With this RFE implemented, users no longer need to take care of anything: libvirt automatically pre-creates the storage on the destination, copies over the disks, and migrates the domain. Moreover, users can even choose which disks should be copied (if for some reason they don't want some of them copied, e.g. an installation ISO medium).
| Story Points: | --- |
| Clone Of: | 869944 | Environment: | |
| Last Closed: | 2015-11-19 05:42:17 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 845675, 869944 | | |
| Bug Blocks: | 845674, 845676, 845679, 1056726, 1113727, 1122703 | | |
Description
Michal Privoznik
2013-03-25 13:43:05 UTC
I've been testing this feature in the Fedora 19 alpha. It's worth noting that it would be helpful to not only pre-create the storage, but also verify that the storage is large enough to store the image. For example, right now if I perform a 'touch myimage.img' on the destination machine and then issue a migration that would try to copy a remote image into that file, the migration returns successfully. In reality the migration was not successful though... really bad things happened with no indication that they occurred. After the migration, the virtual machine's disks are stripped out from underneath it because the disk was not actually migrated. I'd much rather the migration fail, or expand the destination image size in this instance if possible.

Right, that's why we need to pass the full disk sizes to the destination, to check the free space there and ideally pre-allocate it. However, this is not trivial, since libvirt deals with too many disk formats. In addition, IIRC qemu-img doesn't allow one to fully allocate a disk. It uses fallocate or an equivalent, which means any later write() to the disk can end up with ENOSPC.

My bad, I meant ftruncate. fallocate is the good one, as it guarantees that subsequent write()s won't fail with ENOSPC, whereas ftruncate doesn't.

There's an upstream bug requesting the same: bug 967233.

Patches proposed upstream:
https://www.redhat.com/archives/libvir-list/2013-September/msg00694.html

Yet another attempt:
https://www.redhat.com/archives/libvir-list/2014-November/msg01048.html

I've just pushed the patches upstream:
commit cf54c60699833b3791a5d0eb3eb5a1948c267f6b
Author: Michal Privoznik <mprivozn>
AuthorDate: Tue Nov 25 14:19:07 2014 +0100
Commit: Michal Privoznik <mprivozn>
CommitDate: Tue Dec 2 18:02:13 2014 +0100
qemu_migration: Precreate missing storage
Based on previous commit, we can now precreate missing volumes. While
digging out the functionality from the storage driver would be nicer, if
you've seen the code it's nearly impossible. So I'm going from the
other end:
1) For given disk target, disk path is looked up.
2) For the disk path, storage pool is looked up, a volume XML is
constructed and then passed to virStorageVolCreateXML() which has all
the knowledge how to create raw images, (encrypted) qcow(2) images,
etc.
One of the advantages of this approach is, we don't have to care about
image conversion - qemu does that for us. So for instance, users can
transform qcow2 into raw on migration (if the correct XML is passed to
the migration API).
Signed-off-by: Michal Privoznik <mprivozn>
commit e1466dc7fa0b8c2bc64f815b263386709e1e8267
Author: Michal Privoznik <mprivozn>
AuthorDate: Mon Nov 24 15:55:36 2014 +0100
Commit: Michal Privoznik <mprivozn>
CommitDate: Tue Dec 2 17:51:57 2014 +0100
qemu_migration: Send disk sizes to the other side
Up 'til now, users need to precreate non-shared storage on migration
themselves. This is not a very friendly requirement and we should do
something about it. In this patch, the migration cookie is extended,
so that <nbd/> section does not only contain NBD port, but info on
disks being migrated. This patch sends a list of pairs of:
<disk target; disk size>
to the destination. The actual storage allocation is left for next
commit.
Signed-off-by: Michal Privoznik <mprivozn>
commit a714533b2bd2de81b9319bb753e74cc9210ca647
Author: Michal Privoznik <mprivozn>
AuthorDate: Mon Dec 1 15:31:51 2014 +0100
Commit: Michal Privoznik <mprivozn>
CommitDate: Tue Dec 2 17:51:57 2014 +0100
qemuMonitorJSONBlockStatsUpdateCapacity: Don't skip disks
The function queries the block devices visible to qemu
('query-block') and parses the qemu's output. The info is
returned in a hash table which is expected to be pre-filled by
qemuMonitorJSONGetAllBlockStatsInfo(). However, in the next patch
we are not going to call the latter function at all, so we should
make the former function add devices into the hash table if not
found there.
Signed-off-by: Michal Privoznik <mprivozn>
commit 5ab746b83aec0fa6674250c1575984e116d45371
Author: Michal Privoznik <mprivozn>
AuthorDate: Wed Nov 26 17:13:00 2014 +0100
Commit: Michal Privoznik <mprivozn>
CommitDate: Tue Dec 2 17:51:57 2014 +0100
storage: Introduce storagePoolLookupByTargetPath
While this could be exposed as a public API, it's not done yet as
there's no demand for that yet. Anyway, this is just preparing
the environment for easier volume creation on the destination.
Signed-off-by: Michal Privoznik <mprivozn>
v1.2.10-253-gcf54c60
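For reference, the sketch below shows the same pre-creation idea through the public libvirt API. This is not the migration code itself: the connection URI, the pool name ("default"), and the flags value are illustrative assumptions, and the volume name and capacity are simply borrowed from the verification steps further down. Internally, qemu_migration looks the pool up by the disk's target path and builds the volume XML from the migration cookie rather than from hard-coded values.

```c
/* Minimal sketch, assuming a running libvirtd with a "default" pool.
 * Pre-creates a qcow2 volume much like the migration code does on the
 * destination, but with hard-coded illustrative values. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    int ret = EXIT_FAILURE;

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn) {
        fprintf(stderr, "failed to connect to the hypervisor\n");
        return EXIT_FAILURE;
    }

    /* The migration code looks the pool up by the disk's target path;
     * a plain lookup by name is used here for brevity. */
    virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "default");
    if (!pool) {
        fprintf(stderr, "storage pool not found\n");
        virConnectClose(conn);
        return EXIT_FAILURE;
    }

    /* virStorageVolCreateXML() knows how to create raw images,
     * (encrypted) qcow2 images, etc. based on the volume XML. */
    const char *volxml =
        "<volume>"
        "  <name>kvm-rhel7.1-x86_64-qcow2.img</name>"
        "  <capacity unit='bytes'>5242880000</capacity>"
        "  <target><format type='qcow2'/></target>"
        "</volume>";

    virStorageVolPtr vol = virStorageVolCreateXML(pool, volxml, 0);
    if (vol) {
        printf("volume pre-created\n");
        virStorageVolFree(vol);
        ret = EXIT_SUCCESS;
    } else {
        fprintf(stderr, "volume creation failed\n");
    }

    virStoragePoolFree(pool);
    virConnectClose(conn);
    return ret;
}
```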
Verified with builds:
libvirt-1.2.13-1.el7 and libvirt-1.2.15-2.el7.x86_64
qemu-kvm-rhev-2.3.0-1.el7.x86_64
Steps:
1: prepare two machines, source and target
2: install a guest on the source; the guest's storage exists only on the source machine
Check the image:
# qemu-img info kvm-rhel7.1-x86_64-qcow2.img
image: kvm-rhel7.1-x86_64-qcow2.img
file format: qcow2
virtual size: 4.9G (5242880000 bytes)
disk size: 1.1G
cluster_size: 65536
Format specific information:
compat: 0.10
refcount bits: 16
3: do a live migration to the target machine
# virsh migrate rhel7 --live qemu+ssh://$target_ip/system --copy-storage-all --verbose
Migration: [100 %]
# virsh migrate rhel7 --live qemu+ssh://$target_ip/system --copy-storage-inc --verbose
Migration: [100 %]
4: check the guest image on the target
# qemu-img info kvm-rhel7.1-x86_64-qcow2.img
image: kvm-rhel7.1-x86_64-qcow2.img
file format: qcow2
virtual size: 4.9G (5242880000 bytes)
disk size: 1.1G
cluster_size: 65536
Format specific information:
compat: 0.10
refcount bits: 16
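As an aside, the same check can be done through the libvirt storage API instead of qemu-img. The sketch below is illustrative only: it assumes the image sits in a libvirt storage pool under /var/lib/libvirt/images, and maps capacity to the virtual size and allocation to the disk size reported above.

```c
/* Minimal sketch: query a volume's capacity/allocation via libvirt,
 * assuming the image path above lives in a managed storage pool. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return EXIT_FAILURE;

    virStorageVolPtr vol = virStorageVolLookupByPath(conn,
        "/var/lib/libvirt/images/kvm-rhel7.1-x86_64-qcow2.img");
    if (!vol) {
        fprintf(stderr, "volume not found\n");
        virConnectClose(conn);
        return EXIT_FAILURE;
    }

    virStorageVolInfo info;
    if (virStorageVolGetInfo(vol, &info) == 0)
        printf("capacity: %llu bytes, allocation: %llu bytes\n",
               info.capacity, info.allocation);

    virStorageVolFree(vol);
    virConnectClose(conn);
    return EXIT_SUCCESS;
}
```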
But I found an issue: if I destroy the target guest, remove the guest images, and then do the migration again, I get an error. I will file a new bug to track that issue.
This bug is changed to VERIFIED.
Hi Michal:
I want to verify bug https://bugzilla.redhat.com/show_bug.cgi?id=1210352 and I hit the pre-create storage issue again.
libvirt build: libvirt-1.2.17-2.el7.x86_64
qemu-kvm-rhev: qemu-kvm-rhev-2.3.0-12.el7.x86_64
When doing the migration, I always get an error (see the scenarios below):
error: Cannot access storage file '/var/lib/libvirt/images/kvm-rhel6.6-x86_64-qcow2v2.img' (as uid:107, gid:107): No such file or directory
But if I use libvirt-1.2.15-2.el7.x86_64, the migration works well. It seems some patch from bug 1210352 causes this.

First scenario: only one non-shared disk configured in the guest
...
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/kvm-rhel6.6-x86_64-qcow2v2.img'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
...
# virsh migrate rhel6 --live qemu+ssh://$target_ip/system --copy-storage-all --verbose
error: Cannot access storage file '/var/lib/libvirt/images/kvm-rhel6.6-x86_64-qcow2v2.img' (as uid:107, gid:107): No such file or directory

Second scenario: three non-shared disks configured in the guest
...
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/kvm-rhel6.6-x86_64-qcow2v2.img'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/1.img'/>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/2.img'/>
  <target dev='vdc' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</disk>
...
Then do the migration:
# virsh migrate rhel6 --live qemu+ssh://$target_ip/system --copy-storage-all --migrate-disks vda,vdc --verbose
error: Cannot access storage file '/var/lib/libvirt/images/2.img' (as uid:107, gid:107): No such file or directory

Please help confirm this; many thanks in advance.

I tried with builds:
libvirt-1.2.17-7.el7.x86_64
qemu-kvm-rhev-2.3.0-22.el7.x86_64
The first scenario can't be reproduced, and the second scenario is related to --migrate-disks, so I will follow up in bug 1210352. Removing needinfo; this bug is verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHBA-2015-2202.html
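As a closing note, the --migrate-disks selection used in the second scenario above also has an API equivalent. The following is a minimal sketch, assuming libvirt >= 1.2.17, a domain named 'rhel6', and a made-up destination host; it roughly corresponds to `virsh migrate rhel6 --live --p2p --copy-storage-all --migrate-disks vda,vdc` (peer-to-peer mode is used here for brevity).

```c
/* Minimal sketch (not from the bug report): live-migrate a domain with
 * copy-storage, copying only the vda and vdc disks. The domain name and
 * destination URI are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    int ret = EXIT_FAILURE;
    virTypedParameterPtr params = NULL;
    int nparams = 0, maxparams = 0;
    virDomainPtr dom = NULL;
    /* Flags roughly equivalent to "--live --p2p --copy-storage-all". */
    unsigned int flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER |
                         VIR_MIGRATE_NON_SHARED_DISK;

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return EXIT_FAILURE;

    dom = virDomainLookupByName(conn, "rhel6");
    if (!dom)
        goto cleanup;

    /* Only vda and vdc are copied; other non-shared disks are skipped. */
    if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                VIR_MIGRATE_PARAM_MIGRATE_DISKS, "vda") < 0 ||
        virTypedParamsAddString(&params, &nparams, &maxparams,
                                VIR_MIGRATE_PARAM_MIGRATE_DISKS, "vdc") < 0)
        goto cleanup;

    if (virDomainMigrateToURI3(dom, "qemu+ssh://target.example.com/system",
                               params, nparams, flags) == 0)
        ret = EXIT_SUCCESS;
    else
        fprintf(stderr, "migration failed\n");

 cleanup:
    virTypedParamsFree(params, nparams);
    if (dom)
        virDomainFree(dom);
    virConnectClose(conn);
    return ret;
}
```

With this in place, libvirt pre-creates the selected disks on the destination before the copy starts, which is exactly the behavior this RFE added.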