Bug 1025699
| Summary: | libvirtd doesn't use a port from the migration range for NBD migration | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Ján Tomko <jtomko> |
| Component: | libvirt | Assignee: | Ján Tomko <jtomko> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.0 | CC: | acathrow, dyuan, ydu, zhwang, zpeng |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-1.1.1-12.el7 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-06-13 11:56:42 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Ján Tomko
2013-11-01 10:32:43 UTC
Now pushed upstream:
commit 3e1e16aa8d4238241a1806cb9bdb3b9ad60db777
Author: Ján Tomko <jtomko>
AuthorDate: 2013-10-31 13:19:29 +0100
Commit: Ján Tomko <jtomko>
CommitDate: 2013-11-01 12:07:12 +0100
Use a port from the migration range for NBD as well
Instead of using a port from the remote display range.
https://bugzilla.redhat.com/show_bug.cgi?id=1025699
git describe: v1.1.4-rc2-1-g3e1e16a
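To see the effect of this change at runtime, the listening sockets on the destination can be inspected while a --copy-storage-all migration is in flight. This is a small sketch that is not part of the original report: it assumes ss(8) from iproute is installed on the destination and is run as root, and the exact port numbers (taken from the logs quoted below) will vary between runs.

    # Run on the destination host while the storage migration is in progress.
    # Before the fix, the NBD server used by drive-mirror binds a port from the
    # remote display range (e.g. 5901); with the fix it binds a port from the
    # migration range (49152-49215), next to the incoming migration socket.
    ss -tlnp | grep qemu-kvm
    # expected with the fix (ports vary per run):
    #   LISTEN ... :::49152 ... users:(("qemu-kvm",...))   <- incoming migration
    #   LISTEN ... :::49153 ... users:(("qemu-kvm",...))   <- NBD server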
Verified this bug on libvirt-1.1.1-12.el7. The bug was first reproduced on libvirt-1.1.1-10.el7; the reproduction steps are as follows.

1. Start a guest on the source host whose image is on a local disk (not shared with the target host):

    source# virsh list
     Id    Name                           State
    ----------------------------------------------------
     15    rhel7                          running

2. Create an empty image on the target host with the same size, directory, and name as on the source host:

    target# qemu-img create /IMAGE_DIR/guest.img IMAGE_SIZE

3. Configure the firewall on the destination to reject incoming connections to the graphics port range, but accept connections to the migration range (these temporary rules can be removed after testing; see the cleanup sketch at the end of this report):

    target# iptables -A INPUT -d localhost -p tcp --dport 49215:65535 -j DROP
    target# iptables -A INPUT -d localhost -p tcp --dport 5900:49151 -j DROP
    target# iptables -A INPUT -d localhost -p tcp --dport 49152:49215 -j ACCEPT
    target# iptables -L
    Chain INPUT (policy ACCEPT)
    target  prot opt source    destination
    DROP    tcp  --  anywhere  tcp dpts:49215:65535
    DROP    tcp  --  anywhere  tcp dpts:rfb:49151
    ACCEPT  tcp  --  anywhere  tcp dpts:49152:49215

4. Migrate the guest to the target; the migration fails:

    source# virsh migrate --live rhel7 --copy-storage-all qemu+ssh://$targetip/system --verbose
    error: internal error: unable to execute QEMU command 'drive-mirror': Failed to connect to socket: Connection timed out

Checking the NBD debug logs on the target host shows that the NBD server chose a port from the remote display range (5900+) instead of the migration range:

    2013-11-12 09:44:31.651+0000: 12125: debug : qemuMonitorNBDServerStart:3740 : mon=0x7f8268002ed0 host=:: port=5901

Verified with libvirt-1.1.1-12.el7; the verification steps are as follows. Steps 1-3 are the same as the reproduction steps.

4. Migrate the guest to the target; the migration now succeeds:

    source# virsh migrate --live rhel7 --copy-storage-all qemu+ssh://$targetip/system --verbose
    Migration: [100 %]

5. Check the guest on the target:

    # virsh list --all
     Id    Name                           State
    ----------------------------------------------------
     2     rhel7                          running

    # ps aux | grep kvm
    qemu  14139 29.4  0.9 2674096 310820 ?  Sl  18:10  0:20 /usr/libexec/qemu-kvm -name rhel7 -S -machine pc-i440fx-rhel7.0.0,accel=kvm,usb=off -m 1024 ... -incoming tcp:[::]:49152 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6

Checking libvirtd.log on the target shows the NBD server now uses a port from the migration range:

    2013-11-12 09:40:56.494+0000: 11548: debug : qemuMonitorNBDServerStart:3740 : mon=0x7f526400e030 host=:: port=49153

Based on the above steps, this bug can be moved to VERIFIED.

This request was resolved in Red Hat Enterprise Linux 7.0. Contact your manager or support representative in case you have further questions about the request.
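Not part of the original report: once verification is done, the temporary firewall rules added in step 3 can be removed again. A minimal cleanup sketch, assuming the rules were appended to the INPUT chain exactly as shown above:

    # Run on the destination host; each -D takes the same rule specification
    # that was used with -A in step 3.
    target# iptables -D INPUT -d localhost -p tcp --dport 49215:65535 -j DROP
    target# iptables -D INPUT -d localhost -p tcp --dport 5900:49151 -j DROP
    target# iptables -D INPUT -d localhost -p tcp --dport 49152:49215 -j ACCEPT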