Bug 1374613 - Migration fails with "info migration reply was missing return status" when storage insufficient on target
Summary: Migration fails with "info migration reply was missing return status" when storage insufficient on target
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jiri Denemark
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 1330548
Depends On:
Blocks: Gluster-HC-1
 
Reported: 2016-09-09 08:29 UTC by Dan Zheng
Modified: 2016-11-03 18:54 UTC
CC: 11 users

Fixed In Version: libvirt-2.0.0-9.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-03 18:54:34 UTC
Target Upstream Version:
Embargoed:


Attachments
libvirtd log on source machine (3.69 MB, text/plain), 2016-09-09 08:35 UTC, Dan Zheng
vm log on target host (8.00 KB, text/plain), 2016-09-09 08:57 UTC, Dan Zheng
libvirtd log on target host (1.56 MB, text/plain), 2016-09-09 08:58 UTC, Dan Zheng
new qemu log on target (13.92 KB, text/plain), 2016-09-09 09:35 UTC, Dan Zheng
new libvirtd log on target (1.86 MB, text/plain), 2016-09-09 09:35 UTC, Dan Zheng
new libvirtd log on source (2.19 MB, text/plain), 2016-09-09 09:36 UTC, Dan Zheng
libvirtd log on source machine for scratch build (2.73 MB, text/plain), 2016-09-13 02:03 UTC, Dan Zheng
libvirtd log on target machine for scratch build (2.59 MB, text/plain), 2016-09-13 02:13 UTC, Dan Zheng
vm log on target host for scratch build (19.73 KB, text/plain), 2016-09-13 02:16 UTC, Dan Zheng


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1330548 0 unspecified CLOSED VMs failed to migrate when one of the node in the cluster is put into maintenance. 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHSA-2016:2577 0 normal SHIPPED_LIVE Moderate: libvirt security, bug fix, and enhancement update 2016-11-03 12:07:06 UTC

Internal Links: 1330548

Description Dan Zheng 2016-09-09 08:29:57 UTC
Description of problem:
When the target machine has insufficient storage, the migration fails with an unhelpful error message.

Version-Release number of selected component (if applicable):
libvirt-2.0.0-6.el7.x86_64
qemu-kvm-rhev-2.6.0-23.el7.x86_64
3.10.0-500.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Fill up storage on the target machine. The command below leaves about 5G available in the target file system.

#  dd if=/dev/zero of=/usr/share/avocado/data/avocado-vt/images/occupied bs=1G count=26
26+0 records in
26+0 records out
27917287424 bytes (28 GB) copied, 348.599 s, 80.1 MB/s
# df -k
Filesystem                         1K-blocks     Used Available Use% Mounted on
/dev/mapper/rhel-root               52403200 47705004   4698196  92% /
devtmpfs                             3885964        0   3885964   0% /dev
tmpfs                                3902124       88   3902036   1% /dev/shm
tmpfs                                3902124   204308   3697816   6% /run
tmpfs                                3902124        0   3902124   0% /sys/fs/cgroup
/dev/loop0                           1900368     6148   1772980   1% /srv/node/swiftloopback
/dev/mapper/rhel-home              427172056 51067188 376104868  12% /home
/dev/sda1                             508588   222032    286556  44% /boot
tmpfs                                 780428       16    780412   1% /run/user/42
tmpfs                                 780428        0    780428   0% /run/user/0
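The dd count in step 1 depends on how much space the target file system currently has free. A small sketch of deriving it from the df output (the helper name and the 5 GiB free-space target are illustrative, not part of the original reproducer):

```shell
# Hypothetical helper: given the "Available" column of `df -k` (in kB),
# print how many 1 GiB blocks dd should write so ~5 GiB remain free.
fill_blocks() {
    local avail_kb=$1
    local keep_kb=$((5 * 1024 * 1024))          # leave roughly 5 GiB available
    echo $(( (avail_kb - keep_kb) / (1024 * 1024) ))
}

# e.g.: dd if=/dev/zero of=/usr/share/avocado/data/avocado-vt/images/occupied \
#          bs=1G count=$(fill_blocks "$avail_kb")
```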


2. Create pool on target machine
# virsh pool-create-as --name precreation_pool --type dir --target /usr/share/avocado/data/avocado-vt/images 
Pool precreation_pool created


3. Run command on source machine
# virsh  migrate --live --copy-storage-all --domain avocado-vt-vm1 --desturi qemu+ssh://10.66.4.167/system
error: internal error: info migration reply was missing return status

 

Actual results:
Fails with an unhelpful message. See above.

Expected results:
Fails with a reasonable message, like

"error: cannot allocate 31457280000 bytes in file '/usr/share/avocado/data/avocado-vt/images/jeos-23-64.qcow2': No space left on device"

Additional info:

See attachments for logs.

Comment 1 Dan Zheng 2016-09-09 08:35:52 UTC
Created attachment 1199332 [details]
libvirtd log on source machine

Comment 2 Dan Zheng 2016-09-09 08:57:33 UTC
Created attachment 1199351 [details]
vm log on target host

Comment 3 Dan Zheng 2016-09-09 08:58:26 UTC
Created attachment 1199352 [details]
libvirtd log on target host

Comment 5 Dan Zheng 2016-09-09 09:35:23 UTC
Created attachment 1199361 [details]
new qemu log on target

Comment 6 Dan Zheng 2016-09-09 09:35:59 UTC
Created attachment 1199362 [details]
new libvirtd log on target

Comment 7 Dan Zheng 2016-09-09 09:36:35 UTC
Created attachment 1199364 [details]
new libvirtd log on source

Comment 8 Jiri Denemark 2016-09-09 12:37:14 UTC
Broken by

commit 2e7cea24355328102c40dd127329ddf47d55a3e2
Refs: v1.2.17-87-g2e7cea2
Author:     Jiri Denemark <jdenemar>
AuthorDate: Thu Jul 2 21:46:56 2015 +0200
Commit:     Jiri Denemark <jdenemar>
CommitDate: Fri Jul 10 11:47:13 2015 +0200

    qemu: Use error from Finish instead of "unexpectedly failed"

    When QEMU exits on destination during migration, the source reports
    either success (if the failure happened at the very end) or unhelpful
    "unexpectedly failed" error message. However, the Finish API called on
    the destination may report a real error so let's use it instead of the
    generic one.

    Signed-off-by: Jiri Denemark <jdenemar>

which was backported to 7.2 for bug 1090093.

Comment 9 Jiri Denemark 2016-09-09 12:44:52 UTC
If migration fails, this bug can cause the real error to be overwritten with useless (and wrong) "info migration reply was missing return status" message making the real error hard (or even impossible) to diagnose.

Comment 10 Jiri Denemark 2016-09-12 10:51:03 UTC
Patch sent upstream for review: https://www.redhat.com/archives/libvir-list/2016-September/msg00322.html

Comment 12 Jiri Denemark 2016-09-12 13:49:53 UTC
*** Bug 1330548 has been marked as a duplicate of this bug. ***

Comment 14 Dan Zheng 2016-09-13 02:00:30 UTC
Test packages of the scratch build:
libvirt-2.0.0-9.el7_rc.cdb55b1e.x86_64
qemu-kvm-rhev-2.6.0-23.el7.x86_64

Run on local host:
# virsh  migrate --live --copy-storage-all --domain avocado-vt-vm1 --desturi qemu+ssh://10.66.4.167/system
error: operation failed: migration of disk vda failed

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     avocado-vt-vm1                 running



local libvirtd.log:
2016-09-13 01:45:08.800+0000: 15275: info : qemuMonitorJSONIOProcessLine:206 : QEMU_MONITOR_RECV_EVENT: mon=0x7f3a68002ea0 event={"timestamp": {"seconds": 1473731108, "microseconds": 800316}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk0", "len": 10737418240, "offset": 4907335680, "speed": 9223372036853727232, "type": "mirror", "error": "No space left on device"}}
..
2016-09-13 01:45:08.801+0000: 15279: error : qemuMigrationDriveMirrorReady:1866 : operation failed: migration of disk vda failed

remote libvirtd.log:
2016-09-13 01:45:09.213+0000: 22963: error : virSecuritySELinuxSetFileconHelper:920 : unable to set security context 'system_u:object_r:usr_t:s0' on '/usr/share/avocado/data/avocado-vt/images/jeos-23-64.qcow2': No space left on device

See more details in attachment.

So the scratch build is ok.

Comment 15 Dan Zheng 2016-09-13 02:03:49 UTC
Created attachment 1200313 [details]
libvirtd log on source machine for scratch build

Comment 16 Dan Zheng 2016-09-13 02:13:19 UTC
Created attachment 1200314 [details]
libvirtd log on target machine for scratch build

Comment 17 Dan Zheng 2016-09-13 02:16:38 UTC
Created attachment 1200327 [details]
vm log on target host for scratch build

Comment 19 zhe peng 2016-09-18 06:59:50 UTC
I can reproduce this.
Verified with build:
libvirt-2.0.0-9.el7.x86_64

step:
1. Make insufficient storage on target machine
2. create pool both on source and target
3. Run 
# virsh migrate --live --copy-storage-all --domain avocado-vt-vm1 --desturi qemu+ssh://$target_ip/system --verbose
error: operation failed: migration of disk vda failed

get "No space left on device" in libvirtd.log.
move to verified.

Comment 21 errata-xmlrpc 2016-11-03 18:54:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html

