Bug 1265902 - Guest OS paused after migration
Summary: Guest OS paused after migration
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jiri Denemark
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1271145
Blocks:
 
Reported: 2015-09-24 06:55 UTC by Fangge Jin
Modified: 2016-06-29 11:18 UTC (History)
CC List: 14 users

Fixed In Version: libvirt-1.2.17-13.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-19 06:55:18 UTC
Target Upstream Version:
Embargoed:


Attachments
libvirtd log from target (2.66 MB, text/plain) - 2015-09-30 02:11 UTC, Fangge Jin
qemu log from target (6.44 KB, text/plain) - 2015-09-30 02:12 UTC, Fangge Jin
libvirt log from source host for comment 22 (7.78 MB, text/plain) - 2015-10-09 05:03 UTC, Fangge Jin
libvirt log from target host for comment 23 (2.94 MB, text/plain) - 2015-10-09 05:07 UTC, Fangge Jin
qemu log from source host for comment 23 (887 bytes, text/plain) - 2015-10-09 05:08 UTC, Fangge Jin
qemu log from target host for comment 23 (6.17 KB, text/plain) - 2015-10-09 05:09 UTC, Fangge Jin
libvirtd log for comment 27 (1.15 MB, application/x-gzip) - 2015-10-09 05:24 UTC, Fangge Jin


Links
Red Hat Product Errata RHBA-2015:2202 (normal, SHIPPED_LIVE): libvirt bug fix and enhancement update - last updated 2015-11-19 08:17:58 UTC

Description Fangge Jin 2015-09-24 06:55:40 UTC
Description of problem:
When migrating a guest without the --live option, the guest OS does not respond after migration, although "virsh list" indicates that the guest is running:
# virsh list
 Id    Name                           State
----------------------------------------------------
 15    test4                          running

Then, after running "virsh suspend test4" and "virsh resume test4", the guest OS responds again.
# virsh suspend test4
Domain test4 suspended

# virsh resume test4
Domain test4 resumed


Version-Release number of selected component (if applicable):
libvirt-1.2.17-9.el7.x86_64
qemu-kvm-rhev-2.3.0-24.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a running guest with machine type ='pc-i440fx-rhel7.1.0'

2. Do migration without --live:
# virsh migrate test4 qemu+ssh://10.66.85.205/system --verbose
root.85.205's password:
Migration: [100 %]

3. Check the guest status on target:
# virsh list
 Id    Name                           State
----------------------------------------------------
 15    test4                          running

But in fact, when trying to perform any operation in the guest, the guest OS does not respond.

Pinging the guest from the target host gets no reply:
# ping 192.168.122.244
PING 192.168.122.244 (192.168.122.244) 56(84) bytes of data.
^C
--- 192.168.122.244 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 5999ms


4. Suspend and resume the guest.
# virsh suspend test4
Domain test4 suspended

# virsh resume test4
Domain test4 resumed

Now the guest OS responds again when performing operations in it.

Pinging the guest from the target host now gets replies:
# ping 192.168.122.244
PING 192.168.122.244 (192.168.122.244) 56(84) bytes of data.
64 bytes from 192.168.122.244: icmp_seq=1 ttl=64 time=0.196 ms
64 bytes from 192.168.122.244: icmp_seq=2 ttl=64 time=0.123 ms
....

Actual results:
As described in steps 3-4.

Expected results:
The guest can work normally after migration.

Additional info:
1) When migrating with the --live option, there is no such issue.
2) Machine types "pc-i440fx-rhel7.1.0" and "pc-i440fx-rhel7.0.0" do not have this issue.
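
For reference, libvirt's view of the domain and QEMU's actual run state can be compared directly on the target host (a diagnostic sketch; the domain name test4 is reused from the steps above):

# virsh domstate test4 --reason
# virsh qemu-monitor-command test4 --pretty '{"execute":"query-status"}'

The first command reports libvirt's idea of the state (here: running), while the query-status reply comes straight from QEMU and shows "running": false when the guest CPUs were never actually started.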

Comment 2 Fangge Jin 2015-09-24 06:59:01 UTC
Sorry, the guest machine type in step 1 should be "pc-i440fx-rhel7.2.0".

Comment 4 Jiri Denemark 2015-09-29 07:43:39 UTC
Would you mind sharing libvirtd debug logs and qemu log from the destination host?

Comment 5 Fangge Jin 2015-09-30 02:11:52 UTC
Created attachment 1078524 [details]
libvirtd log from target

Comment 6 Fangge Jin 2015-09-30 02:12:31 UTC
Created attachment 1078525 [details]
qemu log from target

Comment 7 Jiri Denemark 2015-09-30 08:41:47 UTC
Looking at libvirtd.log, something weird happened in QEMU as {"execute":"cont","id":"libvirt-102"} returned success ({"return": {}, "id": "libvirt-102"}) but we didn't get any RESUME event.

Comment 8 Juan Quintela 2015-09-30 09:54:56 UTC
Hi

Do you have qemu.log from target?

My (wild) guess is that libvirt is not waiting until the migration has finished on the destination. It should wait for the "completed" event on both source and destination before issuing the "cont" command.

Comment 10 Jiri Denemark 2015-09-30 11:01:01 UTC
So Juan's wild guess is actually correct. Apparently QEMU doesn't actually start the CPUs after "cont" when migration is not complete yet or something like that. In any case, the bug is confirmed to be in libvirt as one can see what happened in the debug log from the destination host:

...
2015-09-30 02:05:38.239+0000: 5216: debug : qemuMonitorJSONCommandWithFd:291 : Send command '{"execute":"cont","id":"libvirt-102"}' for write with FD -1
...
2015-09-30 02:05:38.314+0000: 5211: info : qemuMonitorJSONIOProcessLine:201 : QEMU_MONITOR_RECV_EVENT: mon=0x7f74440194c0 event={"timestamp": {"seconds": 1443578738, "microseconds": 314085}, "event": "MIGRATION", "data": {"status": "completed"}}

In other words, we call "cont" 64 ms before the destination QEMU considers migration to be finished. We really need to start waiting for a completed MIGRATION event on the destination too.
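
For reference, the destination-side ordering can also be watched live with virsh's QEMU event monitor instead of digging through libvirtd.log (a diagnostic sketch; the domain name is reused from comment 0):

# virsh qemu-monitor-event test4 --loop

During an incoming migration this prints the MIGRATION status-change events; in the broken case described here the "completed" status arrives only after the "cont" reply, and no RESUME event ever follows.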

Comment 12 Jiri Denemark 2015-10-02 12:45:06 UTC
I wasn't able to reproduce the weird state of the domain, but looking at the logs, it's apparently pretty easy to hit this race and call "cont" before migration actually completes. And I was able to reliably hit that even with --live.

Comment 13 Jiri Denemark 2015-10-02 12:47:51 UTC
Note: I was using peer-to-peer (--p2p) migration to make the transition from the source libvirtd to the destination libvirtd faster.
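
For reference, an illustrative invocation of such a peer-to-peer migration, reusing the hosts from comment 0 (with --p2p the source libvirtd drives the whole migration and talks to the destination libvirtd directly):

# virsh migrate test4 qemu+ssh://10.66.85.205/system --p2p --verbose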

Comment 14 Jiri Denemark 2015-10-02 13:34:01 UTC
So after some more tries, I was able to reproduce the weird state too. If migration completes after we call "cont" but before QEMU processes this command, the domain will remain running; but if QEMU manages to process the "cont" command before finishing migration, we will think the domain is running while it is actually still paused.

One can distinguish these two cases in the libvirtd logs:

1) migration finished before processing "cont":

2015-10-02 13:30:58.820+0000: 1430623: debug : qemuMonitorJSONCommandWithFd:291 :
    Send command '{"execute":"cont","id":"libvirt-99"}' for write with FD -1
2015-10-02 13:30:58.828+0000: 1430618: info : qemuMonitorJSONIOProcessLine:201 :
    QEMU_MONITOR_RECV_EVENT: mon=0x7f90c000e410 event={"timestamp":
    {"seconds": 1443792658, "microseconds": 828065}, "event": "MIGRATION", "data":
    {"status": "completed"}}
2015-10-02 13:30:58.829+0000: 1430618: info : qemuMonitorJSONIOProcessLine:201 :
    QEMU_MONITOR_RECV_EVENT: mon=0x7f90c000e410 event={"timestamp":
    {"seconds": 1443792658, "microseconds": 828896}, "event": "RESUME"}
2015-10-02 13:30:58.830+0000: 1430618: info : qemuMonitorJSONIOProcessLine:206 :
    QEMU_MONITOR_RECV_REPLY: mon=0x7f90c000e410 reply={"return": {}, "id": "libvirt-99"}


2) "cont" was processed before migration finished:

2015-10-02 13:21:08.997+0000: 1430622: debug : qemuMonitorJSONCommandWithFd:291 :
    Send command '{"execute":"cont","id":"libvirt-99"}' for write with FD -1
2015-10-02 13:21:08.999+0000: 1430618: info : qemuMonitorJSONIOProcessLine:206 :
    QEMU_MONITOR_RECV_REPLY: mon=0x7f90d0006470 reply={"return": {}, "id": "libvirt-99"}
2015-10-02 13:21:09.007+0000: 1430618: info : qemuMonitorJSONIOProcessLine:201 :
    QEMU_MONITOR_RECV_EVENT: mon=0x7f90d0006470 event={"timestamp":
    {"seconds": 1443792069, "microseconds": 6818}, "event": "MIGRATION", "data":
    {"status": "completed"}}

Comment 15 Jiri Denemark 2015-10-05 14:22:29 UTC
Patches sent upstream for review: https://www.redhat.com/archives/libvir-list/2015-October/msg00123.html

Comment 16 Jiri Denemark 2015-10-05 15:10:24 UTC
While this bug was originally filed for non-live migration, it's possible to hit the race even with live migration.

Comment 17 Jiri Denemark 2015-10-06 14:07:58 UTC
After playing even more with this on RHEL with qemu-kvm-rhev, the behavior really depends on machine type. With older machine types QEMU actually resumes guest CPUs once migration finishes even though "cont" was earlier. For pc-i440fx-rhel7.2.0 machine type early "cont" is just ignored and guest CPUs stay paused.
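
For reference, the machine type a domain runs with can be read from its XML (an illustrative check; the domain name is reused from comment 0 and the output line is what a pc-i440fx-rhel7.2.0 guest would show):

# virsh dumpxml test4 | grep machine=
    <type arch='x86_64' machine='pc-i440fx-rhel7.2.0'>hvm</type>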

Comment 18 Dr. David Alan Gilbert 2015-10-06 14:56:59 UTC
(In reply to Jiri Denemark from comment #17)
> After playing even more with this on RHEL with qemu-kvm-rhev, the behavior
> really depends on machine type. With older machine types QEMU actually
> resumes guest CPUs once migration finishes even though "cont" was earlier.
> For pc-i440fx-rhel7.2.0 machine type early "cont" is just ignored and guest
> CPUs stay paused.

That doesn't surprise me - the new global state thing is only enabled on the newer machine types.

Comment 19 Jiri Denemark 2015-10-07 07:09:04 UTC
Fixed upstream by v1.2.20-37-gd27c66d..v1.2.20-41-gbe5347b:

commit d27c66dbaae33d9b463eab8a8a2b0ad8cebff06a
Author: Jiri Denemark <jdenemar>
Date:   Thu Oct 1 21:39:35 2015 +0200

    qemu: Always update migration times on destination
    
    Even if we are migrating a domain with VIR_MIGRATE_PAUSED flag set, we
    should still update the total time of the migration. Updating downtime
    doesn't hurt either, even though we don't actually start guest CPUs.
    
    Signed-off-by: Jiri Denemark <jdenemar>

commit b106c8b910ec49f1018146dd82b0f93833a66600
Author: Jiri Denemark <jdenemar>
Date:   Thu Oct 1 21:54:38 2015 +0200

    qemu: Copy completed migration stats only on success
    
    The destination host gets detailed statistics about the current
    migration from the source host via migration cookie and copies them to
    the domain object so that they can be queried using
    virDomainGetJobStats. However, we should only copy statistics to the
    domain object when migration finished successfully.
    
    Signed-off-by: Jiri Denemark <jdenemar>

commit 2af983f4c42e8f0af95fe1eb464e14ba52bb1145
Author: Jiri Denemark <jdenemar>
Date:   Mon Oct 5 14:36:15 2015 +0200

    qemu: Introduce flags in qemuMigrationCompleted
    
    The function already has two bool parameters and we will need to add a
    new one. Let's switch to flags to make the callers readable.
    
    Signed-off-by: Jiri Denemark <jdenemar>

commit 829c62b7a5268a639116bc1cea51d1c6598f4bf4
Author: Jiri Denemark <jdenemar>
Date:   Mon Oct 5 14:32:58 2015 +0200

    qemu: Make updating stats in qemuMigrationCheckJobStatus optional
    
    With new QEMU which supports migration events,
    qemuMigrationCheckJobStatus needs to explicitly query QEMU for migration
    statistics once migration is completed to make sure the caller sees
    up-to-date statistics with both old and new QEMU. However, some callers
    are not interested in the statistics at all and once we start waiting
    for a completed migration on the destination host too, checking the
    statistics would even fail. Let's push the decision whether to update
    the statistics or not to the caller.
    
    Signed-off-by: Jiri Denemark <jdenemar>

commit be5347bb7223cab0a9f0713ed9c4bf5738a7e0ea
Author: Jiri Denemark <jdenemar>
Date:   Fri Oct 2 12:08:26 2015 +0200

    qemu: Wait until destination QEMU consumes all migration data
    
    Even though QEMU on the source host reports completed migration and thus
    we move to the Finish phase, QEMU on the destination host may still be
    processing migration data. Thus before we can start guest CPUs on the
    destination, we have to wait for a completed migration event.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1265902
    
    Signed-off-by: Jiri Denemark <jdenemar>
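
The downstream build carrying these patches is libvirt-1.2.17-13.el7 (the "Fixed In Version" field above), so on a RHEL 7 host a quick check for the fix is the installed libvirt version (output shown for a host that already has the fixed build):

# rpm -q libvirt
libvirt-1.2.17-13.el7.x86_64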

Comment 22 Fangge Jin 2015-10-09 05:03:41 UTC
Created attachment 1081180 [details]
libvirt log from source host for comment 22

Comment 23 Fangge Jin 2015-10-09 05:06:00 UTC
I tested with the following builds and the results are the same as before (comment 1); please have a look. Logs are attached.

libvirt-1.2.17-13.el7.x86_64
qemu-kvm-rhev-2.3.0-29.el7.x86_64

Comment 24 Fangge Jin 2015-10-09 05:07:09 UTC
Created attachment 1081181 [details]
libvirt log from target host for comment 23

Comment 25 Fangge Jin 2015-10-09 05:08:14 UTC
Created attachment 1081182 [details]
qemu log from source host for comment 23

Comment 26 Fangge Jin 2015-10-09 05:09:00 UTC
Created attachment 1081183 [details]
qemu log from target host for comment 23

Comment 27 Fangge Jin 2015-10-09 05:22:06 UTC
Also, my guest paused unexpectedly when I tested managedsave. The behaviour is similar, but I don't know whether it's related to this bug, and it's not 100% reproducible. What I did, generally, is:

How reproducible:
< 10%

Versions:
libvirt-1.2.17-13.el7.x86_64

Steps:
1. Prepare a running guest.

2. Restart the libvirt-guests service (see the command sketch after these steps).

3. Check the state of the guest:
# virsh list
 Id    Name                           State
----------------------------------------------------
 2     rhel7.2                          running

But in fact, when trying to perform any operation in the guest, the guest OS does not respond.

4. Then, after "virsh suspend rhel7.2" and "virsh resume rhel7.2", the guest responds again.
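
A minimal sketch of step 2, assuming a systemd-based host such as RHEL 7 (whether libvirt-guests suspends the guests via managedsave or shuts them down when it stops is controlled by ON_SHUTDOWN in /etc/sysconfig/libvirt-guests):

# systemctl restart libvirt-guests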

Comment 28 Fangge Jin 2015-10-09 05:24:43 UTC
Created attachment 1081191 [details]
libvirtd log for comment 27

Comment 29 Jiri Denemark 2015-10-13 07:26:42 UTC
(In reply to JinFangge from comment #23)
> I tested with the following builds, the results are same as
> before(comment1), please have a look. Logs are attached.
> 
> libvirt-1.2.17-13.el7.x86_64
> qemu-kvm-rhev-2.3.0-29.el7.x86_64

Hmm, looking at the logs this seems to be slightly different and it's actually a proof that libvirt works correctly:

2015-10-09 03:42:35.895+0000: 29236: debug : qemuMigrationFinish:5719 : Waiting 
  for migration to complete
...
2015-10-09 03:42:35.898+0000: 29233: info : qemuMonitorJSONIOProcessLine:201 : 
  QEMU_MONITOR_RECV_EVENT: mon=0x7f80e00066c0 event={"timestamp": {"seconds": 
  1444362155, "microseconds": 898448}, "event": "MIGRATION", "data": {"status": 
  "completed"}}
...
2015-10-09 03:42:35.899+0000: 29236: info : qemuMonitorSend:1033 : 
  QEMU_MONITOR_SEND_MSG: mon=0x7f80e00066c0 
  msg={"execute":"cont","id":"libvirt-14"}
...
2015-10-09 03:42:35.899+0000: 29233: info : qemuMonitorJSONIOProcessLine:206 : 
  QEMU_MONITOR_RECV_REPLY: mon=0x7f80e00066c0 reply={"return": {}, "id": 
  "libvirt-14"}

That is, we properly wait for a migration completed event before calling "cont". However, even though QEMU said "cont" was executed successfully, it never sent any RESUME event and it apparently completely ignored the request to start guest CPUs. Please, file a bug for qemu-kvm-rhev?
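
For reference, the ordering discussed here can be pulled out of a libvirtd debug log with a simple filter (an illustrative command; adjust the log file name to your setup):

# grep -E '"execute":"cont"|"event": "MIGRATION"|"event": "RESUME"' libvirtd.log

In a good run the MIGRATION "completed" event appears before the "cont" command and a RESUME event follows; in the log quoted above the RESUME event never shows up at all.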

Comment 30 Jiri Denemark 2015-10-13 07:36:15 UTC
(In reply to JinFangge from comment #27)
> And my guest paused unexpectedly when I test managedsave, the behaviour is
> similar, I don't know if it's related to this bug. And it's not 100%
> reproducible.

I suspect this is similar to comment 23. Once we know what's the bug in QEMU causing this we will know if we need to open a new libvirt bug or not.

Comment 31 Fangge Jin 2015-10-13 09:16:49 UTC
(In reply to Jiri Denemark from comment #29)
> (In reply to JinFangge from comment #23)
> > I tested with the following builds, the results are same as
> > before(comment1), please have a look. Logs are attached.
> > 
> > libvirt-1.2.17-13.el7.x86_64
> > qemu-kvm-rhev-2.3.0-29.el7.x86_64
> 
> Hmm, looking at the logs this seems to be slightly different and it's
> actually a proof that libvirt works correctly:
> 
> 2015-10-09 03:42:35.895+0000: 29236: debug : qemuMigrationFinish:5719 :
> Waiting 
>   for migration to complete
> ...
> 2015-10-09 03:42:35.898+0000: 29233: info : qemuMonitorJSONIOProcessLine:201
> : 
>   QEMU_MONITOR_RECV_EVENT: mon=0x7f80e00066c0 event={"timestamp":
> {"seconds": 
>   1444362155, "microseconds": 898448}, "event": "MIGRATION", "data":
> {"status": 
>   "completed"}}
> ...
> 2015-10-09 03:42:35.899+0000: 29236: info : qemuMonitorSend:1033 : 
>   QEMU_MONITOR_SEND_MSG: mon=0x7f80e00066c0 
>   msg={"execute":"cont","id":"libvirt-14"}
> ...
> 2015-10-09 03:42:35.899+0000: 29233: info : qemuMonitorJSONIOProcessLine:206
> : 
>   QEMU_MONITOR_RECV_REPLY: mon=0x7f80e00066c0 reply={"return": {}, "id": 
>   "libvirt-14"}
> 
> That is, we properly wait for a migration completed event before calling
> "cont". However, even though QEMU said "cont" was executed successfully, it
> never sent any RESUME event and it apparently completely ignored the request
> to start guest CPUs. Please, file a bug for qemu-kvm-rhev?

Created new bug for qemu-kvm-rhev: Bug 1271145 - Guest OS paused after migration.

Comment 32 Jiri Denemark 2015-10-13 13:52:48 UTC
(In reply to JinFangge from comment #27)
> And my guest paused unexpectedly when I test managedsave, the behaviour is
> similar, I don't know if it's related to this bug. And it's not 100%
> reproducible.

So it appears we will need a patch for libvirt to fix this code path (in addition to the QEMU fix from bug 1271145). Could you file a new bug for that (since I can't reproduce the bug).

Comment 33 Fangge Jin 2015-10-14 06:09:13 UTC
(In reply to Jiri Denemark from comment #32)
> (In reply to JinFangge from comment #27)
> > And my guest paused unexpectedly when I test managedsave, the behaviour is
> > similar, I don't know if it's related to this bug. And it's not 100%
> > reproducible.
> 
> So it appears we will need a patch for libvirt to fix this code path (in
> addition to the QEMU fix from bug 1271145). Could you file a new bug for
> that (since I can't reproduce the bug).

I also can't reproduce it now after trying many times (with a qemu version that does not include the patch from bug 1271145). I will file a new bug if I hit this issue again.

Comment 34 Fangge Jin 2015-10-15 06:16:36 UTC
Verified with builds:
libvirt-1.2.17-13.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7.x86_64

Steps are the same as in comment 0; the guest works well after migration:
# virsh migrate rhel7.2 qemu+ssh://10.66.5.20/system --verbose
root.5.20's password: 
Migration: [100 %]

Get the domjobinfo on source:
(during migration)
# virsh domjobinfo rhel7.2 
Job type:         Unbounded   
Time elapsed:     3139         ms
Data processed:   272.340 MiB
Data remaining:   771.773 MiB
Data total:       1.102 GiB
Memory processed: 272.340 MiB
Memory remaining: 771.773 MiB
Memory total:     1.102 GiB
Memory bandwidth: 107.645 MiB/s
Constant pages:   21870       
Normal pages:     69535       
Normal data:      271.621 MiB
Expected downtime: 300          ms
Setup time:       20           ms

(after migration)
# virsh domjobinfo rhel7.2 --completed 
Job type:         Completed   
Time elapsed:     9546         ms
Time elapsed w/o network: 9535         ms
Data processed:   944.201 MiB
Data remaining:   0.000 B
Data total:       1.102 GiB
Memory processed: 944.201 MiB
Memory remaining: 0.000 B
Memory total:     1.102 GiB
Memory bandwidth: 105.563 MiB/s
Constant pages:   47843       
Normal pages:     241139      
Normal data:      941.949 MiB
Total downtime:   9035         ms
Downtime w/o network: 9024         ms
Setup time:       20           ms

Get domjobinfo on target:
(during migration)
# virsh domjobinfo rhel7.2 (virsh hangs until migration finishes)
Job type:     None

(after migration)
# virsh domjobinfo rhel7.2 --completed 
Job type:         Completed   
Time elapsed:     9546         ms
Time elapsed w/o network: 9535         ms
Data processed:   944.201 MiB
Data remaining:   0.000 B
Data total:       1.102 GiB
Memory processed: 944.201 MiB
Memory remaining: 0.000 B
Memory total:     1.102 GiB
Memory bandwidth: 105.563 MiB/s
Constant pages:   47843       
Normal pages:     241139      
Normal data:      941.949 MiB
Total downtime:   9035         ms
Downtime w/o network: 9024         ms
Setup time:       20           ms

Comment 35 Fangge Jin 2015-10-15 06:20:37 UTC
According to comment 34, the problem described in this bug has been fixed. Moving to VERIFIED.

Comment 37 errata-xmlrpc 2015-11-19 06:55:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

Comment 38 Jiri Denemark 2016-06-29 11:18:59 UTC
*** Bug 1340976 has been marked as a duplicate of this bug. ***

