Bug 1996609 - QEMU hits a core dump when dumping guest memory during live migration
Summary: QEMU hits a core dump when dumping guest memory during live migration
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: 9.0
Assignee: Peter Xu
QA Contact: Li Xiaohui
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-23 10:16 UTC by Li Xiaohui
Modified: 2022-05-17 12:25 UTC (History)
10 users

Fixed In Version: qemu-kvm-6.1.0-7.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-17 12:23:26 UTC
Type: Bug
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Gitlab redhat/centos-stream/src qemu-kvm merge_requests 51 0 None None None 2021-11-15 09:48:35 UTC
Red Hat Issue Tracker RHELPLAN-94414 0 None None None 2021-08-23 10:18:01 UTC
Red Hat Product Errata RHBA-2022:2307 0 None None None 2022-05-17 12:24:43 UTC

Description Li Xiaohui 2021-08-23 10:16:25 UTC
Description of problem:
QEMU hits a core dump when dumping guest memory during live migration.


Version-Release number of selected component (if applicable):
hosts info: kernel-5.14.0-0.rc4.35.el9.x86_64 & qemu-kvm-6.0.0-12.el9.x86_64
guest info: kernel-5.13.0-0.rc7.51.el9.x86_64


How reproducible:
Not always; roughly 67% of attempts.


Steps to Reproduce:
1. On the source host, boot a VM with "-device vmcoreinfo" using the QEMU CLI [1];
2. On the destination host, boot a VM with the same QEMU CLI [1], appending '-incoming defer';
3. Migrate the VM from the source to the destination;
dst qmp:
{"execute": "migrate-incoming", "arguments": {"uri": "tcp:[::]:4000"}, "id": "ApNFq8XX"}
src qmp:
{"execute": "migrate", "arguments": {"uri": "tcp:$dst_host_ip:4000"}, "id": "XhomThrr"}
4. Immediately after step 3, dump guest memory, so that the dump happens while migration is still active:
{"execute": "dump-guest-memory", "arguments": {"paging": false, "protocol": "file:/home/dump.elf", "format": "elf"}, "id": "1Xlxq06O"}
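The three QMP commands above can be composed programmatically. A minimal sketch of the command construction (the `qmp_cmd` helper is hypothetical; `$dst_host_ip` is kept as the placeholder from step 3 and would be substituted before sending over the QMP socket):

```python
import json

def qmp_cmd(execute, arguments=None, cmd_id=None):
    """Build a QMP command dict like the ones used in steps 3-4."""
    cmd = {"execute": execute}
    if arguments is not None:
        cmd["arguments"] = arguments
    if cmd_id is not None:
        cmd["id"] = cmd_id
    return cmd

# Destination side: start listening for the incoming migration.
incoming = qmp_cmd("migrate-incoming", {"uri": "tcp:[::]:4000"}, "ApNFq8XX")

# Source side: start the migration, then immediately request the dump.
migrate = qmp_cmd("migrate", {"uri": "tcp:$dst_host_ip:4000"}, "XhomThrr")
dump = qmp_cmd("dump-guest-memory",
               {"paging": False,
                "protocol": "file:/home/dump.elf",
                "format": "elf"},
               "1Xlxq06O")

# Each command is sent as one JSON object over the QMP socket.
for c in (incoming, migrate, dump):
    print(json.dumps(c))
```
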


Actual results:
The dump file is generated, but after waiting several minutes, QEMU dumps core following step 4:
[root@ibm-x3250m6-07 home]# coredumpctl info 10417
           PID: 10417 (qemu-kvm)
           UID: 0 (root)
           GID: 0 (root)
        Signal: 6 (ABRT)
     Timestamp: Mon 2021-08-23 07:39:43 UTC (6min ago)
  Command Line: /usr/libexec/qemu-kvm -name mouse-vm -sandbox off -machine q35 -cpu Skylak>
    Executable: /usr/libexec/qemu-kvm
 Control Group: /user.slice/user-0.slice/session-3.scope
          Unit: session-3.scope
         Slice: user-0.slice
       Session: 3
     Owner UID: 0 (root)
       Boot ID: 578f283d1e9c49308eff359d36a38c41
    Machine ID: 5815313cf4814dd18bb08b6a5552e5e2
      Hostname: ibm-x3250m6-07.lab.eng.pek2.redhat.com
       Storage: none
       Message: Process 10417 (qemu-kvm) of user 0 dumped core.
                
                Stack trace of thread 10573:
                #0  0x00007f90a8ab3763 pthread_kill@GLIBC_2.2.5 (libc.so.6 + 0x8f763)
                #1  0x00007f90a8a66686 raise (libc.so.6 + 0x42686)
                #2  0x00007f90a8a507d3 abort (libc.so.6 + 0x2c7d3)
                #3  0x00007f90a8a506fb __assert_fail_base.cold (libc.so.6 + 0x2c6fb)
                #4  0x00007f90a8a5f3a6 __assert_fail (libc.so.6 + 0x3b3a6)
                #5  0x000055d85ef82a24 virtio_net_pre_save (qemu-kvm + 0x5e5a24)
                #6  0x000055d85ecef406 vmstate_save_state_v (qemu-kvm + 0x352406)
                #7  0x000055d85ee95ea0 qemu_savevm_state_complete_precopy_non_iterable (qe>
                #8  0x000055d85ee967bc qemu_savevm_state_complete_precopy (qemu-kvm + 0x4f>
                #9  0x000055d85ed4d6a2 migration_thread (qemu-kvm + 0x3b06a2)
                #10 0x000055d85f30bf3a qemu_thread_start (qemu-kvm + 0x96ef3a)
                #11 0x00007f90a8ab1a8f start_thread (libc.so.6 + 0x8da8f)
                #12 0x00007f90a8b36480 __clone3 (libc.so.6 + 0x112480)
                
                Stack trace of thread 10418:
                #0  0x00007f90a8b2efbd syscall (libc.so.6 + 0x10afbd)
                #1  0x000055d85f30bcaf qemu_event_wait (qemu-kvm + 0x96ecaf)
                #2  0x000055d85f30476f call_rcu_thread (qemu-kvm + 0x96776f)
                #3  0x000055d85f30bf3a qemu_thread_start (qemu-kvm + 0x96ef3a)
                #4  0x00007f90a8ab1a8f start_thread (libc.so.6 + 0x8da8f)
                #5  0x00007f90a8b36480 __clone3 (libc.so.6 + 0x112480)
                
                Stack trace of thread 10437:
                #0  0x00007f90a8aae75a __futex_abstimed_wait_common (libc.so.6 + 0x8a75a)
                #1  0x00007f90a8ab0ed0 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8ced0)
                #2  0x000055d85f30b6df qemu_cond_wait_impl (qemu-kvm + 0x96e6df)
                #3  0x000055d85f069f87 qemu_wait_io_event (qemu-kvm + 0x6ccf87)
                #4  0x000055d85f033791 kvm_vcpu_thread_fn (qemu-kvm + 0x696791)
                #5  0x000055d85f30bf3a qemu_thread_start (qemu-kvm + 0x96ef3a)
                #6  0x00007f90a8ab1a8f start_thread (libc.so.6 + 0x8da8f)
                #7  0x00007f90a8b36480 __clone3 (libc.so.6 + 0x112480)
                
                Stack trace of thread 10434:
                #0  0x00007f90a8aae75a __futex_abstimed_wait_common (libc.so.6 + 0x8a75a)
                #1  0x00007f90a8ab0ed0 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8ced0)
                #2  0x000055d85f30b6df qemu_cond_wait_impl (qemu-kvm + 0x96e6df)
                #3  0x000055d85f069f87 qemu_wait_io_event (qemu-kvm + 0x6ccf87)
                #4  0x000055d85f033791 kvm_vcpu_thread_fn (qemu-kvm + 0x696791)
                #5  0x000055d85f30bf3a qemu_thread_start (qemu-kvm + 0x96ef3a)
                #6  0x00007f90a8ab1a8f start_thread (libc.so.6 + 0x8da8f)
                #7  0x00007f90a8b36480 __clone3 (libc.so.6 + 0x112480)
                
                Stack trace of thread 10436:
                #0  0x00007f90a8aae75a __futex_abstimed_wait_common (libc.so.6 + 0x8a75a)
                #1  0x00007f90a8ab0ed0 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8ced0)
                #2  0x000055d85f30b6df qemu_cond_wait_impl (qemu-kvm + 0x96e6df)
                #3  0x000055d85f069f87 qemu_wait_io_event (qemu-kvm + 0x6ccf87)
                #4  0x000055d85f033791 kvm_vcpu_thread_fn (qemu-kvm + 0x696791)
                #5  0x000055d85f30bf3a qemu_thread_start (qemu-kvm + 0x96ef3a)
                #6  0x00007f90a8ab1a8f start_thread (libc.so.6 + 0x8da8f)
                #7  0x00007f90a8b36480 __clone3 (libc.so.6 + 0x112480)
                
                Stack trace of thread 10430:
                #0  0x00007f90a8b2986f __poll (libc.so.6 + 0x10586f)
                #1  0x00007f90a91a758c g_main_context_iterate.constprop.0 (libglib-2.0.so.>
                #2  0x00007f90a9152563 g_main_loop_run (libglib-2.0.so.0 + 0x54563)
                #3  0x000055d85f118ffc iothread_run (qemu-kvm + 0x77bffc)
                #4  0x000055d85f30bf3a qemu_thread_start (qemu-kvm + 0x96ef3a)
                #5  0x00007f90a8ab1a8f start_thread (libc.so.6 + 0x8da8f)
                #6  0x00007f90a8b36480 __clone3 (libc.so.6 + 0x112480)
                
                Stack trace of thread 10446:
                #0  0x00007f90a8aae75a __futex_abstimed_wait_common (libc.so.6 + 0x8a75a)
                #1  0x00007f90a8ab0ed0 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8ced0)
                #2  0x000055d85f30b6df qemu_cond_wait_impl (qemu-kvm + 0x96e6df)
                #3  0x000055d85ed3f0f6 vnc_worker_thread (qemu-kvm + 0x3a20f6)
                #4  0x000055d85f30bf3a qemu_thread_start (qemu-kvm + 0x96ef3a)
                #5  0x00007f90a8ab1a8f start_thread (libc.so.6 + 0x8da8f)
                #6  0x00007f90a8b36480 __clone3 (libc.so.6 + 0x112480)
                
                Stack trace of thread 10431:
                #0  0x00007f90a8aae75a __futex_abstimed_wait_common (libc.so.6 + 0x8a75a)
                #1  0x00007f90a8ab0ed0 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8ced0)
                #2


Expected results:
Migration completes successfully, the dump file is generated, and the VM works well after migration.


Additional info:
See the attachment for the QEMU core dump file.

Comment 1 Li Xiaohui 2021-08-23 10:31:55 UTC
qemu cli[1]:
/usr/libexec/qemu-kvm  \
-name "mouse-vm" \
-sandbox off \
-machine q35 \
-cpu Skylake-Client \
-nodefaults  \
-vga std \
-chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1,server=on,wait=off \
-chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor,server=on,wait=off \
-mon chardev=qmp_id_qmpmonitor1,mode=control \
-mon chardev=qmp_id_catch_monitor,mode=control \
-device pcie-root-port,port=0x10,chassis=1,id=root0,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=root1,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=root2,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=root3,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=5,id=root4,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0x15,chassis=6,id=root5,bus=pcie.0,addr=0x2.0x5 \
-device pcie-root-port,port=0x16,chassis=7,id=root6,bus=pcie.0,addr=0x2.0x6 \
-device pcie-root-port,port=0x17,chassis=8,id=root7,bus=pcie.0,addr=0x2.0x7 \
-device nec-usb-xhci,id=usb1,bus=root0 \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=root1 \
-device scsi-hd,id=image1,drive=drive_image1,bus=virtio_scsi_pci0.0,channel=0,scsi-id=0,lun=0,bootindex=0 \
-device virtio-net-pci,mac=9a:8a:8b:8c:8d:8e,id=net0,vectors=4,netdev=tap0,bus=root2 \
-device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
-device vmcoreinfo \
-blockdev driver=file,cache.direct=on,cache.no-flush=off,filename=/home/test/rhel900-64-virtio-scsi.qcow2,node-name=drive_sys1 \
-blockdev driver=qcow2,node-name=drive_image1,file=drive_sys1 \
-netdev tap,id=tap0,vhost=on \
-m 4096 \
-smp 4,maxcpus=4,cores=2,threads=1,sockets=2 \
-vnc :10 \
-rtc base=utc,clock=host \
-boot menu=off,strict=off,order=cdn,once=c \
-enable-kvm  \
-qmp tcp:0:3333,server=on,wait=off \
-qmp tcp:0:9999,server=on,wait=off \
-qmp tcp:0:9888,server=on,wait=off \
-serial tcp:0:4444,server=on,wait=off \
-monitor stdio \

Comment 2 Dr. David Alan Gilbert 2021-08-24 12:57:15 UTC
                Stack trace of thread 10573:

                #4  0x00007f90a8a5f3a6 __assert_fail (libc.so.6 + 0x3b3a6)
                #5  0x000055d85ef82a24 virtio_net_pre_save (qemu-kvm + 0x5e5a24)
                #6  0x000055d85ecef406 vmstate_save_state_v (qemu-kvm + 0x352406)
                #7  0x000055d85ee95ea0 qemu_savevm_state_complete_precopy_non_iterable (qe>
                #8  0x000055d85ee967bc qemu_savevm_state_complete_precopy (qemu-kvm + 0x4f>
                #9  0x000055d85ed4d6a2 migration_thread (qemu-kvm + 0x3b06a2)

OK, so it's an assertion failure in the migration thread; I think the assert is:

static int virtio_net_pre_save(void *opaque)
{
    VirtIONet *n = opaque;

    /* At this point, backend must be stopped, otherwise
     * it might keep writing to memory. */
    assert(!n->vhost_started);

    return 0;
}

So for some reason the VM is apparently running during the device save, when it should have been stopped.
I think this is a fight between migration stopping the VM just before the device save, and dump restarting the VM after it has finished the dump.

dump_init() does:

    if (runstate_is_running()) {
        vm_stop(RUN_STATE_SAVE_VM);
        s->resume = true;
    } else {
        s->resume = false;
    }

and dump_cleanup does:
    if (s->resume) {
        if (s->detached) {
            qemu_mutex_lock_iothread();
        }
        vm_start();
        if (s->detached) {
            qemu_mutex_unlock_iothread();
        }
    }


I'm not quite sure what's going on with the locking, but I suspect that we get:

  a) Migration starts
  b) Dump starts
  c) Dump pauses the VM but notes it was running
  d) dump takes the big lock perhaps (?) - but migration keeps running its main part
  e) migration gets to just before complete stops the VM
  f) dump finishes, restarts the VM
  g) migration calls virtio_net's save code, which gets upset that the VM is running

But hmm, I thought they both took the iothread lock; e.g. in migration_completion it takes the lock, stops the VM, and calls qemu_savevm_state_complete_precopy
so it should be safe. Hmm.
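The suspected interleaving c)-g) above can be replayed with a deterministic toy model (hypothetical names; a stand-in for the real QEMU run-state machinery, not its actual code). The point is that dump_cleanup() unconditionally restarts the VM it paused, even though migration has stopped it again in the meantime:

```python
class ToyVM:
    """Minimal stand-in for QEMU's run state; not real QEMU code."""
    def __init__(self):
        self.running = True

def dump_init(vm):
    # Mirrors dump_init() above: pause the VM, remember whether to resume.
    resume = vm.running
    if vm.running:
        vm.running = False          # vm_stop(RUN_STATE_SAVE_VM)
    return resume

def dump_cleanup(vm, resume):
    # Mirrors dump_cleanup() above: restart the VM if we paused it.
    if resume:
        vm.running = True           # vm_start()

def migration_stop_for_completion(vm):
    # Migration stops the VM just before saving device state.
    vm.running = False

# Replay the interleaving from the comment above:
vm = ToyVM()
resume = dump_init(vm)              # (c) dump pauses the VM, notes it was running
migration_stop_for_completion(vm)   # (e) migration stops the VM before device save
dump_cleanup(vm, resume)            # (f) dump finishes, restarts the VM
# (g) the device save now runs with the VM running again, which is the
# state virtio_net_pre_save()'s assert rejects:
print("VM running at device-save time:", vm.running)  # True -> assert fires
```
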

Comment 3 Dr. David Alan Gilbert 2021-08-24 13:54:30 UTC
We could worry about debugging that locking; or we could just ban dumping from starting during migration.

Comment 4 Peter Xu 2021-08-24 14:13:09 UTC
(In reply to Dr. David Alan Gilbert from comment #3)
> we could worry about debugging that locking; or we could just ban dumping
> from starting during migration.

Maybe also vice versa?  We should also disable live migration during a guest dump.  Fundamentally, the VM state cached by either guest-dump or live migration can become racy if either one starts while the other is in progress...

migrate_add_blocker() from guest-dump looks the right thing to me.
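The mutual exclusion that migrate_add_blocker() provides can be sketched with a toy model (a hypothetical simplification for illustration, not QEMU's actual implementation): registering a blocker fails while migration is active, and migration fails while a blocker is held, so the two operations can never overlap:

```python
class ToyMigrationState:
    """Toy model of the migrate_add_blocker() mutual exclusion.

    Hypothetical simplification, not QEMU's actual implementation."""
    def __init__(self):
        self.migration_active = False
        self.blockers = []

    def add_blocker(self, reason):
        # dump-guest-memory would register a blocker before pausing the VM;
        # this fails if a migration is already in flight.
        if self.migration_active:
            return ("disallowing migration blocker "
                    "(migration/snapshot in progress) for: " + reason)
        self.blockers.append(reason)
        return None

    def migrate(self):
        # 'migrate' fails while any blocker (e.g. a dump) is registered.
        if self.blockers:
            return self.blockers[0]
        self.migration_active = True
        return None

# Case 1: dump requested while migration is active -> dump is refused.
state = ToyMigrationState()
state.migration_active = True
err = state.add_blocker("Live migration disabled: dump-guest-memory in progress")
print(err)

# Case 2: migration requested while a dump holds a blocker -> migrate is refused.
state2 = ToyMigrationState()
state2.add_blocker("Live migration disabled: dump-guest-memory in progress")
print(state2.migrate())
```

Note how case 1 mirrors the QMP error that comment 13 later observes on qemu-kvm-6.2.0.
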

Comment 9 Yanan Fu 2021-11-24 02:28:12 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 13 Li Xiaohui 2021-12-29 08:46:58 UTC
Dumping guest memory during migration now yields the QMP error below on qemu-kvm-6.2.0-1.el9.x86_64, and the migration succeeds:
{"execute": "dump-guest-memory", "arguments": {"paging": false, "protocol": "file:/home/dump.elf", "format": "elf"}, "id": "VKVBB3mz"}
{"id": "VKVBB3mz", "error": {"class": "GenericError", "desc": "disallowing migration blocker (migration/snapshot in progress) for: Live migration disabled: dump-guest-memory in progress"}}


AFAIK we can't start migration while a dump is in progress, so there is no need to test that scenario in this bz.


Also found that this bz has been fixed on RHEL-8.6.0 under qemu-kvm-6.2.0-1.module+el8.6.0+13725+61ae1949.x86_64.


Marking the bz verified per the above tests.

Comment 15 errata-xmlrpc 2022-05-17 12:23:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (new packages: qemu-kvm), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2307

