Bug 2028337 - Guest crash when hotplugging a CPU with migration between different AMD hosts
Summary: Guest crash when hotplugging a CPU with migration between different AMD hosts
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.0
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 9.2
Assignee: Leonardo Bras
QA Contact: Li Xiaohui
URL:
Whiteboard:
Depends On: 2043545 2044903 2066586
Blocks:
 
Reported: 2021-12-02 04:34 UTC by Li Xiaohui
Modified: 2023-06-30 09:06 UTC
CC List: 10 users

Fixed In Version: qemu-kvm-7.2.0-10.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-06-02 07:42:02 UTC
Type: ---
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-104514 0 None None None 2021-12-02 04:38:47 UTC

Description Li Xiaohui 2021-12-02 04:34:36 UTC
Description of problem:
Hit two different call traces when hotplugging a CPU together with migration on AMD machines:
1) Hotplug the CPU, then migrate; or migrate first and, after the migration finishes, hotplug the CPU.
Checking the CPU number and dmesg info in the guest shows a call trace like call_trace_1, but the guest still seems to work.
2) Hotplug the CPU, then migrate. After the migration, checking the CPU number and dmesg info in the guest shows a call trace like call_trace_2, and the guest no longer works and appears to have crashed (I tried to connect and log in to the guest via the console and remote-viewer, but failed).


Version-Release number of selected component (if applicable):
hosts: kernel-5.14.0-21.el9.x86_64 & qemu-kvm-6.1.0-8.el9.x86_64
src host: AMD EPYC 7313 16-Core Processor, dst host: AMD EPYC 7251 8-Core Processor
guest: kernel-5.14.0-21.el9.x86_64


How reproducible:
1/20 or even lower


Steps to Reproduce:
1. Boot a guest on the src host with one hotpluggable vcpu1:
-cpu EPYC \
-smp 1,maxcpus=4,cores=2,threads=1,sockets=2 \
-device EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1 \
2. Add the parameter "movable_node" to the VM kernel command line, then reboot the VM
3. Hot-unplug cpu1
{"execute": "device_del", "arguments": {"id": "cpu1"}, "id": "PNqgzEZM"}
4. Boot the guest on the dst host without vcpu1 but in listening (incoming) mode; see the sketch after these steps
5. Migrate the guest to the dst host and check the guest CPU number and dmesg info after migration
6. Hotplug one vcpu in the guest again on the dst host
{"execute": "device_add", "arguments": {"driver": "EPYC-x86_64-cpu", "core-id": "1", "thread-id": "0", "socket-id": "0", "id": "cpu1"}, "id": "7SiI80a6"}
7. Boot the guest on the src host in listening mode and with the hotpluggable vcpu
8. Migrate the guest back from dst to src, and check the guest CPU number and dmesg info.
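
For reference, "listening mode" in steps 4 and 7 means starting the destination QEMU with an -incoming option and then driving the migration from the source side over QMP. A minimal sketch, with placeholder port and host name (not values from this report):
# dst host: same qemu-kvm command line as above, plus
-incoming tcp:0:4321
# src host: start and monitor the migration via QMP
{"execute": "migrate", "arguments": {"uri": "tcp:DST_HOST:4321"}}
{"execute": "query-migrate"}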


Actual results:
1) Tested this case several times and hit the first call trace twice: once after step 5,
and once after step 8 when checking dmesg info.
2) Hit the second call trace once, after step 8 when checking dmesg info.


Expected results:
No call trace in the guest, and the guest works well after migration.


Additional info:
Tried the following two scenarios 50 times each without migration and didn't reproduce this issue:
Scenario 1: unplug cpu, then hotplug cpu, check dmesg info and guest, all work well;
Scenario 2: hotplug cpu, check dmesg info and guest, all work well.


Qemu command line:
/usr/libexec/qemu-kvm  \
-name "mouse-vm" \
-sandbox on \
-machine q35 \
-cpu EPYC \
-nodefaults  \
-chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1,server=on,wait=off \
-chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor,server=on,wait=off \
-mon chardev=qmp_id_qmpmonitor1,mode=control \
-mon chardev=qmp_id_catch_monitor,mode=control \
-device pcie-root-port,port=0x10,chassis=1,id=root0,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=root1,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=root2,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=root3,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=5,id=root4,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0x15,chassis=6,id=root5,bus=pcie.0,addr=0x2.0x5 \
-device pcie-root-port,port=0x16,chassis=7,id=root6,bus=pcie.0,addr=0x2.0x6 \
-device pcie-root-port,port=0x17,chassis=8,id=root7,bus=pcie.0,addr=0x2.0x7 \
-device pcie-root-port,port=0x20,chassis=21,id=extra_root0,bus=pcie.0,multifunction=on,addr=0x3 \
-device pcie-root-port,port=0x21,chassis=22,id=extra_root1,bus=pcie.0,addr=0x3.0x1 \
-device pcie-root-port,port=0x22,chassis=23,id=extra_root2,bus=pcie.0,addr=0x3.0x2 \
-device nec-usb-xhci,id=usb1,bus=root0,addr=0x0 \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=root1,addr=0x0 \
-device scsi-hd,id=image1,drive=drive_image1,bus=virtio_scsi_pci0.0,channel=0,scsi-id=0,lun=0,bootindex=0,write-cache=on \
-device virtio-net-pci,mac=9a:8a:8b:8c:8d:8e,id=net0,netdev=tap0,bus=root2,addr=0x0 \
-device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
-device virtio-balloon-pci,id=balloon0,bus=root3,addr=0x0 \
-device VGA,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 \
-device EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1 \
-blockdev driver=file,auto-read-only=on,discard=unmap,aio=threads,cache.direct=on,cache.no-flush=off,filename=/mnt/nfs/rhel900-64-virtio-scsi.qcow2,node-name=drive_sys1 \
-blockdev driver=qcow2,node-name=drive_image1,read-only=off,cache.direct=on,cache.no-flush=off,file=drive_sys1 \
-netdev tap,id=tap0,vhost=on \
-m 4096 \
-smp 1,maxcpus=4,cores=2,threads=1,sockets=2 \
-vnc :10 \
-rtc base=utc,clock=host \
-boot menu=off,strict=off,order=cdn,once=c \
-enable-kvm  \
-qmp tcp:0:3333,server=on,wait=off \
-qmp tcp:0:9999,server=on,wait=off \
-qmp tcp:0:9888,server=on,wait=off \
-serial tcp:0:4444,server=on,wait=off \
-monitor stdio \
-msg timestamp=on \
-object memory-backend-ram,id=mem0,size=4096M \
-numa node,memdev=mem0 \

Comment 1 Li Xiaohui 2021-12-02 04:40:36 UTC
call_trace_1:
[6160445163.274594] systemd[1]: systemd-logind.service: Watchdog timeout (limit 3min)!
[6160445163.274594] systemd[1]: systemd-logind.service: Killing process 922 (systemd-logind) with signal SIGABRT.
[6160445163.274594] systemd[1]: systemd-udevd.service: Watchdog timeout (limit 3min)!
[6160445163.274594] systemd[1]: systemd-udevd.service: Killing process 802 (systemd-udevd) with signal SIGABRT.
[6160445163.274594] systemd[1]: sysstat-collect.service: Deactivated successfully.
[6160445163.274594] systemd[1]: Finished system activity accounting tool.
[6160445163.274594] ------------[ cut here ]------------
[6160445163.274594] WARNING: CPU: 1 PID: 0 at kernel/time/timer.c:1729 __run_timers.part.0+0xc6/0x220
[6160445163.274594] Modules linked in: nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 rfkill ip_set nf_tables nfnetlink intel_rapl_msr intel_rapl_common i2c_i801 iTCO_wdt iTCO_vendor_support joydev pcspkr i2c_smbus lpc_ich virtio_balloon fuse xfs libcrc32c bochs_drm drm_vram_helper drm_ttm_helper ttm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops cec crct10dif_pclmul crc32_pclmul ahci sd_mod crc32c_intel libahci t10_pi drm sg libata virtio_net ghash_clmulni_intel serio_raw net_failover virtio_scsi failover dm_multipath dm_mirror dm_region_hash dm_log dm_mod be2iscsi bnx2i cnic uio cxgb4i cxgb4 tls libcxgbi libcxgb qla4xxx iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
[6160445163.274594] CPU: 1 PID: 0 Comm: swapper/1 Kdump: loaded Not tainted 5.14.0-21.el9.x86_64 #1
[6160445163.274594] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.14.0-7.el9 04/01/2014
[6160445163.274594] RIP: 0010:__run_timers.part.0+0xc6/0x220
[6160445163.274594] Code: 00 00 00 00 41 83 c6 01 4c 89 d6 a8 07 75 0f 83 c2 40 48 c1 e8 03 81 fa 40 02 00 00 75 b5 45 85 f6 75 5c 41 80 7f 24 00 75 02 <0f> 0b 49 83 47 10 01 4c 89 ff e8 4b ea ff ff 49 89 47 18 49 8b 47
[6160445163.274594] RSP: 0018:ffff9a69c00d0f00 EFLAGS: 00010046
[6160445163.274594] RAX: 000000000000017f RBX: ffff9a69c00d0f08 RCX: 000000000000023f
[6160445163.274594] RDX: 0000000000000200 RSI: ffff9a69c00d0f08 RDI: ffff88eebbc9c0c0
[6160445163.274594] RBP: ffff88ee53b39010 R08: 000000000000001b R09: ffff88eebbc9c0e8
[6160445163.274594] R10: 0000000000000004 R11: 0000000000000200 R12: 000000013f000000
[6160445163.274594] R13: dead000000000122 R14: 0000000000000000 R15: ffff88eebbc9c0c0
[6160445163.274594] FS:  0000000000000000(0000) GS:ffff88eebbc80000(0000) knlGS:0000000000000000
[6160445163.274594] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[6160445163.274594] CR2: 000055ee335de1b8 CR3: 0000000102526000 CR4: 00000000003506e0
[6160445163.274594] Call Trace:
[6160445163.274594]  <IRQ>
[6160445163.274594]  run_timer_softirq+0x26/0x50
[6160445163.274594]  __do_softirq+0xca/0x276
[6160445163.274594]  __irq_exit_rcu+0xc1/0xe0
[6160445163.274594]  sysvec_apic_timer_interrupt+0x72/0x90
[6160445163.274594]  </IRQ>
[6160445163.274594]  asm_sysvec_apic_timer_interrupt+0x12/0x20
[6160445163.274594] RIP: 0010:default_idle+0x10/0x20
[6160445163.274594] Code: 8b 04 25 40 6f 01 00 f0 80 60 02 df c3 0f ae f0 0f ae 38 0f ae f0 eb b9 66 90 0f 1f 44 00 00 eb 07 0f 00 2d 2a ce 5b 00 fb f4 <c3> cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 0f 1f 44 00 00 65
[6160445163.274594] RSP: 0018:ffff9a69c008bee8 EFLAGS: 00000206
[6160445163.274594] RAX: ffffffff9a2469d0 RBX: ffff88ee40250000 RCX: ffff88ee471ea0b0
[6160445163.274594] RDX: 0000000000000001 RSI: ffff9a69c008be98 RDI: 00000050a589d403
[6160445163.274594] RBP: 0000000000000000 R08: 557e4c3c8f41689b R09: 0000000000000000
[6160445163.274594] R10: 0000000000000004 R11: 0000000000004b00 R12: 0000000000000000
[6160445163.274594] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[6160445163.274594]  ? mwait_idle+0x70/0x70
[6160445163.274594]  ? rcu_eqs_enter.constprop.0+0x5c/0x70
[6160445163.274594]  default_idle_call+0x2f/0xa0
[6160445163.274594]  cpuidle_idle_call+0x159/0x1b0
[6160445163.274594]  do_idle+0x7b/0xe0
[6160445163.274594]  cpu_startup_entry+0x19/0x20
[6160445163.274594]  secondary_startup_64_no_verify+0xc2/0xcb
[6160445163.274594] ---[ end trace 736296f7f9cfc61a ]---

Comment 2 Li Xiaohui 2021-12-02 04:50:14 UTC
call_trace_2:
2021-11-30-02:03:24: [39822.519120] Sending NMI from CPU 1 to CPUs 0:
2021-11-30-02:03:24: [39822.519120] rcu: rcu_sched kthread timer wakeup didn't happen for 39755390 jiffies! g13017 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
2021-11-30-02:03:24: [39822.521873] NMI backtrace for cpu 0
2021-11-30-02:03:24: [39822.521877] CPU: 0 PID: 793 Comm: systemd-journal Kdump: loaded Not tainted 5.14.0-21.el9.x86_64 #1
2021-11-30-02:03:24: [39822.521880] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.14.0-7.el9 04/01/2014
2021-11-30-02:03:24: [39822.521881] RIP: 0010:io_serial_in+0x14/0x20
2021-11-30-02:03:24: [39822.521883] Code: 00 00 d3 e6 48 63 f6 48 03 77 10 8b 06 c3 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 0f b6 8f b9 00 00 00 8b 57 08 d3 e6 01 f2 ec <0f> b6 c0 c3 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 0f b6 8f b9 00
2021-11-30-02:03:24: [39822.521885] RSP: 0018:ffffaffd40003c40 EFLAGS: 00000006
2021-11-30-02:03:24: [39822.521887] RAX: ffffffffa9029705 RBX: ffffffffaa8ecea0 RCX: 0000000000000000
2021-11-30-02:03:24: [39822.521888] RDX: 00000000000003f9 RSI: 0000000000000001 RDI: ffffffffab3da940
2021-11-30-02:03:24: [39822.521889] RBP: ffffffffab3da940 R08: 657268746b206465 R09: 6863735f75637220
2021-11-30-02:03:24: [39822.521889] R10: 6b2064656863735f R11: 756372203a756372 R12: 0000000000000000
2021-11-30-02:03:24: [39822.521890] R13: 0000000000000084 R14: 0000000000000001 R15: 0000000000000000
2021-11-30-02:03:24: [39822.521891] FS:  00007f7849a5f3c0(0000) GS:ffff8a863bc00000(0000) knlGS:0000000000000000
2021-11-30-02:03:24: [39822.521891] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2021-11-30-02:03:24: [39822.521892] CR2: 0000559e393a3438 CR3: 0000000102220000 CR4: 00000000003506f0
2021-11-30-02:03:24: [39822.521893] Call Trace:
2021-11-30-02:03:24: [39822.521893]  <IRQ>
2021-11-30-02:03:24: [39822.521893]  serial8250_console_write+0x92/0x370
2021-11-30-02:03:24: [39822.521894]  ? record_print_text+0xc0/0x150
2021-11-30-02:03:24: [39822.521894]  call_console_drivers.constprop.0+0xcb/0x190
2021-11-30-02:03:24: [39822.521895]  console_unlock+0x177/0x330
2021-11-30-02:03:24: [39822.521895]  vprintk_emit+0x14d/0x230
2021-11-30-02:03:24: [39822.521896]  printk+0x58/0x6f
2021-11-30-02:03:24: [39822.521896]  rcu_check_gp_kthread_expired_fqs_timer+0x83/0xab
2021-11-30-02:03:24: [39822.521897]  print_cpu_stall.cold+0x25/0xd2
2021-11-30-02:03:24: [39822.521897]  check_cpu_stall+0xda/0x1d0
2021-11-30-02:03:24: [39822.521898]  rcu_pending+0x26/0x130
2021-11-30-02:03:24: [39822.521898]  rcu_sched_clock_irq+0x43/0x100
2021-11-30-02:03:24: [39822.521899]  update_process_times+0x8c/0xc0
2021-11-30-02:03:24: [39822.521899]  tick_sched_handle+0x22/0x60
2021-11-30-02:03:24: [39822.521900]  tick_sched_timer+0x61/0x70
2021-11-30-02:03:24: [39822.521900]  ? tick_sched_do_timer+0x50/0x50
2021-11-30-02:03:24: [39822.521901]  __hrtimer_run_queues+0x12a/0x270
2021-11-30-02:03:24: [39822.521901]  hrtimer_interrupt+0x110/0x2c0
2021-11-30-02:03:24: [39822.521902]  __sysvec_apic_timer_interrupt+0x5c/0xd0
2021-11-30-02:03:24: [39822.521902]  sysvec_apic_timer_interrupt+0x6d/0x90
2021-11-30-02:03:24: [39822.521903]  </IRQ>
2021-11-30-02:03:24: [39822.521903]  asm_sysvec_apic_timer_interrupt+0x12/0x20
2021-11-30-02:03:24: [39822.521904] RIP: 0010:avtab_search_node+0xcd/0x100
2021-11-30-02:03:24: [39822.521905] Code: ae b2 c2 41 89 c1 41 c1 e9 10 44 31 c8 23 41 10 48 8b 09 48 98 48 8b 04 c1 48 85 c0 74 39 0f b7 4f 06 66 81 e1 ff 7f 66 39 10 <74> 0c 77 1e 48 8b 40 10 48 85 c0 75 f0 c3 66 39 70 02 75 ee 66 44
2021-11-30-02:03:24: [39822.521906] RSP: 0018:ffffaffd4014fad8 EFLAGS: 00000216
2021-11-30-02:03:24: [39822.521907] RAX: ffff8a85e6be2f18 RBX: ffff8a85d481ba40 RCX: 0000000000000707
2021-11-30-02:03:24: [39822.521908] RDX: 000000000000014e RSI: 0000000000000134 RDI: ffffaffd4014fb40
2021-11-30-02:03:24: [39822.521908] RBP: 0000000000000133 R08: 0000000000000007 R09: 0000000000008be8
2021-11-30-02:03:24: [39822.521909] R10: 0000000000000133 R11: 6d6f632f35393132 R12: ffff8a85d481ba48
2021-11-30-02:03:24: [39822.521910] R13: ffffaffd4014fc94 R14: ffffaffd4014fc10 R15: ffff8a85d3442188
2021-11-30-02:03:24: [39822.521910]  context_struct_compute_av+0x1ed/0x4a0
2021-11-30-02:03:24: [39822.521911]  security_compute_av+0x129/0x290
2021-11-30-02:03:24: [39822.521911]  avc_compute_av.isra.0+0x35/0x60
2021-11-30-02:03:24: [39822.521912]  avc_has_perm_noaudit+0xe3/0xf0
2021-11-30-02:03:24: [39822.521912]  selinux_inode_permission+0x10e/0x1d0
2021-11-30-02:03:24: [39822.521913]  security_inode_permission+0x30/0x50
2021-11-30-02:03:24: [39822.521913]  link_path_walk.part.0.constprop.0+0x29f/0x380
2021-11-30-02:03:24: [39822.521914]  ? path_init+0x2bc/0x3e0
2021-11-30-02:03:24: [39822.521914]  path_openat+0xb1/0x2b0
2021-11-30-02:03:24: [39822.521915]  do_filp_open+0xb2/0x150
2021-11-30-02:03:24: [39822.521915]  ? __virt_addr_valid+0x45/0x70
2021-11-30-02:03:24: [39822.521916]  ? __check_object_size.part.0+0x11f/0x140
2021-11-30-02:03:24: [39822.521916]  do_sys_openat2+0x96/0x150
2021-11-30-02:03:24: [39822.521917]  __x64_sys_openat+0x53/0x90
2021-11-30-02:03:24: [39822.521917]  do_syscall_64+0x3b/0x90
2021-11-30-02:03:24: [39822.521918]  entry_SYSCALL_64_after_hwframe+0x44/0xae
2021-11-30-02:03:24: [39822.521918] RIP: 0033:0x7f784a66870b
2021-11-30-02:03:24: [39822.521919] Code: 25 00 00 41 00 3d 00 00 41 00 74 4b 64 8b 04 25 18 00 00 00 85 c0 75 67 44 89 e2 48 89 ee bf 9c ff ff ff b8 01 01 00 00 0f 05 <48> 3d 00 f0 ff ff 0f 87 91 00 00 00 48 8b 54 24 28 64 48 2b 14 25
2021-11-30-02:03:24: [39822.521920] RSP: 002b:00007ffe6d8e91c0 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
2021-11-30-02:03:24: [39822.521921] RAX: ffffffffffffffda RBX: 000055b3e8a4b3e0 RCX: 00007f784a66870b
2021-11-30-02:03:24: [39822.521922] RDX: 0000000000080000 RSI: 00007ffe6d8e9350 RDI: 00000000ffffff9c
2021-11-30-02:03:24: [39822.521923] RBP: 00007ffe6d8e9350 R08: 0000000000000008 R09: 0000000000000001
2021-11-30-02:03:24: [39822.521923] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000080000
2021-11-30-02:03:24: [39822.521924] R13: 000055b3e8a4b3e0 R14: 0000000000000001 R15: 0000000000000000
2021-11-30-02:03:24: [39822.521927] rcu: rcu_sched kthread timer wakeup didn't happen for 39755390 jiffies! g13017 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
2021-11-30-02:03:24: [39822.521927] rcu: 	Possible timer handling issue on cpu=1 timer-softirq=2857
2021-11-30-02:03:24: [39822.523037] rcu: 	Possible timer handling issue on cpu=1 timer-softirq=2857
2021-11-30-02:03:24: [39822.523037] rcu: rcu_sched kthread starved for 39755393 jiffies! g13017 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=1
2021-11-30-02:03:24: [39822.525442] rcu: rcu_sched kthread starved for 39755393 jiffies! g13017 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=1
2021-11-30-02:03:24: [39822.525442] rcu: 	Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
2021-11-30-02:03:24: [39822.528869] rcu: 	Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
2021-11-30-02:03:24: [39822.528869] rcu: RCU grace-period kthread stack dump:
2021-11-30-02:03:24: [39822.530727] rcu: RCU grace-period kthread stack dump:
2021-11-30-02:03:24: [39822.530727] task:rcu_sched       state:I
2021-11-30-02:03:24: [39822.532861] task:rcu_sched       state:I
2021-11-30-02:03:24: [39822.532861]  stack:    0 pid:   14 ppid:     2 flags:0x00004000
2021-11-30-02:03:24: [39822.535079]  stack:    0 pid:   14 ppid:     2 flags:0x00004000
2021-11-30-02:03:24: [39822.535079] Call Trace:
2021-11-30-02:03:24: [39822.537153] Call Trace:
2021-11-30-02:03:24: [39822.537153]  __schedule+0x200/0x540
2021-11-30-02:03:24: [39822.538609]  __schedule+0x200/0x540
2021-11-30-02:03:24: [39822.538609]  schedule+0x3c/0xa0
2021-11-30-02:03:24: [39822.539622]  schedule+0x3c/0xa0
2021-11-30-02:03:24: [39822.539622]  schedule_timeout+0x88/0x140
2021-11-30-02:03:24: [39822.541056]  schedule_timeout+0x88/0x140
2021-11-30-02:03:24: [39822.541056]  ? __bpf_trace_tick_stop+0x10/0x10
2021-11-30-02:03:24: [39822.542206]  ? __bpf_trace_tick_stop+0x10/0x10
2021-11-30-02:03:24: [39822.542206]  rcu_gp_fqs_loop+0xec/0x2e0
2021-11-30-02:03:24: [39822.543529]  rcu_gp_fqs_loop+0xec/0x2e0
2021-11-30-02:03:24: [39822.543529]  rcu_gp_kthread+0xce/0x140
2021-11-30-02:03:24: [39822.544753]  rcu_gp_kthread+0xce/0x140
2021-11-30-02:03:24: [39822.544753]  ? rcu_gp_init+0x4c0/0x4c0
2021-11-30-02:03:24: [39822.545992]  ? rcu_gp_init+0x4c0/0x4c0
2021-11-30-02:03:24: [39822.545992]  kthread+0x10f/0x130
2021-11-30-02:03:24: [39822.547225]  kthread+0x10f/0x130
2021-11-30-02:03:24: [39822.547225]  ? set_kthread_struct+0x40/0x40
2021-11-30-02:03:24: [39822.548470]  ? set_kthread_struct+0x40/0x40
2021-11-30-02:03:24: [39822.548470]  ret_from_fork+0x22/0x30
2021-11-30-02:03:24: [39822.549756]  ret_from_fork+0x22/0x30
2021-11-30-02:03:24: [39822.549756] rcu: Stack dump where RCU GP kthread last ran:
2021-11-30-02:03:24: [39822.551233] rcu: Stack dump where RCU GP kthread last ran:
2021-11-30-02:03:24: [39822.551233] Sending NMI from CPU 0 to CPUs 1:
2021-11-30-02:03:24: [39822.610094] NMI backtrace for cpu 1
2021-11-30-02:03:24: [39822.610098] CPU: 1 PID: 1 Comm: systemd Kdump: loaded Not tainted 5.14.0-21.el9.x86_64 #1
2021-11-30-02:03:24: [39822.610100] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.14.0-7.el9 04/01/2014
2021-11-30-02:03:24: [39822.610102] RIP: 0010:smp_call_function_single+0xe2/0x110
2021-11-30-02:03:24: [39822.610104] Code: 65 48 2b 14 25 28 00 00 00 75 46 c9 c3 48 89 e6 48 89 54 24 18 4c 89 44 24 10 e8 89 fe ff ff 8b 54 24 08 83 e2 01 74 0b f3 90 <8b> 54 24 08 83 e2 01 75 f5 eb c6 8b 05 e5 10 59 02 85 c0 0f 85 72
2021-11-30-02:03:24: [39822.610106] RSP: 0018:ffffaffd40013c20 EFLAGS: 00000202
2021-11-30-02:03:24: [39822.610108] RAX: 0000000000000000 RBX: ffff8a85c2831880 RCX: 0000000000000830
2021-11-30-02:03:24: [39822.610109] RDX: 0000000000000001 RSI: 00000000000000fb RDI: 0000000000000000
2021-11-30-02:03:24: [39822.610110] RBP: ffffaffd40013c68 R08: ffffffffa8c74f50 R09: ffffaffd40013c78
2021-11-30-02:03:24: [39822.610111] R10: 0000000000000001 R11: 0000000002021b21 R12: 0000000000000001
2021-11-30-02:03:24: [39822.610112] R13: 0000000000000008 R14: ffffaffd40013d40 R15: 000000000000091a
2021-11-30-02:03:24: [39822.610112] FS:  00007fae2995fb40(0000) GS:ffff8a863bc80000(0000) knlGS:0000000000000000
2021-11-30-02:03:24: [39822.610113] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2021-11-30-02:03:24: [39822.610114] CR2: 00007fae2a5a44a0 CR3: 0000000101e6e000 CR4: 00000000003506e0
2021-11-30-02:03:24: [39822.610115] Call Trace:
2021-11-30-02:03:24: [39822.610116]  ? sw_perf_event_destroy+0x60/0x60
2021-11-30-02:03:24: [39822.610116]  ? _raw_spin_unlock_irqrestore+0xa/0x20
2021-11-30-02:03:24: [39822.610117]  perf_cgroup_attach+0x64/0xa0
2021-11-30-02:03:24: [39822.610118]  ? perf_cgroup_switch+0x190/0x190
2021-11-30-02:03:24: [39822.610118]  cgroup_migrate_execute+0x39f/0x4b0
2021-11-30-02:03:24: [39822.610119]  cgroup_attach_task+0x137/0x1d0
2021-11-30-02:03:24: [39822.610120]  ? cgroup_attach_permissions+0x129/0x1a0
2021-11-30-02:03:24: [39822.610120]  __cgroup_procs_write+0xd1/0x140
2021-11-30-02:03:24: [39822.610121]  cgroup_procs_write+0x13/0x20
2021-11-30-02:03:24: [39822.610121]  kernfs_fop_write_iter+0x11c/0x1b0
2021-11-30-02:03:24: [39822.610122]  new_sync_write+0x11c/0x1b0
2021-11-30-02:03:24: [39822.610122]  vfs_write+0x1be/0x250
2021-11-30-02:03:24: [39822.610123]  ksys_write+0x5f/0xe0
2021-11-30-02:03:24: [39822.610123]  do_syscall_64+0x3b/0x90
2021-11-30-02:03:24: [39822.610124]  entry_SYSCALL_64_after_hwframe+0x44/0xae
2021-11-30-02:03:24: [39822.610124] RIP: 0033:0x7fae2a4aaa8f
2021-11-30-02:03:24: [39822.610125] Code: 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 69 86 f8 ff 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 44 24 08 e8 ac 86 f8 ff 48
2021-11-30-02:03:24: [39822.610126] RSP: 002b:00007ffe948e0670 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
2021-11-30-02:03:24: [39822.610127] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007fae2a4aaa8f
2021-11-30-02:03:24: [39822.610128] RDX: 0000000000000005 RSI: 00007ffe948e085a RDI: 0000000000000015
2021-11-30-02:03:24: [39822.610129] RBP: 00007ffe948e085a R08: 0000000000000000 R09: 00007ffe948e06e0
2021-11-30-02:03:24: [39822.610129] R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000005
2021-11-30-02:03:24: [39822.610130] R13: 0000555c2269e770 R14: 0000000000000005 R15: 00007fae2a5a47a0
2021-11-30-02:03:24: [39822.610130] NMI backtrace for cpu 0
2021-11-30-02:03:24: [39822.644859] CPU: 0 PID: 793 Comm: systemd-journal Kdump: loaded Not tainted 5.14.0-21.el9.x86_64 #1
2021-11-30-02:03:24: [39822.646858] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.14.0-7.el9 04/01/2014
2021-11-30-02:03:24: [39822.647860] Call Trace:
2021-11-30-02:03:24: [39822.647860]  <IRQ>
2021-11-30-02:03:24: [39822.648860]  dump_stack_lvl+0x34/0x44
2021-11-30-02:03:24: [39822.648860]  ? lapic_can_unplug_cpu+0x80/0x80
2021-11-30-02:03:24: [39822.649859]  nmi_cpu_backtrace.cold+0x32/0x68
2021-11-30-02:03:24: [39822.649859]  nmi_trigger_cpumask_backtrace+0xd7/0xe0
2021-11-30-02:03:24: [39822.650859]  trigger_single_cpu_backtrace+0x2a/0x2d
2021-11-30-02:03:24: [39822.651859]  rcu_dump_cpu_stacks+0xaa/0xe3
2021-11-30-02:03:24: [39822.652859]  print_cpu_stall.cold+0x2f/0xd2
2021-11-30-02:03:24: [39822.652859]  check_cpu_stall+0xda/0x1d0
2021-11-30-02:03:24: [39822.653859]  rcu_pending+0x26/0x130
2021-11-30-02:03:24: [39822.653859]  rcu_sched_clock_irq+0x43/0x100
2021-11-30-02:03:24: [39822.654858]  update_process_times+0x8c/0xc0
2021-11-30-02:03:24: [39822.655859]  tick_sched_handle+0x22/0x60
2021-11-30-02:03:24: [39822.655859]  tick_sched_timer+0x61/0x70
2021-11-30-02:03:24: [39822.656859]  ? tick_sched_do_timer+0x50/0x50
2021-11-30-02:03:24: [39822.656859]  __hrtimer_run_queues+0x12a/0x270
2021-11-30-02:03:24: [39822.657859]  hrtimer_interrupt+0x110/0x2c0
2021-11-30-02:03:24: [39822.658860]  __sysvec_apic_timer_interrupt+0x5c/0xd0
2021-11-30-02:03:24: [39822.658860]  sysvec_apic_timer_interrupt+0x6d/0x90
2021-11-30-02:03:24: [39822.659859]  </IRQ>
2021-11-30-02:03:24: [39822.659859]  asm_sysvec_apic_timer_interrupt+0x12/0x20
2021-11-30-02:03:24: [39822.660859] RIP: 0010:avtab_search_node+0xcd/0x100
2021-11-30-02:03:24: [39822.661858] Code: ae b2 c2 41 89 c1 41 c1 e9 10 44 31 c8 23 41 10 48 8b 09 48 98 48 8b 04 c1 48 85 c0 74 39 0f b7 4f 06 66 81 e1 ff 7f 66 39 10 <74> 0c 77 1e 48 8b 40 10 48 85 c0 75 f0 c3 66 39 70 02 75 ee 66 44
2021-11-30-02:03:24: [39822.664859] RSP: 0018:ffffaffd4014fad8 EFLAGS: 00000216
2021-11-30-02:03:24: [39822.664859] RAX: ffff8a85e6be2f18 RBX: ffff8a85d481ba40 RCX: 0000000000000707
2021-11-30-02:03:24: [39822.665859] RDX: 000000000000014e RSI: 0000000000000134 RDI: ffffaffd4014fb40
2021-11-30-02:03:24: [39822.666858] RBP: 0000000000000133 R08: 0000000000000007 R09: 0000000000008be8
2021-11-30-02:03:24: [39822.668858] R10: 0000000000000133 R11: 6d6f632f35393132 R12: ffff8a85d481ba48
2021-11-30-02:03:24: [39822.669860] R13: ffffaffd4014fc94 R14: ffffaffd4014fc10 R15: ffff8a85d3442188
2021-11-30-02:03:24: [39822.670858]  context_struct_compute_av+0x1ed/0x4a0
2021-11-30-02:03:24: [39822.670858]  security_compute_av+0x129/0x290
2021-11-30-02:03:24: [39822.671860]  avc_compute_av.isra.0+0x35/0x60
2021-11-30-02:03:24: [39822.672859]  avc_has_perm_noaudit+0xe3/0xf0
2021-11-30-02:03:24: [39822.672859]  selinux_inode_permission+0x10e/0x1d0
2021-11-30-02:03:24: [39822.673858]  security_inode_permission+0x30/0x50
2021-11-30-02:03:24: [39822.674860]  link_path_walk.part.0.constprop.0+0x29f/0x380
2021-11-30-02:03:24: [39822.674860]  ? path_init+0x2bc/0x3e0
2021-11-30-02:03:24: [39822.675858]  path_openat+0xb1/0x2b0
2021-11-30-02:03:24: [39822.676859]  do_filp_open+0xb2/0x150
2021-11-30-02:03:24: [39822.676859]  ? __virt_addr_valid+0x45/0x70
2021-11-30-02:03:24: [39822.677860]  ? __check_object_size.part.0+0x11f/0x140
2021-11-30-02:03:24: [39822.678859]  do_sys_openat2+0x96/0x150
2021-11-30-02:03:24: [39822.678859]  __x64_sys_openat+0x53/0x90
2021-11-30-02:03:24: [39822.679858]  do_syscall_64+0x3b/0x90
2021-11-30-02:03:24: [39822.679858]  entry_SYSCALL_64_after_hwframe+0x44/0xae
2021-11-30-02:03:24: [39822.680859] RIP: 0033:0x7f784a66870b
2021-11-30-02:03:24: [39822.680859] Code: 25 00 00 41 00 3d 00 00 41 00 74 4b 64 8b 04 25 18 00 00 00 85 c0 75 67 44 89 e2 48 89 ee bf 9c ff ff ff b8 01 01 00 00 0f 05 <48> 3d 00 f0 ff ff 0f 87 91 00 00 00 48 8b 54 24 28 64 48 2b 14 25
2021-11-30-02:03:24: [39822.683859] RSP: 002b:00007ffe6d8e91c0 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
2021-11-30-02:03:24: [39822.684860] RAX: ffffffffffffffda RBX: 000055b3e8a4b3e0 RCX: 00007f784a66870b
2021-11-30-02:03:24: [39822.685859] RDX: 0000000000080000 RSI: 00007ffe6d8e9350 RDI: 00000000ffffff9c
2021-11-30-02:03:24: [39822.686859] RBP: 00007ffe6d8e9350 R08: 0000000000000008 R09: 0000000000000001
2021-11-30-02:03:24: [39822.687859] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000080000
2021-11-30-02:03:24: [39822.689858] R13: 000055b3e8a4b3e0 R14: 0000000000000001 R15: 0000000000000000

Comment 3 Dr. David Alan Gilbert 2021-12-02 10:40:47 UTC
a) Can we tell if the device_del completed on the source before the migration?
b) Is this new to RHEL9 or did it happen on 8.5?

Comment 4 Li Xiaohui 2021-12-02 10:56:11 UTC
(In reply to Dr. David Alan Gilbert from comment #3)
> a) Can we tell if the device_del completed on the source before the
> migration?

Yes, device_del completed; the automation checks the result.

> b) Is this new to RHEL9 or did it happen on 8.5?

I didn't reproduce the issue on RHEL 8.6.0 within 50 runs.
For the RHEL 8.6.0 test, I used other machines but still migrated from Milan to Naples.

Comment 5 Dr. David Alan Gilbert 2021-12-02 11:00:19 UTC
(In reply to Li Xiaohui from comment #4)
> (In reply to Dr. David Alan Gilbert from comment #3)
> > a) Can we tell if the device_del completed on the source before the
> > migration?
> 
> Yes, device_del completed as automation would check the result
> 
> > b) Is this new to RHEL9 or did it happen on 8.5?
> 
> I didn't reproduce issue on rhel 8.6.0 under 50 times.
> For rhel 8.6.0 test, I used other machines but still migrate from Milan to
> Naples.

OK, thanks.
So the question is whether the change in the host or the guest from 8->9 made the difference.
Can you please try:

   a) RHEL 9 host, with RHEL8.6 guest
   b) RHEL 8.6 host with RHEL 9 guest

so then we can see if it always happens with a rhel9 host or always with the rhel9 guest kernel.

Comment 6 Li Xiaohui 2021-12-07 11:06:54 UTC
(In reply to Dr. David Alan Gilbert from comment #5)
> (In reply to Li Xiaohui from comment #4)
> > (In reply to Dr. David Alan Gilbert from comment #3)
> > > a) Can we tell if the device_del completed on the source before the
> > > migration?
> > 
> > Yes, device_del completed as automation would check the result
> > 
> > > b) Is this new to RHEL9 or did it happen on 8.5?
> > 
> > I didn't reproduce issue on rhel 8.6.0 under 50 times.
> > For rhel 8.6.0 test, I used other machines but still migrate from Milan to
> > Naples.
> 
> OK, thanks.
> So the question is whether the change in the host or the guest from 8->9
> made the difference.
> Can you please try:

The following scenarios were all tried 100 times,

> 
>    a) RHEL 9 host, with RHEL8.6 guest

Still hit some errors when checking dmesg info in step 7 of the Description with a RHEL 8.6.0 guest running on RHEL 9 hosts, like:
[276799.741251] systemd[1]: systemd-logind.service: Watchdog timeout (limit 3min)!
[276799.741251] systemd[1]: systemd-logind.service: Killing process 1403 (systemd-logind) with signal SIGABRT.
[276799.741251] systemd[1]: sssd-kcm.service: Succeeded.
[276799.741251] systemd-coredump[2871]: Resource limits disable core dumping for process 939 (systemd-journal).
[276799.741251] systemd-coredump[2871]: Process 939 (systemd-journal) of user 0 dumped core.
[276799.741251] systemd[1]: systemd-journald.service: Main process exited, code=dumped, status=6/ABRT
[276799.741251] systemd[1]: systemd-journald.service: Failed with result 'watchdog'.
[276799.741251] systemd[1]: systemd-journald.service: Service has no hold-off time (RestartSec=0), scheduling restart.
[276799.741251] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 2.
[276799.741251] systemd[1]: systemd-journal-flush.service: Succeeded.
[276799.741251] systemd[1]: Stopped Flush Journal to Persistent Storage.
[276799.741251] systemd[1]: Stopping Flush Journal to Persistent Storage...

>    b) RHEL 8.6 host with RHEL 9 guest

Case works well, no product issue.

> 
> so then we can see if it always happens with a rhel9 host or always with the
> rhel9 guest kernel.

Comment 7 Dr. David Alan Gilbert 2021-12-07 13:34:45 UTC
OK, thanks for the test.
So that looks like it's specific to the RHEL 9 host.
So maybe it's RHEL 9 host kernel related?

Comment 8 Li Xiaohui 2021-12-10 07:39:47 UTC
Hit a strange issue when testing this case on RHEL 9: fail to get the hotplugged vcpu in the guest after hotplugging the cpu via QMP.
According to the Description, after executing step 6,
 { "execute": "device_add","arguments":{"driver":"EPYC-x86_64-cpu","core-id": "1","thread-id": "0","socket-id": "0","id":"cpu1" }}
and waiting more than 1 minute, I still fail to see the hotplugged vcpu in the guest via: cat /proc/cpuinfo | grep processor | wc -l.

The above issue also has a low reproduction rate. I'm not sure whether it's the same product issue as this BZ. Could someone help?


I could also see some events when hotplugging and unplugging the cpu. Does it mean the hotplug/unplug succeeded only if the events below occur?
1) hotplug vcpu:
{"timestamp": {"seconds": 1639046699, "microseconds": 636204}, "event": "ACPI_DEVICE_OST", "data": {"info": {"device": "cpu1", "source": 1, "status": 0, "slot": "1", "slot-type": "CPU"}}}
2) unplug cpu:
{"timestamp": {"seconds": 1639046666, "microseconds": 552417}, "event": "ACPI_DEVICE_OST", "data": {"info": {"device": "cpu1", "source": 3, "status": 132, "slot": "1", "slot-type": "CPU"}}}
{"timestamp": {"seconds": 1639046666, "microseconds": 585761}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/cpu1/lapic"}}
{"timestamp": {"seconds": 1639046666, "microseconds": 585965}, "event": "DEVICE_DELETED", "data": {"device": "cpu1", "path": "/machine/peripheral/cpu1"}}
{"timestamp": {"seconds": 1639046666, "microseconds": 586080}, "event": "ACPI_DEVICE_OST", "data": {"info": {"source": 3, "status": 0, "slot": "1", "slot-type": "CPU"}}}

Comment 9 Dr. David Alan Gilbert 2021-12-16 14:09:00 UTC
I started to try reproducing this and am seeing a warning on the host that doesn't sound great:

/usr/libexec/qemu-kvm -M q35 -cpu EPYC -m 8G -enable-kvm -smp 1,maxcpus=4,cores=2,threads=1,sockets=2 -nographic -drive if=virtio,file=/home/rhel-guest-image-9.0-20211129.2.x86_64.qcow2 -device EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1
device_del cpu1
(qemu) [   28.141072] kvm-guest: disable async PF for cpu 1
[   28.144365] IRQ 26: no longer affine to CPU1
[   28.147401] smpboot: CPU 1 is now offline
[   29.241128] PKCS7: Message signed outside of X.509 validity window
 
(qemu) device_add EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1
[  533.592952] CPU1 has been hot-added
[  533.595252] x86: Booting SMP configuration:
[  533.595681] smpboot: Booting Node 0 Processor 1 APIC 0x1
[  533.600854] kvm-clock: cpu 1, msr 102a01041, secondary cpu clock
[  533.601262] kvm-guest: setup async PF for cpu 1
[  533.602298] kvm-guest: stealtime: cpu 1, msr 277cae080
[  533.606947] Will online and init hotplugged CPU: 1
[  533.615735] Decoding supported only on Scalable MCA processors.
(qemu) [  534.725385] PKCS7: Message signed outside of X.509 validity window
 

and on the host I see:

[11577.534338] KVM: KVM_SET_CPUID{,2} after KVM_RUN may cause guest instability
[11577.541390] KVM: KVM_SET_CPUID{,2} will fail after KVM_RUN starting with Linux 5.16

that's currently a 5.14.0-16.el9.x86_64 host kernel and a 6.2.0-1.rc3 qemu.

which makes me worry what's tripping it.
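
As a side note, a simple way to watch for that host-side warning while exercising the plug/unplug is to follow the host kernel log, e.g.:

dmesg -wT | grep -i KVM_SET_CPUID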

Comment 10 Leonardo Bras 2021-12-16 15:05:45 UTC
(In reply to Li Xiaohui from comment #0)
> 2.Add parameter "movable_node" into vm kernel line, then reboot vm

> Qemu command line:
> /usr/libexec/qemu-kvm  \
[...]
> -m 4096 \
> -smp 1,maxcpus=4,cores=2,threads=1,sockets=2 \
[...]

Kernel documentation on movable_node:
[KNL] Boot-time switch to make hotplugable memory
NUMA nodes to be movable. This means that the memory
of such nodes will be usable only for movable
allocations which rules out almost all kernel
allocations. Use with caution!

For this VM, which accepts no hot-plugged memory, this parameter should make no difference.
Is this parameter necessary to reproduce the bug?

Best regards,
Leo

Comment 11 Vitaly Kuznetsov 2021-12-16 15:21:45 UTC
(In reply to Dr. David Alan Gilbert from comment #9)
> 
> [11577.534338] KVM: KVM_SET_CPUID{,2} after KVM_RUN may cause guest
> instability
> [11577.541390] KVM: KVM_SET_CPUID{,2} will fail after KVM_RUN starting with
> Linux 5.16
> 
> that's currently a 5.14.0-16.el9.x86_64 host kernel and a 6.2.0-1.rc3 qemu.
> 
> which makes me worry what's tripping it.

In case these warnings are triggered by selftests (hyperv_features, vmx_pmu_msrs_test),
this is known. Upstream, such sequences are already forbidden after:

commit feb627e8d6f69c9a319fe279710959efb3eba873
Author: Vitaly Kuznetsov <vkuznets>
Date:   Mon Nov 22 18:58:18 2021 +0100

    KVM: x86: Forbid KVM_SET_CPUID{,2} after KVM_RUN

we're not backporting it to RHEL yet but in case QEMU is triggering these, it
needs to be fixed ASAP as it won't work with 5.16.

Comment 12 Dr. David Alan Gilbert 2021-12-16 16:22:17 UTC
I'm wondering if it's an artefact of not cleaning up after the hot-unplug?

Starting the following stap run after the device_del:

global initialtid = 0;
probe process("/usr/libexec/qemu-kvm").statement("*@kvm.c:1979") {
	printf("tid: %d host cpu: %d : kvm_arch_init_vcpu\n", tid(), cpu())
	if (initialtid == 0) {
		initialtid = tid();
	} else {
		if (tid()!=initialtid) {
			print_ubacktrace();
		}
	}
}

probe process("/usr/libexec/qemu-kvm").statement("*@kvm-all.c:2850") {
	printf("tid: %d host cpu: %d : kvm_cpu_exec KVM_RUN call\n", tid(), cpu())
	if (initialtid == 0) {
		initialtid = tid();
	} else {
		if (tid()!=initialtid) {
			print_ubacktrace();
		}
	}
}

probe module("kvm").statement("*@mmu.c:4920") {
	printf("tid: %d host cpu: %d : KERN kvm_mmu_after_set_cpuid warning last_vmentry_cpu=%d\n", tid(), cpu(), $vcpu->arch->last_vmentry_cpu)
	print_backtrace();
}
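
For reference, a script like this is typically saved to a file and run on the host with something like "stap -v parked-vcpu-cpuid.stp" (the file name is a placeholder); it needs the qemu-kvm and kernel debuginfo packages installed so the statement probes can resolve.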


 I'm seeing:

tid: 82869 host cpu: 25 : kvm_cpu_exec KVM_RUN call
tid: 82869 host cpu: 25 : kvm_cpu_exec KVM_RUN call
tid: 82869 host cpu: 25 : kvm_cpu_exec KVM_RUN call
tid: 82869 host cpu: 25 : kvm_cpu_exec KVM_RUN call
tid: 83555 host cpu: 156 : kvm_arch_init_vcpu
 0x562c59e98756 : kvm_arch_init_vcpu+0x16b6/0x2140 [/usr/libexec/qemu-kvm]
 0x562c5a05b99a : kvm_init_vcpu+0x32a/0x390 [/usr/libexec/qemu-kvm]
 0x562c5a05f52f : kvm_vcpu_thread_fn+0x13f/0x3e0 [/usr/libexec/qemu-kvm]
 0x562c5a2b709a : qemu_thread_start+0x6a/0x100 [/usr/libexec/qemu-kvm]
 0x7fb39d828af7 : start_thread+0x297/0x420 [/usr/lib64/libc.so.6]
 0x7fb39d8ad850 : __GI___clone3+0x30/0x50 [/usr/lib64/libc.so.6]
tid: 83555 host cpu: 156 : KERN kvm_mmu_after_set_cpuid warning last_vmentry_cpu=9
 0xffffffffc0bccd56 : kvm_mmu_after_set_cpuid+0x36/0x70 [kvm]
 0xffffffffc0bb6651 : kvm_vcpu_ioctl_set_cpuid2+0x51/0xf0 [kvm]
 0xffffffffc0b9a6fb : kvm_arch_vcpu_ioctl+0x62b/0x1110 [kvm]
 0xffffffffc0b76370 : kvm_vcpu_ioctl+0x3c0/0x620 [kvm]
 0xffffffffa8972032 : __x64_sys_ioctl+0x82/0xb0 [kernel]
 0xffffffffa903478b : do_syscall_64+0x3b/0x90 [kernel]
 0xffffffffa920007c : entry_SYSCALL_64_after_hwframe+0x44/0xae [kernel]
 0xffffffffa920007c : entry_SYSCALL_64_after_hwframe+0x44/0xae [kernel] (inexact)
tid: 83555 host cpu: 156 : kvm_cpu_exec KVM_RUN call
 0x562c5a05d5bf : kvm_cpu_exec+0x10f/0x7d0 [/usr/libexec/qemu-kvm]
 0x562c5a05f5fa : kvm_vcpu_thread_fn+0x20a/0x3e0 [/usr/libexec/qemu-kvm]
 0x562c5a2b709a : qemu_thread_start+0x6a/0x100 [/usr/libexec/qemu-kvm]
 0x7fb39d828af7 : start_thread+0x297/0x420 [/usr/lib64/libc.so.6]
 0x7fb39d8ad850 : __GI___clone3+0x30/0x50 [/usr/lib64/libc.so.6]
tid: 82869 host cpu: 25 : kvm_cpu_exec KVM_RUN call
tid: 82869 host cpu: 25 : kvm_cpu_exec KVM_RUN call
tid: 82869 host cpu: 25 : kvm_cpu_exec KVM_RUN call
tid: 82869 host cpu: 25 : kvm_cpu_exec KVM_RUN call



(qemu) info cpus 
* CPU #0: thread_id=82869
  CPU #1: thread_id=83555

Comment 13 Dr. David Alan Gilbert 2021-12-16 16:40:28 UTC
So I'm wondering if this happens because when we device_del a vcpu, we don't destroy the kernel's idea of the vcpu (I think we put it on a
'parked' list in QEMU) and then later resurrect it; the kernel doesn't realise it should be new.

(Whether that's the cause of the original problem is unclear to me; but it might explain the warning)

Comment 14 Dr. David Alan Gilbert 2021-12-16 18:58:29 UTC
Li Xiaohui:
  a) Comment 8 - is that hot plug problem involving migration or separate?
  b) In Comment 10 Leo asks why you use moveable_node - please explain
  c) In the meeting the other day, Juan noted you're using a mix of different CPU hosts; please clarify if this happens on one CPU host type.

Comment 15 Li Xiaohui 2021-12-19 10:59:40 UTC
(In reply to Dr. David Alan Gilbert from comment #14)
> Li Xiaohui:
>   a) Comment 8 - is that hot plug problem involving migration or separate?

I will check later.

>   b) In Comment 10 Leo asks why you use moveable_node - please explain

Thanks Leo, he is right.
Will remove the movable_node setting.

>   c) In the meeting the other day, Juan noted you're using a mix of
> different CPU hosts; please clarify if this happens on one CPU host type.

Didn't hit any issue when trying this case on two Naples machines.
I have no Milan machines at the moment; will try after getting available machines.

Comment 16 Li Xiaohui 2021-12-27 06:54:02 UTC
(In reply to Li Xiaohui from comment #15)
> (In reply to Dr. David Alan Gilbert from comment #14)
> > Li Xiaohui:
> >   a) Comment 8 - is that hot plug problem involving migration or separate?
> 
> I will check later.

Yes, the hotplug problem in Comment 8 involves migration. Didn't reproduce the hotplug issue without migration.

> 
> >   b) In Comment 10 Leo asks why you use moveable_node - please explain
> 
> Thanks Leo, he is right.
> Will remove movable node setting.
> 
> >   c) In the meeting the other day, Juan noted you're using a mix of
> > different CPU hosts; please clarify if this happens on one CPU host type.
> 
> Didn't hit any issue when try this case on two Naples machines.
> Have no Milan machines, will try after getting available machines.

Also didn't hit the guest crash issue when testing on two Milan machines, but it reproduces easily between a Milan and a Naples machine.

It seems the guest crash and hotplug issues are only hit when testing on two hosts that have different CPU models.

Comment 17 Igor Mammedov 2021-12-28 12:57:59 UTC
(In reply to Li Xiaohui from comment #16)
> (In reply to Li Xiaohui from comment #15)
> > (In reply to Dr. David Alan Gilbert from comment #14)
> > > Li Xiaohui:
> > >   a) Comment 8 - is that hot plug problem involving migration or separate?
> > 
> > I will check later.
> 
> Yes, hotplug problem in Comment 8 is involving migration. Didn't reproduce
> hotplug issue without migration.
> 
> > 
> > >   b) In Comment 10 Leo asks why you use moveable_node - please explain
> > 
> > Thanks Leo, he is right.
> > Will remove movable node setting.
> > 
> > >   c) In the meeting the other day, Juan noted you're using a mix of
> > > different CPU hosts; please clarify if this happens on one CPU host type.
> > 
> > Didn't hit any issue when try this case on two Naples machines.
> > Have no Milan machines, will try after getting available machines.
> 
> Also didn't hit guest crash issue when test on Two Milan machines, but
> easily reproduce between Milan and Naple machine.
> 
> Seems only hit guest crash and hotplug issue when test on two hosts that
> have different cpu models.

can you diff the cpuid within the guest on both hosts?
(i.e.
  1. start the vm on the source host and copy /proc/cpuinfo in the guest
  2. migrate the vm to the destination host, hotplug the CPU and make a copy of /proc/cpuinfo
  3. compare both cpuinfos)

PS:
as an alternative to step 2, it would also be interesting to see
the cpuinfo from a freshly started vm on the destination host
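
A minimal sketch of the commands behind steps 1-3 (the file names are placeholders):
# in the guest on the source host, before migration
cat /proc/cpuinfo > cpuinfo-src.txt
# in the guest on the destination host, after migration and CPU hotplug
cat /proc/cpuinfo > cpuinfo-dst.txt
# copy both files to one place and compare
diff cpuinfo-src.txt cpuinfo-dst.txt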

Comment 18 Li Xiaohui 2022-01-07 06:32:57 UTC
Sorry, I will try according to Igor's Comment 17 next week, since we are busy with some other product issues right now. Thanks.

Comment 19 Li Xiaohui 2022-01-12 02:25:11 UTC
(In reply to Igor Mammedov from comment #17)
> (In reply to Li Xiaohui from comment #16)
> > (In reply to Li Xiaohui from comment #15)
> > > (In reply to Dr. David Alan Gilbert from comment #14)
> > > > Li Xiaohui:
> > > >   a) Comment 8 - is that hot plug problem involving migration or separate?
> > > 
> > > I will check later.
> > 
> > Yes, hotplug problem in Comment 8 is involving migration. Didn't reproduce
> > hotplug issue without migration.
> > 
> > > 
> > > >   b) In Comment 10 Leo asks why you use moveable_node - please explain
> > > 
> > > Thanks Leo, he is right.
> > > Will remove movable node setting.
> > > 
> > > >   c) In the meeting the other day, Juan noted you're using a mix of
> > > > different CPU hosts; please clarify if this happens on one CPU host type.
> > > 
> > > Didn't hit any issue when try this case on two Naples machines.
> > > Have no Milan machines, will try after getting available machines.
> > 
> > Also didn't hit guest crash issue when test on Two Milan machines, but
> > easily reproduce between Milan and Naple machine.
> > 
> > Seems only hit guest crash and hotplug issue when test on two hosts that
> > have different cpu models.
> 
> can you diff cpuid within guest on both hosts?
> (i.e.
>   1. start vm on source host copy /proc/cpuinfo in the guest
>   2. migrate the vm to destination host, hotplug CPU and make a copy of
> /proc/cpuinfo)
>   3. compare both cpuinfos

Hi, when hitting the guest call trace, the cpuinfo of the guest on the src host and on the dst host (after migration and hotplug) is the same, as shown below [1].

> 
> PS:
> it also would be interesting to see as an alternative to step 2
> cpuinfo from freshly started vm on destination host
> )

But when the same qemu command line is used to freshly boot the vm on the destination host, the cpuinfo is as below [2], differing in cpu MHz and bogomips:

[root@hp-dl385g10-13 home]# diff source destination 
9c9
< cpu MHz		: 2994.372
---
> cpu MHz		: 2096.058
23c23
< bogomips	: 5988.74
---
> bogomips	: 4192.11
28a29
> 
36c37
< cpu MHz		: 2994.372
---
> cpu MHz		: 2096.058
50c51
< bogomips	: 5988.74
---
> bogomips	: 4192.11



***************************************
[1]cpuid info
[root@guest ~]# cat /proc/cpuinfo
processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 23
model		: 1
model name	: AMD EPYC Processor
stepping	: 2
microcode	: 0x1000065
cpu MHz		: 2994.372
cache size	: 512 KB
physical id	: 0
siblings	: 2
core id		: 0
cpu cores	: 2
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 arat
bugs		: fxsave_leak sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass
bogomips	: 5988.74
TLB size	: 1024 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 48 bits physical, 48 bits virtual
power management:
processor	: 1
vendor_id	: AuthenticAMD
cpu family	: 23
model		: 1
model name	: AMD EPYC Processor
stepping	: 2
microcode	: 0x1000065
cpu MHz		: 2994.372
cache size	: 512 KB
physical id	: 0
siblings	: 2
core id		: 1
cpu cores	: 2
apicid		: 1
initial apicid	: 1
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 arat
bugs		: fxsave_leak sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass
bogomips	: 5988.74
TLB size	: 1024 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 48 bits physical, 48 bits virtual
power management:

******************************************
[2]cpuid info
[root@guest ~]# cat /proc/cpuinfo
processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 23
model		: 1
model name	: AMD EPYC Processor
stepping	: 2
microcode	: 0x1000065
cpu MHz		: 2096.058
cache size	: 512 KB
physical id	: 0
siblings	: 2
core id		: 0
cpu cores	: 2
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 arat
bugs		: fxsave_leak sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass
bogomips	: 4192.11
TLB size	: 1024 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 48 bits physical, 48 bits virtual
power management:

processor	: 1
vendor_id	: AuthenticAMD
cpu family	: 23
model		: 1
model name	: AMD EPYC Processor
stepping	: 2
microcode	: 0x1000065
cpu MHz		: 2096.058
cache size	: 512 KB
physical id	: 0
siblings	: 2
core id		: 1
cpu cores	: 2
apicid		: 1
initial apicid	: 1
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 arat
bugs		: fxsave_leak sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass
bogomips	: 4192.11
TLB size	: 1024 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 48 bits physical, 48 bits virtual
power management:

Comment 20 Leonardo Bras 2022-01-14 18:31:36 UTC
Hello Li Xiaohui,

Just out of curiosity, could you please send us the output of 
cat /proc/cpuinfo
for both hosts?

Best regards,
Leo

Comment 21 Li Xiaohui 2022-01-15 02:29:17 UTC
(In reply to Leonardo Bras from comment #20)
> Hello Li Xiaohui,
> 
> Just out of curiosity, could you please send us the output of 
> cat /proc/cpuinfo
> for both hosts?
> 
> Best regards,
> Leo

Source and destination host cpuinfo are as follows:
1) Src host:
processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 25
model		: 1
model name	: AMD EPYC 7313 16-Core Processor
stepping	: 1
microcode	: 0xa001143
cpu MHz		: 2994.417
cache size	: 512 KB
physical id	: 0
siblings	: 32
core id		: 0
cpu cores	: 16
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 16
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca sme sev sev_es
bugs		: sysret_ss_attrs spectre_v1 spectre_v2 spec_store_bypass
bogomips	: 5988.83
TLB size	: 2560 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 43 bits physical, 48 bits virtual
power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]


2) Dst host:
processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 23
model		: 1
model name	: AMD EPYC 7251 8-Core Processor
stepping	: 2
microcode	: 0x8001250
cpu MHz		: 2100.000
cache size	: 512 KB
physical id	: 0
siblings	: 16
core id		: 0
cpu cores	: 8
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca
bugs		: sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass
bogomips	: 4192.01
TLB size	: 2560 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 48 bits physical, 48 bits virtual
power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]

Comment 24 Leonardo Bras 2022-01-17 18:16:12 UTC
Not sure if this is important, but I found that both the guest and the src host have the x2apic cpu flag present, while the dst host does not.
Any chance something gets messed up in this migration because of the missing flag?

Comment 25 Leonardo Bras 2022-01-17 20:03:35 UTC
(In reply to Leonardo Bras from comment #24)
> Not sure if this is important, but I found that both the guest and src host
> have the x2apic cpu flag present, while the dst host does not.
> Any chance something gets messed in this migration because of the missing
> flag?

Oh, nevermind. We should have it emulated anyway.
arch/x86/kvm/cpuid.c:   /* KVM emulates x2apic in software irrespective of host support. */

Comment 26 Leonardo Bras 2022-01-17 20:05:48 UTC
Li Xiaohui, I would like to try debugging this.
Is there any way those hosts can be made available for a few days?

Comment 28 Leonardo Bras 2022-01-19 23:59:11 UTC
Thanks for lending the machines Li Xiaohui!

(In reply to Dr. David Alan Gilbert from comment #9)
> I started to try reproducing this and am seeing a warning on the host that
> doesn't sound great:
> 
> /usr/libexec/qemu-kvm -M q35 -cpu EPYC -m 8G -enable-kvm -smp
> 1,maxcpus=4,cores=2,threads=1,sockets=2 -nographic -drive
> if=virtio,file=/home/rhel-guest-image-9.0-20211129.2.x86_64.qcow2 -device
> EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1
> device_del cpu1

I started the qemu on Milan host with this exact command-line (changing only the disk file).
Then I started qemu on receiving Naples host with the same command + '-incoming tcp:dst_host:4321'

When the migration finishes, the receiving qemu will print guest kernel errors:

[28211.560039] ------------[ cut here ]------------                                                       
[28211.561250] Bad FPU state detected at __restore_fpregs_from_fpstate+0x36/0x50, reinitializing FPU registers.       
[28211.561300] WARNING: CPU: 0 PID: 1757 at arch/x86/mm/extable.c:65 ex_handler_fprestore+0x53/0x60                   
[28211.564960] Modules linked in: 
[...]

This happens moving from Milan to Naples, but not the other way round.
Also, I started the VM on Naples, migrated it successfully to Milan, but the same VM did fail when migrating back to Naples.

It does not seem to be the same bug as this BZ, but it hits every time.
Am I doing something wrong in this config? Is this expected?

Comment 29 Igor Mammedov 2022-01-20 10:09:20 UTC
(In reply to Leonardo Bras from comment #28)
> Thanks for lending the machines Li Xiaohui!
> 
> (In reply to Dr. David Alan Gilbert from comment #9)
> > I started to try reproducing this and am seeing a warning on the host that
> > doesn't sound great:
> > 
> > /usr/libexec/qemu-kvm -M q35 -cpu EPYC -m 8G -enable-kvm -smp
> > 1,maxcpus=4,cores=2,threads=1,sockets=2 -nographic -drive
> > if=virtio,file=/home/rhel-guest-image-9.0-20211129.2.x86_64.qcow2 -device
> > EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1
> > device_del cpu1
> 
> I started the qemu on Milan host with this exact command-line (changing only
> the disk file).
> Then I started qemu on receiving Naples host with the same command +
> '-incoming tcp:dst_host:4321'
> 
> When the migration finishes, the receiving qemu will print guest kernel
> errors:
> 
> [28211.560039] ------------[ cut here ]------------                         
> 
> [28211.561250] Bad FPU state detected at
> __restore_fpregs_from_fpstate+0x36/0x50, reinitializing FPU registers.       
> [28211.561300] WARNING: CPU: 0 PID: 1757 at arch/x86/mm/extable.c:65
> ex_handler_fprestore+0x53/0x60                   
> [28211.564960] Modules linked in: 
> [...]
> 
> This happens moving from Milan to Naples,but not otherwise.
> Also, I started the VM in Naples, migrated successfully to Milan, but the
> same VM did fail when migrating back to Naples.

That looks like a CPU feature mismatch,
though according to comment 19 the guest sees the same features.

Does this happen without any hotplug?

Does QEMU print any warnings when starting?

Also, we should recheck what features the guest sees on both hosts.
What's the output of the following QMP command with a freshly started QEMU on each host:

printf '{ "execute": "qmp_capabilities" }\n{"execute": "query-cpu-model-expansion", "arguments": {"model": {"name": "EPYC" }, "type": "full"}}\n' | nc -U path_to_your_qmp_socket

> It does not seem the same bug from this BZ, but hits every time. 
> Am I doing something wrong on this config? Is this expected?
I'd say it isn't expected.

David,
another question: we are migrating from a host with a newer CPU to a host with a much older CPU,
with a lot of features gone and some added. Do we really support migration in this case?

here is a diff of host features from comment 21:
36a37
> amd_dcm
45d45
< pcid
48d47
< x2apic
66d64
< ibs
77,79d74
< cat_l3
< cdp_l3
< invpcid_single
82,83d76
< mba
< ibrs
85d77
< stibp
92,94d83
< invpcid
< cqm
< rdt_a
99d87
< clwb
105,108d92
< cqm_llc
< cqm_occup_llc
< cqm_mbm_total
< cqm_mbm_local
112,114d95
< rdpru
< wbnoinvd
< amd_ppin
125a107
> avic
128,134d109
< v_spec_ctrl
< umip
< pku
< ospke
< vaes
< vpclmulqdq
< rdpid
138,140d112
< sme
< sev
< sev_es

Comment 30 Dr. David Alan Gilbert 2022-01-20 10:22:12 UTC
(In reply to Igor Mammedov from comment #29)
> (In reply to Leonardo Bras from comment #28)
> > Thanks for lending the machines Li Xiaohui!
> > 
> > (In reply to Dr. David Alan Gilbert from comment #9)
> > > I started to try reproducing this and am seeing a warning on the host that
> > > doesn't sound great:
> > > 
> > > /usr/libexec/qemu-kvm -M q35 -cpu EPYC -m 8G -enable-kvm -smp
> > > 1,maxcpus=4,cores=2,threads=1,sockets=2 -nographic -drive
> > > if=virtio,file=/home/rhel-guest-image-9.0-20211129.2.x86_64.qcow2 -device
> > > EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1
> > > device_del cpu1
> > 
> > I started the qemu on Milan host with this exact command-line (changing only
> > the disk file).
> > Then I started qemu on receiving Naples host with the same command +
> > '-incoming tcp:dst_host:4321'
> > 
> > When the migration finishes, the receiving qemu will print guest kernel
> > errors:
> > 
> > [28211.560039] ------------[ cut here ]------------                         
> > 
> > [28211.561250] Bad FPU state detected at
> > __restore_fpregs_from_fpstate+0x36/0x50, reinitializing FPU registers.       
> > [28211.561300] WARNING: CPU: 0 PID: 1757 at arch/x86/mm/extable.c:65
> > ex_handler_fprestore+0x53/0x60                   
> > [28211.564960] Modules linked in: 
> > [...]
> > 
> > This happens moving from Milan to Naples,but not otherwise.
> > Also, I started the VM in Naples, migrated successfully to Milan, but the
> > same VM did fail when migrating back to Naples.
> 
> that looks like cpu feature mismatch
> though according to comment 19 guest sees the same features.
> 
> Does this happen without any hotplug?
> 
> Does QEMU print any warnings when starting?
> 
> Also we should recheck what features guest sees on both hosts,
> what's output of following QMP command with freshly started qemu on each
> host:
> 
> printf '{ "execute": "qmp_capabilities" }\n{"execute":
> "query-cpu-model-expansion", "arguments": {"model": {"name": "EPYC" },
> "type": "full"}}\n' | nc -U path_to_your_qmp_socket
> 
> > It does not seem the same bug from this BZ, but hits every time. 
> > Am I doing something wrong on this config? Is this expected?
> I'd say, It isn't expected.
> 
> David,
> another question, we are migrating from host with newer CPU to the host with
> much older CPU
> with a lot of features gone and some added, do we really support migration
> in this case?

Well, we should, as long as the -cpu type passed is a correct subset of both CPUs;
the guest shouldn't see any of the newer features of the newer CPU.
I wouldn't expect migrating with -cpu host to work, but specifying the EPYC cpu type should.

> 
> here is a diff of host features from comment 21:
> 36a37
> > amd_dcm
> 45d45
> < pcid
> 48d47
> < x2apic
> 66d64
> < ibs
> 77,79d74
> < cat_l3
> < cdp_l3
> < invpcid_single
> 82,83d76
> < mba
> < ibrs
> 85d77
> < stibp
> 92,94d83
> < invpcid
> < cqm
> < rdt_a
> 99d87
> < clwb
> 105,108d92
> < cqm_llc
> < cqm_occup_llc
> < cqm_mbm_total
> < cqm_mbm_local
> 112,114d95
> < rdpru
> < wbnoinvd
> < amd_ppin
> 125a107
> > avic
> 128,134d109
> < v_spec_ctrl
> < umip
> < pku
> < ospke
> < vaes
> < vpclmulqdq
> < rdpid
> 138,140d112
> < sme
> < sev
> < sev_es

Comment 31 Dr. David Alan Gilbert 2022-01-20 13:33:38 UTC
I wonder if it's worth dumping cpuid as seen by the guest in both cases?

Comment 32 Leonardo Bras 2022-01-20 15:44:09 UTC
(In reply to Igor Mammedov from comment #29)
> (In reply to Leonardo Bras from comment #28)
> > Thanks for lending the machines Li Xiaohui!
> > 
> > (In reply to Dr. David Alan Gilbert from comment #9)
> > > I started to try reproducing this and am seeing a warning on the host that
> > > doesn't sound great:
> > > 
> > > /usr/libexec/qemu-kvm -M q35 -cpu EPYC -m 8G -enable-kvm -smp
> > > 1,maxcpus=4,cores=2,threads=1,sockets=2 -nographic -drive
> > > if=virtio,file=/home/rhel-guest-image-9.0-20211129.2.x86_64.qcow2 -device
> > > EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1
> > > device_del cpu1
> > 
> > I started the qemu on Milan host with this exact command-line (changing only
> > the disk file).
> > Then I started qemu on receiving Naples host with the same command +
> > '-incoming tcp:dst_host:4321'
> > 
> > When the migration finishes, the receiving qemu will print guest kernel
> > errors:
> > 
> > [28211.560039] ------------[ cut here ]------------                         
> > 
> > [28211.561250] Bad FPU state detected at
> > __restore_fpregs_from_fpstate+0x36/0x50, reinitializing FPU registers.       
> > [28211.561300] WARNING: CPU: 0 PID: 1757 at arch/x86/mm/extable.c:65
> > ex_handler_fprestore+0x53/0x60                   
> > [28211.564960] Modules linked in: 
> > [...]
> > 
> > This happens moving from Milan to Naples,but not otherwise.
> > Also, I started the VM in Naples, migrated successfully to Milan, but the
> > same VM did fail when migrating back to Naples.
> 
> that looks like cpu feature mismatch
> though according to comment 19 guest sees the same features.
> 
> Does this happen without any hotplug?

I am not sure if the -device at the end of the qemu command line counts as a hotplug, but I haven't hot-added anything through QMP/HMP.

> 
> Does QEMU print any warnings when starting?

No, I did not see anything weird. Only the usual:
SeaBIOS (version 1.15.0-1.el9)                                                                                         
iPXE (http://ipxe.org) 00:02.0 CA00 PCI2.10 PnP PMM+7FF8D410+7FECD410 CA00                                             
Booting from Hard Disk...                                                                                              
.                                                                                                                      
Probing EDD (edd=off to disable)... o                                                                                  
[    0.000000] Linux version 5.14.0-42.el9.x86_64 (mockbuild.eng.bos.redhat.com) (gcc (GCC) 11.2.1 20211203 (Red Hat 11.2.1-7), GNU ld version 2.35.2-14.el9) #1 SMP PREEMPT Thu Jan 13 15:21:10 EST 2022    

> 
> Also we should recheck what features guest sees on both hosts,

On guests, lscpu prints the same output, apart from Bogomips.  

> what's output of following QMP command with freshly started qemu on each
> host:
> 
> printf '{ "execute": "qmp_capabilities" }\n{"execute":
> "query-cpu-model-expansion", "arguments": {"model": {"name": "EPYC" },
> "type": "full"}}\n' | nc -U path_to_your_qmp_socket

The output is exactly the same for both guests. I attached them for reference.


> 
> > It does not seem the same bug from this BZ, but hits every time. 
> > Am I doing something wrong on this config? Is this expected?
> I'd say, It isn't expected.
> 
> David,
> another question, we are migrating from host with newer CPU to the host with
> much older CPU
> with a lot of features gone and some added, do we really support migration
> in this case?
> 
> here is a diff of host features from comment 21:
> 36a37
> > amd_dcm
> 45d45
> < pcid
> 48d47
> < x2apic
> 66d64
> < ibs
> 77,79d74
> < cat_l3
> < cdp_l3
> < invpcid_single
> 82,83d76
> < mba
> < ibrs
> 85d77
> < stibp
> 92,94d83
> < invpcid
> < cqm
> < rdt_a
> 99d87
> < clwb
> 105,108d92
> < cqm_llc
> < cqm_occup_llc
> < cqm_mbm_total
> < cqm_mbm_local
> 112,114d95
> < rdpru
> < wbnoinvd
> < amd_ppin
> 125a107
> > avic
> 128,134d109
> < v_spec_ctrl
> < umip
> < pku
> < ospke
> < vaes
> < vpclmulqdq
> < rdpid
> 138,140d112
> < sme
> < sev
> < sev_es

Re-captured the host flags and diffed Naples->Milan with diff -u, grepping only for the changes (a sketch of one way to produce such a diff follows the list):
-amd_dcm
+pcid
+x2apic
+ibs
+cat_l3
+cdp_l3
+invpcid_single
+mba
+ibrs
+stibp
+erms
+invpcid
+cqm
+rdt_a
+clwb
+cqm_llc
+cqm_occup_llc
+cqm_mbm_total
+cqm_mbm_local
+rdpru
+wbnoinvd
+amd_ppin
-avic
+v_spec_ctrl
+umip
+pku
+ospke
+vaes
+vpclmulqdq
+rdpid
+fsrm

Could not find any of the above in the guest flags.
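
(A minimal sketch of one way to produce such a flag diff, assuming the flags come from /proc/cpuinfo on each host; the file names are illustrative:)

  # on each host: capture the CPU flags, one per line (save under a different name per host)
  grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 | tr ' ' '\n' | sed '/^$/d' > flags-naples.txt
  # with both files in one place, keep only the added/removed flags
  diff -u flags-naples.txt flags-milan.txt | grep -E '^[+-][a-z]'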

Comment 34 Leonardo Bras 2022-01-20 15:54:30 UTC
(In reply to Dr. David Alan Gilbert from comment #30)
> > > Am I doing something wrong on this config? Is this expected?
> > I'd say, It isn't expected.
> > 
> > David,
> > another question, we are migrating from host with newer CPU to the host with
> > much older CPU
> > with a lot of features gone and some added, do we really support migration
> > in this case?
> 
> Well we should as long as the -cpu type passed is a correct subset of both
> CPUs;

At least in terms of 'flags', both guests have the same set, and none of them appear in the hosts' diff.

> the guest shouldn't see any of the newer features of the newer CPU;
>  I woudln't expect migrating with -cpu host to work , but specifying the
> EPYC cpu type should.

Ok then.
Since this new bug is blocking this BZ, I will try to debug it first, and then come back to this one.

Comment 35 Li Xiaohui 2022-01-21 12:42:30 UTC
(In reply to Leonardo Bras from comment #32)
...
> > > 
> > > I started the qemu on Milan host with this exact command-line (changing only
> > > the disk file).
> > > Then I started qemu on receiving Naples host with the same command +
> > > '-incoming tcp:dst_host:4321'
> > > 
> > > When the migration finishes, the receiving qemu will print guest kernel
> > > errors:
> > > 
> > > [28211.560039] ------------[ cut here ]------------                         
> > > 
> > > [28211.561250] Bad FPU state detected at
> > > __restore_fpregs_from_fpstate+0x36/0x50, reinitializing FPU registers.       
> > > [28211.561300] WARNING: CPU: 0 PID: 1757 at arch/x86/mm/extable.c:65
> > > ex_handler_fprestore+0x53/0x60                   
> > > [28211.564960] Modules linked in: 
> > > [...]
> > > 
> > > This happens moving from Milan to Naples,but not otherwise.
> > > Also, I started the VM in Naples, migrated successfully to Milan, but the
> > > same VM did fail when migrating back to Naples.
> > 
> > that looks like cpu feature mismatch
> > though according to comment 19 guest sees the same features.
> > 
> > Does this happen without any hotplug?
> 
> I am not sure if the -device in the end of the qemu command-line counts as a
> hotplug, but I haven't hot-added anything through QMP/HMP.
> 

I don't know which '-device' qemu command you mean, but the qemu command below is for hotplugging a vcpu:
-device EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1 \

Comment 36 Li Xiaohui 2022-01-21 13:57:09 UTC
(In reply to Leonardo Bras from comment #34)
> (In reply to Dr. David Alan Gilbert from comment #30)
> > > > Am I doing something wrong on this config? Is this expected?
> > > I'd say, It isn't expected.
> > > 
> > > David,
> > > another question, we are migrating from host with newer CPU to the host with
> > > much older CPU
> > > with a lot of features gone and some added, do we really support migration
> > > in this case?
> > 
> > Well we should as long as the -cpu type passed is a correct subset of both
> > CPUs;
> 
> At least in terms of 'flags', both guests have the same, and none of them
> are in 
> hosts' diff.
> 
> > the guest shouldn't see any of the newer features of the newer CPU;
> >  I woudln't expect migrating with -cpu host to work , but specifying the
> > EPYC cpu type should.
> 
> Ok then,
> Since this new bug is blocking this BZ, I will try to debug it, and then
> come back to this BZ's bug.

Thanks Leonardo for highlighting this issue. I filed a new bug for it:
Bug 2043545 - Guest dump after migrate from Milan to Naples machine

Comment 37 Igor Mammedov 2022-01-21 15:15:23 UTC
(In reply to Leonardo Bras from comment #32)
> (In reply to Igor Mammedov from comment #29)
> > (In reply to Leonardo Bras from comment #28)
> > > Thanks for lending the machines Li Xiaohui!
> > > 
> > > (In reply to Dr. David Alan Gilbert from comment #9)
> > > > I started to try reproducing this and am seeing a warning on the host that
> > > > doesn't sound great:
> > > > 
> > > > /usr/libexec/qemu-kvm -M q35 -cpu EPYC -m 8G -enable-kvm -smp
> > > > 1,maxcpus=4,cores=2,threads=1,sockets=2 -nographic -drive
> > > > if=virtio,file=/home/rhel-guest-image-9.0-20211129.2.x86_64.qcow2 -device
> > > > EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1
> > > > device_del cpu1
> > > 
> > > I started the qemu on Milan host with this exact command-line (changing only
> > > the disk file).
> > > Then I started qemu on receiving Naples host with the same command +
> > > '-incoming tcp:dst_host:4321'
> > > 
> > > When the migration finishes, the receiving qemu will print guest kernel
> > > errors:
> > > 
> > > [28211.560039] ------------[ cut here ]------------                         
> > > 
> > > [28211.561250] Bad FPU state detected at
> > > __restore_fpregs_from_fpstate+0x36/0x50, reinitializing FPU registers.       
> > > [28211.561300] WARNING: CPU: 0 PID: 1757 at arch/x86/mm/extable.c:65
> > > ex_handler_fprestore+0x53/0x60                   
> > > [28211.564960] Modules linked in: 
> > > [...]
> > > 
> > > This happens moving from Milan to Naples,but not otherwise.
> > > Also, I started the VM in Naples, migrated successfully to Milan, but the
> > > same VM did fail when migrating back to Naples.
> > 
> > that looks like cpu feature mismatch
> > though according to comment 19 guest sees the same features.
> > 
> > Does this happen without any hotplug?
> 
> I am not sure if the -device in the end of the qemu command-line counts as a
> hotplug, but I haven't hot-added anything through QMP/HMP.

From the guest's point of view, any device on the QEMU command line is cold-plugged,
but once the guest CPUs have started, adding a device (the device_add command) is considered hotplug
(though in the case of a CPU, it depends on which boot stage the guest OS is at
when it gets the hotplug notification).
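
(To illustrate the distinction, a minimal sketch using the HMP monitor; the topology values simply mirror the command line quoted above:)

  # cold-plug: the CPU is present from the start, specified on the command line
  -device EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1
  # hotplug / hot-unplug: the CPU is added or removed at runtime from the monitor
  (qemu) device_add EPYC-x86_64-cpu,socket-id=0,core-id=1,thread-id=0,id=cpu1
  (qemu) device_del cpu1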


> > Does QEMU print any warnings when starting?
> 
> No, I did not see anything weird. Only the usual:
> SeaBIOS (version 1.15.0-1.el9)                                              
> 
> iPXE (http://ipxe.org) 00:02.0 CA00 PCI2.10 PnP PMM+7FF8D410+7FECD410 CA00  
> 
> Booting from Hard Disk...                                                   
> 
> .                                                                           
> 
> Probing EDD (edd=off to disable)... o                                       
> 
> [    0.000000] Linux version 5.14.0-42.el9.x86_64
> (mockbuild.eng.bos.redhat.com) (gcc (GCC) 11.2.1 20211203
> (Red Hat 11.2.1-7), GNU ld version 2.35.2-14.el9) #1 SMP PREEMPT Thu Jan 13
> 15:21:10 EST 2022    

I meant output from QEMU on stderr/stdout on the host, but given that the flags match,
there probably isn't anything suspicious there either.

Comment 38 Li Xiaohui 2022-01-23 03:36:19 UTC
(In reply to Igor Mammedov from comment #37)
> 
> 
> > > Does QEMU print any warnings when starting?
> > 
> > No, I did not see anything weird. Only the usual:
> > SeaBIOS (version 1.15.0-1.el9)                                              
> > 
> > iPXE (http://ipxe.org) 00:02.0 CA00 PCI2.10 PnP PMM+7FF8D410+7FECD410 CA00  
> > 
> > Booting from Hard Disk...                                                   
> > 
> > .                                                                           
> > 
> > Probing EDD (edd=off to disable)... o                                       
> > 
> > [    0.000000] Linux version 5.14.0-42.el9.x86_64
> > (mockbuild.eng.bos.redhat.com) (gcc (GCC) 11.2.1 20211203
> > (Red Hat 11.2.1-7), GNU ld version 2.35.2-14.el9) #1 SMP PREEMPT Thu Jan 13
> > 15:21:10 EST 2022    
> 
> I've meant output from QEMU on stderr/stdout on host, but given flags are
> matching
> there probably isn't anything suspicious there as well.

Yes, I did some tests, no warnings from QEMU on stdout.

Comment 39 Igor Mammedov 2022-01-25 12:31:18 UTC
Can you also retest and check whether it's a duplicate of Bug 2016959?

Comment 40 Li Xiaohui 2022-01-26 06:58:04 UTC
(In reply to Igor Mammedov from comment #39)
> Can you also retest and check if it's a duplicate of Bug 2016959

This bug was found while verifying Bug 2016959.

Comment 41 Igor Mammedov 2022-01-26 09:49:53 UTC
So far it looks like a kernel issue; should we move the BZ to the kernel component?

Comment 42 Leonardo Bras 2022-01-26 21:43:08 UTC
(In reply to Igor Mammedov from comment #41)
> So far it looks like kernel issue, should we move BZ to kernel component?

If it's not a duplicate, I would like to try solving this bug.

Is that ok?

Comment 43 Leonardo Bras 2022-02-11 06:30:42 UTC
Hello Li Xiaohui,

Bug BZ#2043545, which blocks this one, has a kernel brew build for testing.

If the bug stops reproducing on the above BZ, could you please also use its brew/setup to check if this BZ (2028337) still reproduces?

Thanks!

Comment 44 Li Xiaohui 2022-02-16 12:54:20 UTC
(In reply to Leonardo Bras from comment #43)
> Hello Li Xiaohui,
> 
> Bug BZ#2043545, which blocks this one, have a kernel brew for testing. 
> 
> If the bug stops reproducing on the above BZ, could you please also use its
> brew/setup to check if this BZ (2028337) still reproduces?

I'm running the test 200 times for this bug; I will update the result tomorrow.

> 
> Thanks!

Comment 45 Li Xiaohui 2022-02-17 02:49:45 UTC
(In reply to Li Xiaohui from comment #44)
> (In reply to Leonardo Bras from comment #43)
> > Hello Li Xiaohui,
> > 
> > Bug BZ#2043545, which blocks this one, have a kernel brew for testing. 
> > 
> > If the bug stops reproducing on the above BZ, could you please also use its
> > brew/setup to check if this BZ (2028337) still reproduces?
> 
> I'm running 200 times for this bug, will update the result tmr.

Still hit the guest call trace. The build below doesn't solve this bug's issue:
https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=42974611
(kernel-5.14.0-58.test.el9.src.rpm, x86_64)

> 
> > 
> > Thanks!

Comment 46 Leonardo Bras 2022-02-17 06:35:57 UTC
(In reply to Li Xiaohui from comment #45)
> (In reply to Li Xiaohui from comment #44)
> > (In reply to Leonardo Bras from comment #43)
> > > Hello Li Xiaohui,
> > > 
> > > Bug BZ#2043545, which blocks this one, have a kernel brew for testing. 
> > > 
> > > If the bug stops reproducing on the above BZ, could you please also use its
> > > brew/setup to check if this BZ (2028337) still reproduces?
> > 
> > I'm running 200 times for this bug, will update the result tmr.
> 
> Still hit guest call trace. The below build doesn't solve this bug's issue:
> https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=42974611
> (kernel-5.14.0-58.test.el9.src.rpm, x86_64)
> 
> > 
> > > 
> > > Thanks!

Thanks for testing Li Xiaohui,

Was it used on both the host and the L1 guest (which serves as the L2 host)?
What was the configuration (kernel and qemu versions) for the host and guests?

> src host: AMD EPYC 7313 16-Core Processor, dst host: AMD EPYC 7251 8-Core Processor
Were the hosts the same in the last reproduction?

Is there any chance that you can lend me the setup to reproduce?
(reproduction rate seems very low, would be great to keep the setup)

Comment 47 Leonardo Bras 2022-02-17 06:43:00 UTC
(In reply to Leonardo Bras from comment #46)
> (In reply to Li Xiaohui from comment #45)
> > (In reply to Li Xiaohui from comment #44)
> > > (In reply to Leonardo Bras from comment #43)
> > > > Hello Li Xiaohui,
> > > > 
> > > > Bug BZ#2043545, which blocks this one, have a kernel brew for testing. 
> > > > 
> > > > If the bug stops reproducing on the above BZ, could you please also use its
> > > > brew/setup to check if this BZ (2028337) still reproduces?
> > > 
> > > I'm running 200 times for this bug, will update the result tmr.
> > 
> > Still hit guest call trace. The below build doesn't solve this bug's issue:
> > https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=42974611
> > (kernel-5.14.0-58.test.el9.src.rpm, x86_64)
> > 
> > > 
> > > > 
> > > > Thanks!
> 
> Thanks for testing Li Xiaohui,
> 
> Was it used in both host and L1 guest (which serves as L2 host) ?

Sorry, that part above got mixed up with another bug :\
Was the kernel used on both hosts?

The rest of the questions are still valid :)

> What was the configuration (kernel and qemu versions) for host and guests?
> 
> > src host: AMD EPYC 7313 16-Core Processor, dst host: AMD EPYC 7251 8-Core Processor
> Were the hosts the same in last reproduction? 
> 
> Is there any chance that you can lend me the setup to reproduce?
> (reproduction rate seems very low, would be great to keep the setup)

Comment 48 Li Xiaohui 2022-02-17 08:40:11 UTC
(In reply to Leonardo Bras from comment #47)
> (In reply to Leonardo Bras from comment #46)
> > (In reply to Li Xiaohui from comment #45)
> > > (In reply to Li Xiaohui from comment #44)
> > > > (In reply to Leonardo Bras from comment #43)
> > > > > Hello Li Xiaohui,
> > > > > 
> > > > > Bug BZ#2043545, which blocks this one, have a kernel brew for testing. 
> > > > > 
> > > > > If the bug stops reproducing on the above BZ, could you please also use its
> > > > > brew/setup to check if this BZ (2028337) still reproduces?
> > > > 
> > > > I'm running 200 times for this bug, will update the result tmr.
> > > 
> > > Still hit guest call trace. The below build doesn't solve this bug's issue:
> > > https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=42974611
> > > (kernel-5.14.0-58.test.el9.src.rpm, x86_64)
> > > 
> > > > 
> > > > > 
> > > > > Thanks!
> > 
> > Thanks for testing Li Xiaohui,
> > 
> > Was it used in both host and L1 guest (which serves as L2 host) ?
> 
> Sorry, that part above got mixed up with another bug :\
> Was the kernel used on both hosts?

Yes, I upgraded both hosts to the test kernel build.

> 
> The rest of the questions are still valid :)
> 
> > What was the configuration (kernel and qemu versions) for host and guests?
> > 
> > > src host: AMD EPYC 7313 16-Core Processor, dst host: AMD EPYC 7251 8-Core Processor
> > Were the hosts the same in last reproduction? 
> > 
> > Is there any chance that you can lend me the setup to reproduce?
> > (reproduction rate seems very low, would be great to keep the setup)

Comment 49 Li Xiaohui 2022-02-17 08:47:52 UTC
(In reply to Leonardo Bras from comment #46)
> (In reply to Li Xiaohui from comment #45)
> > (In reply to Li Xiaohui from comment #44)
> > > (In reply to Leonardo Bras from comment #43)
> > > > Hello Li Xiaohui,
> > > > 
> > > > Bug BZ#2043545, which blocks this one, have a kernel brew for testing. 
> > > > 
> > > > If the bug stops reproducing on the above BZ, could you please also use its
> > > > brew/setup to check if this BZ (2028337) still reproduces?
> > > 
> > > I'm running 200 times for this bug, will update the result tmr.
> > 
> > Still hit guest call trace. The below build doesn't solve this bug's issue:
> > https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=42974611
> > (kernel-5.14.0-58.test.el9.src.rpm, x86_64)
> > 
> > > 
> > > > 
> > > > Thanks!
> 
> Thanks for testing Li Xiaohui,
> 
> Was it used in both host and L1 guest (which serves as L2 host) ?
> What was the configuration (kernel and qemu versions) for host and guests?

The kernel on both hosts is 5.14.0-58.test.el9.x86_64, with qemu-kvm-6.2.0-8.el9.x86_64;
the guest is RHEL 8.6.0 with kernel-4.18.0-353.el8.x86_64.

Per Comment 6 of this bug, we could reproduce the bug with both RHEL 8.6.0 and RHEL 9 guests, so I used a RHEL 8.6.0 guest to test this kernel build.

> 
> > src host: AMD EPYC 7313 16-Core Processor, dst host: AMD EPYC 7251 8-Core Processor
> Were the hosts the same in last reproduction? 
> 
> Is there any chance that you can lend me the setup to reproduce?
> (reproduction rate seems very low, would be great to keep the setup)

Of course. But I used our automation scripts to reproduce this issue; I think it may be hard to reproduce manually.

Comment 50 Li Xiaohui 2022-02-17 08:50:52 UTC
Sorry, I forgot one question:
> > src host: AMD EPYC 7313 16-Core Processor, dst host: AMD EPYC 7251 8-Core Processor
> Were the hosts the same in last reproduction? 

Not the same machine for Milan, but the CPU is an AMD EPYC 7313 16-Core Processor, and I could reproduce the bug on them.

Comment 51 Dr. David Alan Gilbert 2022-02-17 09:28:15 UTC
(I wonder if all hotplug+migration is just broken? We have bz 2053584 and bz 2053526 for hotplug of virtio devices + live migration.)

Comment 52 Leonardo Bras 2022-02-17 14:34:32 UTC
(In reply to Li Xiaohui from comment #49)
> > 
> > Is there any chance that you can lend me the setup to reproduce?
> > (reproduction rate seems very low, would be great to keep the setup)
> 
> Of course. But I used our automation scripts to reproduce this issue. I
> think it's maybe hard for you to reproduce manually.

Hmm, yeah, that makes sense.

Is it easy to run those scripts on the host?
I am thinking I could instrument the kernel, for example, and use your scripts to make the bug reproduce
(and maybe recover a kdump, or similar).

Comment 53 Leonardo Bras 2022-02-17 14:36:59 UTC
(In reply to Dr. David Alan Gilbert from comment #51)
> (I wonder if just all hotplug+migration is broken?  We have bz
> 2053584+2053526 for hotplug of virtio devices + live migration)

I think it makes sense. 
I will try to take a better look at this possibility when debugging.

Comment 69 RHEL Program Management 2023-06-02 07:42:02 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

