Bug 2079311 - VMs hang after migration
Summary: VMs hang after migration
Keywords:
Status: VERIFIED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: kernel
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: rc
Target Release: ---
Assignee: Dr. David Alan Gilbert
QA Contact: Li Xiaohui
URL:
Whiteboard:
Duplicates: 2102146
Depends On:
Blocks: 2118547 2102146 2131755 2131756
 
Reported: 2022-04-27 11:26 UTC by Konstantin Kuzov
Modified: 2022-11-29 09:25 UTC (History)
CC List: 61 users

Fixed In Version: kernel-4.18.0-430.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2102146 2131755 2131756
Environment:
Last Closed:
Type: Bug
Target Upstream Version:


Attachments (Terms of Use)
vdsm produced qemu command line (7.14 KB, text/plain)
2022-04-28 14:26 UTC, Konstantin Kuzov


Links
System ID Private Priority Status Summary Last Updated
Gitlab redhat/rhel/src/kernel rhel-8 merge_requests 3420 0 None None None 2022-09-27 17:58:39 UTC
Red Hat Issue Tracker RHELPLAN-124381 0 None None None 2022-06-06 16:07:36 UTC
Red Hat Knowledge Base (Solution) 6967834 0 None None None 2022-07-15 04:45:31 UTC

Description Konstantin Kuzov 2022-04-27 11:26:44 UTC
Description of problem:
After upgrading from ovirt-node-ng-4.4.8.1-0.20210826.0 to ovirt-node-ng-4.5.0.1-0.20220426.0, guests have a high chance of hanging after migration, with 100% CPU usage.

Version-Release number of selected component (if applicable):
OS Version: RHEL 8.6.2204.0-1.el8
OS Description: oVirt Node 4.5.0.1
Kernel Version: 4.18.0-383.el8.x86_64
KVM Version: 6.2.0-5.module_el8.6.0+1087+b42c8331
LIBVIRT Version: libvirt-8.0.0-2.module_el8.6.0+1087+b42c8331
VDSM Version: vdsm-4.50.0.13-1.el8
Hardware: x86_64, Xeon X5570, Secure Nehalem profile

How reproducible:
Migrate a VM to another host and back.

Steps to Reproduce:
1. Start VM
2. Migrate VM to another host.
3. If it has not hung, migrate it back.
4. Repeat steps 2 and 3 until it hangs.

Additional info:
In my case the problem reproduces very often, about 4 out of 5 tries for a single migration. The VM CPU/memory configuration doesn't seem to matter.

Downgrading kernel to previously installed 4.18.0-338.el8.x86_64 #1 SMP Fri Aug 27 17:32:14 UTC 2021 x86_64 seems to fix the issue.

I also tried downgrading qemu-kvm (as far back as 6.0.0-29) and disabling hugepages, memory ballooning, etc. None of that helps.

Comment 1 Sandro Bonazzola 2022-04-28 06:27:58 UTC
Paolo, maybe a kernel/kvm regression?

Comment 2 Paolo Bonzini 2022-04-28 06:36:00 UTC
Hi, can you provide the QEMU command line?

Comment 3 Sandro Bonazzola 2022-04-28 06:41:37 UTC
Arik can you help answering comment #2 ?

Comment 4 Arik 2022-04-28 07:17:58 UTC
(In reply to Sandro Bonazzola from comment #3)
> Arik can you help answering comment #2 ?

It doesn't seem to reproduce for me with qemu-kvm 6.2.0-11.module+el8.6.0+14707+5aa4b42d
I guess that either means it was solved in the meantime, or the VMs I used are different from the ones the reporter experienced this with

Konstantin, can you please try to upgrade to the aforementioned version of qemu-kvm and provide the QEMU command line if it reproduces?

Comment 5 Konstantin Kuzov 2022-04-28 14:26:01 UTC
Created attachment 1875701 [details]
vdsm produced qemu command line

(In reply to Arik from comment #4)
Could you provide a repo, or point me to where I can download RPMs of the 6.2.0-11.el8.6 build? I couldn't find it anywhere.

Though it looks more like a kernel regression. I also tested the 4.18.0-365 and 4.18.0-373 kernels; neither has this issue. So it seems this issue only affects the 4.18.0-383 kernel.

Attached is the QEMU command line produced by vdsm/libvirt. I'll try to minimize it to a still-reproducible case. It seems at least some guest activity is required to reproduce it reliably; it is hard to reproduce if, for example, the guest is sitting in SeaBIOS/OVMF without disks attached.

Comment 6 Arik 2022-05-01 12:54:46 UTC
Yeah, I see we run the tests with kernel 4.18.0-372, and as you found that it works with 4.18.0-373, it might be an issue that we didn't notice yet

Comment 7 Arik 2022-05-09 12:15:53 UTC
Konstantin, can you please provide the QEMU command line (requested in comment 2) of a VM that you experienced this with?

Comment 8 Konstantin Kuzov 2022-05-12 20:49:32 UTC
Here is a minimized version of the previously attached QEMU command line, for local migration against the latest Debian installer ISO:

/usr/libexec/qemu-kvm -name guest=test,debug-threads=on -monitor stdio \
-machine pc-q35-rhel8.6.0,usb=off,smm=on,dump-guest-core=off,graphics=off -accel kvm -cpu Nehalem,spec-ctrl=on,ssbd=on \
-m size=1048576k,slots=1,maxmem=4194304k -smp 1,maxcpus=1,sockets=1,dies=1,cores=1,threads=1 \
-no-user-config -nodefaults -no-hpet \
-device ide-cd,bus=ide.2,drive=isocd,bootindex=2,werror=report,rerror=report \
-device qxl-vga \
-drive file=/tmp/debian-11.3.0-amd64-netinst.iso,id=isocd,media=cdrom,if=none

Steps to reproduce:
 1. Start the first qemu-kvm with "-vnc :50" appended, connect via VNC, select "Graphical Install", and wait until it has booted to the installer's language selection screen.
 2. Start the second qemu-kvm with "-incoming tcp:0:4444 -vnc :51" appended.
 3. In the first qemu-kvm's monitor, execute "migrate -d tcp:127.0.0.1:4444" and wait for "Migration status: completed" in "info migrate".
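The monitor step above can also be driven over QMP, which makes repeated ping-pong runs scriptable. A minimal sketch, assuming the source qemu-kvm is started with an extra `-qmp unix:/tmp/src.qmp,server=on,wait=off`; the socket path, URI, and port here are illustrative, not from this report:

```python
import json
import socket

def qmp_command(name, **arguments):
    """Build a QMP command object in the shape QEMU's QMP monitor accepts."""
    cmd = {"execute": name}
    if arguments:
        cmd["arguments"] = arguments
    return cmd

def qmp_run(path, *commands):
    """Connect to a QMP UNIX socket, negotiate capabilities, send commands.

    Returns the raw reply line for each command. For brevity this ignores
    the asynchronous events QEMU may interleave with replies.
    """
    replies = []
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        f = s.makefile("rw")
        f.readline()  # discard the QMP greeting banner
        for cmd in (qmp_command("qmp_capabilities"),) + commands:
            f.write(json.dumps(cmd) + "\r\n")
            f.flush()
            replies.append(json.loads(f.readline()))
    return replies

# Usage against a live source VM (equivalent of "migrate -d" + "info migrate"):
#   qmp_run("/tmp/src.qmp",
#           qmp_command("migrate", uri="tcp:127.0.0.1:4444"),
#           qmp_command("query-migrate"))
```

`migrate` and `query-migrate` are the QMP counterparts of the human-monitor commands used in the steps above.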

I also tested with CentOS-7-x86_64-Minimal-2009.iso, booted similarly to the installer's language selection screen. With it, on the first migration the guest resets instead of hanging most of the time, but it hangs on a subsequent migration.

During testing I also received these error reports on migration completion:
(qemu) KVM internal error. Suberror: 3
extra data[0]: 0x0000000080000b0e
extra data[1]: 0x0000000000000031
extra data[2]: 0x0000000000000182
extra data[3]: 0x000000009018afc8
extra data[4]: 0x000000000000000a
RAX=000000009018a4ef RBX=0000000000000001 RCX=ffffffff9018a4ef RDX=0000000000000000
RSI=0000000000000000 RDI=ffff97df7c6138d0 RBP=ffff97df7c607ec8 RSP=ffff97df73ce1fd0
R8 =0000000000000001 R9 =0000000000000001 R10=000000000000024d R11=0000000000aaaaaa
R12=ffffffff9045e7c4 R13=0000000000000000 R14=0000000000000000 R15=00000000000003e8
RIP=ffffffff9018aadf RFL=00010087 [--S--PC] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 0000000000000000 ffffffff 00c00000
CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
SS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS   [-WA]
DS =0000 0000000000000000 ffffffff 00c00000
FS =0000 00007f684d03c740 ffffffff 00c00000
GS =0000 0000000000000000 ffffffff 00c00000
LDT=0000 0000000000000000 ffffffff 00c00000
TR =0040 ffff97df7c604000 00002087 00008b00 DPL=0 TSS64-busy
GDT=     ffff97df7c60c000 0000007f
IDT=     ffffffffff528000 00000fff
CR0=80050033 CR2=0000000000004004 CR3=000000002577c000 CR4=000006f0
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000d01
Code=0f ae e8 eb f9 48 ff c8 75 e3 48 81 c4 00 01 00 00 48 89 f4 <65> 48 8b 0c 25 04 40 00 00 48 39 cc 77 2a 48 8d 81 00 fe ff ff 48 39 e0 77 1e 48 29 e1 65

(qemu) KVM internal error. Suberror: 1
emulation failure
EAX=00007b0d EBX=000f00b5 ECX=00001234 EDX=000eaaff
ESI=000eaa64 EDI=000ef9a3 EBP=0000fa38 ESP=000eaa38
EIP=0000fe11 EFL=00000092 [--S-A--] CPL=0 II=0 A20=1 SMM=1 HLT=0
ES =0000 00000000 ffffffff 00809300
CS =3000 00030000 ffffffff 00809300
SS =0000 00000000 ffffffff 00809300
DS =0000 00000000 ffffffff 00809300
FS =0000 00000000 ffffffff 00809300
GS =0000 00000000 ffffffff 00809300
LDT=0000 00000000 ffffffff 00c00000
TR =0008 00000580 00000067 00008b00
GDT=     0000aa90 0000002f
IDT=     00000000 00000000
CR0=00000012 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=qemu-kvm: ../hw/core/cpu-sysemu.c:77: cpu_asidx_from_attrs: Assertion `ret < cpu->num_ases && ret >= 0' failed.

But these are very rare; most of the time the guest just hangs and qemu-kvm consumes 100% of a core without any errors, while QEMU's monitor remains responsive.

Comment 10 Sergey 2022-05-31 12:43:00 UTC
I hit exactly the same issue after upgrading an environment from 4.4.10 to 4.5.0 on CentOS Stream 8 hosts. VMs migrated to hosts with kernel 4.18.0-383.el8 hang, with high CPU usage observed for qemu-kvm. VMs started on such a host also hang. Sometimes they hang without any errors; sometimes Linux guests hang randomly, with messages like "task XXX hung for 120 seconds" observed inside the guests; and sometimes qemu-kvm crashes with this log:


KVM internal error. Suberror: 1
emulation failure
EAX=000f618c EBX=00000000 ECX=00009000 EDX=000f3d64
ESI=ffff7000 EDI=000f7000 EBP=00000000 ESP=00000fc8
EIP=000f0de1 EFL=00010006 [-----P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
CS =0008 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA]
SS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
DS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
FS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
GS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT
TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy
GDT=     000f6140 00000037
IDT=     000f617e 00000000
CR0=00000011 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=0f 00 b9 fc ff 0f 00 81 e9 88 61 0f 00 be 8c 61 ff ff 89 c7 <f3> a4 c7 05 88 61 0f 00 00 00 00 00 ba f9 0c 00 00 b0 02 ee b0 06 ee cc 80 3d ad 53 0f 00



I've downgraded all hosts to kernel 4.18.0-373.el8 and everything works fine, as expected.

Comment 11 Yash Mankad 2022-06-06 15:27:49 UTC
Changing the product to Fedora (unable to change it to CentOS Stream) and reassigning to the kernel team based on comment 5.

The BZ is affecting upstream oVirt deployments on CentOS Stream 8 as per comment 10.

Comment 16 Nisim Simsolo 2022-06-23 13:29:28 UTC
I also saw this issue using RHEL 8.6:
kernel version: 4.18.0-372.13.1.el8_6.x86_64
vdsm-4.50.1.3-1.el8ev.x86_64
qemu-kvm-6.2.0-11.module+el8.6.0+15489+bc23efef.1.x86_64
libvirt-8.0.0-5.2.module+el8.6.0+15256+3a0914fe.x86_64

Comment 17 Dr. David Alan Gilbert 2022-06-23 15:54:29 UTC
(In reply to Nisim Simsolo from comment #16)
> I also saw this issue using RHEL 8.6:
> kernel version: 4.18.0-372.13.1.el8_6.x86_64
> vdsm-4.50.1.3-1.el8ev.x86_64
> qemu-kvm-6.2.0-11.module+el8.6.0+15489+bc23efef.1.x86_64
> libvirt-8.0.0-5.2.module+el8.6.0+15256+3a0914fe.x86_64

Can you give details how it fails for you and how exactly you did the migration?

Comment 18 Arik 2022-06-23 15:59:17 UTC
(In reply to Dr. David Alan Gilbert from comment #17)
> (In reply to Nisim Simsolo from comment #16)
> > I also saw this issue using RHEL 8.6:
> > kernel version: 4.18.0-372.13.1.el8_6.x86_64
> > vdsm-4.50.1.3-1.el8ev.x86_64
> > qemu-kvm-6.2.0-11.module+el8.6.0+15489+bc23efef.1.x86_64
> > libvirt-8.0.0-5.2.module+el8.6.0+15256+3a0914fe.x86_64
> 
> Can you give details how it fails for you and how exactly you did the
> migration?

It was done through RHV, with latest RHV 4.4 SP1 batch 1 (oVirt 4.5.1)
Nisim, can you please share logs (or details on how to connect to your setup, privately)?

Comment 19 Arik 2022-06-23 16:00:49 UTC
(In reply to Arik from comment #18)
> It was done through RHV, with latest RHV 4.4 SP1 batch 1 (oVirt 4.5.1)

latest build of RHV 4.4 SP1 batch 1* (it's not released yet)

Comment 20 Nisim Simsolo 2022-06-23 17:11:10 UTC
> Nisim, can you please share logs (or details on how to connect to your
> setup, privately)?

Email sent. please let me know if you need anything else.

Comment 22 Konstantin Kuzov 2022-06-25 13:34:48 UTC
Updated to oVirt 4.5.1 with 4.18.0-394.el8.x86_64 kernel. Same issue.
Switching kernel back to 4.18.0-373.el8.x86_64 resolves it, as expected.

Comment 23 Klaas Demter 2022-06-28 07:39:18 UTC
Is this problem present in the currently released RHV version or just in the unreleased one based on oVirt 4.5.1? Or is this a problem within the kernel that is unrelated to the RHV version? The latest currently released RHEL 8.6 kernel is kernel-4.18.0-372.9.1.

Comment 24 Arik 2022-06-28 07:57:25 UTC
(In reply to Klaas Demter from comment #23)
> Is this problem present in the currently released RHV version or just in the
> unreleased one based on oVirt 4.5.1? Or is this a problem within the kernel
> that is unrelated to the RHV version? Latest currently released RHEL 8.6
> kernel is kernel-4.18.0-372.9.1

This issue is RHV version agnostic

Comment 25 Klaas Demter 2022-06-28 08:08:19 UTC
(In reply to Arik from comment #24)
> (In reply to Klaas Demter from comment #23)
> > Is this problem present in the currently released RHV version or just in the
> > unreleased one based on oVirt 4.5.1? Or is this a problem within the kernel
> > that is unrelated to the RHV version? Latest currently released RHEL 8.6
> > kernel is kernel-4.18.0-372.9.1
> 
> This issue is RHV version agnostic

So this all boils down to "can I safely upgrade to latest RHV/RHEL version or will this mean I am then affected by this issue"? :)

Comment 26 Li Xiaohui 2022-06-28 08:44:40 UTC
I tried to reproduce this bug on RHEL 8.7.0 host with kernel-4.18.0-383.el8.x86_64 && qemu-kvm-6.2.0-11.module+el8.6.0+15489+bc23efef.1.x86_64.

Test steps:
1. Boot a RHEL 8.7.0 guest on the host.
2. In the guest, use stressapptest to generate CPU load; roughly 400% CPU usage is seen via 'top', with nearly the same CPU usage for qemu-kvm on the host.
3. On the same host, boot the guest with '-incoming defer'.
4. Start migration.
5. After migration, check the guest.
6. Ping-pong migrate until the guest hangs.



But I don't hit a guest hang after more than 5 rounds of ping-pong migration.

Could someone check whether these test steps match the bug reproduction, and whether the test environment (host and guest versions) can reproduce this bug?

Comment 27 Dr. David Alan Gilbert 2022-06-29 13:11:45 UTC
Nisim:
  If I understand correctly, we have reports that it works on -373 and fails on -383. Since Li can't reproduce it, can you try to bisect which of the kernels between -373 and -383 breaks it? Ideally we'd pin down exactly one version that breaks it.
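Since each probe here is expensive (install a kernel, reboot, run several ping-pong migrations), a first-bad binary search keeps the number of reboots to O(log n). A generic sketch; the build list and the `is_bad` probe are placeholders, since the actual kernels between -373 and -383 are not enumerated in this bug:

```python
def first_bad(builds, is_bad):
    """Return the first build for which is_bad() is True, or None.

    Assumes builds is ordered oldest-to-newest and that badness is
    monotonic: once a build is bad, every later build is bad too.
    """
    lo, hi = 0, len(builds) - 1
    if not is_bad(builds[hi]):
        return None  # nothing in the range reproduces the bug
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(builds[mid]):
            hi = mid      # first bad build is at mid or earlier
        else:
            lo = mid + 1  # first bad build is after mid
    return builds[lo]
```

With `builds` as the ordered kernel NVRs and `is_bad()` implemented as "boot this kernel and run the ping-pong migration test", this finds the breaking build in about log2(n) reboots, assuming the regression is present in every build after the first bad one.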

Comment 29 John Ferlan 2022-06-30 12:52:03 UTC
*** Bug 2102146 has been marked as a duplicate of this bug. ***

Comment 30 Nisim Simsolo 2022-06-30 13:38:12 UTC
(In reply to Dr. David Alan Gilbert from comment #27)
> Nisim:
>   If I understand we have report that it works on -373 and fails on -383;
> since Li can't reproduce it, can you try
> and see if you can bisect to see which of the kernels betwene -373 and -383
> breaks it; ideally we'd get exactly one version that breaks it.

This issue still occurs in my environment (kernel 4.18.0-372.13.1.el8_6.x86_64).
Because I'm using this environment for RHV testing, I cannot change the hosts' kernel easily.

Comment 31 Li Xiaohui 2022-07-01 03:54:31 UTC
(In reply to Nisim Simsolo from comment #30)
> (In reply to Dr. David Alan Gilbert from comment #27)
> > Nisim:
> >   If I understand we have report that it works on -373 and fails on -383;
> > since Li can't reproduce it, can you try
> > and see if you can bisect to see which of the kernels betwene -373 and -383
> > breaks it; ideally we'd get exactly one version that breaks it.
> 
> This issue still occurs on my environment (kernel
> 4.18.0-372.13.1.el8_6.x86_64)
> Because I'm using this environment for RHV testing, I cannot change hosts
> kernel easily.

Which guest did you use when you reproduced this bug? RHEL 8.6.0?

How did you reproduce this bug? Can you help check my test steps in Comment 26? Thanks in advance.

Comment 32 Li Xiaohui 2022-07-01 03:56:50 UTC
(In reply to Li Xiaohui from comment #31)
> (In reply to Nisim Simsolo from comment #30)
> > (In reply to Dr. David Alan Gilbert from comment #27)
> > > Nisim:
> > >   If I understand we have report that it works on -373 and fails on -383;
> > > since Li can't reproduce it, can you try
> > > and see if you can bisect to see which of the kernels betwene -373 and -383
> > > breaks it; ideally we'd get exactly one version that breaks it.
> > 
> > This issue still occurs on my environment (kernel
> > 4.18.0-372.13.1.el8_6.x86_64)
> > Because I'm using this environment for RHV testing, I cannot change hosts
> > kernel easily.
> 
> Which guest did you use when you reproduce this bug? RHEL 8.6.0?

Please also provide the detailed kernel version of the guest.

> 
> How did you reproduce this bug? Can you help check my test steps in Comment
> 26? Thanks in advance.

Comment 33 Nisim Simsolo 2022-07-01 21:20:13 UTC
> Which guest did you use when you reproduce this bug? RHEL 8.6.0?
With any guest; the issue reproduced with RHEL 8.6, RHEL 8.3, and Windows 10.
 
> How did you reproduce this bug? Can you help check my test steps in Comment
> 26? Thanks in advance.
Log in to the VM and migrate the VM a few times.
I suspect this issue happens because my environment has different host types (different CPU types):
2 hosts with  Intel(R) Xeon(R) Gold 6246R CPU @ 3.40GHz
1 host with   Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz

Comment 34 Sergey 2022-07-01 21:27:20 UTC
In my environment it was happening with a large number of guests (CentOS 7/8, other Linux VMs, and even Windows); all CPUs were the same series.

Comment 35 Konstantin Kuzov 2022-07-01 22:21:52 UTC
My nodes are also all the same ProLiant BL460c blades with X5570.

I recently had a chat with a user on reddit who complained about the same problem in /r/ovirt, with both the latest el8s (4.18.0-394 kernel) and el9s (5.14.0-115 kernel) oVirt node images.
In his case only the node with a Xeon X5675 experiences this issue; on nodes with a Xeon E5-2667v2 or Xeon Platinum 8160 the problem doesn't reproduce.

Maybe some hardware is just more susceptible to this bug?

Comment 36 Sergey 2022-07-01 22:36:12 UTC
My issue was also in a cluster with X56xx and E56xx CPUs. I have clusters with E5-26xx v4 as well, but upgraded them directly to 4.18.0-373.el8 after hitting this bug on the first cluster.

Comment 37 Konstantin Kuzov 2022-07-01 22:49:11 UTC
For my environment it also doesn't matter which guest OS the VM is using: CentOS 7/8, Debian, FreeBSD, ... all are affected. A couple of times I even managed to hang a VM while it was sitting in OVMF, but that was really hard to accomplish. I think the more actively the VM's memory changes, and consequently the longer the migration takes, the higher the chance of a hang.

Comment 38 Sergey 2022-07-01 22:54:27 UTC
I don't think migration is the only thing affected; I've observed this behavior in VMs started on the host without any migration.

Comment 39 Milan Zamazal 2022-07-02 10:40:25 UTC
FWIW, in Bug 2101850, they claim something similar happens when they migrate from a TSC-scaling host to a non-TSC-scaling host. I don't have any idea whether it's the same problem or a different one and whether TSC scaling is really involved or not. But it's reproducible in the support lab.

Comment 40 Sergey 2022-07-02 19:24:07 UTC
I checked the cluster; oVirt says "TSC Frequency: XX (scaling disabled)" on all hosts, so it looks like that isn't the case here.
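As a side note on checking this from the host itself: newer kernels expose VMX sub-features, including TSC scaling, on a "vmx flags" line in /proc/cpuinfo; older kernels may not emit that line at all, in which case the check is inconclusive. A hedged sketch (the `tsc_scaling` flag name is what current kernels print; treat its presence on the kernels discussed here as an assumption):

```python
def has_vmx_tsc_scaling(cpuinfo_text):
    """Scan /proc/cpuinfo text for VMX TSC scaling support.

    Returns True/False based on the 'vmx flags' line, or None when no
    such line exists (the kernel doesn't expose VMX sub-features, so
    the result is inconclusive).
    """
    seen_vmx_flags = False
    for line in cpuinfo_text.splitlines():
        if line.startswith("vmx flags"):
            seen_vmx_flags = True
            if "tsc_scaling" in line.split(":", 1)[1].split():
                return True
    return False if seen_vmx_flags else None

# Usage on a host:
#   with open("/proc/cpuinfo") as f:
#       print(has_vmx_tsc_scaling(f.read()))
```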

Comment 41 Dr. David Alan Gilbert 2022-07-04 11:24:51 UTC
(In reply to Nisim Simsolo from comment #33)
> > Which guest did you use when you reproduce this bug? RHEL 8.6.0?
> with any guest, issue reproduced with RHEL 8.6, RHEL 8.3 and Windows 10.
>  
> > How did you reproduce this bug? Can you help check my test steps in Comment
> > 26? Thanks in advance.
> Log in to the VM and migrate VM few times.
> I suspect that because my environment is with different hosts types
> (different CPU types) this issue happens: 
> 2 hosts with  Intel(R) Xeon(R) Gold 6246R CPU @ 3.40GHz
> 1 host with   Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz


Nisim, were your machines directly on those hosts? I noticed one of your test environments was a nested setup.

Comment 42 Nisim Simsolo 2022-07-04 11:53:22 UTC
> Nisim were your machines directly on those hosts? I noticed one of your test
> environments was a nested setup.

Yes, the hosts I use are not in a nested configuration.

Comment 44 Dr. David Alan Gilbert 2022-07-04 18:37:42 UTC
I've just spent a while trying, and failing, to reproduce that here:

Model name:          Intel(R) Xeon(R) CPU E3-1240 V2 @ 3.40GHz
4.18.0-383.el8.x86_64

using:

/usr/libexec/qemu-kvm -M pc-q35-rhel8.6.0,usb=off,smm=on,dump-guest-core=off,graphics=off -accel kvm -cpu Nehalem,spec-ctrl=on,ssbd=on -m size=1G,slots=1,maxmem=4G -smp 1 -nographic -drive if=virtio,file=/home/rhel-guest-image-8.7-980.x86_64.qcow2

and
/usr/libexec/qemu-kvm -M pc-q35-rhel8.6.0,usb=off,smm=on,dump-guest-core=off,graphics=off -accel kvm -cpu Nehalem,spec-ctrl=on,ssbd=on -m size=1G,slots=1,maxmem=4G -smp 1 -device qxl-vga -cdrom debian-11.3.0-amd64-netinst.iso -no-hpet -vnc :50  -monitor stdio

all seems fine on that host.

Comment 45 Nisim Simsolo 2022-07-04 19:14:30 UTC
(In reply to Dr. David Alan Gilbert from comment #44)

Does the guest have an OS installed?

Comment 46 Dr. David Alan Gilbert 2022-07-05 08:16:14 UTC
(In reply to Nisim Simsolo from comment #45)
> (In reply to Dr. David Alan Gilbert from comment #44)
> 
> The guest is with OS installed?

I tried two things:
  a) Debian CD booted to graphical installer
  b) A RHEL8 pre-installed guest image.

Comment 47 Li Xiaohui 2022-07-06 09:46:55 UTC
I didn't reproduce the bug on RHEL 8.6.0 hosts (kernel-4.18.0-372.13.1.el8_6.x86_64 && qemu-kvm-6.2.0-5.module+el8.6.0+14025+ca131e0a.x86_64 && Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz):

1) many rounds of ping-pong migration with an installed OVMF RHEL 8.6.0 guest (kernel-4.18.0-372.13.1.el8_6.x86_64), with high CPU usage in the guest and on the host
2) ping-pong migration while installing a SeaBIOS RHEL 8.6.0 guest at the same time


I'm still trying to borrow older CPU machines; I will try again after I get them.

Comment 48 Dr. David Alan Gilbert 2022-07-11 16:19:06 UTC
Looking at Nisim's guest, it seems to hit different errors from the original reporter's; all of the crashes reported in Nisim's guest are:

[  407.008175] ------------[ cut here ]------------
[  407.008180] Bad FPU state detected at switch_fpu_return+0x79/0x110, reinitializing FPU registers.
[  407.008570] WARNING: CPU: 0 PID: 2070 at arch/x86/mm/extable.c:104 ex_handler_fprestore+0x5f/0x70
[  407.008571] Modules linked in: uinput xt_CHECKSUM ipt_MASQUERADE xt_conntrack ipt_REJECT nf_nat_tftp nft_objref nf_conntrack_tftp nft_counter tun bridge stp llc nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nf_tables_set nft_chain_nat_ipv6 nf_nat_ipv6 nft_chain_route_ipv6 nft_chain_nat_ipv4 nf_nat_ipv4 nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_chain_route_ipv4 ip6_tables nft_compat ip_set nf_tables nfnetlink sunrpc intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel iTCO_wdt iTCO_vendor_support intel_rapl_perf pcspkr joydev i2c_i801 lpc_ich virtio_balloon ip_tables xfs libcrc32c sr_mod cdrom qxl drm_ttm_helper ttm drm_kms_helper syscopyarea sysfillrect sysimgblt sd_mod fb_sys_fops sg ahci libahci drm libata crc32c_intel serio_raw virtio_net virtio_console net_failover virtio_scsi failover dm_mirror dm_region_hash dm_log dm_mod fuse
[  407.008673] CPU: 0 PID: 2070 Comm: packagekitd Kdump: loaded Not tainted 4.18.0-221.el8.x86_64 #1
[  407.008675] Hardware name: Red Hat RHEL/RHEL-AV, BIOS 1.15.0-2.module+el8.6.0+14757+c25ee005 04/01/2014
[  407.008677] RIP: 0010:ex_handler_fprestore+0x5f/0x70

and then it goes bad from there on in.

Comment 51 Dr. David Alan Gilbert 2022-07-12 17:50:26 UTC
I finally found an old machine; the first kernel tried was -408, and that host panicked on starting the Debian VM.

[ 5090.913526] BUG: unable to handle kernel NULL pointer dereference at 000000000000000b
[ 5090.921356] PGD 0 P4D 0
[ 5090.923893] Oops: 0002 [#1] SMP PTI
[ 5090.927381] CPU: 1 PID: 7667 Comm: qemu-kvm Kdump: loaded Not tainted 4.18.0-408.el8.x86_64 #1
[ 5090.935984] Hardware name: IBM System x3550 M3 -[7944H2G]-/69Y4438, BIOS -[D6E158AUS-1.16]- 11/26/2012
[ 5090.945280] RIP: 0010:kvm_replace_memslot+0xa5/0x310 [kvm]

There are suggestions online that this is fixed by 51c4476c00c110486a06aae7eb93dec622ed28ed
("x86/fpu: KVM: Set the base guest FPU uABI size to sizeof(struct kvm_xsave)")

Comment 52 Dr. David Alan Gilbert 2022-07-12 18:02:37 UTC
Testing kernel 4.18.0-372.16.1.el8_6.mr2854_220705_1625.x86_64 from bz 2088287#c25: it successfully runs
the Debian VM and migrates it (twice).

Comment 54 Dr. David Alan Gilbert 2022-07-12 18:39:50 UTC
Hmm, the 348.7.1.el8_5 kernel was also OK.

Comment 55 Dr. David Alan Gilbert 2022-07-12 18:59:18 UTC
With the -383 kernel I confirm the hang with the Debian installer.

So, to summarise, using an old Xeon L5640:

  -383: migration hang on the Debian graphical installer, as reported
  -372.16.1 from bz 2088287#c25: worked nicely
  -408: kernel panicked, probably due to bz 2092066

Comment 56 Arik 2022-07-13 10:12:13 UTC
(In reply to Nisim Simsolo from comment #50)
> Issue was not reproduced using
> kernel-4.18.0-372.16.1.el8_6.mr2854_220705_1625.x86_64 from
> https://bugzilla.redhat.com/show_bug.cgi?id=2088287#c25
> I did more than 10 VM migration and VM is still running and functional (OS
> inside console is functional and ssh is accessible).

This aligns with David's observation in comment 48 that the issues we saw on RHEL 8.6 are different.
So it seems we're back to where we were a few weeks ago: there's an issue on CentOS Stream that upstream users are affected by, also confirmed in comment 55, that was not seen in our RHEL 8.6 based environments.

Comment 57 Klaas Demter 2022-07-13 12:20:18 UTC
So kernel-4.18.0-372.16.1 was released today. Is it sufficiently safe for RHV customers to update to, or do you need to do additional investigation?

Comment 58 Marina Kalinin 2022-07-14 00:57:49 UTC
Hi Klaas, from what I read in the bugzilla, this only happened on CentOS hosts and could not be reproduced on RHEL 8.6. In case I misread and you have experienced this in your RHV environment as well, can you please clarify?

Comment 59 Klaas Demter 2022-07-14 06:25:43 UTC
(In reply to Marina Kalinin from comment #58)
> Hi Klaas, from what I read in bugzilla, this only happened on CentOS hosts
> and cannot be reproducible on RHEL8.6. In case I misread and you have
> experienced this in your RHV environment as well. Can you please clarify?

Hi Marina,
I noticed this bug before updating my RHV to the affected kernel versions, so I have not experienced it myself. However, I have halted my updates because of this issue, as I rely heavily on working live migration.

For details you should ask your fellow Red Hatters; read the whole BZ, for example:
https://bugzilla.redhat.com/show_bug.cgi?id=2079311#c16
https://bugzilla.redhat.com/show_bug.cgi?id=2079311#c24

Greetings
Klaas

Comment 60 Germano Veit Michel 2022-07-15 04:21:13 UTC
(In reply to Klaas Demter from comment #59)
> (In reply to Marina Kalinin from comment #58)
> > Hi Klaas, from what I read in bugzilla, this only happened on CentOS hosts
> > and cannot be reproducible on RHEL8.6. In case I misread and you have
> > experienced this in your RHV environment as well. Can you please clarify?
> 
> Hi Marina,
> I noticed this bug before updating to my RHV to the affected kernel
> versions, so I do not experience it myself, I have however halted my updates
> because of this issue, I do heavily rely on a working live migration.
> 
> For details you should contact your fellow red hatters, read the whole bz
> for example:
> https://bugzilla.redhat.com/show_bug.cgi?id=2079311#c16
> https://bugzilla.redhat.com/show_bug.cgi?id=2079311#c24

Hi Klaas, you cannot see all the comments, but comment #16 turned out to be another problem. It means that so far we have not seen this on RHEL, just on CentOS, which is why Marina asked whether you had actually hit it. So the answer is no, because you are holding your upgrades until there is more certainty. Thanks!

> 
> Greetings
> Klaas

Comment 61 Li Xiaohui 2022-07-19 11:37:52 UTC
Reproduced this bug with CentOS Linux release 8.5.2111 on a Nehalem machine (lenovo-thinkstation-01.khw2.lab.eng.bos.redhat.com): Xeon(R) CPU X5570

hosts info: kernel-4.18.0-383.el8.x86_64 & qemu-kvm-6.0.0-33.el8.x86_64

Test:
1. When installing a CentOS 7.9 guest, the guest hangs after migration.
2. With an installed CentOS 7.9 guest, the guest hits a core dump and then restarts.

Note: this Nehalem machine doesn't offer the option to install RHEL 8 or RHEL 9, so there was no chance to try RHEL 8/9.


I also tried SandyBridge and Haswell machines; neither hit the above issue:
SandyBridge machine (hpe-z210-02.hpe2.lab.eng.bos.redhat.com): Intel(R) Core(TM) i5-2400 CPU
Haswell machine (intel-sharkbay-dh-07.ml3.eng.bos.redhat.com): Intel(R) Core(TM) i5-4670T CPU

Comment 62 Li Xiaohui 2022-07-20 14:14:06 UTC
I reinstalled the Nehalem machine (same as Comment 61) with RHEL 8.7.0 and reproduced this bug: the installing VM hangs after migration.

host info: kernel-4.18.0-383.el8.x86_64 & qemu-kvm-6.2.0-5.module+el8.6.0+14025+ca131e0a.x86_64
guest info: RHEL-8.7.0-20220719.0-x86_64-dvd1.iso
the host has 3G of memory and 8 CPUs

Qemu command line:
/usr/libexec/qemu-kvm  \
-name "mouse-vm" \
-sandbox on \
-machine q35 \
-cpu Nehalem,spec-ctrl=on,ssbd=on \
-nodefaults  \
-chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1,server=on,wait=off \
-chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor,server=on,wait=off \
-mon chardev=qmp_id_qmpmonitor1,mode=control \
-mon chardev=qmp_id_catch_monitor,mode=control \
-device pcie-root-port,port=0x10,chassis=1,id=root0,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=root1,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=root2,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=root3,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=5,id=root4,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0x15,chassis=6,id=root5,bus=pcie.0,addr=0x2.0x5 \
-device pcie-root-port,port=0x16,chassis=7,id=root6,bus=pcie.0,addr=0x2.0x6 \
-device pcie-root-port,port=0x17,chassis=8,id=root7,bus=pcie.0,addr=0x2.0x7 \
-device pcie-root-port,port=0x20,chassis=21,id=extra_root0,bus=pcie.0,multifunction=on,addr=0x3 \
-device pcie-root-port,port=0x21,chassis=22,id=extra_root1,bus=pcie.0,addr=0x3.0x1 \
-device pcie-root-port,port=0x22,chassis=23,id=extra_root2,bus=pcie.0,addr=0x3.0x2 \
-device nec-usb-xhci,id=usb1,bus=root0,addr=0x0 \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=root1,addr=0x0 \
-device scsi-hd,id=image1,drive=drive_image1,bus=virtio_scsi_pci0.0,channel=0,scsi-id=0,lun=0,bootindex=0,write-cache=on \
-device virtio-net-pci,mac=9a:8a:8b:8c:8d:8e,id=net0,netdev=tap0,bus=root2,addr=0x0 \
-device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
-device virtio-balloon-pci,id=balloon0,bus=root3,addr=0x0 \
-device VGA,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 \
-blockdev driver=file,auto-read-only=on,discard=unmap,aio=threads,cache.direct=on,cache.no-flush=off,filename=/home/rhel870.qcow2,node-name=drive_sys1 \
-blockdev driver=qcow2,node-name=drive_image1,read-only=off,cache.direct=on,cache.no-flush=off,file=drive_sys1 \
-netdev tap,id=tap0,vhost=on \
-m 1024 \
-smp 4,maxcpus=4,cores=2,threads=1,sockets=2 \
-vnc :10 \
-rtc base=utc,clock=host \
-boot menu=off,strict=off,order=cdn,once=c \
-enable-kvm  \
-qmp tcp:0:3333,server=on,wait=off \
-monitor stdio \
-msg timestamp=on \
-device ide-cd,bus=ide.2,drive=isocd,bootindex=2,werror=report,rerror=report \
-drive file=/home/RHEL-8.7.0-20220719.0-x86_64-dvd1.iso,id=isocd,media=cdrom,if=none \


I will try the latest RHEL 8.7.0 to see whether it reproduces the bug, and try to find which kernel version introduced this bug.

Comment 63 Li Xiaohui 2022-07-21 09:04:32 UTC
When migrating an installed RHEL 8.7.0 guest (kernel-4.18.0-409.el8.x86_64) on the Nehalem machine (same environment as Comment 62), the guest kernel panics after migration; core files are below:
http://kvmqe-tools.qe.lab.eng.nay.redhat.com/logjump.html?target=pek&path=xiaohli/bug//bz2079311_coredumpinfo/

Comment 64 Dr. David Alan Gilbert 2022-07-27 10:47:42 UTC
From Li Xiaohui's dmesg:

[  172.050784] WARNING: can't access iret registers at page_fault+0x3/0x30
[  172.050798] PANIC: double fault, error_code: 0x0
[  172.050799] CPU: 0 PID: 759 Comm: sssd Kdump: loaded Not tainted 4.18.0-409.el8.x86_64 #1
[  172.050800] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.16.0-2.module+el8.7.0+15506+033991b0 04/01/2014
[  172.050801] RIP: 0010:page_fault+0x3/0x30
[  172.050801] Code: 89 e7 48 8b 74 24 78 48 c7 44 24 78 ff ff ff ff e8 12 30 62 ff e9 2d 03 00 00 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 66 66 90 <e8> b8 01 00 00 48 89 e7 48 8b 74 24 78 48 c7 44 24 78 ff ff ff ff
[  172.050802] RSP: 0008:00007fffebb3a000 EFLAGS: 00010046
[  172.050803] RAX: 000000000002d100 RBX: 0000000000000000 RCX: ffffffff97400b87
[  172.050804] RDX: 0000000000000000 RSI: 000000000002d100 RDI: 00007fffebb3a068
[  172.050805] RBP: 00007fffebb3a068 R08: 0000000000000000 R09: 0000000000000000
[  172.050805] R10: 0000000000000000 R11: 0000000000000000 R12: 000000000002d100
[  172.050806] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[  172.050806] FS:  00007f70276e1940(0000) GS:ffff8b1ffec00000(0000) knlGS:0000000000000000
[  172.050807] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  172.050808] CR2: 00007fffebb39ff8 CR3: 0000000002aa0000 CR4: 00000000000006f0
[  172.050808] Call Trace:
[  172.050808] Kernel panic - not syncing: Machine halted.

Comment 65 Li Xiaohui 2022-07-27 11:07:53 UTC
On July 25, I tried local migration on the same Nehalem machine with host kernel-4.18.0-410.el8.x86_64 and did not hit the guest hang/kernel panic or the host panic after repeating more than 10 times:
1. Migrating while installing a RHEL 8.7.0 guest from the iso RHEL-8.7.0-20220722.0-x86_64-dvd1.iso
2. Migrating an installed RHEL 8.7.0 guest, kernel-4.18.0-410.el8.x86_64.

Note: this bug was originally reproduced on the same machine with local migration.


But today, when I migrated from this Nehalem machine to the Cascadelake-Server machine, with kernel-4.18.0-410.el8.x86_64 on both hosts and in the guest, I still hit the guest kernel panic, with the same vmcore info as Comment 63; see below:
http://kvmqe-tools.qe.lab.eng.nay.redhat.com/logjump.html?target=pek&path=xiaohli/bug/bz2079311_coredumpinfo22/Nehalem_to_Cascadelake


Nehalem machine: lenovo-thinkstation-01.khw2.lab.eng.bos.redhat.com, Intel(R) Xeon(R) CPU X5570
Cascadelake-Server: dell-pr7920-03.khw2.lab.eng.bos.redhat.com, Intel(R) Xeon(R) Gold 6258R CPU



Kernel-410 does not seem to fix this bug.

Comment 66 Dr. David Alan Gilbert 2022-07-27 11:33:24 UTC
The oops from comment 65:

[  542.595891] PANIC: double fault, error_code: 0x0
[  542.595897] CPU: 0 PID: 642 Comm: systemd-journal Kdump: loaded Not tainted 4.18.0-410.el8.x86_64 #1
[  542.595900] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.16.0-2.module+el8.7.0+15506+033991b0 04/01/2014
[  542.595901] RIP: 0010:do_page_fault+0x0/0x130
[  542.595902] Code: ff 00 48 39 c2 73 1d e9 8e fb ff ff 48 b8 00 f0 ff ff ff 7f 00 00 eb ea 48 b8 00 f0 ff ff ff ff ff 00 eb de e9 31 f8 ff ff 90 <41> 56 41 55 49 89 f5 41 54 55 48 89 fd 53 0f 20 d0 66 66 66 90 49
[  542.595903] RSP: 0008:00007fff4b9cd000 EFLAGS: 00010093
[  542.595903] RAX: 00000000b4600b87 RBX: 0000000000000000 RCX: ffffffffb4600b87
[  542.595904] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fff4b9cd008
[  542.595904] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[  542.595905] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[  542.595905] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[  542.595905] FS:  00007f693c490980(0000) GS:ffff9c4ebbc00000(0000) knlGS:0000000000000000
[  542.595906] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  542.595906] CR2: 00007fff4b9ccff8 CR3: 0000000107d8a000 CR4: 00000000000006f0
[  542.595907] Call Trace:
[  542.595910] Kernel panic - not syncing: Machine halted.
[  542.595911] CPU: 0 PID: 642 Comm: systemd-journal Kdump: loaded Not tainted 4.18.0-410.el8.x86_64 #1
[  542.595911] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.16.0-2.module+el8.7.0+15506+033991b0 04/01/2014
[  542.595912] Call Trace:
[  542.595912]  <#DF>
[  542.595912]  dump_stack+0x41/0x60
[  542.595936]  panic+0xe7/0x2ac
[  542.595937]  df_debug+0x29/0x36
[  542.595937]  do_double_fault+0xe8/0x170
[  542.595941]  double_fault+0x1e/0x30
[  542.595941] RIP: 0010:do_page_fault+0x0/0x130
[  542.595942] Code: ff 00 48 39 c2 73 1d e9 8e fb ff ff 48 b8 00 f0 ff ff ff 7f 00 00 eb ea 48 b8 00 f0 ff ff ff ff ff 00 eb de e9 31 f8 ff ff 90 <41> 56 41 55 49 89 f5 41 54 55 48 89 fd 53 0f 20 d0 66 66 66 90 49
[  542.595942] RSP: 0008:00007fff4b9cd000 EFLAGS: 00010093
[  542.595943] RAX: 00000000b4600b87 RBX: 0000000000000000 RCX: ffffffffb4600b87
[  542.595943] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fff4b9cd008
[  542.595944] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[  542.595944] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[  542.595945] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[  542.595945]  ? native_iret+0x7/0x7
[  542.595945]  </#DF>

Comment 67 Geraldo Viana 2022-08-04 21:41:14 UTC
Same problem here using kernel 4.18.0-408.el8.x86_64.

The virtual machine hangs after migration with 100% CPU usage.

Comment 68 Li Xiaohui 2022-08-08 12:35:09 UTC
I could not reproduce on kernel-4.18.0-376.el8.x86_64, but from kernel-4.18.0-377.el8.x86_64 onward the bug reproduces every time.

So kernel-4.18.0-377.el8.x86_64 introduced this issue.

Comment 69 Dr. David Alan Gilbert 2022-08-09 12:36:15 UTC
I failed to reproduce this on an E5506 Xeon using local migration;

host kernel:4.18.0-415.el8.x86_64
host qemu: qemu-kvm-6.2.0-18.module+el8.7.0+15999+d24f860e.x86_64
guest: rhel-guest-image-8.7-1218

qemu command line: /usr/libexec/qemu-kvm -M q35,accel=kvm -smp 4 -m 2G -nographic -drive if=virtio,file=/home/rhel-guest-image-8.7-1218.x86_64.qcow2

Comment 70 Dr. David Alan Gilbert 2022-08-09 12:40:29 UTC
/usr/libexec/qemu-kvm -M q35,accel=kvm -smp 4 -m 2G -nographic -drive if=virtio,file=/home/rhel-guest-image-8.7-1218.x86_64.qcow2 -cpu Nehalem,spec-ctrl=on,ssbd=on   is also fine.

Li Xiaohui: Any suggestions on how to do the local reproducer?

Comment 71 Li Xiaohui 2022-08-09 13:57:18 UTC
Hi,
In my tests on kernel-410, I can easily reproduce when migrating from Nehalem to Cascadelake/Haswell, but cannot reproduce with local migration (using a single Nehalem host as both source and destination).

But before kernel-410, such as on kernel-383, kernel-380, kernel-379, kernel-378, and kernel-377, the bug usually reproduces with local migration and cannot be reproduced with cross-host migration (from Nehalem to Cascadelake/Haswell).

So I suggest using two hosts and migrating from Nehalem/Westmere to a higher CPU model for kernel-415; then you may be able to reproduce.

Comment 72 Dr. David Alan Gilbert 2022-08-10 10:20:20 UTC
(In reply to Li Xiaohui from comment #71)
> So I guess you should use two hosts, migrate from Nehalem/Westmere to other
> higher cpu model for kernel-415, then maybe you can reproduce.


OK, yes I can reproduce that, migrating from virtlab209 (E5540) to virtlab608 (Silver 4214)
/usr/libexec/qemu-kvm -M pc-q35-rhel8.6.0,usb=off,smm=on,dump-guest-core=off,graphics=off -accel kvm -cpu Nehalem,spec-ctrl=on,ssbd=on -m size=1G,slots=1,maxmem=4G -smp 4 -nographic -drive if=virtio,file=/home/vms/dgilbert/rhel-guest-image-9.1-20220320.0.x86_64.qcow2

[ 8203.745175] traps: PANIC: double fault, error_code: 0x0
[ 8203.745196] double fault: 0000 [#1] PREEMPT SMP PTI
[ 8203.745207] CPU: 0 PID: 655 Comm: gssproxy Not tainted 5.14.0-72.el9.x86_64 #1
[ 8203.745213] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module+el8.7.0+16134+e5908aa2 04/01/2014
[ 8203.745214] RIP: 0010:error_entry+0x18/0xe0
[ 8203.745236] Code: fe ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 fc 56 48 8b 74 24 08 48 89 7c 24 08 52 51 50 41 50 41 51 41 52 41 53 53 <55> 41 54 41 55 41 56 41 57 56 31 d2 31 c9 45 31 c0 45 31 c9 45 31
[ 8203.745238] RSP: 0008:00007ffe700fa000 EFLAGS: 00010093
[ 8203.745243] RAX: 0000000087c00fb7 RBX: 0000000000000000 RCX: ffffffff87c00fb7
[ 8203.745245] RDX: 0000000000000000 RSI: ffffffff87c00ab8 RDI: 00007ffe700fa0a8
[ 8203.745246] RBP: 00007ffe700fa0a8 R08: 0000000000000000 R09: 0000000000000000
[ 8203.745247] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000016f40
[ 8203.745248] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 8203.745249] FS:  00007fca1a9a2c80(0000) GS:ffff97d17ec00000(0000) knlGS:0000000000000000
[ 8203.745255] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 8203.745256] CR2: 00007ffe700f9ff8 CR3: 000000003439c000 CR4: 00000000000006f0
[ 8203.745257] Call Trace:
[ 8203.745259] Modules linked in: rfkill iTCO_wdt iTCO_vendor_support bochs_drm pcspkr drm_vram_helper drm_ttm_helper joydev ttm drm_kms_helper i2c_i801 syscopyarea sysfillrect sysimgblt lpc_ich i2c_smbus fb_sys_fops cec vfat fat drm fuse xfs libcrc32c sr_mod cdrom sg ahci libahci e1000e libata crc32c_intel serio_raw virtio_blk sunrpc dm_mirror dm_region_hash dm_log dm_mod
[ 8203.770755] ---[ end trace f61066ee2740c9ab ]---

Comment 73 Dr. David Alan Gilbert 2022-08-10 11:09:23 UTC
The host is also showing a warning for me in dmesg, which I think is:
                /*
                 * It should be impossible for the hypervisor timer to be in
                 * use before KVM has ever run the vCPU.
                 */
                WARN_ON_ONCE(kvm_lapic_hv_timer_in_use(vcpu));

(From recent upstream 98c25ead5eda5 )
[60578.918179] WARNING: CPU: 18 PID: 44564 at arch/x86/kvm/x86.c:10507 kvm_arch_vcpu_ioctl_run+0x5f2/0x600 [kvm]
[60578.928135] Modules linked in: rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache sunrpc intel_rapl_msr intel_rapl_common isst_if_common skx_edac nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp ipmi_ssif kvm_intel kvm irqbypass crct10dif_pclmul dell_smbios iTCO_wdt crc32_pclmul iTCO_vendor_support ghash_clmulni_intel dell_wmi_descriptor wmi_bmof dcdbas rapl acpi_ipmi intel_cstate mei_me ipmi_si pcspkr intel_uncore i2c_i801 lpc_ich mei ipmi_devintf wmi ipmi_msghandler acpi_power_meter xfs libcrc32c sd_mod t10_pi sg mgag200 i2c_algo_bit drm_shmem_helper drm_kms_helper syscopyarea ahci sysfillrect sysimgblt libahci fb_sys_fops drm libata crc32c_intel tg3 megaraid_sas dm_mirror dm_region_hash dm_log dm_mod
[60578.993239] CPU: 18 PID: 44564 Comm: qemu-kvm Kdump: loaded Tainted: G          I      --------- -  - 4.18.0-415.el8.x86_64 #1
[60579.004626] Hardware name: Dell Inc. PowerEdge R440/04JN2K, BIOS 2.8.1 06/30/2020
[60579.012115] RIP: 0010:kvm_arch_vcpu_ioctl_run+0x5f2/0x600 [kvm]
[60579.018067] Code: 08 07 00 00 00 48 83 83 48 20 00 00 01 e9 b9 fc ff ff 48 8b 43 68 c7 40 08 0a 00 00 00 48 83 83 20 20 00 00 01 e9 9b fc ff ff <0f> 0b e9 01 ff ff ff 0f 1f 80 00 00 00 00 0f 1f 44 00 00 80 bf 10
[60579.036814] RSP: 0018:ffff9edfc91ffdb0 EFLAGS: 00010202
[60579.042037] RAX: 0000000000000001 RBX: ffff912dd1618000 RCX: 0000000000000001
[60579.049172] RDX: 00002d92ffc20f00 RSI: 00000000fffffe01 RDI: ffff912dd1618000
[60579.056303] RBP: ffff912e11054000 R08: ffff912dd9e7a068 R09: 0000000000000001
[60579.063437] R10: ffff9edfc91ffed8 R11: 0000000000000001 R12: ffff912dd1618000
[60579.070568] R13: 0000000000000000 R14: ffff912dd1618048 R15: ffff912dd8f6a400
[60579.077701] FS:  00007f8779cba700(0000) GS:ffff914cc0080000(0000) knlGS:0000000000000000
[60579.085797] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[60579.091543] CR2: 00007f8783049001 CR3: 00000001094d8003 CR4: 00000000007726e0
[60579.098675] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[60579.105807] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[60579.112938] PKRU: 55555554
[60579.115652] Call Trace:
[60579.118117]  kvm_vcpu_ioctl+0x2c9/0x640 [kvm]
[60579.122509]  ? do_futex+0xc6/0x4d0
[60579.125915]  do_vfs_ioctl+0xa4/0x690
[60579.129492]  ksys_ioctl+0x64/0xa0
[60579.132813]  __x64_sys_ioctl+0x16/0x20
[60579.136564]  do_syscall_64+0x5b/0x1b0
[60579.140230]  entry_SYSCALL_64_after_hwframe+0x61/0xc6
[60579.145282] RIP: 0033:0x7f877f8ca72b
[60579.148862] Code: 73 01 c3 48 8b 0d 5d 67 38 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 2d 67 38 00 f7 d8 64 89 01 48
[60579.167651] RSP: 002b:00007f8779cb96e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[60579.175233] RAX: ffffffffffffffda RBX: 0000559fa04371c0 RCX: 00007f877f8ca72b
[60579.182366] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 000000000000000d
[60579.189497] RBP: 0000559fa0444260 R08: 0000559f9db09708 R09: 0000000000000000
[60579.196630] R10: 0000000000000000 R11: 0000000000000246 R12: 0000559f9d8eeae0
[60579.203763] R13: 0000559f9db39f60 R14: 00007ffc9d6829a0 R15: 00007f8783049000
[60579.210897] ---[ end trace e67a51619635663f ]---

Comment 74 Dr. David Alan Gilbert 2022-08-10 13:51:06 UTC
Both the guest and host panics still happen on upstream 5.19.

Comment 75 Dr. David Alan Gilbert 2022-08-11 10:41:45 UTC
Also fails with the source being 5.19.
Also fails with upstream qemu.
Also fails with the spec-ctrl and ssbd flags removed.

Inspired by one of Peter's patches, I checked whether the CPU state was being loaded correctly:

do_kvm_cpu_synchronize_post_init: failed with -22
do_kvm_cpu_synchronize_post_init: failed with -22
do_kvm_cpu_synchronize_post_init: failed with -22
do_kvm_cpu_synchronize_post_init: failed with -22

Nope! So I think I'll go and follow where that came from.

Comment 76 Dr. David Alan Gilbert 2022-08-11 13:06:18 UTC
qemu is calling :
[pid 66528] ioctl(12, KVM_SET_XSAVE, 0x7fabf4001000) = -1 EINVAL (Invalid argument)

now, the old CPU doesn't have xsave, but qemu calls that if:
    has_xsave = kvm_check_extension(s, KVM_CAP_XSAVE);

so we need to figure out whether qemu shouldn't be calling it, whether the kernel is wrong to reject it, or whether the data being passed is sane.

Comment 77 Dr. David Alan Gilbert 2022-08-11 15:51:50 UTC
kvm_check_extension ends up in arch/x86/kvm/x86.c kvm_vm_ioctl_check_extension, which always returns true for KVM_CAP_XSAVE, so qemu always calls set_xsave.

I'm suspicious that kernel ea4d6938d4c07 from last year has changed the fpu_copy_uabi_to_guest_fpstate
code used to implement set_xsave so it's more fussy.

Comment 78 Dr. David Alan Gilbert 2022-08-11 17:02:54 UTC
Hack: making qemu's kvm_put_xsave always return 0 and ignore the failure to load the xsave state makes the migration succeed, because it allows all the rest of the state to be successfully loaded rather than silently failing part way.

[ 4708.984843] kvm_vcpu_ioctl_x86_set_xsave: supported_xcr0=2ff uxfeatures=3 umxcsr=1fa0 mxcsr_feature_mask=ffff cfe(XSAVE)=1

I think fpu_copy_uabi_to_guest_fpstate is checking *host* xsave cpu_features_enabled, which is true - so it doesn't go down the
quick memcpy path; need to figure out which path it's actually failing on

Comment 79 Dr. David Alan Gilbert 2022-08-11 17:36:15 UTC
[  273.006105] fpu_copy_uabi_to_guest_fpstate: error -22 on copy_uabi_from_kernel_to_xstate

down the rabbit hole...

Comment 80 Dr. David Alan Gilbert 2022-08-11 18:07:19 UTC
The test that's failing is:

static int validate_user_xstate_header(const struct xstate_header *hdr,
                                       struct fpstate *fpstate)
{
        /* No unknown or supervisor features may be set */
        if (hdr->xfeatures & ~fpstate->user_xfeatures) {
                pr_debug("%s: xfeatures: hdr: %llx user: %llx\n", __func__, hdr->xfeatures, fpstate->user_xfeatures);
                return -EINVAL;
        }


[  206.573972] validate_user_xstate_header: xfeatures: hdr: 3 user: 0
[  206.573977] copy_uabi_to_xstate: validate_user_xstate_header failed
[  206.573978] fpu_copy_uabi_to_guest_fpstate: error -22 on copy_uabi_from_kernel_to_xstate


So I think that's saying the state we're trying to load has xfeatures=3 (which I think is FP+SSE)
but user_xfeatures is 0 - now I'm not clear which is wrong.  I think it might validly be 0
because the guest doesn't support xsave.
(It should come from kvm_vcpu_after_set_cpuid?)

Comment 81 Min Deng 2022-08-14 04:48:55 UTC
Hi all, 
I hit a hanging issue while doing ping-pong stable-guest-ABI migration from rhel8.6 to rhel9.1, but it hung on the SOURCE SIDE. Is it the same root cause as this bug? Thanks.
Build information,
RHEL8.6
kernel-4.18.0-372.22.1.el8_6.x86_64
qemu-kvm-6.2.0-11.module+el8.6.0+16271+0f1054e8.3.x86_64
RHEL9.1
kernel-5.14.0-142.el9.x86_64
qemu-kvm-7.0.0-10.el9.x86_64
Steps,
1. Run a heavy load test in the guest.
2. Do a post-copy migration.
3. The source side's qemu process hangs, but the guest appears to keep working on the destination host when connecting to it with VNC.
Guest cmdline,
/usr/libexec/qemu-kvm -name win2022 -machine pc-q35-rhel7.6.0,accel=kvm,usb=off,pflash0=drive_ovmf_code,pflash1=drive_ovmf_vars -blockdev node-name=file_ovmf_code,driver=file,filename=/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd,auto-read-only=on,discard=unmap -blockdev node-name=drive_ovmf_code,driver=raw,read-only=on,file=file_ovmf_code -blockdev node-name=file_ovmf_vars,driver=file,filename=OVMF_VARS.fd,auto-read-only=on,discard=unmap -blockdev node-name=drive_ovmf_vars,driver=raw,read-only=off,file=file_ovmf_vars -cpu SandyBridge,hv_crash,hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt -sandbox off -m 7680 -smp 8,cores=1,threads=1,sockets=8 -uuid 49a3438a-70a3-4ba8-92ce-3a05e0934608 -no-user-config -nodefaults -rtc base=localtime,clock=host,driftfix=slew -no-hpet -boot order=c,menu=on,splash-time=3000,strict=on -chardev socket,id=charmonitor,path=/home/tmp,server=on,wait=off -mon chardev=charmonitor,id=monitor,mode=control -drive file=virtio-win-1.9.24.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -device ahci,id=ahci0,bus=pcie.0,addr=0x3 -device pcie-root-port,id=pcie-root-port0,bus=pcie.0,addr=0x4,multifunction=on,port=0x10,chassis=1 -device virtio-scsi-pci,id=scsi0,bus=pcie-root-port0 -device pcie-root-port,id=pcie-root-port1,bus=pcie.0,addr=0x4.1,chassis=2,port=0x11 -drive file=win2022-64-virtio-scsi.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,discard=unmap,werror=stop,rerror=stop,aio=threads -device scsi-hd,bus=scsi0.0,lun=0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=0 -drive file=virtio-scsi-disk,if=none,id=drive-scsi-disk,format=qcow2,cache=none,werror=stop,rerror=stop -device scsi-hd,drive=drive-scsi-disk,bus=scsi0.0,lun=1,id=data-disk1,bootindex=1 -device virtio-serial-pci,id=virtio-serial0,bus=pcie-root-port1 -chardev pty,id=charserial0 
-device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel1,path=/home/tmp2,server=on,wait=off -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device intel-hda,id=sound0,bus=pcie.0,addr=0x7 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device intel-hda,id=sound1,bus=pcie.0,addr=0x8 -device hda-micro,id=sound1-codec0,bus=sound1.0 -device intel-hda,id=sound2,bus=pcie.0,addr=0x9 -device hda-output,id=sound2-codec0,bus=sound2.0,cad=0 -device ich9-intel-hda,id=sound3,bus=pcie.0,addr=0xa -device hda-duplex,id=sound3-codec0,bus=sound3.0,cad=0 -device pvpanic,ioport=1285 -msg timestamp=on -device pcie-root-port,id=pcie-root-port2,bus=pcie.0,addr=0x4.2,chassis=3,port=0x12 -netdev tap,id=hostnet1,vhost=on,script=/etc/qemu-ifup -device e1000e,netdev=hostnet1,id=virtio-net-pci1,mac=00:52:68:26:31:03,bus=pcie-root-port2 -device pcie-root-port,id=pcie-root-port3,bus=pcie.0,addr=0x4.3,chassis=4,port=0x13 -netdev tap,id=hostnet2,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet2,id=virtio-net-pci2,mac=00:52:68:26:31:04,bus=pcie-root-port3 -drive file=ide-disk,if=none,id=drive-data-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop,copy-on-read=off,media=disk -device ide-hd,drive=drive-data-disk,id=system-disk,logical_block_size=512,physical_block_size=512,min_io_size=512,opt_io_size=512,discard_granularity=512,ver=fuxc-ver,bus=ide.0,unit=0 -device pcie-root-port,id=pcie-root-port4,bus=pcie.0,addr=0x4.4,chassis=5,port=0x14 -device ich9-usb-uhci6,id=uhci6,bus=pcie-root-port4 -device usb-kbd,id=kdb0,bus=uhci6.0 -device pcie-root-port,id=pcie-root-port5,bus=pcie.0,addr=0x4.5,chassis=6,port=0x15 -device ich9-usb-uhci5,id=uhci5,bus=pcie-root-port5 -device usb-mouse,id=mouse0,bus=uhci5.0 -device pcie-root-port,id=pcie-root-port6,bus=pcie.0,addr=0x4.6,chassis=7,port=0x16 -device qemu-xhci,id=xhci,bus=pcie-root-port6 -device 
pcie-root-port,id=pcie-root-port7,bus=pcie.0,addr=0x4.7,chassis=8,port=0x17 -device pcie-root-port,id=pcie-root-port8,bus=pcie.0,addr=0x10,multifunction=on,chassis=9,port=0x18 -device usb-ehci,id=ehci,bus=pcie-root-port8 -device pcie-root-port,id=pcie-root-port9,bus=pcie.0,addr=0x10.1,chassis=10,port=0x19 -device piix3-usb-uhci,id=usb,bus=pcie-root-port9 -device pcie-root-port,id=pcie-root-port10,bus=pcie.0,addr=0x10.2,chassis=11,port=0x20 -device ich9-usb-uhci3,id=uhci,bus=pcie-root-port10 -device usb-storage,drive=drive-usb-0,id=usb-0,removable=on,bus=uhci.0,port=1 -drive file=usb-uhci,if=none,id=drive-usb-0,media=disk,format=qcow2 -device pcie-root-port,id=pcie-root-port11,bus=pcie.0,addr=0x10.3,chassis=12,port=0x21 -device pcie-root-port,id=pcie-root-port12,bus=pcie.0,addr=0x10.4,chassis=13,port=0x22 -device ich9-usb-ehci1,id=ehci1,bus=pcie-root-port11 -device usb-storage,drive=drive-usb-1,id=usb-1,removable=on,bus=ehci1.0,port=1 -drive file=usb-ehci,if=none,id=drive-usb-1,media=disk,format=qcow2 -device qemu-xhci,id=xhci1,bus=pcie-root-port12 -device usb-storage,drive=drive-usb-2,id=usb-2,removable=on,bus=xhci1.0,port=1 -drive file=usb-xhci,if=none,id=drive-usb-2,media=disk,format=qcow2 -device pcie-root-port,id=pcie-root-port13,bus=pcie.0,addr=0x10.5,chassis=14,port=0x23 -object rng-random,filename=/dev/urandom,id=objrng0 -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pcie-root-port13 -device pcie-root-port,id=pcie-root-port14,bus=pcie.0,addr=0x10.6,chassis=15,port=0x24 -device virtio-balloon-pci,id=balloon0,bus=pcie-root-port14 -device pcie-root-port,id=pcie-root-port15,bus=pcie.0,addr=0x10.7,chassis=16,port=0x25 -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=4,bus=pcie-root-port15 -device pcie-root-port,id=pcie-root-port17,bus=pcie.0,addr=0x11,multifunction=on,chassis=18,port=0x26 -device pcie-pci-bridge,id=pci.1,bus=pcie-root-port17 -device i6300esb,bus=pci.1,addr=0x1 -watchdog-action reset -monitor stdio -qmp tcp:0:4467,server=on,nowait -serial 
unix:/tmp/ttym,server=on,wait=off -k en-us -vnc :1 -device virtio-vga

Comment 82 Dr. David Alan Gilbert 2022-08-15 08:50:26 UTC
(In reply to Min Deng from comment #81)
> Hi all, 
> I hit a hanging issue while doing ping-pong stable guest abi from rhel8.6 to
> rhel9.1, but it hung on SOURCE SIDE. Is it the same reason to this bug ?
> Thanks.
> Build information,
> RHEL8.6
> kernel-4.18.0-372.22.1.el8_6.x86_64
> qemu-kvm-6.2.0-11.module+el8.6.0+16271+0f1054e8.3.x86_64
> RHEL9.1
> kernel-5.14.0-142.el9.x86_64
> qemu-kvm-7.0.0-10.el9.x86_64
> Steps,
> 1. running heavy load test in the guest 
> 2. doing post copy
> 3. the source side's qemu process hung up, but it seems that the guest
> worked on DST host by connect it with vnc.

That's a separate bug, please open a new bz and let me know.

> Guest cmdline,
> [full qemu command line quoted from comment 81; snipped]

Comment 83 Dr. David Alan Gilbert 2022-08-15 09:35:28 UTC
ea4d6938d4c0 is apparently working OK for me, so whatever broke it came after that. Mind you, it's part of a big series.

Comment 84 Dr. David Alan Gilbert 2022-08-15 12:14:47 UTC
ok, somewhere in 5.17 between:
56d33754481fe0dc7436 and 1ebdbeb03efe89f01f15
bisecting...

Comment 85 Li Xiaohui 2022-08-15 13:23:22 UTC
Hi David, do you think we should clone this bug for RHEL 9.1.0? I tried to install RHEL 9.1.0 on the Nehalem machine, but it always fails.

If you look at the code and think that RHEL 9.1.0 will reproduce the bug, we can clone one first for RHEL 9.1.0 (of course I will try to loan other Nehalem/Westmere machines to install 9.1.0 on and reproduce).

Comment 86 Dr. David Alan Gilbert 2022-08-15 19:01:06 UTC
git bisect says:
ad856280ddea3401e1f5060ef20e6de9f6122c76 is the first bad commit
commit ad856280ddea3401e1f5060ef20e6de9f6122c76
Author: Leonardo Bras <leobras>
Date:   Thu Feb 17 02:30:29 2022 -0300

    x86/kvm/fpu: Limit guest user_xfeatures to supported bits of XCR0

trying: dfd42facf1e4ada021b939b4e19c935dcdd55566 (~5.17rc3) - good!
  gb: 8efd0d9c316af470377894a6a0f9ff63ce18c177 - good
      0f907c3880f82cf9e8884c98aa70dd9e61221dfc - good
      78b390bd5657e79f8e60b736f81ac1a3203777ea - good
      8b97cae315cafd7debf3601f88621e2aa8956ef3 - good
      73878e5eb1bd3c9656685ca60bc3a49d17311e0c - good
      de7b2efacf4e83954aed3f029d347dfc0b7a4f49 - good
      e8240addd0a3919e0fd7436416afe9aa6429c484 - good
      4cb9a998b1ce25fad74a82f5a5c45a4ef40de337 - good
      3a55f729240a686aa8af00af436306c0cd532522 - good
      ad856280ddea3401e1f5060ef20e6de9f6122c76 - bad
      988896bb61827345c6d074dd5f2af1b7b008193f - bad
      e910a53fb4f20aa012e46371ffb4c32c8da259b4 - bad
      7ee022567bf9e2e0b3cd92461a2f4986ecc99673 - bad
      c0419188b5c1a7735b12cf1405cafc3f8d722819 - bad
      69d1dea852b54eecd8ad2ec92a7fd371e9aec4bd - bad
trying: 1ebdbeb03efe89f01f15 - fails!

Comment 87 Dr. David Alan Gilbert 2022-08-15 19:02:30 UTC
I suspect this is omitting the FP/SSE bits that are the default even if we don't have xsave; I need to follow it more, but in my case what I was seeing was:

[  206.573972] validate_user_xstate_header: xfeatures: hdr: 3 user: 0
[  206.573977] copy_uabi_to_xstate: validate_user_xstate_header failed
[  206.573978] fpu_copy_uabi_to_guest_fpstate: error -22 on copy_uabi_from_kernel_to_xstate

I think it's that "user: 0" value that's the problem.

Comment 88 Dr. David Alan Gilbert 2022-08-16 18:33:09 UTC
Posted upstream:
Subject: [PATCH] KVM: x86: Always enable legacy fp/sse

Comment 89 Dr. David Alan Gilbert 2022-08-24 11:49:34 UTC
Sean has posted a 3-patch series including this:

https://lore.kernel.org/lkml/20220824033057.3576315-1-seanjc@google.com/

Given that he states there's a risk of guest state corruption in other cases on more modern CPUs,
I think it may be worth asking for this in 8.7.

Comment 91 Yash Mankad 2022-09-11 14:07:48 UTC
This BZ is now too late for 8.7
Moving to 8.8, and setting ITM=6

Once the BZ is further along the development process (MODIFIED or later), I will set zstream+ and clone the bug for 8.6 ZStream.

Comment 92 Dr. David Alan Gilbert 2022-09-20 18:47:14 UTC
Still waiting for it to go in upstream; but note it also fixes bug 2118547, which is a slightly different case.

Comment 93 Dr. David Alan Gilbert 2022-09-26 08:28:00 UTC
Merged upstream:
a1020a25e69755a8a1a37735d674b91d6f02939f

probably best to also take the other two patches from Sean that went with it:
50b2d49bafa16e6311ab KVM: x86: Inject #UD on emulated XSETBV if XSAVES isn't enabled
ee519b3a2ae3027c341b KVM: x86: Reinstate kvm_vcpu_arch.guest_supported_xcr0

Comment 94 Dr. David Alan Gilbert 2022-09-27 14:12:28 UTC
This is now in a merge request for centos-stream 9;
https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/1351/diffs?commit_id=4a0a2cc969845ce5dc5096023bb428d445662fd9

we need to do a backport to 8.8.
(I'm not currently planning to do an 8.6.z unless someone explicitly requests it; please shout if you need it!)

Comment 97 John 2022-09-28 12:04:30 UTC
Yes please, to backporting to 8.6.z

Comment 98 Li Xiaohui 2022-09-29 03:32:46 UTC
Marking qe_test_coverage- since this bug only reproduces when migrating from Nehalem/Westmere to a higher Intel CPU model, or when doing local migration on Nehalem or Westmere (where the source and destination host are the same machine), but Nehalem and Westmere CPUs aren't supported on RHEL 8 and RHEL 9:
https://docs.engineering.redhat.com/pages/viewpage.action?spaceKey=RHELPLAN&title=RHEL+9+Hardware+Enablement+Requirements

Comment 99 Li Xiaohui 2022-09-29 07:40:36 UTC
Extending ITM from 6 to 7 since Oct 1 to Oct 7 is the China National Day holiday.

Comment 102 Li Xiaohui 2022-10-05 09:20:58 UTC
Reproduced this bug on kernel-4.18.0-427.el8.x86_64 when migrating from a Westmere to a Haswell machine.

Preverified this bug on kernel-4.18.0-427.el8.mr3420_220927_1758.gc540.x86_64; ran the following tests, all pass.
1. Run Tier1 migration cases from a Westmere machine to a Haswell machine.
2. Run Tier1 migration cases from a Haswell machine to a Westmere machine.


I'm also running regression tests between Icelake machines to see whether the fix introduces any regressions.

Comment 104 Li Xiaohui 2022-10-12 11:42:50 UTC
This scenario has been covered in the migration test plan and will be tested in the future, so I changed qe_test_coverage from - to +.

Comment 105 Li Xiaohui 2022-10-13 07:58:46 UTC
Verified this bug on kernel-4.18.0-430.el8.x86_64 && qemu-kvm-6.2.0-22.module+el8.8.0+16816+1d3555ec.x86_64, running the following test loops:
1. Run Tier1 migration cases from Westmere to Haswell machine --PASS
2. Run Tier1 migration cases from Haswell machine to Westmere machine --PASS
3. Run all Tiers migration cases from Icelake-Server to Icelake-Server machine --PASS
4. Run Tier1 migration cases from Naples to Milan machine --PASS


As all test loops pass, will mark this bug verified once it's on_qa status.

Comment 106 Li Xiaohui 2022-10-13 13:10:27 UTC
Hi, could you help add this bug to the errata so that it can be moved to on_qa status?
Then I can mark it verified; thanks in advance.

Comment 107 Li Xiaohui 2022-10-17 03:06:46 UTC
Hi Lucas, can you help move this bug to on_qa status and ensure it's in the errata system?

Comment 108 Lucas Zampieri 2022-10-17 13:46:23 UTC
Hey Li, sorry for the delayed reply; I was on PTO last week. That build should've been added automatically, but for some reason it wasn't. I'm looking into it.

Comment 111 Li Xiaohui 2022-10-17 14:11:53 UTC
Thank you very much Lucas.

Moving this bug to verified per Comment 15.

