Bug 1856298 - [ppc]kdump failed and had to kill qemu on host since guest hung at [ 0.000000] numa:NODE_DATA [mem 0x47f33c80-0x47f3ffff]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.3
Hardware: ppc64le
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.3
Assignee: Laurent Vivier
QA Contact: Min Deng
URL:
Whiteboard:
Depends On:
Blocks: 1776265
 
Reported: 2020-07-13 10:56 UTC by Min Deng
Modified: 2020-11-17 17:50 UTC
CC List: 14 users

Fixed In Version: qemu-kvm-5.1.0-2.module+el8.3.0+7652+b30e6901
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-17 17:50:15 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
IBM Linux Technology Center 187052 - Last Updated: 2020-07-22 23:46:27 UTC

Description Min Deng 2020-07-13 10:56:04 UTC
Description of problem:
[ppc]kdump failed and had to kill qemu on host since guest hung at [  0.000000] numa:NODE_DATA [mem 0x47f33c80-0x47f3ffff]

Version-Release number of selected component (if applicable):
qemu-kvm-5.0.0-0.scrmod+el8.3.0+7308+053b39e7.wrb200708.ppc64le
SLOF-20200327-1.git8e012d6f.scrmod+el8.3.0+7308+053b39e7.noarch
kernel-4.18.0-221.el8.ppc64le
or
kernel-3.10.0-1154/56.el7.ppc64le

How reproducible:
always

Steps to Reproduce:
1. Boot up a guest with:
  /usr/libexec/qemu-kvm -name 'avocado-vt-vm1' -sandbox on -machine pseries -nodefaults -m 20G -smp 8 -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 -blockdev node-name=file_image1,driver=file,aio=threads,filename=rhel830-ppc64le-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 -device scsi-hd,id=image1,drive=drive_image1,write-cache=on -vnc :1 -enable-kvm -monitor stdio -chardev socket,id=chardev_serial0,server,nowait,path=/var/tmp/serial-serial -device spapr-vty,id=serial0,reg=0x30000000,chardev=chardev_serial0

2. # dmesg | grep crashkernel
[    0.000000] Using crashkernel=auto, the size chosen is a best effort estimation.
[    0.000000] Reserving 1024MB of memory at 128MB for crashkernel (System RAM: 20480MB)
[    0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-4.18.0-221.el8.ppc64le root=/dev/mapper/rhel_dhcp19--129--4-root ro console=ttyS0,115200 crashkernel=auto rd.lvm.lv=rhel_dhcp19-129-4/root rd.lvm.lv=rhel_dhcp19-129-4/swap biosdevname=0 net.ifnames=0 console=tty0 biosdevname=0 net.ifnames=0 console=hvc0,38400


3. [root@localhost ~]# service kdump start
Redirecting to /bin/systemctl start kdump.service


4. [root@localhost ~]# service kdump status            - make sure the kdump service is running
Redirecting to /bin/systemctl status kdump.service
● kdump.service - Crash recovery kernel arming
   Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor prese>
   Active: active (exited) since Mon 2020-07-13 14:33:01 CST; 2min 11s ago
  Process: 1343 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCES>
 Main PID: 1343 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 123224)
   Memory: 0B
   CGroup: /system.slice/kdump.service

5. [root@localhost ~]# echo c > /proc/sysrq-trigger    - trigger a crash from the console

Actual results:
The guest hung; this occurred on both HPT and radix guests. QEMU could not be quit with "q"; the process had to be killed.

Radix guest,
[  181.282251] sysrq: SysRq : Trigger a crash
[  181.282285] Unable to handle kernel paging request for data at address 0x00000000
[  181.282333] Faulting instruction address: 0xc0000000008ac3c8
[  181.282374] Oops: Kernel access of bad area, sig: 11 [#1]
[  181.282406] LE SMP NR_CPUS=2048 NUMA pSeries
[  181.287506] Modules linked in: nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nf_tables_set nft_chain_nat_ipv6 nf_nat_ipv6 nft_chain_route_ipv6 nft_chain_nat_ipv4 nf_nat_ipv4 nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_chain_route_ipv4 ip6_tables nft_compat ip_set nf_tables nfnetlink uio_pdrv_genirq ip_tables xfs libcrc32c sd_mod sg xts vmx_crypto virtio_scsi dm_multipath dm_mirror dm_region_hash dm_log dm_mod be2iscsi bnx2i cnic uio cxgb4i cxgb4 libcxgbi libcxgb qla4xxx iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
[  181.287838] CPU: 1 PID: 10676 Comm: bash Kdump: loaded Not tainted 4.18.0-221.el8.ppc64le #1
[  181.287893] NIP:  c0000000008ac3c8 LR: c0000000008ad3d4 CTR: c0000000008ac3a0
[  181.287942] REGS: c0000004e4f83a50 TRAP: 0300   Not tainted  (4.18.0-221.el8.ppc64le)
[  181.287990] MSR:  8000000000009033 <SF,EE,ME,IR,DR,RI,LE>  CR: 28222282  XER: 20040000
[  181.288046] CFAR: c0000000008ad3d0 DAR: 0000000000000000 DSISR: 42000000 IRQMASK: 0 
[  181.288046] GPR00: c0000000008ad3d4 c0000004e4f83cd0 c000000001ac0700 0000000000000063 
[  181.288046] GPR04: c0000004ffa8cf90 c0000004ffb15628 000000000000013b 0000000000000001 
[  181.288046] GPR08: 0000000000000007 0000000000000001 0000000000000000 c0000004e4f8386f 
[  181.288046] GPR12: c0000000008ac3a0 c0000003fffcee80 0000000040000000 0000000100a89798 
[  181.288046] GPR16: 0000000100a89724 0000000100a26988 00000001009bf290 0000000100a8d568 
[  181.288046] GPR20: 000000011034dfb0 0000000000000001 0000000100a39708 00007ffffe3d3134 
[  181.288046] GPR24: 00007ffffe3d3130 c00000000172ade8 0000000000000000 0000000000000007 
[  181.288046] GPR28: 0000000000000000 0000000000000063 c000000001af247c c0000000016d3a90 
[  181.288465] NIP [c0000000008ac3c8] sysrq_handle_crash+0x28/0x30
[  181.288506] LR [c0000000008ad3d4] __handle_sysrq+0xe4/0x230
[  181.288539] Call Trace:
[  181.288557] [c0000004e4f83cd0] [c0000000008ad3b8] __handle_sysrq+0xc8/0x230 (unreliable)
[  181.288606] [c0000004e4f83d70] [c0000000008adb58] write_sysrq_trigger+0x68/0x90
[  181.288655] [c0000004e4f83da0] [c0000000005faed0] proc_reg_write+0x90/0x1a0
[  181.288697] [c0000004e4f83dd0] [c00000000051feb4] sys_write+0x134/0x3a0
[  181.288740] [c0000004e4f83e30] [c00000000000b408] system_call+0x5c/0x70
[  181.288780] Instruction dump:
[  181.288805] 4bfffe38 00000000 3c4c0121 38424360 7c0802a6 60000000 39200001 3d42ffc1 
[  181.288857] 394abaf0 912a0000 7c0004ac 39400000 <992a0000> 4e800020 3c4c0121 38424330 
[  181.288912] ---[ end trace 39aaf64263eb17e6 ]---
[  181.290547] 
[  181.290785] Sending IPI to other CPUs
[  181.297165] IPI complete
[  181.300087] kexec: Starting switchover sequence.
I'm in purgatory
[    0.000000] radix-mmu: Page sizes from device-tree:
[    0.000000] radix-mmu: Page size shift = 12 AP=0x0
[    0.000000] radix-mmu: Page size shift = 16 AP=0x5
[    0.000000] radix-mmu: Page size shift = 21 AP=0x1
[    0.000000] radix-mmu: Page size shift = 30 AP=0x2
[    0.000000] lpar: Using radix MMU under hypervisor
[    0.000000] radix-mmu: Mapped 0x0000000000000000-0x0000000040000000 with 1.00 GiB pages
[    0.000000] radix-mmu: Mapped 0x0000000040000000-0x0000000048000000 with 2.00 MiB pages
[    0.000000] radix-mmu: Process table (____ptrval____) and radix root for kernel: (____ptrval____)
[    0.000000] Linux version 4.18.0-221.el8.ppc64le (mockbuild.eng.bos.redhat.com) (gcc version 8.3.1 20191121 (Red Hat 8.3.1-5) (GCC)) #1 SMP Thu Jun 25 20:53:41 UTC 2020
[    0.000000] Found initrd at 0xc00000000a730000:0xc00000000bc28429
[    0.000000] Using pSeries machine description
[    0.000000] printk: bootconsole [udbg0] enabled
[    0.000000] Partition configured for 8 cpus.
[    0.000000] CPU maps initialized for 1 thread per core
[    0.000000] NUMA disabled by user
[    0.000000] -----------------------------------------------------
[    0.000000] ppc64_pft_size    = 0x0
[    0.000000] phys_mem_size     = 0x48000000
[    0.000000] dcache_bsize      = 0x80
[    0.000000] icache_bsize      = 0x80
[    0.000000] cpu_features      = 0x0001c06f8f4f91a7
[    0.000000]   possible        = 0x0003fbffcf5fb1a7
[    0.000000]   always          = 0x00000003800081a1
[    0.000000] cpu_user_features = 0xdc0065c2 0xaee00000
[    0.000000] mmu_features      = 0x3c006041
[    0.000000] firmware_features = 0x00000085455a445f
[    0.000000] physical_start    = 0x8000000
[    0.000000] -----------------------------------------------------
[    0.000000] numa:   NODE_DATA [mem 0x47f33c80-0x47f3ffff]


HPT guest,
I'm in purgatory
[    0.000000] Using pSeries machine description
[    0.000000] Page sizes from device-tree:
[    0.000000] base_shift=12: shift=12, sllp=0x0000, avpnm=0x00000000, tlbiel=1, penc=0
[    0.000000] base_shift=16: shift=16, sllp=0x0110, avpnm=0x00000000, tlbiel=1, penc=1
[    0.000000] Using 1TB segments
[    0.000000] Found initrd at 0xc0000000098e0000:0xc00000000aa3c780
[    0.000000] bootconsole [udbg0] enabled
[    0.000000] Partition configured for 8 cpus.
[    0.000000] CPU maps initialized for 1 thread per core
[    0.000000] Starting Linux PPC64 #1 SMP Thu Jul 2 09:19:15 UTC 2020
[    0.000000] -----------------------------------------------------
[    0.000000] ppc64_pft_size                = 0x1c
[    0.000000] physicalMemorySize            = 0x48000000
[    0.000000] htab_hash_mask                = 0x1fffff
[    0.000000] physical_start                = 0x8000000
[    0.000000] -----------------------------------------------------
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.10.0-1154.el7.ppc64le (mockbuild.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) ) #1 SMP Thu Jul 2 09:19:15 UTC 2020
CF000012
Setup Arch[    0.000000] [boot]0012 Setup Arch
[    0.000000] NUMA disabled by user


Expected results:
The guest reboots and the vmcore is generated successfully.


Additional info:

Comment 1 Min Deng 2020-07-13 10:59:08 UTC
This should be a regression, since it wasn't reproducible on the following build with the *same image*:
qemu-kvm-4.2.0-29.module+el8.3.0+7212+401047e6.ppc64le
kernel-4.18.0-221.el8.ppc64le

Comment 2 Laurent Vivier 2020-07-22 14:10:34 UTC
Reproduced on POWER8 with qemu-kvm-5.0.0-2.module+el8.3.0+7379+0505d6ca and guest kernel 4.18.0-211.el8.
Not reproduced under the same conditions with qemu-kvm-4.2.0.

Bisecting QEMU...

Comment 3 Laurent Vivier 2020-07-22 17:32:38 UTC
It's not reproducible 100% of the time (so bisecting is not easy).

The problem appears with this change in QEMU:

commit ec010c00665ba1e78e6b3df104f923c4ea68504a
Author: Nicholas Piggin <npiggin>
Date:   Thu Mar 26 00:29:03 2020 +1000

    ppc/spapr: KVM FWNMI should not be enabled until guest requests it

    The KVM FWNMI capability should be enabled with the "ibm,nmi-register"
    rtas call. Although MCEs from KVM will be delivered as architected
    interrupts to the guest before "ibm,nmi-register" is called, KVM has
    different behaviour depending on whether the guest has enabled FWNMI
    (it attempts to do more recovery on behalf of a non-FWNMI guest).

    Signed-off-by: Nicholas Piggin <npiggin>
    Message-Id: <20200325142906.221248-2-npiggin>
    Reviewed-by: Greg Kurz <groug>
    Signed-off-by: David Gibson <david.id.au>

According to this, adding "-machine cap-fwnmi=off" to the QEMU command line is a workaround for this problem.
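
For example, based on the reproducer command line from the description, the workaround would look like this (only the -machine option changes; the rest of the command line stays as in step 1):

  /usr/libexec/qemu-kvm -name 'avocado-vt-vm1' -sandbox on -machine pseries,cap-fwnmi=off ...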

I'm going to check if the upstream kernel is able to manage this new behavior correctly.

Comment 4 Laurent Vivier 2020-07-23 10:19:13 UTC
One of the vCPUs hangs on the call to kvmppc_set_fwnmi() in QEMU:

#0  0x00007fffaff84860 in ioctl () at /lib64/libc.so.6
#1  0x000000011f956bc0 in kvm_vcpu_ioctl (cpu=0x1001b94f7a0, type=<optimized out>)
    at .../qemu/accel/kvm/kvm-all.c:2631

#2  0x000000011fb5487c in kvmppc_set_fwnmi () at .../qemu/target/ppc/kvm.c:2079

#3  0x000000011fa3d5e4 in rtas_ibm_nmi_register
    (cpu=<optimized out>, token=<optimized out>, nargs=<optimized out>, nret=<optimized out>, rets=<optimized out>, args=<optimized out>, spapr=0x1001b3b0400) at .../qemu/hw/ppc/spapr_rtas.c:441
#4  0x000000011fa3d5e4 in rtas_ibm_nmi_register
    (cpu=<optimized out>, spapr=0x1001b3b0400, token=<optimized out>, nargs=<optimized out>, args=<optimized out>, nret=<optimized out>, rets=<optimized out>) at .../qemu/hw/ppc/spapr_rtas.c:410
#5  0x000000011fa3cc9c in spapr_rtas_call
    (cpu=<optimized out>, spapr=<optimized out>, token=<optimized out>, nargs=<optimized out>, args=<optimized out>, nret=<optimized out>, rets=<optimized out>) at .../qemu/hw/ppc/spapr_rtas.c:512
#6  0x000000011fa34f4c in h_rtas
    (cpu=0x1001b9978f0, spapr=0x1001b3b0400, opcode=<optimized out>, args=<optimized out>)
    at .../qemu/hw/ppc/spapr_hcall.c:1216
#7  0x000000011fa395f8 in spapr_hypercall (cpu=0x1001b9978f0, opcode=61440, args=0x7fffa2680030)
    at .../qemu/hw/ppc/spapr_hcall.c:2071
#8  0x000000011fb56ba8 in kvm_arch_handle_exit (cs=0x1001b9978f0, run=0x7fffa2680000)
    at .../qemu/target/ppc/kvm.c:1683
#9  0x000000011f9571cc in kvm_cpu_exec (cpu=0x1001b9978f0)
    at .../qemu/accel/kvm/kvm-all.c:2567
#10 0x000000011fa85d18 in qemu_kvm_cpu_thread_fn (arg=0x1001b9978f0)
    at .../qemu/softmmu/cpus.c:1188
#11 0x000000011fa85d18 in qemu_kvm_cpu_thread_fn (arg=0x1001b9978f0)
    at .../qemu/softmmu/cpus.c:1160
#12 0x000000011ffedc20 in qemu_thread_start (args=<optimized out>)
    at .../qemu/util/qemu-thread-posix.c:521
#13 0x00007fffb0088878 in start_thread () at /lib64/libpthread.so.0
#14 0x00007fffaff932c8 in clone () at /lib64/libc.so.6

Comment 5 Laurent Vivier 2020-07-23 12:53:56 UTC
Same result with host 5.8.0-rc6+ kernel (d15be546031c)

Note: detaching GDB from QEMU unblocks the VM...

Comment 6 Laurent Vivier 2020-07-23 17:08:27 UTC
(In reply to Laurent Vivier from comment #4)
> One of the vCPUs hangs on the call to kvmppc_set_fwnmi() in QEMU:
> 
> #0  0x00007fffaff84860 in ioctl () at /lib64/libc.so.6
> #1  0x000000011f956bc0 in kvm_vcpu_ioctl (cpu=0x1001b94f7a0, type=<optimized
> out>)
>     at .../qemu/accel/kvm/kvm-all.c:2631
> #2  0x000000011fb5487c in kvmppc_set_fwnmi () at
> .../qemu/target/ppc/kvm.c:2079

In the kernel, the ioctl() is waiting for the vcpu->mutex in virt/kvm/kvm_main.c:

3120 static long kvm_vcpu_ioctl(struct file *filp,
3121                            unsigned int ioctl, unsigned long arg)
3122 {
...
3143         if (mutex_lock_killable(&vcpu->mutex))
3144                 return -EINTR;
...

I think this is because QEMU issues the ioctl() on the first vCPU, in target/ppc/kvm.c:

   2074 int kvmppc_set_fwnmi(void)
   2075 {
   2076     PowerPCCPU *cpu = POWERPC_CPU(first_cpu);
   2077     CPUState *cs = CPU(cpu);
   2078 
   2079     return kvm_vcpu_enable_cap(cs, KVM_CAP_PPC_FWNMI, 0);
   2080 }

and if the first vCPU is running (and is not the one doing the rtas_ibm_nmi_register() call), the two threads deadlock.
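
A sketch of the suspected sequence (my reading of the code, assuming first_cpu is vCPU 0 and is sitting in KVM_RUN, which holds that vCPU's mutex in the kernel for the duration of the ioctl):

  vCPU 1 thread (guest called "ibm,nmi-register"):
    rtas_ibm_nmi_register()                        [QEMU]
      -> kvmppc_set_fwnmi()
        -> ioctl(vcpu0_fd, KVM_ENABLE_CAP, ...)    [targets first_cpu]
          -> mutex_lock_killable(&vcpu0->mutex)    [kernel, blocks]

  vCPU 0 thread:
    ioctl(vcpu0_fd, KVM_RUN)                       [holds vcpu0->mutex while the guest runs]

vCPU 0 never exits to userspace (the guest is in the middle of crashing into kdump), so the mutex is never released and vCPU 1 waits forever.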

To avoid this, we should run kvmppc_set_fwnmi() on the vCPU doing the rtas_ibm_nmi_register() call:

diff --git a/hw/ppc/spapr_rtas.c b/hw/ppc/spapr_rtas.c
index bcac0d00e7..513c7a8435 100644
--- a/hw/ppc/spapr_rtas.c
+++ b/hw/ppc/spapr_rtas.c
@@ -438,7 +438,7 @@ static void rtas_ibm_nmi_register(PowerPCCPU *cpu,
     }
 
     if (kvm_enabled()) {
-        if (kvmppc_set_fwnmi() < 0) {
+        if (kvmppc_set_fwnmi(cpu) < 0) {
             rtas_st(rets, 0, RTAS_OUT_NOT_SUPPORTED);
             return;
         }
diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index 2692f76130..d85ba8ffe0 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -2071,9 +2071,8 @@ bool kvmppc_get_fwnmi(void)
     return cap_fwnmi;
 }
 
-int kvmppc_set_fwnmi(void)
+int kvmppc_set_fwnmi(PowerPCCPU *cpu)
 {
-    PowerPCCPU *cpu = POWERPC_CPU(first_cpu);
     CPUState *cs = CPU(cpu);
 
     return kvm_vcpu_enable_cap(cs, KVM_CAP_PPC_FWNMI, 0);
diff --git a/target/ppc/kvm_ppc.h b/target/ppc/kvm_ppc.h
index 701c0c262b..72e05f1cd2 100644
--- a/target/ppc/kvm_ppc.h
+++ b/target/ppc/kvm_ppc.h
@@ -28,7 +28,7 @@ void kvmppc_set_papr(PowerPCCPU *cpu);
 int kvmppc_set_compat(PowerPCCPU *cpu, uint32_t compat_pvr);
 void kvmppc_set_mpic_proxy(PowerPCCPU *cpu, int mpic_proxy);
 bool kvmppc_get_fwnmi(void);
-int kvmppc_set_fwnmi(void);
+int kvmppc_set_fwnmi(PowerPCCPU *cpu);
 int kvmppc_smt_threads(void);
 void kvmppc_error_append_smt_possible_hint(Error *const *errp);
 int kvmppc_set_smt_threads(int smt);
@@ -169,7 +169,7 @@ static inline bool kvmppc_get_fwnmi(void)
     return false;
 }
 
-static inline int kvmppc_set_fwnmi(void)
+static inline int kvmppc_set_fwnmi(PowerPCCPU *cpu)
 {
     return -1;
 }

Comment 7 Laurent Vivier 2020-07-23 17:24:14 UTC
I'm wondering if this capability should be moved from the vCPU to the VM, as it is global (not vCPU specific)?

So, in the kernel, this would mean moving KVM_CAP_PPC_FWNMI from kvm_vcpu_ioctl_enable_cap() to kvm_vm_ioctl_enable_cap():

arch/powerpc/kvm/powerpc.c:

   1860 static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
   1861                                      struct kvm_enable_cap *cap)
...
   1969 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
   1970         case KVM_CAP_PPC_FWNMI:
   1971                 r = -EINVAL;
   1972                 if (!is_kvmppc_hv_enabled(vcpu->kvm))
   1973                         break;
   1974                 r = 0;
   1975                 vcpu->kvm->arch.fwnmi_enabled = true;
   1976                 break;
   1977 #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */

to

   2136 int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
   2137                             struct kvm_enable_cap *cap)
...
   2188 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
   2189         case KVM_CAP_PPC_FWNMI:
   2190                 r = -EINVAL;
   2191                 if (!is_kvmppc_hv_enabled(kvm))
   2192                         break;
   2193                 r = 0;
   2194                 kvm->arch.fwnmi_enabled = true;
   2195                 break;
   2196 #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */

Moreover, QEMU's kvmppc_get_fwnmi() already gets this information from the VM state (cap_fwnmi = kvm_vm_check_extension(s, KVM_CAP_PPC_FWNMI);).

But this changes the API.
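
For reference, a minimal sketch of what the QEMU side could then look like, using QEMU's existing kvm_vm_enable_cap() helper and the global kvm_state (hypothetical; this depends on the kernel accepting the capability on the VM file descriptor):

    int kvmppc_set_fwnmi(void)
    {
        /*
         * Enable FWNMI on the VM instead of on a vCPU: a VM ioctl does
         * not take any vcpu->mutex, so no running vCPU can block this
         * call.
         */
        return kvm_vm_enable_cap(kvm_state, KVM_CAP_PPC_FWNMI, 0);
    }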

Comment 9 David Gibson 2020-07-24 02:01:27 UTC
Yes, I think it makes sense to move it.  Although, we'll need backwards compat gunk in practice, of course.  Changing qemu to issue the ioctl on the same cpu doing the RTAS call looks like a reasonable workaround fix in the meantime.

Comment 10 Laurent Vivier 2020-07-24 08:52:20 UTC
Patch sent upstream:

[PATCH] pseries: fix kvmppc_set_fwnmi()
        https://patchew.org/QEMU/20200724083533.281700-1-lvivier@redhat.com/

Comment 12 Laurent Vivier 2020-07-27 09:42:16 UTC
Merged upstream in QEMU v5.1.0-rc2

aef92d87c59d pseries: fix kvmppc_set_fwnmi()
             https://github.com/qemu/qemu/commit/aef92d87c59d257c0ff24ba1dc82506a03f1f522

Comment 15 Min Deng 2020-08-17 05:53:45 UTC
Verified the bug on the following builds:
qemu-kvm-5.1.0-2.module+el8.3.0+7652+b30e6901.ppc64le
kernel-4.18.0-232.el8.ppc64le
Steps: see the description.

Actual result:
Kdump worked well and the original issue is gone.
Expected result:
Kdump works well without any issue.

The issue has been fixed; moving the bug to VERIFIED, thanks.

Comment 18 errata-xmlrpc 2020-11-17 17:50:15 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (virt:8.3 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5137

