This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
Bug 2151878 - KVM/nested traceback on AMD CPU when deploying CRC
Summary: KVM/nested traceback on AMD CPU when deploying CRC
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Bandan Das
QA Contact: yduan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-12-08 12:50 UTC by dpawlik
Modified: 2023-09-22 13:40 UTC
CC List: 17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-22 13:40:30 UTC
Type: Bug
Target Upstream Version:
Embargoed:
zhguo: needinfo-


Attachments
Dmesg (122.58 KB, application/gzip)
2022-12-08 12:50 UTC, dpawlik
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker   RHEL-7131 0 None Migrated None 2023-09-22 13:40:29 UTC
Red Hat Issue Tracker RHELPLAN-141665 0 None None None 2022-12-08 12:57:48 UTC

Description dpawlik 2022-12-08 12:50:01 UTC
Created attachment 1931041 [details]
Dmesg

Description of problem:
I'm trying to deploy CRC (Red Hat CodeReady Containers) on CentOS 9 Stream, and it fails every time, but ONLY on an AMD CPU.
With the same image, on another hypervisor based on an Intel CPU, everything works as expected.

On the same AMD hypervisor, but with CentOS 8 Stream, everything works fine, without any traceback (checked 2 days ago).

Version-Release number of selected component (if applicable):
 - Operating system: CentOS 9 Stream
 - Architecture: x86_64
 - kernel version: 5.14.0-206.el9.x86_64
 - libvirt version: 8.9.0-2.el9.x86_64
 - Hypervisor and version: running Ubuntu 20.04, libvirt version unknown
 - CPU: AMD EPYC 7402
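
For anyone reproducing this, the details above can be gathered on the L1 host with standard commands (a minimal sketch; the package names assume a CentOS/RHEL host):

    uname -r                    # kernel version
    rpm -q libvirt qemu-kvm     # libvirt / QEMU package versions
    lscpu | grep 'Model name'   # CPU model exposed at this level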

How reproducible:
each time

Steps to Reproduce:
1. Update the system
2. Follow the CRC setup instructions - https://crc.dev/crc/#installation_gsg (sketched below)
3. The traceback is raised multiple times during the `crc start` command
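
For reference, the guide in step 2 boils down to roughly the following (a sketch, assuming the crc binary is already installed and an OpenShift pull secret is at hand):

    crc setup    # prepares the host: libvirt, networking, etc.
    crc start    # creates and boots the CRC VM; the traceback below fires here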

Actual results:
System traceback

    [ 1020.891646] RIP: 0033:0x7f1dcf43ec6b
    [ 1020.891826] Code: 73 01 c3 48 8b 0d b5 b1 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0
     ff ff 73 01 c3 48 8b 0d 85 b1 1b 00 f7 d8 64 89 01 48
    [ 1020.892749] RSP: 002b:00007f1b77ffd4a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
    [ 1020.893139] RAX: ffffffffffffffda RBX: 00007f1dc4cb1e50 RCX: 00007f1dcf43ec6b
    [ 1020.893500] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000019
    [ 1020.893848] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000000000ff
    [ 1020.894196] R10: 00007f1b6c052d80 R11: 0000000000000246 R12: 000055ff6c2babf0
    [ 1020.894569] R13: 00007f1dc4cb1ff0 R14: da3d1b18a8100500 R15: 00007f1dc4cb1e48
    [ 1020.894965] </TASK>
    [ 1020.895806] Call Trace:
    [ 1020.895948] <TASK>
    [ 1020.896060] amd_pmu_enable_all+0x44/0x60
    [ 1020.896447] __perf_install_in_context+0x16c/0x220
    [ 1020.896717] remote_function+0x47/0x50
    [ 1020.896921] generic_exec_single+0x78/0xb0
    [ 1020.897129] smp_call_function_single+0xeb/0x130
    [ 1020.897367] ? sw_perf_event_destroy+0x60/0x60
    [ 1020.897592] ? perf_lock_task_context+0xa3/0x100
    [ 1020.897824] perf_install_in_context+0xcf/0x200
    [ 1020.898064] ? ctx_resched+0xe0/0xe0
    [ 1020.898262] perf_event_create_kernel_counter+0x114/0x180
    [ 1020.898538] pmc_reprogram_counter.constprop.0+0xec/0x220 [kvm]
    [ 1020.898892] amd_pmu_set_msr+0x106/0x170 [kvm_amd]
    [ 1020.899145] ? __svm_vcpu_run+0x67/0x110 [kvm_amd]
    [ 1020.899393] ? get_gp_pmc_amd+0x129/0x200 [kvm_amd]
    [ 1020.899635] __kvm_set_msr+0x7f/0x1c0 [kvm]
    [ 1020.899889] kvm_emulate_wrmsr+0x52/0x1b0 [kvm]
    [ 1020.900153] vcpu_enter_guest+0x667/0x1010 [kvm]
    [ 1020.900428] ? __rseq_handle_notify_resume+0x32/0x50
    [ 1020.900428] ? __rseq_handle_notify_resume+0x32/0x50
    [ 1020.900673] vcpu_run+0x33/0x250 [kvm]
    [ 1020.900892] kvm_arch_vcpu_ioctl_run+0x104/0x620 [kvm]
    [ 1020.901173] kvm_vcpu_ioctl+0x271/0x670 [kvm]
    [ 1020.901422] __x64_sys_ioctl+0x8a/0xc0
    [ 1020.901612] do_syscall_64+0x5c/0x90
    [ 1020.901793] ? syscall_exit_work+0x11a/0x150
    [ 1020.902014] ? syscall_exit_to_user_mode+0x12/0x30
    [ 1020.902255] ? do_syscall_64+0x69/0x90
    [ 1020.902465] ? syscall_exit_work+0x11a/0x150
    [ 1020.902694] ? syscall_exit_to_user_mode+0x12/0x30
    [ 1020.902961] ? do_syscall_64+0x69/0x90
    [ 1020.903155] ? syscall_exit_to_user_mode+0x12/0x30
    [ 1020.903393] ? do_syscall_64+0x69/0x90
    [ 1020.903582] ? do_syscall_64+0x69/0x90
    [ 1020.903777] ? sysvec_apic_timer_interrupt+0x3c/0x90
    [ 1020.904052] entry_SYSCALL_64_after_hwframe+0x63/0xcd
    [ 1020.904325] RIP: 0033:0x7f1dcf43ec6b
    [ 1020.904514] Code: 73 01 c3 48 8b 0d b5 b1 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0
     ff ff 73 01 c3 48 8b 0d 85 b1 1b 00 f7 d8 64 89 01 48
    [ 1020.905445] RSP: 002b:00007f1b77ffd4a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
    [ 1020.905841] RAX: ffffffffffffffda RBX: 00007f1dc4cb1e50 RCX: 00007f1dcf43ec6b
    [ 1020.906212] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000019
    [ 1020.906564] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000000000ff
    [ 1020.906924] R10: 00007f1b6c052d80 R11: 0000000000000246 R12: 000055ff6c2babf0
    [ 1020.907258] R13: 00007f1dc4cb1ff0 R14: da3d1b18a8100500 R15: 00007f1dc4cb1e48
    [ 1020.907606] </TASK>
    [ 1020.907766] Call Trace:
    [ 1020.907900] <TASK>
    [ 1020.908007] x86_pmu_stop+0x50/0xb0
    [ 1020.908183] x86_pmu_del+0x73/0x190
    [ 1020.908363] event_sched_out.part.0+0x7a/0x1f0
    [ 1020.908591] group_sched_out.part.0+0x93/0xf0
    [ 1020.908804] ctx_sched_out+0x124/0x2a0
    [ 1020.908993] perf_event_context_sched_out+0x1a5/0x460
    [ 1020.909249] __perf_event_task_sched_out+0x50/0x170
    [ 1020.909497] ? pick_next_task+0x51/0x940
    [ 1020.909698] prepare_task_switch+0xbd/0x2a0
    [ 1020.909915] __schedule+0x1cb/0x620
    [ 1020.910093] schedule+0x5a/0xc0
    [ 1020.910247] xfer_to_guest_mode_handle_work+0xac/0xe0
    [ 1020.910487] vcpu_run+0x1f5/0x250 [kvm]
    [ 1020.910703] kvm_arch_vcpu_ioctl_run+0x104/0x620 [kvm]
    [ 1020.910991] kvm_vcpu_ioctl+0x271/0x670 [kvm]
    [ 1020.911230] __x64_sys_ioctl+0x8a/0xc0
    [ 1020.911413] do_syscall_64+0x5c/0x90

Expected results:
The instance starts and runs normally.

Additional info:
Libvirt instance XML:
    <domain type='kvm' id='1'>
      <name>crc</name>
      <uuid>deb99e6c-6949-44ab-8776-79940a88a23c</uuid>
      <memory unit='KiB'>9437184</memory>
      <currentMemory unit='KiB'>9437184</currentMemory>
      <memoryBacking>
        <source type='memfd'/>
        <access mode='shared'/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-q35-rhel9.0.0'>hvm</type>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
      <features>
        <acpi/>
        <apic/>
        <pae/>
      </features>
      <cpu mode='host-passthrough' check='none' migratable='on'>
        <feature policy='disable' name='rdrand'/>
      </cpu>
      <clock offset='utc'/>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>destroy</on_crash>
      <devices>
        <emulator>/usr/libexec/qemu-kvm</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/home/cloud-user/.crc/machines/crc/crc.qcow2' index='1'/>
          <backingStore type='file' index='2'>
            <format type='qcow2'/>
            <source file='/home/cloud-user/.crc/cache/crc_libvirt_4.11.13_amd64/crc.qcow2'/>
            <backingStore/>
          </backingStore>
          <target dev='vda' bus='virtio'/>
          <alias name='virtio-disk0'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </disk>
        <controller type='usb' index='0' model='qemu-xhci'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'>
          <alias name='pcie.0'/>
        </controller>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x10'/>
          <alias name='pci.1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x11'/>
          <alias name='pci.2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0x12'/>
          <alias name='pci.3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <alias name='pci.4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0x14'/>
          <alias name='pci.5'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0x15'/>
          <alias name='pci.6'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
        </controller>
        <filesystem type='mount' accessmode='passthrough'>
          <driver type='virtiofs'/>
          <binary path='/usr/libexec/virtiofsd'/>
          <source dir='/home/cloud-user'/>
          <target dir='dir0'/>
          <alias name='fs0'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </filesystem>
        <interface type='network'>
          <mac address='52:fd:fc:07:21:82'/>
          <source network='crc' portid='a6e692ab-3e8f-40fe-8f23-7ef86295de2b' bridge='crc'/>
          <target dev='vnet0'/>
          <model type='virtio'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </interface>
        <serial type='stdio'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
          <alias name='serial0'/>
        </serial>
        <console type='stdio'>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <input type='mouse' bus='ps2'>
          <alias name='input0'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input1'/>
        </input>
        <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
          <listen type='address' address='127.0.0.1'/>
        </graphics>
        <audio id='1' type='none'/>
        <video>
          <model type='cirrus' vram='16384' heads='1' primary='yes'/>
          <alias name='video0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>
        <memballoon model='none'/>
        <rng model='virtio'>
          <backend model='random'>/dev/urandom</backend>
          <alias name='rng0'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </rng>
      </devices>
      <seclabel type='dynamic' model='selinux' relabel='yes'>
        <label>system_u:system_r:svirt_t:s0:c284,c514</label>
        <imagelabel>system_u:object_r:svirt_image_t:s0:c284,c514</imagelabel>
      </seclabel>
      <seclabel type='dynamic' model='dac' relabel='yes'>
        <label>+107:+107</label>
        <imagelabel>+107:+107</imagelabel>
      </seclabel>
    </domain>

The whole dmesg log is in the attachment.

Comment 4 Dr. David Alan Gilbert 2022-12-14 20:23:33 UTC
I'd bet disabling the pmu would help
(But have a vague memory of a recent bug where -perf didn't do it on AMD, but can't find the reference to it)

Comment 5 dpawlik 2022-12-15 07:59:57 UTC
(In reply to Dr. David Alan Gilbert from comment #4)
> I'd bet disabling the pmu would help
> (But have a vague memory of a recent bug where -perf didn't do it on AMD,
> but can't find the reference to it)

Not really, but it raises a longer traceback.

virsh dumpxml crc > crc.xml
virsh shutdown crc    # note: "virsh stop" is not a virsh subcommand
virsh destroy crc     # force off in case shutdown does not complete
virsh undefine crc

# add <pmu state='off'/> to the <features> section in crc.xml (resulting block shown below)

virsh define crc.xml
virsh start crc
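
After that edit, the <features> block in crc.xml (compare the domain XML in the description) would look like:

    <features>
      <acpi/>
      <apic/>
      <pae/>
      <pmu state='off'/>
    </features>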


Current traceback:


[Thu Dec 15 02:58:34 2022] Code: 73 01 c3 48 8b 0d b5 b1 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 85 b1 1b 00 f7 d8 64 89 01 48
[Thu Dec 15 02:58:34 2022] RSP: 002b:00007f4e46ffc4a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[Thu Dec 15 02:58:34 2022] RAX: ffffffffffffffda RBX: 00007f4e477fee50 RCX: 00007f4e5163ec6b
[Thu Dec 15 02:58:34 2022] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000019
[Thu Dec 15 02:58:34 2022] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000000000ff
[Thu Dec 15 02:58:34 2022] R10: 00007f4be82b8790 R11: 0000000000000246 R12: 00005574ce88b870
[Thu Dec 15 02:58:34 2022] R13: 00007f4e477feff0 R14: 696833cf082bd200 R15: 00007f4e477fee48
[Thu Dec 15 02:58:34 2022]  </TASK>
[Thu Dec 15 02:58:34 2022] Call Trace:
[Thu Dec 15 02:58:34 2022]  <TASK>
[Thu Dec 15 02:58:34 2022]  x86_pmu_stop+0x50/0xb0
[Thu Dec 15 02:58:34 2022]  x86_pmu_del+0x73/0x190
[Thu Dec 15 02:58:34 2022]  event_sched_out.part.0+0x7a/0x1f0
[Thu Dec 15 02:58:34 2022]  group_sched_out.part.0+0x93/0xf0
[Thu Dec 15 02:58:34 2022]  __perf_event_disable+0xdc/0x1a0
[Thu Dec 15 02:58:34 2022]  event_function+0x91/0xe0
[Thu Dec 15 02:58:34 2022]  ? group_sched_out.part.0+0xf0/0xf0
[Thu Dec 15 02:58:34 2022]  remote_function+0x47/0x50
[Thu Dec 15 02:58:34 2022]  generic_exec_single+0x78/0xb0
[Thu Dec 15 02:58:34 2022]  smp_call_function_single+0xeb/0x130
[Thu Dec 15 02:58:34 2022]  ? sw_perf_event_destroy+0x60/0x60
[Thu Dec 15 02:58:34 2022]  ? __hrtimer_start_range_ns+0x215/0x300
[Thu Dec 15 02:58:34 2022]  event_function_call+0x9c/0x160
[Thu Dec 15 02:58:34 2022]  ? group_sched_out.part.0+0xf0/0xf0
[Thu Dec 15 02:58:34 2022]  ? perf_kprobe_event_init+0x90/0x90
[Thu Dec 15 02:58:34 2022]  perf_event_pause+0x58/0xa0
[Thu Dec 15 02:58:34 2022]  reprogram_counter+0x7b/0x320 [kvm]
[Thu Dec 15 02:58:34 2022]  amd_pmu_set_msr+0x106/0x170 [kvm_amd]
[Thu Dec 15 02:58:34 2022]  ? __svm_vcpu_run+0x67/0x110 [kvm_amd]
[Thu Dec 15 02:58:34 2022]  ? get_gp_pmc_amd+0x129/0x200 [kvm_amd]
[Thu Dec 15 02:58:34 2022]  __kvm_set_msr+0x7f/0x1c0 [kvm]
[Thu Dec 15 02:58:34 2022]  kvm_emulate_wrmsr+0x52/0x1b0 [kvm]
[Thu Dec 15 02:58:34 2022]  vcpu_enter_guest+0x667/0x1010 [kvm]
[Thu Dec 15 02:58:34 2022]  ? kvm_vcpu_kick+0x13/0xb0 [kvm]
[Thu Dec 15 02:58:34 2022]  ? __apic_accept_irq+0xe0/0x300 [kvm]
[Thu Dec 15 02:58:34 2022]  vcpu_run+0x33/0x250 [kvm]
[Thu Dec 15 02:58:34 2022]  kvm_arch_vcpu_ioctl_run+0x104/0x620 [kvm]
[Thu Dec 15 02:58:34 2022]  kvm_vcpu_ioctl+0x271/0x670 [kvm]
[Thu Dec 15 02:58:34 2022]  ? __seccomp_filter+0x45/0x470
[Thu Dec 15 02:58:34 2022]  ? security_file_ioctl+0x32/0x50
[Thu Dec 15 02:58:34 2022]  __x64_sys_ioctl+0x8a/0xc0
[Thu Dec 15 02:58:34 2022]  do_syscall_64+0x5c/0x90
[Thu Dec 15 02:58:34 2022]  ? security_file_ioctl+0x32/0x50
[Thu Dec 15 02:58:34 2022]  ? kvm_on_user_return+0x82/0x90 [kvm]
[Thu Dec 15 02:58:34 2022]  ? fire_user_return_notifiers+0x41/0x60
[Thu Dec 15 02:58:34 2022]  ? exit_to_user_mode_prepare+0xca/0x100
[Thu Dec 15 02:58:34 2022]  ? syscall_exit_to_user_mode+0x12/0x30
[Thu Dec 15 02:58:34 2022]  ? do_syscall_64+0x69/0x90
[Thu Dec 15 02:58:34 2022]  ? syscall_exit_work+0x11a/0x150
[Thu Dec 15 02:58:34 2022]  ? syscall_exit_to_user_mode+0x12/0x30
[Thu Dec 15 02:58:34 2022]  ? do_syscall_64+0x69/0x90
[Thu Dec 15 02:58:34 2022]  ? do_syscall_64+0x69/0x90
[Thu Dec 15 02:58:34 2022]  ? sysvec_apic_timer_interrupt+0x3c/0x90
[Thu Dec 15 02:58:34 2022]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
[Thu Dec 15 02:58:34 2022] RIP: 0033:0x7f4e5163ec6b
[Thu Dec 15 02:58:34 2022] Code: 73 01 c3 48 8b 0d b5 b1 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 85 b1 1b 00 f7 d8 64 89 01 48
[Thu Dec 15 02:58:34 2022] RSP: 002b:00007f4e46ffc4a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[Thu Dec 15 02:58:34 2022] RAX: ffffffffffffffda RBX: 00007f4e477fee50 RCX: 00007f4e5163ec6b
[Thu Dec 15 02:58:34 2022] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000019
[Thu Dec 15 02:58:34 2022] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000000000ff
[Thu Dec 15 02:58:34 2022] R10: 00007f4be82b8790 R11: 0000000000000246 R12: 00005574ce88b870
[Thu Dec 15 02:58:34 2022] R13: 00007f4e477feff0 R14: 696833cf082bd200 R15: 00007f4e477fee48
[Thu Dec 15 02:58:34 2022]  </TASK>

Comment 6 Dr. David Alan Gilbert 2022-12-15 15:50:26 UTC
(In reply to dpawlik from comment #5)
> (In reply to Dr. David Alan Gilbert from comment #4)
> > I'd bet disabling the pmu would help
> > (But have a vague memory of a recent bug where -perf didn't do it on AMD,
> > but can't find the reference to it)
> 
> Not really, but it raises a longer traceback.

Well that's the thing about the recent bug, which I think is:
https://marc.info/?l=kvm&m=166886061623174&w=2

if I read that right, then passing -perf might not work, and the guest still thinks it has counters.

But, please go back a step - I notice on re-reading this is nested;  I don't think I quite understand it though:
   L0 (comment 1) RHEL8.4 openstack
   L1 (comment 2) CentOS 9 stream (Qemu 7.1 !)
   L?? (comment 3) ???  That says L3 but surely you mean L2?

are those errors in the L1 dmesg?

That qemu 7.1 RPM in CS 9 is *very* untested at the moment.
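
Aside (a hypothetical check, not something run in this bug): the guest kernel logs which PMU driver it initialized at boot, so one rough way to see whether a guest still believes it has counters is to grep its boot log from inside the guest:

    dmesg | grep -i 'Performance Events'
    # prints e.g. "Performance Events: AMD PMU driver." when a PMU was detected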


Comment 8 dpawlik 2022-12-16 07:53:12 UTC
(In reply to Dr. David Alan Gilbert from comment #6)
> But, please go back a step - I notice on re-reading this is nested;  I don't
> think I quite understand it though:
>    L0 (comment 1) RHEL8.4 openstack
>    L1 (comment 2) CentOS 9 stream (Qemu 7.1 !)
>    L?? (comment 3) ???  That says L3 but surely you mean L2?

Sorry for my mistake - yes, it should be L2, the CRC instance.

> are those errors in the L1 dmesg?

Yes. I don't have access to L0. The details described earlier were from L1.

What is worth mentioning about L1:
- On CentOS 8 Stream without nested virtualization enabled => WORKS
- On CentOS 8 Stream with nested virtualization enabled (options kvm_amd nested=1 in /etc/modprobe.d/kvm.conf + reboot of the VM) => DOES NOT WORK
- On CentOS 9 Stream without nested virtualization enabled => DOES NOT WORK
- On CentOS 9 Stream with nested virtualization enabled (same modprobe option + reboot; sketched below) => DOES NOT WORK
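
For completeness, the nested-virtualization toggle used above plus a quick verification (a sketch; the .conf file name under /etc/modprobe.d/ is arbitrary):

    echo 'options kvm_amd nested=1' | sudo tee /etc/modprobe.d/kvm.conf
    sudo reboot
    # after reboot, "1" (or "Y" on older kernels) means nested SVM is enabled:
    cat /sys/module/kvm_amd/parameters/nested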

Now I see that the logs from comment #5 are not correct, because nested virtualization was enabled there (I rebuilt the host, but it seems that did not take effect).

"Fresh" logs from L1, without nested virtualization enabled and with the PMU disabled, look like this:

[Fri Dec 16 02:42:09 2022] RIP: 0033:0x7efeaee3ec6b
[Fri Dec 16 02:42:09 2022] Code: 73 01 c3 48 8b 0d b5 b1 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 f
f ff 73 01 c3 48 8b 0d 85 b1 1b 00 f7 d8 64 89 01 48
[Fri Dec 16 02:42:09 2022] RSP: 002b:00007efc5d7f84a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[Fri Dec 16 02:42:09 2022] RAX: ffffffffffffffda RBX: 00007efc5dffae50 RCX: 00007efeaee3ec6b
[Fri Dec 16 02:42:09 2022] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 000000000000001c
[Fri Dec 16 02:42:09 2022] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000000000ff
[Fri Dec 16 02:42:09 2022] R10: 00007efc5068d830 R11: 0000000000000246 R12: 00005606d0903610
[Fri Dec 16 02:42:09 2022] R13: 00007efc5dffaff0 R14: edb821d8ed042100 R15: 00007efc5dffae48
[Fri Dec 16 02:42:09 2022]  </TASK>
[Fri Dec 16 02:42:09 2022] Call Trace:
[Fri Dec 16 02:42:09 2022]  <TASK>
[Fri Dec 16 02:42:09 2022]  x86_pmu_stop+0x50/0xb0
[Fri Dec 16 02:42:09 2022]  x86_pmu_del+0x73/0x190
[Fri Dec 16 02:42:09 2022]  event_sched_out.part.0+0x7a/0x1f0
[Fri Dec 16 02:42:09 2022]  group_sched_out.part.0+0x93/0xf0
[Fri Dec 16 02:42:09 2022]  ctx_sched_out+0x124/0x2a0
[Fri Dec 16 02:42:09 2022]  perf_event_context_sched_out+0x1a5/0x460
[Fri Dec 16 02:42:09 2022]  __perf_event_task_sched_out+0x50/0x170
[Fri Dec 16 02:42:09 2022]  ? pick_next_task+0x51/0x940
[Fri Dec 16 02:42:09 2022]  prepare_task_switch+0xbd/0x2a0
[Fri Dec 16 02:42:09 2022]  __schedule+0x1cb/0x620
[Fri Dec 16 02:42:09 2022]  ? kvm_complete_insn_gp+0x60/0x80 [kvm]
[Fri Dec 16 02:42:09 2022]  schedule+0x5a/0xc0
[Fri Dec 16 02:42:09 2022]  xfer_to_guest_mode_handle_work+0xac/0xe0
[Fri Dec 16 02:42:09 2022]  vcpu_run+0x1f5/0x250 [kvm]
[Fri Dec 16 02:42:09 2022]  kvm_arch_vcpu_ioctl_run+0x104/0x620 [kvm]
[Fri Dec 16 02:42:09 2022]  kvm_vcpu_ioctl+0x271/0x670 [kvm]
[Fri Dec 16 02:42:09 2022]  ? __seccomp_filter+0x45/0x470
[Fri Dec 16 02:42:09 2022]  ? security_file_ioctl+0x32/0x50
[Fri Dec 16 02:42:09 2022]  __x64_sys_ioctl+0x8a/0xc0
[Fri Dec 16 02:42:09 2022]  do_syscall_64+0x5c/0x90
[Fri Dec 16 02:42:09 2022]  ? exit_to_user_mode_prepare+0xca/0x100
[Fri Dec 16 02:42:09 2022]  ? syscall_exit_to_user_mode+0x12/0x30
[Fri Dec 16 02:42:09 2022]  ? do_syscall_64+0x69/0x90
[Fri Dec 16 02:42:09 2022]  ? syscall_exit_to_user_mode+0x12/0x30
[Fri Dec 16 02:42:09 2022]  ? do_syscall_64+0x69/0x90
[Fri Dec 16 02:42:09 2022]  ? syscall_exit_work+0x11a/0x150
[Fri Dec 16 02:42:09 2022]  ? syscall_exit_to_user_mode+0x12/0x30
[Fri Dec 16 02:42:09 2022]  ? do_syscall_64+0x69/0x90
[Fri Dec 16 02:42:09 2022]  ? do_syscall_64+0x69/0x90
[Fri Dec 16 02:42:09 2022]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
[Fri Dec 16 02:42:09 2022] RIP: 0033:0x7efeaee3ec6b
[Fri Dec 16 02:42:09 2022] Code: 73 01 c3 48 8b 0d b5 b1 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 f
f ff 73 01 c3 48 8b 0d 85 b1 1b 00 f7 d8 64 89 01 48
[Fri Dec 16 02:42:09 2022] RSP: 002b:00007efc5d7f84a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[Fri Dec 16 02:42:09 2022] RAX: ffffffffffffffda RBX: 00007efc5dffae50 RCX: 00007efeaee3ec6b
[Fri Dec 16 02:42:09 2022] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 000000000000001c
[Fri Dec 16 02:42:09 2022] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000000000ff
[Fri Dec 16 02:42:09 2022] R10: 00007efc5068d830 R11: 0000000000000246 R12: 00005606d0903610
[Fri Dec 16 02:42:09 2022] R13: 00007efc5dffaff0 R14: edb821d8ed042100 R15: 00007efc5dffae48
[Fri Dec 16 02:42:09 2022]  </TASK>
[Fri Dec 16 02:42:09 2022] Call Trace:
[Fri Dec 16 02:42:09 2022]  <TASK>
[Fri Dec 16 02:42:09 2022]  amd_pmu_enable_all+0x44/0x60
[Fri Dec 16 02:42:09 2022]  __perf_event_task_sched_in+0x24e/0x290
[Fri Dec 16 02:42:09 2022]  ? __perf_event_task_sched_out+0x60/0x170
[Fri Dec 16 02:42:09 2022]  finish_task_switch.isra.0+0x1fa/0x2a0
[Fri Dec 16 02:42:09 2022]  __schedule+0x250/0x620
[Fri Dec 16 02:42:09 2022]  ? kvm_complete_insn_gp+0x60/0x80 [kvm]
[Fri Dec 16 02:42:09 2022]  schedule+0x5a/0xc0
[Fri Dec 16 02:42:09 2022]  xfer_to_guest_mode_handle_work+0xac/0xe0
[Fri Dec 16 02:42:09 2022]  vcpu_run+0x1f5/0x250 [kvm]
[Fri Dec 16 02:42:09 2022]  kvm_arch_vcpu_ioctl_run+0x104/0x620 [kvm]
[Fri Dec 16 02:42:09 2022]  kvm_vcpu_ioctl+0x271/0x670 [kvm]
[Fri Dec 16 02:42:09 2022]  ? __seccomp_filter+0x45/0x470
[Fri Dec 16 02:42:09 2022]  ? security_file_ioctl+0x32/0x50
[Fri Dec 16 02:42:09 2022]  __x64_sys_ioctl+0x8a/0xc0
[Fri Dec 16 02:42:09 2022]  do_syscall_64+0x5c/0x90
[Fri Dec 16 02:42:09 2022]  ? exit_to_user_mode_prepare+0xca/0x100
[Fri Dec 16 02:42:09 2022]  ? syscall_exit_to_user_mode+0x12/0x30
[Fri Dec 16 02:42:09 2022]  ? do_syscall_64+0x69/0x90
[Fri Dec 16 02:42:09 2022]  ? syscall_exit_to_user_mode+0x12/0x30
[Fri Dec 16 02:42:09 2022]  ? do_syscall_64+0x69/0x90
[Fri Dec 16 02:42:09 2022]  ? syscall_exit_work+0x11a/0x150
[Fri Dec 16 02:42:09 2022]  ? syscall_exit_to_user_mode+0x12/0x30
[Fri Dec 16 02:42:09 2022]  ? do_syscall_64+0x69/0x90
[Fri Dec 16 02:42:09 2022]  ? do_syscall_64+0x69/0x90
[Fri Dec 16 02:42:09 2022]  entry_SYSCALL_64_after_hwframe+0x63/0xcd



Details about L2 (crc instance):

Red Hat Enterprise Linux CoreOS 411.86.202211291109-0

[core@crc-pbwlw-master-0 ~]$ rpm -qa | grep -Ei 'qemu|libvirt|kernel|release' | sort
kernel-4.18.0-372.32.1.el8_6.x86_64
kernel-core-4.18.0-372.32.1.el8_6.x86_64
kernel-modules-4.18.0-372.32.1.el8_6.x86_64
kernel-modules-extra-4.18.0-372.32.1.el8_6.x86_64
qemu-guest-agent-6.2.0-11.module+el8.6.0+16538+01ea313d.6.x86_64
redhat-release-8.6-0.1.el8.x86_64
redhat-release-eula-8.6-0.1.el8.x86_64

uname -a
Linux crc-pbwlw-master-0 4.18.0-372.32.1.el8_6.x86_64 #1 SMP Fri Oct 7 12:35:10 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux

dmesg:
added as an attachment

Comment 15 Qinghua Cheng 2023-03-24 06:49:05 UTC
I cannot reproduce this issue on a RHEL 8.4 host:

Host:
RHEL 8.4
kernel: 4.18.0-305.25.1.el8_4.x86_64
qemu-kvm: qemu-kvm-5.2.0-16.module+el8.4.0+16683+63f410ce.18.x86_64
libvirt: libvirt-daemon-kvm-7.0.0-14.8.module+el8.4.0+15255+f7eff4dd.x86_64

CPU info:
Model name: AMD EPYC 7502 32-Core Processor

L1 guest: (CPU: host-passthrough)
CentOS-Stream 9 
kernel: 5.14.0-283.el9.x86_64
qemu-kvm: qemu-kvm-7.2.0-11.el9.x86_64

L2 guest: same as L1

There is no traceback when starting the L2 guest.

Comment 16 dpawlik 2023-03-24 10:50:03 UTC
Hi,
this makes the bug more interesting.


Some of our VMs for CI testing run at our cloud provider, where we cannot ask them to change the L0 system to a different one.
According to a comment in the CRC project issue [1], it also does not work on other cloud providers' VMs, which can be a problem for
other future CRC users.


I made a table of the test cases:

+----------------+------------------------------+-------------------------------------------------+--------------------------------------------------+-----------------+--------------------------+-----------------+-------------------------+------------------------+--------------------+-----------------+---------------+
|   L0 system    |      L0 Kernel version       |               L0 Libvirt version                |                 L0 Qemu version                  |  L0 CPU Model   | L0 libvirt CPU model set |    L1 system    | L1 Nested virt enabled? |   L1 Kernel version    | L1 Libvirt version | L1 Qemu version | Does it work? |
+----------------+------------------------------+-------------------------------------------------+--------------------------------------------------+-----------------+--------------------------+-----------------+-------------------------+------------------------+--------------------+-----------------+---------------+
| Ubuntu 20.04.3 | 5.13.0-30-generic            | 6.0.0-0ubuntu8.15                               | 1:4.2-3ubuntu6.19                                | AMD EPYC 7402   | host-passthrough         | Centos 8 Stream | yes                     | 4.18.0-448.el8.x86_64  | 8.0.0-14           | 6.2.0-28        | yes           |
| Ubuntu 20.04.3 | 5.13.0-30-generic            | 6.0.0-0ubuntu8.15                               | 1:4.2-3ubuntu6.19                                | AMD EPYC 7402   | host-passthrough         | Centos 8 Stream | no                      | 4.18.0-448.el8.x86_64  | 8.0.0-14           | 6.2.0-28        | yes           |
| Ubuntu 20.04.3 | 5.13.0-30-generic            | 6.0.0-0ubuntu8.15                               | 1:4.2-3ubuntu6.19                                | AMD EPYC 7402   | host-passthrough         | Centos 9 Stream | yes                     | 5.14.0-289.el9.x86_64  | 9.0.0-7.el9.x86_64 | 7.2.0-11.el9    | no            |
| Ubuntu 20.04.3 | 5.13.0-30-generic            | 6.0.0-0ubuntu8.15                               | 1:4.2-3ubuntu6.19                                | AMD EPYC 7402   | host-passthrough         | Centos 9 Stream | no                      | 5.14.0-289.el9.x86_64  | 9.0.0-7.el9.x86_64 | 7.2.0-11.el9    | no            |
| Ubuntu 20.04.3 | 5.13.0-30-generic            | 6.0.0-0ubuntu8.15                               | 1:4.2-3ubuntu6.19                                | AMD EPYC 7402   | host-passthrough         | Centos 9 Stream | yes                     | 6.2.7-1.el9.elrepo     | 9.0.0-7.el9.x86_64 | 7.2.0-11.el9    | yes           |
| Ubuntu 20.04.3 | 5.13.0-30-generic            | 6.0.0-0ubuntu8.15                               | 1:4.2-3ubuntu6.19                                | AMD EPYC 7402   | host-passthrough         | Centos 9 Stream | no                      | 6.2.7-1.el9.elrepo     | 9.0.0-7.el9.x86_64 | 7.2.0-11.el9    | yes           |
| RHEL 8.4       | 4.18.0-305.25.1.el8_4.x86_64 | 7.0.0-14.8.module+el8.4.0+15255+f7eff4dd.x86_64 | 5.2.0-16.module+el8.4.0+16683+63f410ce.18.x86_64 | AMD EPYC 7502   | host-passthrough         | Centos 9 Stream | ?                       | 5.14.0-283.el9.x86_64  | ?                  | 7.2.0-11.el9    | yes           |
+----------------+------------------------------+-------------------------------------------------+--------------------------------------------------+-----------------+--------------------------+-----------------+-------------------------+------------------------+--------------------+-----------------+---------------+

Dan

[1] https://github.com/crc-org/crc/issues/3446#issuecomment-1342923882

Comment 37 RHEL Program Management 2023-09-22 13:38:56 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 38 RHEL Program Management 2023-09-22 13:40:30 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.

