Bug 2174351 - Failed to get win2022 guest sockets number with wmic command
Summary: Failed to get win2022 guest sockets number with wmic command
Keywords:
Status: CLOSED DUPLICATE of bug 2169904
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virtio-win
Version: 9.2
Hardware: x86_64
OS: Windows
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Yvugenfi@redhat.com
QA Contact: Yiqian Wei
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-03-01 08:55 UTC by Yiqian Wei
Modified: 2023-07-27 07:02 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-06-28 05:30:00 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+




Links:
Red Hat Issue Tracker RHELPLAN-150258 (Private: 0, Priority: None, Status: None, Summary: None, Last Updated: 2023-03-01 08:56:24 UTC)

Description Yiqian Wei 2023-03-01 08:55:00 UTC
Description of problem:
Failed to get win2022 guest sockets number with wmic command

Version-Release number of selected component (if applicable):
host version:
kernel-5.14.0-282.el9.x86_64
qemu-kvm-7.2.0-10.el9.x86_64
edk2-ovmf-20221207gitfff6d81270b5-7.el9.noarch
guest: win2022

How reproducible:
2/2

Steps to Reproduce:
1. Boot a guest with "-smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2" (a monitor-side cross-check follows the command):
/usr/libexec/qemu-kvm \
     -name 'avocado-vt-vm1'  \
     -sandbox on  \
     -blockdev node-name=file_ovmf_code,driver=file,filename=/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd,auto-read-only=on,discard=unmap \
     -blockdev node-name=drive_ovmf_code,driver=raw,read-only=on,file=file_ovmf_code \
     -blockdev node-name=file_ovmf_vars,driver=file,filename=/home/OVMF_VARS.fd,auto-read-only=on,discard=unmap \
     -blockdev node-name=drive_ovmf_vars,driver=raw,read-only=off,file=file_ovmf_vars \
     -machine q35,memory-backend=mem-machine_mem,pflash0=drive_ovmf_code,pflash1=drive_ovmf_vars \
     -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
     -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
     -nodefaults \
     -device VGA,bus=pcie.0,addr=0x2 \
     -m 14336 \
     -object '{"qom-type": "memory-backend-ram", "size": 15032385536, "id": "mem-machine_mem"}'  \
      -smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2  \
     -cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
     -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
     -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
     -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
     -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
     -device '{"id": "virtio_scsi_pci0", "driver": "virtio-scsi-pci", "bus": "pcie-root-port-2", "addr": "0x0", "aer": true, "ats": true}' \
     -blockdev '{"node-name": "file_image1", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": "/home/win2022-64-virtio-scsi.qcow2", "cache": {"direct": true, "no-flush": false}}' \
     -blockdev '{"node-name": "drive_image1", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_image1"}' \
     -device '{"driver": "scsi-hd", "id": "image1", "drive": "drive_image1", "write-cache": "on", "serial": "DGAvVxWslZ6P49"}' \
     -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
     -device virtio-net-pci,mac=9a:43:14:31:bf:07,id=idt3sh7F,netdev=idftIBZi,bus=pcie-root-port-3,addr=0x0,aer=on,ats=on   \
     -netdev tap,id=idftIBZi,vhost=on  \
     -vnc :1  \
     -rtc base=localtime,clock=host,driftfix=slew  \
     -boot menu=off,order=cdn,once=c,strict=off \
     -enable-kvm \
     -monitor stdio
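
The topology QEMU actually exposes can be cross-checked from the host side before entering the guest, using the HMP monitor opened by "-monitor stdio". A minimal sketch (the command is standard HMP; the note below describes the expected shape, not verbatim output):

(qemu) info hotpluggable-cpus

Each listed vCPU entry carries socket-id/core-id/thread-id properties; with sockets=2, the entries should split across two distinct socket-id values.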

2. In the guest, check the CPU core, thread, and socket counts with wmic (a cross-check sketch follows these three queries):
1) CPU cores number
-> wmic cpu get NumberOfCores | more +1
8
8

2) CPU logical processors number
-> wmic cpu get NumberOfLogicalProcessors | more +1
8
8

3) CPU sockets number
-> wmic cpu get SocketDesignation | find /c "CPU"
0
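
The socket count can also be cross-checked without string matching. A minimal sketch, assuming PowerShell is available in the guest (WMI reports one Win32_Processor instance per physical socket):

-> powershell -Command "(Get-CimInstance Win32_Processor | Measure-Object).Count"

A result of 2 here, alongside the zero count above, would point at the SocketDesignation string no longer containing "CPU" rather than at a missing socket.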


Actual results:
After step 2, the CPU sockets number is 0.

Expected results:
After step 2, the CPU sockets number is 2 (matching sockets=2 in the -smp option).

Additional info:
1) win2019 and win11 guests hit the same issue.
2) with "qemu-kvm-7.1.0-7.el9.x86_64", the issue is not hit (a diagnostic sketch for narrowing this down follows below).
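
Printing the raw values, rather than counting lines that contain "CPU", helps tell an empty field apart from a renamed designation; a small diagnostic sketch for the guest, reusing the document's own wmic style:

-> wmic cpu get SocketDesignation | more +1

An empty or reformatted value here on qemu-kvm-7.2.0 would explain why the find /c "CPU" filter in step 2 returns 0.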

Comment 1 Akihiko Odaki 2023-06-22 09:21:56 UTC
I believe this is a duplicate of bug 742915. Please check if advisory RHSA-2023:2162 solves the problem:
https://access.redhat.com/errata/RHSA-2023:2162
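
Assuming the advisory ships an updated virtio-win package (as the bug's component suggests), the installed build can be confirmed on the RHEL 9 host before retesting:

# Show the installed virtio-win build; compare it against the packages
# listed in RHSA-2023:2162 before re-running the reproducer.
rpm -q virtio-win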

Comment 2 Akihiko Odaki 2023-06-22 09:22:41 UTC
(In reply to Akihiko Odaki from comment #1)
> I believe this is a duplicate of bug 742915. Please check if advisory
> RHSA-2023:2162 solves the problem:
> https://access.redhat.com/errata/RHSA-2023:2162

Sorry, I meant bug 2169904.

