Bug 1977580 - [svvp] job "Hardware Security Testability Interface Test" failed on Win2022
Summary: [svvp] job "Hardware Security Testability Interface Test" failed on Win2022
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: unspecified
Hardware: x86_64
OS: Windows
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Marek Kedzierski
QA Contact: dehanmeng
URL:
Whiteboard:
Depends On:
Blocks: 1968315 2057757
 
Reported: 2021-06-30 06:33 UTC by menli@redhat.com
Modified: 2023-02-01 08:39 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-12 07:41:42 UTC
Type: ---
Target Upstream Version:
Embargoed:


Attachments: none

Description menli@redhat.com 2021-06-30 06:33:55 UTC
Description of problem:
The SVVP HLK job "Hardware Security Testability Interface Test" fails on a Windows Server 2022 guest (see summary).

Version-Release number of selected component (if applicable):
qemu-kvm-6.0.0-18.module+el8.5.0+11243+5269aaa1.x86_64
seabios-1.14.0-1.module+el8.4.0+8855+a9e237a9.x86_64
kernel-4.18.0-305.7.el8.kpq1.x86_64
virtio-win-1.9.16-2.el8.iso

How reproducible:
100%

Steps to Reproduce:
1. Boot up a Windows Server 2022 guest:

/usr/libexec/qemu-kvm -name SUTINT850ATEST1 -machine q35,kernel-irqchip=split -cpu Broadwell,hv_stimer,hv_synic,hv_time,hv_vpindex,hv_relaxed,hv_spinlocks=0xfff,hv_vapic,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,hv-vendor-id=KVMtest -enable-kvm -nodefaults -m 16G -smp 25,cores=25 -k en-us -boot menu=on -uuid 8a342302-d6b3-417e-bdeb-92267e897eaf -device piix3-usb-uhci,id=usb -device usb-tablet,id=tablet0 -rtc base=localtime,clock=host,driftfix=slew -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x3 -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x3.0x1 -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x3.0x2 -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x3.0x3 -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x3.0x4 -netdev tap,script=/etc/qemu-ifup,id=hostnet1,vhost=on -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:52:20:22:a9:a9,mq=on,bus=pci.3,iommu_platform=on,ats=on,disable-legacy=on,disable-modern=off -blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=SUTINT850ATEST1,node-name=system_file -blockdev driver=qcow2,node-name=drive_system_disk,file=system_file -object iothread,id=thread0 -device virtio-blk-pci,scsi=off,iothread=thread0,drive=drive_system_disk,id=virtio-disk0,bootindex=1,bus=pci.4,disable-legacy=on,disable-modern=off,iommu_platform=on,ats=on -device usb-ehci,id=ehci0,bus=pci.5 -drive id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/ISO/Win2022/Windows_InsiderPreview_Server_vNext_en-us_20344.iso -device ide-cd,id=cd1,drive=drive_cd1,bus=ide.0,unit=0 -drive id=drive_cd3,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=SUTINT850ATEST1.iso -device ide-cd,id=cd3,drive=drive_cd3,bus=ide.1,unit=0 -cdrom /home/kvm_autotest_root/iso/windows/virtio-win-latest-signed-el8.iso -vnc :0 -vga std -monitor stdio -drive file=usb-disk-UKm.raw,if=none,id=drive-usb-2-0,media=disk,format=raw,cache=none,werror=stop,rerror=stop,aio=threads -device usb-storage,bus=ehci0.0,drive=drive-usb-2-0,id=usb-2-0,removable=on

2. Submit the HLK SVVP job "Hardware Security Testability Interface Test".

Actual results:
job failed:
"Hardware Security Testability Interface Test" job error:
[HRESULT: 0x8007007E] A failure occurred while preparing to run tests in 'HSTILogoTest.dll'. (Failed to load "C:\HLK\JobsWorkingDir\Tasks\WTTJobRun360C0BF6-EAC8-EB11-9954-0052253A9E00\HSTILogoTest.dll" or one of its dependencies. Try running TE.exe with the /reportLoadingIssue switch to get more details. If that doesn't help, use gflags.exe to enable loader snaps for TE.ProcessHost.exe (gflags -i TE.ProcessHost.exe +sls). Then run your tests under a debugger so you can view the loader snaps output while TAEF loads your test DLL.)

Expected results:
The job passes.

Additional info:

Comment 1 John Ferlan 2021-07-08 12:48:17 UTC
Assigned to Meirav for initial triage per the BZ process, given the age of the bug (created or assigned to virt-maint without triage).

Comment 2 Marek Kedzierski 2021-07-08 13:46:27 UTC
Secure Boot should be enabled before starting the test.

However, even with Secure Boot enabled, the test doesn't pass.
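(For reference: the reproducer command above boots with SeaBIOS, so Secure Boot is unavailable. A minimal sketch of enabling it with the OVMF Secure Boot firmware build follows; the firmware paths are the ones shipped by RHEL's edk2-ovmf package and are an assumption here, not taken from this bug.)

```shell
# Hypothetical sketch: boot the guest with the Secure Boot build of OVMF.
# Paths assume RHEL's edk2-ovmf package; adjust for your host.
cp /usr/share/edk2/ovmf/OVMF_VARS.secboot.fd /var/lib/libvirt/qemu/nvram/win2022_VARS.fd

/usr/libexec/qemu-kvm \
    -machine q35,smm=on,kernel-irqchip=split \
    -global driver=cfi.pflash01,property=secure,value=on \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd \
    -drive if=pflash,format=raw,file=/var/lib/libvirt/qemu/nvram/win2022_VARS.fd
# ...plus the remaining guest options from the reproducer command above
```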

The analysis:

HLK calls the function GetHstiBlob, which calls NtQuerySystemInformation in the following way:

  ULONG hstiBlobSize = 0;
  NTSTATUS status = NtQuerySystemInformation(0xA6,
        NULL,
        0,
        &hstiBlobSize);

  NtQuerySystemInformation is called with the undocumented information class 0xA6, described as
  'SystemHardwareSecurityTestInterfaceResultsInformation'.

  For class 0xA6, NtQuerySystemInformation calls the undocumented kernel function SeQueryHSTIResults,
  which returns status 0xC0000004 (STATUS_INFO_LENGTH_MISMATCH). This causes the test failure.

  Interestingly, the call fails but hstiBlobSize is still set correctly.

  So the test could be fixed by Microsoft in the following way:

  ULONG hstiBlobSize = 0;
  NTSTATUS status = NtQuerySystemInformation(0xA6,
         NULL,
         0,
         &hstiBlobSize);
    
    // Ignore the status (0xC0000004) and, if hstiBlobSize is greater
    // than zero, execute NtQuerySystemInformation again to obtain
    // the blob.

    if (hstiBlobSize != 0)
    {
        BYTE* hstiBuffer = new BYTE[hstiBlobSize];
        
        status = NtQuerySystemInformation(0xA6,
            hstiBuffer,
            hstiBlobSize,
            NULL);

        if (NT_SUCCESS(status))
        {
            printf("HSTI blob obtained! \n");
        }
        else
        {
            printf("HSTI blob not obtained, error [0x%x] \n", status); 
        }
  <skipped the rest of the code>

There are, of course, other methods for fixing the test.
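(Editor's note: the probe-then-allocate pattern described above — ignore STATUS_INFO_LENGTH_MISMATCH on the sizing call, then call again with a buffer of the reported size — can be sketched language-agnostically. The query function below is a hypothetical mock standing in for NtQuerySystemInformation, not the real API.)

```python
# Hypothetical sketch of the probe-then-allocate pattern; the mock mimics
# the observed behavior: the sizing probe fails with
# STATUS_INFO_LENGTH_MISMATCH but still reports the required size.
STATUS_SUCCESS = 0x00000000
STATUS_INFO_LENGTH_MISMATCH = 0xC0000004

HSTI_BLOB = b"\x01\x02\x03\x04"  # pretend HSTI results blob

def query_system_information(info_class, buffer, length):
    """Mock stand-in for NtQuerySystemInformation(0xA6, ...)."""
    if buffer is None or length < len(HSTI_BLOB):
        return STATUS_INFO_LENGTH_MISMATCH, len(HSTI_BLOB), None
    return STATUS_SUCCESS, len(HSTI_BLOB), HSTI_BLOB

def get_hsti_blob():
    # First call: probe for the required size. The returned status is
    # expected to be STATUS_INFO_LENGTH_MISMATCH and is deliberately ignored.
    _status, size, _ = query_system_information(0xA6, None, 0)
    if size == 0:
        return None
    # Second call: supply a buffer of the reported size to obtain the blob.
    status, _, blob = query_system_information(0xA6, bytearray(size), size)
    return blob if status == STATUS_SUCCESS else None

print(get_hsti_blob())
```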

Comment 3 John Ferlan 2021-07-08 15:12:48 UTC
Setting Triaged flag and leaving on the backlog (assigned to virt-maint) as it appears some level of triage has been done.

Unclear from Marek's comment whether a retest with Secure Boot enabled is desired

Comment 7 xiagao 2021-09-03 07:43:33 UTC
Still hitting this issue on the official Windows Server 2022 guest.

Guest version: Windows Server 2022 Datacenter (10.0.20344 Build 20344)
virtio-win-1.9.18-2.el8.iso

Comment 8 John Ferlan 2021-09-09 15:08:36 UTC
Bulk update: Move RHEL-AV bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

Comment 15 dehanmeng 2022-01-25 01:29:00 UTC
Passed this case after running it with Marek's OVMF.fd files. Updating the result here for tracking. Thanks to Marek for his time and effort, and thanks to all.

