Bug 1749250

Summary: [RFE] Add EDID support to OVMF
Product: Red Hat Enterprise Linux 9
Component: edk2
Version: CentOS Stream
Status: CLOSED CURRENTRELEASE
Reporter: Gerd Hoffmann <kraxel>
Assignee: Gerd Hoffmann <kraxel>
QA Contact: Xueqiang Wei <xuwei>
CC: berrange, coli, jinzhao, kraxel, lersek, pbonzini, ppolawsk, virt-maint, yduan, yihyu, zhguo
Severity: unspecified
Priority: low
Target Milestone: rc
Keywords: FutureFeature, Reopened, TestOnly, Triaged
Hardware: Unspecified
OS: Unspecified
Type: Feature Request
Last Closed: 2022-11-01 07:28:49 UTC
Bug Depends On: 2074831

Description Gerd Hoffmann 2019-09-05 08:36:43 UTC
QEMU got support for EDID; it would be useful to have EDID support in edk2 too.

It seems there is an EfiEdidDiscoveredProtocol that allows EFI applications to access EDID information.
See MdePkg/Include/Protocol/EdidDiscovered.h
Also https://blog.fpmurphy.com/2012/09/accessing-edid-information-from-uefi-shell.html and https://github.com/fpmurphy/UEFI-Utilities/tree/master/showedid

The protocol is a small struct holding only a size and a pointer to the EDID blob, and it gets attached to the GOP.  Actually parsing the blob is not required, though some minimal parsing to read the preferred resolution might be useful.
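For reference, the protocol declared in MdePkg/Include/Protocol/EdidDiscovered.h really is just that size/pointer pair; the sketch below shows its shape in plain C, with stand-in typedefs for the edk2 base types:

```c
#include <stdint.h>

/* Stand-ins for the edk2 base types from MdePkg. */
typedef uint32_t UINT32;
typedef uint8_t  UINT8;

/*
 * Shape of EFI_EDID_DISCOVERED_PROTOCOL: the size of the EDID blob in
 * bytes (0 when the display provides no EDID) and a pointer to the blob.
 * EFI_EDID_ACTIVE_PROTOCOL carries the same two fields.
 */
typedef struct {
  UINT32 SizeOfEdid;
  UINT8  *Edid;
} EFI_EDID_DISCOVERED_PROTOCOL;
```

A consumer like the UEFI shell simply opens this protocol on the GOP child handle and dumps `SizeOfEdid` bytes starting at `Edid`.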

Comment 1 Gerd Hoffmann 2019-09-05 08:41:49 UTC
For reference the respective seabios commits:

https://git.seabios.org/cgit/seabios.git/commit/?id=a307d0adc50f41787424a9453145293248a1908c
infrastructure: add global variable for the edid blob, add vgabios support for edid/ddc queries.

https://git.seabios.org/cgit/seabios.git/commit/?id=083801db10b08d6d4e569d84f99a1398044eb483
EDID support for "qemu -device VGA"

https://git.seabios.org/cgit/seabios.git/commit/?id=2f87fe393cd48327c4c0d5614bdef8f00d63012a
EDID support for "qemu -device bochs-display"

https://git.seabios.org/cgit/seabios.git/commit/?id=a3fd63c2342bd84a5487e280e2899250ec440d98
Minimal EDID parser (read preferred resolution) for "qemu -device bochs-display"

Comment 2 Gerd Hoffmann 2019-09-05 08:45:54 UTC
Note: virtio-gpu supports edid too (there is a virtio command to ask for it).
I'm not sure how useful EDID support actually would be for virtio-gpu given
that the device can't support GOP anyway.  Maybe it is even impossible.

Comment 3 Laszlo Ersek 2019-09-05 14:59:37 UTC
Hi Gerd,

the UEFI spec describes EFI_EDID_DISCOVERED_PROTOCOL and
EFI_EDID_ACTIVE_PROTOCOL.

An instance of each of those protocols can be installed on such a child
handle that carries EFI_GRAPHICS_OUTPUT_PROTOCOL, and does not stand for
a combination of video outputs (heads).

To explain a bit more... Graphics drivers in UEFI are "bus drivers".
Which means that they bind a controller handle (usually a handle with a
PciIo protocol instance on it), and then produce a number of child
handles.

The device path protocol instance on each child handle is built from the
original controller handle's device path protocol instance, PLUS a
special ACPI_DEVICE_PATH/ACPI_ADR_DP node appended. The appended devpath
node describes a kind of "logical" video display -- a "combination of
video output devices" (or put differently, a "topology" of heads).

In the basic case, the appended devpath node simply stands for the
single physical head that is connected to the controller (usually a PCI
controller).

In a more complex example, the driver could create multiple child
handles:
- a child handle where the ACPI _ADR node at the end of the device path
  means physical head#0,
- a child handle where the ACPI _ADR node at the end of the device path
  means physical head#1,
- and further child handles where the ACPI _ADR node at the end of the
  device path describes some kind of multi-head arrangement (see "Table
  B-2: Video Output Device Attributes" in the ACPI 3 spec).

The UEFI graphics driver is supposed to install an
EFI_GRAPHICS_OUTPUT_PROTOCOL instance on each one of these child
handles. And, if we're talking EDID, EFI_EDID_DISCOVERED_PROTOCOL and
EFI_EDID_ACTIVE_PROTOCOL instances would be installed on each such child
handle *assuming* the child handle stands directly for a physical head,
and not for a combination.

In practice, both QemuVideoDxe and VirtioGpuDxe create exactly one child
handle -- standing for a single physical head -- for the controller
handle. And both drivers install a GOP instance on that child handle.
Therefore, when considering EDID, those two EDID-related protocols would
be installed on the same child handles.

This in fact doesn't look very complex -- whenever the driver binding
Start() function installs the GOP on the child handle, it could fetch
the EDID blob from the hardware, and install the two EDID protocol
instances too ("discovered" and "active").

(BTW, virtio-gpu-pci does support GOP; what it doesn't support are
linear framebuffer graphics modes. But, such graphics modes are
optional:

* EFI_GRAPHICS_OUTPUT_PROTOCOL.Mode                --> EFI_GRAPHICS_OUTPUT_PROTOCOL_MODE
* EFI_GRAPHICS_OUTPUT_PROTOCOL_MODE.Info           --> EFI_GRAPHICS_OUTPUT_MODE_INFORMATION
* EFI_GRAPHICS_OUTPUT_MODE_INFORMATION.PixelFormat --> EFI_GRAPHICS_PIXEL_FORMAT
* EFI_GRAPHICS_PIXEL_FORMAT::PixelBltOnly          --> "This mode does not support a physical frame buffer."

VirtioGpuDxe sets "PixelBltOnly" for all modes. QemuVideoDxe uses
"PixelBitMask" and "PixelBlueGreenRedReserved8BitPerColor".)
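The pixel-format enum behind that chain is small; its members, in the order the UEFI spec defines them for the GOP (rendered here as plain C for illustration), are:

```c
/*
 * EFI_GRAPHICS_PIXEL_FORMAT as defined for the Graphics Output Protocol
 * in the UEFI spec.  VirtioGpuDxe reports PixelBltOnly (Blt() works, but
 * there is no linear framebuffer); QemuVideoDxe reports one of the
 * framebuffer-backed formats.
 */
typedef enum {
  PixelRedGreenBlueReserved8BitPerColor = 0,
  PixelBlueGreenRedReserved8BitPerColor = 1,
  PixelBitMask                          = 2,
  PixelBltOnly                          = 3,  /* no physical frame buffer */
  PixelFormatMax                        = 4
} EFI_GRAPHICS_PIXEL_FORMAT;
```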

The complex part is the consumption of the EDID information. In edk2,
the only component that reads either of the EDID protocols is the UEFI
shell, for dumping the EDID information. Nothing actually *acts* upon
the EDID information.

The preferred resolution for the boot is determined by
OvmfPkg/PlatformDxe. It sets two dynamic PCDs, from a non-volatile UEFI
variable to which the manually selected resolution has been written,
during the previous boot. The PCDs (PcdVideoHorizontalResolution,
PcdVideoVerticalResolution) are consumed by

  MdeModulePkg/Universal/Console/GraphicsConsoleDxe

It's not trivial to set the PCDs from the EDID information. PlatformDxe
is a platform DXE driver, and as such, it is dispatched before UEFI
drivers (like the video drivers in question). PlatformDxe learns about
the possible resolutions (with which it populates the drop-down list, in
the setup TUI) via a GOP notify callback. And at that time, setting the
new resolution is too late for the *current* boot. Hence PlatformDxe
writes the user's choice to the non-volatile UEFI variable. And, at
*next* boot, PlatformDxe sets the PCD (early enough -- in its entry
point function, and not in the protocol notify callback) from the
non-volatile UEFI variable.

Here's the order, as a list:
#01 PlatformDxe starts up, loads the non-volatile UEFI variable, sets the
    PCDs
#02 [...]
#03 DXE phase ends, BDS phase starts
#04 [...]
#05 GraphicsConsoleDxe (a UEFI driver) consumes the PCDs
#06 video drivers bind the video devices, install Graphics Output
    Protocol instances, and EDID Discovered/Active Protocol instances,
    on child handles
#07 PlatformDxe receives GOP notify callbacks
#08 PlatformDxe populates the setup TUI dialog with resolutions
    retrieved from GOPs in the callback
#09 the setup browser displays the form due to user action (pressing ESC
    during boot, etc)
#10 user selects preferred resolution for *next* boot --
    GraphicsConsoleDxe has already consumed the PCDs!
#11 PlatformDxe rewrites non-volatile UEFI variable
#12 user reboots, goto #01

So it's not trivial to set the PCDs, consumed by GraphicsConsoleDxe in
step#05, from hardware (EDID) information that becomes available in
step#06.

This is a general edk2 design question -- if you'd like this feature,
we'll have to ask about the information flow on the list. I haven't seen
any code in edk2 or edk2-platforms that *consumes* EDID, for setting the
UEFI console resolution for OS boot.

... Sorry if this comment is somewhat incoherent, I've been in a rush to
type it up.

Comment 4 Laszlo Ersek 2019-09-05 16:13:10 UTC
To summarize:
- Installing the EDID protocols may not be difficult (covering both QemuVideoDxe and VirtioGpuDxe).
- But there isn't anything in edk2 that consumes the protocol -- maybe OS boot loaders do? I have no clue.
- Using the EDID information from the virtual hardware, for setting the UEFI console resolution *in the same boot*, and cleanly, appears difficult.

Comment 6 Gerd Hoffmann 2019-09-06 06:27:01 UTC
> This in fact doesn't look very complex -- whenever the driver binding
> Start() function installs the GOP on the child handle, it could fetch
> the EDID blob from the hardware, and install the two EDID protocol
> instances too ("discovered" and "active").

Yep, this is what I have in mind ;)

> The preferred resolution for the boot is determined by
> OvmfPkg/PlatformDxe. It sets two dynamic PCDs, from a non-volatile UEFI
> variable to which the manually selected resolution has been written,
> during the previous boot. The PCDs (PcdVideoHorizontalResolution,
> PcdVideoVerticalResolution) are consumed by
> 
>   MdeModulePkg/Universal/Console/GraphicsConsoleDxe

What is the state of these PCDs if the user never explicitly picked
a display resolution in the ovmf platform configuration screen?
Are they initialized to the default (800x600, I think) then?  Or can
they be in an "unset" state?  The latter might allow logic like
"if (have-user-set-resolution) then { use-that } else { use-edid }".

> To summarize:
> - Installing the EDID protocols may not be difficult (covering both
> QemuVideoDxe and VirtioGpuDxe).

Having that (for QemuVideoDxe) would be a good start for experiments.

> - But there isn't anything in edk2 that consumes the protocol -- maybe OS
> boot loaders do? I have no clue.

I have no clue either.

On physical hardware the UEFI firmware uses the native display resolution.
So there must be code to figure out the display resolution from EDID and
use it, but it is probably in the non-public and hardware-specific
(IGD driver?) parts of the firmware.

Windows comes up in native resolution too (even before loading the
display drivers), but I don't know whether it parses the EDID or
whether it simply continues to use the display resolution the UEFI
firmware has initialized the GOP with.

Does the order of the video modes in ModeData matter?  Would it have
any effect if QemuVideoDxe puts the preferred resolution first into
the list?

> - Using the EDID information from the virtual hardware, for setting the UEFI
> console resolution *in the same boot*, and cleanly, appears difficult.

Let's start with the simple things and see what happens.

Comment 7 Laszlo Ersek 2019-09-06 13:19:45 UTC
(In reply to Gerd Hoffmann from comment #6)

> > The preferred resolution for the boot is determined by
> > OvmfPkg/PlatformDxe. It sets two dynamic PCDs, from a non-volatile
> > UEFI variable to which the manually selected resolution has been
> > written, during the previous boot. The PCDs
> > (PcdVideoHorizontalResolution, PcdVideoVerticalResolution) are
> > consumed by
> >
> >   MdeModulePkg/Universal/Console/GraphicsConsoleDxe
>
> What is the state of these PCDs if the user never explicitly picked
> a display resolution in the ovmf platform configuration screen?
> Are they initialized to the default (800x600 I think) then?

Yes, that's correct. They are declared in "MdeModulePkg.dec", as
follows:

[PcdsPatchableInModule, PcdsDynamic, PcdsDynamicEx]
  ## This PCD defines the video horizontal resolution.
  #  If this PCD is set to 0 then video resolution would be at highest resolution.
  # @Prompt Video horizontal resolution.
  gEfiMdeModulePkgTokenSpaceGuid.PcdVideoHorizontalResolution|800|UINT32|0x40000009

  ## This PCD defines the video vertical resolution.
  #  If this PCD is set to 0 then video resolution would be at highest resolution.
  # @Prompt Video vertical resolution.
  gEfiMdeModulePkgTokenSpaceGuid.PcdVideoVerticalResolution|600|UINT32|0x4000000a

These entries in the DEC file provide the default values (800/600), the
data types (UINT32), and the possible (permitted) access methods
(PatchableInModule, Dynamic, DynamicEx).

Due to the precedence rules described at [1], if the OvmfPkg DSC file
didn't list these PCDs as well, then in OVMF, these PCDs would get the
PatchableInModule access method (rule#2 at [1]). That would preclude
dynamic modification with PcdSet32S() -- in fact, the PcdSet32S()
invocations wouldn't even compile.

Therefore, the OvmfPkg DSC files do spell these out:

[PcdsDynamicDefault]
  gEfiMdeModulePkgTokenSpaceGuid.PcdVideoHorizontalResolution|800
  gEfiMdeModulePkgTokenSpaceGuid.PcdVideoVerticalResolution|600

This syntax requires both the PCD's name (including token space GUID),
and the DSC-global default value (overriding whatever is inherited from
the DEC file) [2].

Thus, if the ExecutePlatformConfig() function in
"OvmfPkg/PlatformDxe/Platform.c" does not find the non-volatile UEFI
variable, and consequently the PcdSet32S() calls are never reached, the
PCDs keep the values from the DSC file.

[1] https://edk2-docs.gitbooks.io/edk-ii-build-specification/content/8_pre-build_autogen_stage/84_auto-generated_pcd_database_file.html#8412-precedence-rules-for-pcds-not-listed-in-the-dsc-or-fdf-files
[2] https://edk2-docs.gitbooks.io/edk-ii-dsc-specification/content/2_dsc_overview/28_pcd_sections.html#2831-pcdsdynamicdefault

> Or can they be in an "unset" state?  The latter might allow logic
> like "if (have-user-set-resolution) then { use-that } else { use-edid
> }".

If we changed the OVMF DSC files to set both PCDs to 0 by default, and
the non-volatile UEFI variable for ExecutePlatformConfig() didn't exist,
then GraphicsConsoleDxe would pick the highest resolution.

But, AIUI, that's still not necessarily what the user requests on the
QEMU command line, with EDID.

>
> > To summarize:
> > - Installing the EDID protocols may not be difficult (covering both
> >   QemuVideoDxe and VirtioGpuDxe).
>
> Having that (for QemuVideoDxe) would be a good start for experiments.
>
> > - But there isn't anything in edk2 that consumes the protocol --
> >   maybe OS boot loaders do? I have no clue.
>
> I have no clue either.
>
> On physical hardware the UEFI firmware uses the native display resolution.
> So there must be code to figure out the display resolution from EDID and
> use it, but it is probably in the non-public and hardware-specific
> (IGD driver?) parts of the firmware.

I think that in physical installations, the PCDs are set to 0 by
default. Then GraphicsConsoleDxe finds the highest resolution from the
GOP modes (it doesn't consume EDID). See in
GraphicsConsoleControllerDriverStart().

> Does the order of the video modes in ModeData matter?  Would it have
> any effect if QemuVideoDxe puts the preferred resolution first into
> the list?

I don't think so; GraphicsConsoleControllerDriverStart() seems to
perform a maximum search based on horizontal resolution, and given equal
horizontal resolutions, it prefers the larger vertical ones.
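That maximum search can be sketched as follows; this is a simplified stand-in for the loop in GraphicsConsoleControllerDriverStart(), not the actual edk2 code, and the struct name is invented for the example:

```c
#include <stdint.h>

typedef struct {
  uint32_t HorizontalResolution;
  uint32_t VerticalResolution;
} MODE_INFO;

/*
 * Pick the "highest" GOP mode: the largest horizontal resolution wins,
 * and for equal horizontal resolutions the larger vertical one is
 * preferred.  Note that the order of the modes in the list is irrelevant.
 */
static MODE_INFO PickHighestMode(const MODE_INFO *Modes, int Count)
{
  MODE_INFO Best = Modes[0];
  for (int i = 1; i < Count; i++) {
    if (Modes[i].HorizontalResolution > Best.HorizontalResolution ||
        (Modes[i].HorizontalResolution == Best.HorizontalResolution &&
         Modes[i].VerticalResolution > Best.VerticalResolution)) {
      Best = Modes[i];
    }
  }
  return Best;
}
```

Because order does not matter here, moving the preferred resolution to the front of QemuVideoDxe's mode list would have no effect on this selection.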

> > - Using the EDID information from the virtual hardware, for setting
> >   the UEFI console resolution *in the same boot*, and cleanly,
> >   appears difficult.
>
> Let's start with the simple things and see what happens.

So the question is whether we'd like to use the maximum, or adhere to
the preference in EDID. The latter is complicated.

The former may be simpler:

- change the DSC files to set 0 as the default value for the PCDs,

- modify PopulateForm() in PlatformDxe to always add an entry called
  "highest supported resolution" to the drop-down, likely at the bottom
  (index NumGopModes)

- modify FormStateToPlatformConfig() to translate index mNumGopModes to
  resolution 0:0,

- modify PlatformConfigToFormState() to translate resolution 0:0 to
  index mNumGopModes.

Comment 8 Laszlo Ersek 2019-09-06 13:44:58 UTC
Here's a further idea.

After looking at

  https://git.seabios.org/cgit/seabios.git/commit/?id=a3fd63c2342bd84a5487e280e2899250ec440d98

it seemed like SeaBIOS took the preferred resolution from a
sub-structure starting at offset 54 in the EDID block.

At this point, I was still confused, because I didn't understand how a
monitor could express a *user preference*. Is that something that a
human would manually set on a physical monitor???

However, looking at wikipedia helped:

  https://en.wikipedia.org/wiki/Extended_Display_Identification_Data#Structure,_version_1.4

It explains that at offset 54, we have the first of four "Detailed
timing descriptors", each 18 bytes in size.

Then, at

  https://en.wikipedia.org/wiki/Extended_Display_Identification_Data#Detailed_Timing_Descriptor

we find the fields of this sub-structure. If we check the field offsets
taken from SeaBIOS commit a3fd63c2342b, we get:

  2: Horizontal active pixels 8 lsbits (0-4095)
  4: Horizontal active pixels 4 msbits [...]
  5: Vertical active lines 8 lsbits (0-4095)
  7: Vertical active lines 4 msbits [...]

So basically what QEMU populates in the EDID block is a *physical
capability* of the monitor! It's not a preference at all; we just abuse
it for expressing a user preference. :)
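Putting those field offsets together, a minimal parser in the SeaBIOS spirit could look like this. The helper name is hypothetical and the code is from neither project; offsets are relative to the 18-byte descriptor at byte 54 of the EDID base block:

```c
#include <stdint.h>

/*
 * Read the resolution from the first Detailed Timing Descriptor of an
 * EDID 1.x base block.  Returns 1 on success, 0 if the block is too
 * small or the first descriptor is not a timing descriptor.
 */
static int EdidPreferredResolution(const uint8_t *Edid, uint32_t Size,
                                   uint32_t *XRes, uint32_t *YRes)
{
  if (Size < 128) {
    return 0;
  }
  const uint8_t *Dtd = Edid + 54;
  /* A zero pixel clock marks a display descriptor, not a timing one. */
  if (Dtd[0] == 0 && Dtd[1] == 0) {
    return 0;
  }
  /* 8 lsbits in one byte, 4 msbits in the upper nibble of another. */
  *XRes = Dtd[2] | ((uint32_t)(Dtd[4] & 0xF0) << 4);
  *YRes = Dtd[5] | ((uint32_t)(Dtd[7] & 0xF0) << 4);
  return 1;
}
```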

Gerd: you'll have to start writing way more detailed commit messages.
Please. :)

Anyway, based on this, installing the EDID protocols in the video
drivers (QemuVideoDxe, VirtioGpuDxe) appears unnecessary. Instead, the
drivers should parse the EDID info from the hardware, and make sure that
they expose such a video mode in their mode lists that matches the
resolution from the EDID. (The mode may or may not already exist in the
fixed mode lists.)

This way the QEMU user's preference will reach any firmware components
that parse GOP mode lists. Including OvmfPkg/PlatformDxe, and
MdeModulePkg/Universal/Console/GraphicsConsoleDxe.

In combination with comment 7, this can be used to specify a "highest"
resolution on the QEMU command line, and the firmware can be made to
select it automatically.

The design seems mostly clear to me; now I only need the time (which I
don't have) to write the code.

Comment 12 RHEL Program Management 2021-03-05 07:30:49 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 13 leidwang@redhat.com 2021-03-10 02:10:44 UTC
According to comment 9, QE agrees to close this bug.

Comment 14 Gerd Hoffmann 2021-11-24 11:41:23 UTC
https://github.com/kraxel/edk2/commits/video

[ stdvga only for now, virtio to be done ]

Comment 16 Laszlo Ersek 2021-11-24 13:33:35 UTC
(In reply to Gerd Hoffmann from comment #14)
> https://github.com/kraxel/edk2/commits/video
>
> [ stdvga only for now, virtio to be done ]

(1) Comments on PcdVideoHostResolution:

QemuVideoDxe, which is a UEFI_DRIVER, consumes PcdVideoHostResolution in
its Driver Binding Start function. This function is invoked only after
the firmware has entered the BDS phase (namely when the platform's
PlatformBootManagerLib instance connects devices).

PlatformDxe, which is a DXE_DRIVER, sets PcdVideoHostResolution in its
entry point function.

The above PCD accesses are deterministically ordered because the BDS
phase is only entered after the DXE Core has dispatched all DXE_DRIVER
modules.

(2) More comments on PcdVideoHostResolution:

UEFI_DRIVER modules should preferably not use PCDs at all, only UEFI
services. However, in OVMF, we've always considered this guideline
loosely, saying that the UEFI_DRIVERs in OvmfPkg are not supposed to be
portable outside of the ArmVirtPkg and OvmfPkg platforms in the edk2
tree.

(3) However, setting PcdVideoHorizontalResolution and
PcdVideoVerticalResolution in a UEFI_DRIVER seems fishy.

First, you could have some incidental ordering between the Driver
Binding Start functions of QemuVideoDxe (setter) and GraphicsConsoleDxe
(getter).

Second, if you have a "multi head" style setup, possibly with a primary
VGA card (QemuVideoDxe), a secondary VGA card (QemuVideoDxe), and maybe
even a virtio-gpu-pci card (VirtioGpuDxe), then each of those cards can
specify its own EDID -- and then the driver *bindings* will fight over
PcdVideoHorizontalResolution and PcdVideoVerticalResolution.

See my suggestions in comment 7 and comment 8:

- based on the resolution preference fetched from the card, each driver
  should update the subject GOP's resolution list. Meaning two things:
  (a) the hw-fetched resolution should be placed on the list, and (b)
  all higher resolutions should be removed.

- furthermore, PcdVideoHorizontalResolution and
  PcdVideoVerticalResolution should be set to zero, in the DSC files,
  and by PlatformDxe (using a dedicated entry in the drop-down list).

- this way, GraphicsConsoleDxe will expose the max textual resolution on
  top of each GOP,

- and ConSplitterDxe will build an intersection (and fall back to a
  small textual resolution if necessary).

Basically the arbitration (intersection-building) in a multi-head setup
is best left to ConSplitterDxe.
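The intersection step can be illustrated with a toy model; this is only a sketch of the idea, with invented names, not ConSplitterDxe's actual logic, which works on its real per-console text-mode tables:

```c
#include <stdint.h>

typedef struct { uint32_t Cols, Rows; } TEXT_MODE;

/*
 * Keep only the text modes supported by both consoles -- a toy model of
 * the intersection a console splitter builds across its child consoles.
 * Returns the number of surviving modes, written to Out (which must have
 * room for at least Na entries).
 */
static int IntersectModes(const TEXT_MODE *A, int Na,
                          const TEXT_MODE *B, int Nb, TEXT_MODE *Out)
{
  int Count = 0;
  for (int i = 0; i < Na; i++) {
    for (int j = 0; j < Nb; j++) {
      if (A[i].Cols == B[j].Cols && A[i].Rows == B[j].Rows) {
        Out[Count++] = A[i];
        break;
      }
    }
  }
  return Count;
}
```

If the intersection ends up empty apart from the mandatory 80x25 mode, that small textual resolution is the natural fallback.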

If you're not concerned about the ownership of
PcdVideoHorizontalResolution and PcdVideoVerticalResolution between
multiple driver *bindings*, then please feel free to ignore this, of
course. Thanks!

Comment 17 Laszlo Ersek 2021-11-24 13:49:11 UTC
Restoring the Keyword and the Pool ID that Bugzilla deleted (without asking me after my comment) in its INFINITE WISDOM.

Comment 18 Gerd Hoffmann 2021-11-25 06:36:40 UTC
> (3) However, setting PcdVideoHorizontalResolution and
> PcdVideoVerticalResolution in a UEFI_DRIVER seems fishy.
> 
> First, you could have some incidental ordering between the Driver
> Binding Start functions of QemuVideoDxe (setter) and GraphicsConsoleDxe
> (getter).

Haven't checked the code, but GraphicsConsoleDxe can't do much as long as
there isn't a GOP registered yet ...

> Second, if you have a "multi head" style setup, possibly with a primary
> VGA card (QemuVideoDxe), a secondary VGA card (QemuVideoDxe), and maybe
> even a virtio-gpu-pci card (VirtioGpuDxe), then each of those cards can
> specify its own EDID -- and then the driver *bindings* will fight over
> PcdVideoHorizontalResolution and PcdVideoVerticalResolution.

Hmm, yes, that certainly isn't ideal.  Maybe we should flip
PcdVideoHostResolution to false after updating it, so the resolution will
be set only once (probably by the device coming first in PCI scan order).

> See my suggestions in comment 7 and comment 8:
> 
> - based on the resolution preference fetched from the card, each driver
>   should update the subject GOP's resolution list. Meaning two things:
>   (a) the hw-fetched resolution should be placed on the list, and (b)
>   all higher resolutions should be removed.
> 
> - furthermore, PcdVideoHorizontalResolution and
>   PcdVideoVerticalResolution should be set to zero, in the DSC files,
>   and by PlatformDxe (using a dedicated entry in the drop-down list).

The problem with that is that edk2 doesn't have the concept of a
"preferred resolution" for a GOP.  While the "just use the max resolution"
approach works fine on physical hardware, because that typically is the
native resolution of the attached display, it has its problems in virtual
machines.  We want to allow the user to pick high resolutions without
using the maximum possible resolution by default.

The qemu default is 1024x768.  If we filter out all resolutions higher
than that, users wouldn't be able to pick them via platform config.  If we
don't filter them, edk2 wouldn't pick 1024x768 by default but the highest
possible resolution.

I also have no good idea how to handle the case where there is no EDID
block present.

Using PcdVideo{Horizontal,Vertical}Resolution kind-of emulates a
"preferred resolution".  I don't see an easy way around that without OVMF
behaving less than ideally in a number of cases.  I'm open to suggestions
though; maybe I missed something ...

Comment 19 Laszlo Ersek 2021-11-25 09:04:05 UTC
(In reply to Gerd Hoffmann from comment #18)
> > (3) However, setting PcdVideoHorizontalResolution and
> > PcdVideoVerticalResolution in a UEFI_DRIVER seems fishy.
> > 
> > First, you could have some incidental ordering between the Driver
> > Binding Start functions of QemuVideoDxe (setter) and GraphicsConsoleDxe
> > (getter).
> 
> Haven't checked the code, but GraphicsConsoleDxe can't do much as long as
> there isn't a GOP registered yet ...

True; good point. GraphicsConsoleDxe consumes (IIRC) the PCDs in its Driver Binding Start function, which can only be invoked after its Driver Binding Supported function checked and approved the handle in question. That includes opening a GOP interface on the handle. So the GOP must have been installed on that handle previously, by the video driver (which is also the one setting the PCDs).

> 
> > Second, if you have a "multi head" style setup, possibly with a primary
> > VGA card (QemuVideoDxe), a secondary VGA card (QemuVideoDxe), and maybe
> > even a virtio-gpu-pci card (VirtioGpuDxe), then each of those cards can
> > specify its own EDID -- and then the driver *bindings* will fight over
> > PcdVideoHorizontalResolution and PcdVideoVerticalResolution.
> 
> Hmm, yes, that certainly isn't ideal.  Maybe we should flip
> PcdVideoHostResolution to false after updating it so the resolution
> will be set only once (probably by the device coming first in pci scan
> order).

That could work, yes, if we're OK with saying, "if you use this feature, don't specify different xres & yres device properties on different display devices, as the result may not be deterministic".

(Maybe use UINT8 rather than BOOLEAN for the new PCD. Zero could mean "host info to be ignored", one could mean "host info to be investigated", and two could mean "host info consumed and put into effect; don't check any longer".)

I like this; it covers the most common use case (single video device) and leaves the "policy" with the user. Thanks!

Comment 20 Gerd Hoffmann 2021-11-25 12:47:40 UTC
> (Maybe use UINT8 rather than BOOLEAN for the new PCD. Zero could mean "host
> info to be ignored", one could mean "host info to be investigated", and two
> could mean "host info consumed and put into effect; don't check any longer".)

Recording that information looks useful indeed.

Rename it to ResolutionSource, then have ...
  0 - unset (aka at default still)
  1 - set from PlatformConfig
  2 - set by VideoDriver
... would be less confusing I think.
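As a C sketch, the proposed tri-state would look like the following; the names are assumptions derived from this discussion, not from any merged edk2 code:

```c
/*
 * Proposed values for a "resolution source" PCD recording who set the
 * console resolution last.  Hypothetical names following comment 20;
 * nothing here is merged edk2 code.
 */
typedef enum {
  RESOLUTION_SOURCE_UNSET           = 0, /* still at the build-time default */
  RESOLUTION_SOURCE_PLATFORM_CONFIG = 1, /* user choice via OVMF Platform Configuration */
  RESOLUTION_SOURCE_VIDEO_DRIVER    = 2  /* set from host EDID by a video driver */
} RESOLUTION_SOURCE;
```

A video driver would only write the resolution PCDs when the source is still UNSET, and PlatformConfig's explicit choice (1) would always win over EDID (2).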

Comment 21 Laszlo Ersek 2021-11-25 15:41:33 UTC
And then we can make it a GUID (a pointer PCD with room for 16 bytes); use a new GUID for "unset", and all other GUIDs used would be the FILE_GUIDs of the drivers that (last) set it :) (FILE_GUID is available in the C source as gEfiCallerIdGuid). But, seriously, the enum seems fine.

Comment 24 Gerd Hoffmann 2022-02-01 10:57:57 UTC
Part one merged upstream (support for stdvga, virtio-gpu to follow).
https://github.com/tianocore/edk2/pull/2455

Comment 26 Gerd Hoffmann 2022-04-26 07:00:37 UTC
(In reply to Gerd Hoffmann from comment #24)
> Part one merged upstream (support for stdvga, virtio-gpu to follow).
> https://github.com/tianocore/edk2/pull/2455

virtio-gpu merged upstream
https://github.com/tianocore/edk2/pull/2826

Next rebase will pick it up.

Comment 31 yduan 2022-06-22 01:12:33 UTC
OVMF failed to pick up the display resolution set via the xres and yres properties.

Components:
edk2-ovmf-20220526git16779ede2d36-1.el9.noarch
qemu-kvm-7.0.0-6.el9.x86_64

Steps:
1. Boot up VM with VGA device and set resolution to 800*600:
...
 -blockdev node-name=file_ovmf_code,driver=file,filename=/usr/share/OVMF/OVMF_CODE.secboot.fd,auto-read-only=off,discard=unmap \
 -blockdev node-name=drive_ovmf_code,driver=raw,read-only=off,file=file_ovmf_code \
 -blockdev node-name=file_ovmf_vars,driver=file,filename=/root/avocado/data/avocado-vt/avocado-vt-vm1_rhel910-64-virtio-scsi_qcow2_filesystem_VARS.fd,auto-read-only=off,discard=unmap \
 -blockdev node-name=drive_ovmf_vars,driver=raw,read-only=off,file=file_ovmf_vars \
 -machine q35,memory-backend=mem-machine_mem,pflash0=drive_ovmf_code,pflash1=drive_ovmf_vars \
 -device VGA,xres=800,yres=600,bus=pcie.0,addr=0x2 \
                  ^^^      ^^^
...
2. Check the 'Preferred Resolution at' and 'Change Preferred' in 'OVMF Platform Configuration'.
'Preferred Resolution at' and 'Change Preferred' both show 1920x1080.

Hi Gerd,

Are the above test steps correct?

Thanks,
yduan

Comment 32 Gerd Hoffmann 2022-06-22 10:05:29 UTC
> /root/avocado/data/avocado-vt/avocado-vt-vm1_rhel910-64-virtio-scsi_qcow2_filesystem_VARS.fd

> 2. Check the 'Preferred Resolution at' and 'Change Preferred' in 'OVMF
> Platform Configuration'.
> 'Preferred Resolution at' and 'Change Preferred' both show 1920x1080.

Hmm, where does the VARS.fd file come from?  Is that an existing file?
Or a fresh version (i.e. copied from /usr/share/edk2/ovmf/OVMF_VARS.fd)?

When you explicitly set some configuration in the 'OVMF Platform
Configuration' menu, that has higher priority; the xres and yres
properties only set the default resolution, which OVMF will use when
nothing else has been configured manually.

Comment 33 yduan 2022-06-23 07:36:16 UTC
(In reply to Gerd Hoffmann from comment #32)
> > /root/avocado/data/avocado-vt/avocado-vt-vm1_rhel910-64-virtio-scsi_qcow2_filesystem_VARS.fd
> 
> > 2. Check the 'Preferred Resolution at' and 'Change Preferred' in 'OVMF
> > Platform Configuration'.
> > 'Preferred Resolution at' and 'Change Preferred' both show 1920x1080.
> 
> Hmm, where does the VARS.fd file come from?  Is that an existing file?
> Or a fresh version (i.e. copied from /usr/share/edk2/ovmf/OVMF_VARS.fd)?
> 

I re-tested with a fresh OVMF_VARS.fd (copied from /usr/share/edk2/ovmf/OVMF_VARS.fd).
1. With the virtio-gpu-pci device, it was already the early kernel phase when the remote-viewer window showed anything,
so I cannot catch the TianoCore splash screen.
Is there any suggestion for virtio-gpu-pci device testing?

2. With a VGA or bochs-display device, I tested 3 resolution scenarios: 800x600, 1920x1080, and 2560x1600.
The correct resolution shows in 'Change Preferred', and 'Preferred Resolution at' is always 'Unset'.

Components:
edk2-ovmf-20220526git16779ede2d36-1.el9.noarch
qemu-kvm-7.0.0-6.el9.x86_64

Command line:
...
 -blockdev node-name=file_ovmf_code,driver=file,filename=/usr/share/OVMF/OVMF_CODE.secboot.fd,auto-read-only=off,discard=unmap \
 -blockdev node-name=drive_ovmf_code,driver=raw,read-only=off,file=file_ovmf_code \
 -blockdev node-name=file_ovmf_vars,driver=file,filename=/home/OVMF_VARS.fd,auto-read-only=off,discard=unmap \
 -blockdev node-name=drive_ovmf_vars,driver=raw,read-only=off,file=file_ovmf_vars \
 -machine q35,memory-backend=mem-machine_mem,pflash0=drive_ovmf_code,pflash1=drive_ovmf_vars \
 -device VGA,xres=800,yres=600,bus=pcie.0,addr=0x2 \
...

Comment 34 Gerd Hoffmann 2022-06-23 07:48:13 UTC
> So I cannot catch up with TianoCore splash screen.
> Is there any suggestion for virtio-gpu-pci device testing?

qemu -boot menu=on,splash-time=60000
                               ^^^^^ milliseconds -> this is 60 secs

Should give you enough time to connect the vnc client ;)

Increasing the grub timeout in the guest should work too;
you'll see the grub boot menu instead of the tianocore splash then.

Comment 35 yduan 2022-06-23 07:54:00 UTC
(In reply to Gerd Hoffmann from comment #34)
> > So I cannot catch up with TianoCore splash screen.
> > Is there any suggestion for virtio-gpu-pci device testing?
> 
> qemu -boot menu=on,splash-time=60000
>                                ^^^^^ milliseconds -> this is 60 secs
> Should give you enough time to connect the vnc client ;)
> 

This doesn't help; I tried a splash-time of 65535 and there was no difference.

> Increasing the grub timeout in the guest should work too,
> you'll see the grub boot menu instead of the tianocore splash then.

I'll re-try with this method.
Thanks!

Comment 36 Daniel Berrangé 2022-06-23 07:56:27 UTC
(In reply to yduan from comment #35)
> (In reply to Gerd Hoffmann from comment #34)
> > > So I cannot catch up with TianoCore splash screen.
> > > Is there any suggestion for virtio-gpu-pci device testing?
> > 
> > qemu -boot menu=on,splash-time=60000
> >                                ^^^^^ milliseconds -> this is 60 secs
> > Should give you enough time to connect the vnc client ;)
> > 
> 
> This doesn't help, I tried splash-time with 65535 and the result is no
> difference.

Using 'virsh start --paused $VM', then connecting virt-viewer / remote-viewer, and then running 'virsh resume $VM' in another terminal will guarantee you see the very first screen output.

Comment 37 yduan 2022-06-23 08:49:34 UTC
(In reply to Daniel Berrangé from comment #36)
> (In reply to yduan from comment #35)
> > (In reply to Gerd Hoffmann from comment #34)
> > > > So I cannot catch up with TianoCore splash screen.
> > > > Is there any suggestion for virtio-gpu-pci device testing?
> > > 
> > > qemu -boot menu=on,splash-time=60000
> > >                                ^^^^^ milliseconds -> this is 60 secs
> > > Should give you enough time to connect the vnc client ;)
> > > 
> > 
> > This doesn't help, I tried splash-time with 65535 and the result is no
> > difference.
> 
> Use  'virsh start --paused $VM', then connect virt-viewer / remote-viewer,
> and then in another terminal 'virsh resume $VM', will guarantee you see the
> very first screen output

It's not related to when remote-viewer connects.
During the previous tests, the '-S' option was in the qemu command line.
I connected remote-viewer and then resumed the VM by sending 'cont' in HMP.
I think this is the same as the virsh method you mentioned above.

The screen was black at the beginning. Around 1-2 minutes later, the early kernel boot screen showed up directly.
Both screen snapshots are attached.

Thanks!

Comment 40 yduan 2022-06-23 09:28:09 UTC
(In reply to Gerd Hoffmann from comment #34)
> > So I cannot catch up with TianoCore splash screen.
> > Is there any suggestion for virtio-gpu-pci device testing?
> 
> Increasing the grub timeout in the guest should work too,
> you'll see the grub boot menu instead of the tianocore splash then.

This doesn't help either; the result is unchanged.
I updated the grub timeout value from 5 to 120.

The black screen (showing "Display output is not active") lasted around 2 minutes, and then the kernel boot screen showed up immediately.

> It was black screen at the beginning. Around 1~2 minutes later, early kernel
> booting screen showed up directly.
> These two screen snapshots are attached.

Comment 41 Gerd Hoffmann 2022-06-23 11:36:08 UTC
Probably due to the "OvmfPkg: Remove VirtioGpu device driver (RHEL only)" downstream patch.
On arm the driver is included, so when testing this on an aarch64 machine it should work.

Comment 42 Guo, Zhiyi 2022-06-23 15:26:19 UTC
(In reply to Gerd Hoffmann from comment #41)
> Probably due to "OvmfPkg: Remove VirtioGpu device driver (RHEL only)"
> downstream patch.
> On arm the driver is included, so when testing this on a aarch64 machine it
> should work.

I think using virtio-vga instead of virtio-gpu for both seabios/ovmf on x86 is more reasonable. libvirt currently only supports virtio-vga as the primary VGA device (at least on x86; I'm not sure about ppc or arm), and virtio-gpu can only be configured as secondary.

Comment 43 Yihuang Yu 2022-06-24 06:23:31 UTC
Well, I don't know the exact background, but I can give some info on the aarch64 guest.

(In reply to Gerd Hoffmann from comment #41)
> Probably due to "OvmfPkg: Remove VirtioGpu device driver (RHEL only)"
> downstream patch.
> On arm the driver is included, so when testing this on a aarch64 machine it
> should work.

virt arm only supports virtio-gpu, so I'm not sure how this change affects aarch64.

Besides, I suppose that without these edk2 commits I could not get any EDID info from the guest; but the edk2 package has already been rebased, so I can now read EDID info via "monitor-edid" in the guest.

# monitor-edid
Name: QEMU Monitor
EISA ID: RHT1234
EDID version: 1.4
EDID extension blocks: 1
Screen size: 32.5 cm x 20.3 cm (15.09 inches, aspect ratio 16/10 = 1.60)
Gamma: 2.2
Digital signal

	HorizSync 30-160
	VertRefresh 50-125

	# Monitor preferred modeline (75.0 Hz vsync, 62.1 kHz hsync, ratio 16/10, 100 dpi)
	ModeLine "1280x800" 107.3 1280 1600 1638 1728 800 804 808 828 -hsync -vsync

	# Monitor supported CEA modeline (50.0 Hz vsync, 56.2 kHz hsync, ratio 16/9, 150x135 dpi) (bad ratio)
	ModeLine "1920x1080" 148.5 1920 2448 2492 2640 1080 1084 1089 1125 +hsync +vsync
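For what it's worth, the preferred mode monitor-edid prints comes from the first 18-byte detailed timing descriptor of the EDID blob, which starts at byte 54 of the base block: within it, byte 2 holds the low 8 bits of the horizontal active pixels and the upper nibble of byte 4 the high 4 bits, while bytes 5 and 7 do the same for the vertical active lines. A minimal sketch of that decode (the function name is made up), given a blob file such as the one the kernel exposes under /sys/class/drm/<connector>/edid:

```shell
# Decode the preferred resolution from an EDID blob file.
# Reads the 8 leading bytes of the first detailed timing descriptor
# (offset 54) and reassembles hactive/vactive from the split fields.
edid_preferred() {
    set -- $(od -An -tu1 -j 54 -N 8 "$1")
    # $3 = hactive low byte, upper nibble of $5 = hactive high bits;
    # $6 = vactive low byte, upper nibble of $8 = vactive high bits.
    echo "$(( $3 + (($5 >> 4) << 8) ))x$(( $6 + (($8 >> 4) << 8) ))"
}
```

For the blob above, `edid_preferred /sys/class/drm/card0-Virtual-1/edid` would print `1280x800` (the connector name varies per guest).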

Comment 44 Yihuang Yu 2022-06-24 06:58:14 UTC
OK, let me update the result.

qemu-kvm-7.0.0-6.el9.aarch64
edk2-aarch64-20220526git16779ede2d36-1.el9.noarch

"Preferred Resolution at" is always "Unset" and "Change Preferred" is always "640x480", even though I changed the qemu command line. But in the guest the info is correct, and finally the vnc screen size is also correct.

-device virtio-gpu-pci,bus=pcie-root-port-1,addr=0x0,xres=1920,yres=1080 \

# monitor-edid
Name: QEMU Monitor
EISA ID: RHT1234
EDID version: 1.4
EDID extension blocks: 1
Screen size: 48.0 cm x 27.0 cm (21.68 inches, aspect ratio 16/9 = 1.78)
Gamma: 2.2
Digital signal

	HorizSync 30-160
	VertRefresh 50-125

	# Monitor preferred modeline (75.0 Hz vsync, 83.8 kHz hsync, ratio 16/9, 101 dpi)
	ModeLine "1920x1080" 217.14 1920 2400 2457 2592 1080 1085 1090 1117 -hsync -vsync

	# Monitor supported CEA modeline (50.0 Hz vsync, 56.2 kHz hsync, ratio 16/9, 101 dpi)
	ModeLine "1920x1080" 148.5 1920 2448 2492 2640 1080 1084 1089 1125 +hsync +vsync


-device virtio-gpu-pci,bus=pcie-root-port-1,addr=0x0,xres=1440,yres=900 \

# monitor-edid
Name: QEMU Monitor
EISA ID: RHT1234
EDID version: 1.4
EDID extension blocks: 1
Screen size: 36.0 cm x 22.0 cm (16.61 inches, aspect ratio 16/10 = 1.60)
Gamma: 2.2
Digital signal

	HorizSync 30-160
	VertRefresh 50-125

	# Monitor preferred modeline (75.0 Hz vsync, 69.8 kHz hsync, ratio 16/10, 101 dpi)
	ModeLine "1440x900" 135.73 1440 1800 1843 1944 900 904 908 931 -hsync -vsync

	# Monitor supported CEA modeline (50.0 Hz vsync, 56.2 kHz hsync, ratio 16/9, 135x124 dpi) (bad ratio)
	ModeLine "1920x1080" 148.5 1920 2448 2492 2640 1080 1084 1089 1125 +hsync +vsync

Comment 45 Gerd Hoffmann 2022-06-24 11:08:39 UTC
> "Preferred Resolution at" is always "Unset" and "Change Preferred" is always
> "640x480", even though I changed the qemu command line. But in the guest,
> info are correct, and finially, the vnc screen size is also correct.

Well, the interesting question is whether the tianocore splash screen
has the correct size.  This should work for virtio-gpu-pci.  It does
not work for virtio-vga: the firmware runs virtio-vga in vga compat
mode, and edid is only available in native virtio mode.

Once the guest is fully booted and the linux kernel's driver is active,
the screen should have the correct resolution even with older firmware,
for both virtio-vga and virtio-gpu-pci.
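A quick way to see from inside a booted Linux guest whether the device actually exposed an EDID blob is to check the DRM connector nodes. A sketch (the function name is made up; the sysfs layout is the usual Linux DRM one, with connector names like card0-Virtual-1):

```shell
# List DRM connectors under a sysfs root and report whether each
# one carries a non-empty EDID blob.
edid_status() {
    for c in "$1"/card*-*; do
        [ -d "$c" ] || continue
        if [ -s "$c/edid" ]; then
            echo "${c##*/}: EDID present"
        else
            echo "${c##*/}: no EDID"
        fi
    done
}
```

Running `edid_status /sys/class/drm` in the guests above should show an EDID blob on the connector once the device provides one.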

Comment 46 Yihuang Yu 2022-06-24 11:59:32 UTC
(In reply to Gerd Hoffmann from comment #45)
> > "Preferred Resolution at" is always "Unset" and "Change Preferred" is always
> > "640x480", even though I changed the qemu command line. But in the guest,
> > info are correct, and finially, the vnc screen size is also correct.
> 
> Well, the interesting question is whenever the tianocore splash screen
> has the correct size.  This should work for virtio-gpu-pci.  It does
Thanks for the info. Then I think my test results are as expected: the tianocore screen follows the setting on the qemu command line rather than the UEFI setting. After the guest has booted, the resolution stays aligned with tianocore if we set xres and yres on the qemu command line; otherwise the guest sets the resolution by itself.
> not work for virtio-vga, the firmware runs virtio-vga in vga compat
> mode and edid is only available in native virtio mode.
> 
> Once the guest is fully booted and the linux kernel's driver is active
> the screen should have the correct resolution even with older firmware
> and for both virtio-vga and virtio-gpu-pci.

Comment 47 Guo, Zhiyi 2022-06-24 15:39:21 UTC


(In reply to Gerd Hoffmann from comment #45)
> > "Preferred Resolution at" is always "Unset" and "Change Preferred" is always
> > "640x480", even though I changed the qemu command line. But in the guest,
> > info are correct, and finially, the vnc screen size is also correct.
> 
> Well, the interesting question is whenever the tianocore splash screen
> has the correct size.  This should work for virtio-gpu-pci.  It does
> not work for virtio-vga, the firmware runs virtio-vga in vga compat
> mode and edid is only available in native virtio mode.

So for testing this bug with supported video hardware on x86, it seems the devices that need to be tested are VGA and bochs-display?

Looks like we have two problems stopping us from testing this bug with virtio-gpu on x86:

1) The VirtioGpu driver is removed from ovmf for x86.

2) virtio-gpu cannot be specified as the primary video device from libvirt.

I think 2) is more important, because using virtio-gpu as primary seems to be an unsupported configuration for x86 in the virt stack for now. Gerd, what's your suggestion here?

Comment 48 Gerd Hoffmann 2022-06-27 06:37:04 UTC
> So for testing this bug with supported video hardware on x86, seems the
> devices need to be tested are VGA, bochs-display? 

Yes.

> I think 2) is more important because use virtio-gpu as primary seems an
> unsupported configuration for x86 in virt stack for now. Gerd, what's your
> suggestion here?

Yes, testing only libvirt-supported configs makes sense, which means
testing virtio-gpu-pci on aarch64 only.

Comment 49 yduan 2022-06-28 01:43:42 UTC
Change to VERIFIED based on comment 33 and comment 46.

Comment 51 RHEL Program Management 2022-11-01 07:28:49 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 52 Xueqiang Wei 2022-11-02 16:19:34 UTC
According to comment 49, changing status to CURRENTRELEASE. If I was wrong, please correct me.