Bug 1679680
| Summary: | RFE: please support the "ramfb" display device model | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | Laszlo Ersek <lersek> |
| Component: | libvirt | Assignee: | Jonathon Jongsma <jjongsma> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | yafu <yafu> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | medium | | |
| Version: | 8.1 | CC: | abologna, chhu, dinechin, dzheng, jdenemar, jjongsma, jsuchane, kraxel, libvirt-maint, lmen, rm, smitterl, xuzhang, yalzhang |
| Target Milestone: | rc | Keywords: | FutureFeature, TestOnly |
| Target Release: | 8.3 | Flags: | pm-rhel: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-6.0.0-1.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1841068 (view as bug list) | Environment: | |
| Last Closed: | 2021-01-08 16:53:46 UTC | Type: | Feature Request |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1841068, 1847791 | | |
| Bug Blocks: | 1528681, 1958081 | | |
Description
Laszlo Ersek
2019-02-21 16:14:02 UTC
(In reply to Laszlo Ersek from comment #0)
> When using VGA, the framebuffer used by the guest lives inside a
> (virtual) PCI MMIO BAR. Accordingly, the guest maps the framebuffer as
> uncacheable.

I'm curious: why is this problem specific to VGA memory? Is that because VGA is the only device mapping memory that way in QEMU? (I know this is rare; I do not know for certain that it is unique.)

How does QEMU deal with other devices if it can't assume it sees valid data for any guest MMIO or UC mappings?

If the architecture does not ensure coherency when one mapping is UC and the other is WB, doesn't that imply that QEMU should be able to override UC for some mappings that are known to be backed by real memory? Or, alternatively, invalidate the cache in some other way before accessing VGA memory?

(My concern here is that the ramfb approach would not fix the VGA bug, and I find it likely that someone might still misconfigure QEMU.)

(In reply to Christophe de Dinechin from comment #1)
> I'm curious why is this problem specific to VGA memory?
> Is that because VGA is the only device mapping memory that way on QEMU?
> (I know this is rare, I do not know for certain it is unique)

Normally, a GPA range that appears as writeable MMIO to the guest is not backed by any host memory: there is no memory slot in KVM, and every write access traps separately, with QEMU providing the device emulation on an individual write access basis. VGA is different; it is backed by writeable host memory (and a r/w KVM memslot), and QEMU uses a mechanism called "coalesced MMIO" to refresh the display from that host RAM area. My understanding is quite lacking in this area, but VGA framebuffer writes from the guest do not trap individually.
> How does QEMU deal with other devices if it can't assume it sees valid data
> for any guest MMIO or UC mappings?

For other devices, the specifics of the MMIO write are passed down to QEMU from KVM, based on the trap symptoms. QEMU doesn't have to read RAM for the details.

> If the architecture does not ensure coherency when one mapping is UC and
> the other is WB, doesn't that imply that QEMU should be able to override UC
> for some mappings that are known to be backed by real memory?

This was one of the ideas discussed (multiple times); KVM could perhaps expose an ioctl() just for this. Ultimately the idea was rejected. I don't remember why.

> Or alternatively, to invalidate the cache in some other way before accessing VGA memory.

This was another idea that ended up being rejected. It would be a privileged operation, for starters. (QEMU runs restricted, so it couldn't directly execute the necessary instructions. There could be other obstacles; I don't remember.)

> (My concern here is that the ramfb approach would not fix the VGA bug,
> and I find it likely that someone might still misconfigure QEMU)

ramfb is not meant to fix the VGA bug; ramfb is meant to replace VGA. The ramfb framebuffer is exposed as normal RAM (not MMIO) to the guest, so the guest is expected to map it with WB attributes. New (and downstream) versions of the "virt" machine type may choose to reject VGA-like devices altogether. (I'm unsure how this would be implemented in practice, however.)

This blog post from Gerd might contain information relevant to the issue at hand:

https://www.kraxel.org/blog/2019/02/ramfb-display-in-qemu/

Support for the vfio ramfb option would be useful too, for vgpu boot display.
Can be enabled today this way:
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'>
  [ ... ]
</hostdev>
<qemu:commandline>
  <qemu:arg value='-set'/>
  <qemu:arg value='device.hostdev0.driver=vfio-pci-nohotplug'/>
  <qemu:arg value='-set'/>
  <qemu:arg value='device.hostdev0.ramfb=on'/>
</qemu:commandline>
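For reference, libvirt only parses `<qemu:commandline>` when the QEMU driver XML namespace is declared on the root `<domain>` element, which the fragment above omits. A minimal sketch of how the pieces could fit together in a full domain definition (the mdev UUID is a placeholder, and the `hostdev0` alias is assumed to refer to this hostdev):

```xml
<!-- Sketch only: the mdev UUID below is a placeholder, and
     device.hostdev0 assumes libvirt assigns the alias hostdev0. -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <devices>
    <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'>
      <source>
        <address uuid='c2177883-f1bb-47f0-914d-32a22e3a8804'/>
      </source>
    </hostdev>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.driver=vfio-pci-nohotplug'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.ramfb=on'/>
  </qemu:commandline>
</domain>
```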
(In reply to Laszlo Ersek from comment #0)
> Libvirt already supports the "virtio-gpu-pci" device for AARCH64 guests
> (as the "virtio" model), but that's good enough only for Linux guests,
> not for Windows guests. Windows requires a directly accessible
> framebuffer, and virtio-gpu-pci doesn't provide one.

So how do we go about testing this on Windows / aarch64?

> The issue also goes away with virtio-gpu-pci, because (a) there is no
> framebuffer to speak of, (b) all display data, exchanged with virtio
> transfers, are mapped as normal DRAM (WB) by both guest and host.
> Unfortunately, Windows absolutely insists on a framebuffer for booting
> (even though a direct access framebuffer is optional, according to the
> UEFI spec). Therefore we can't use virtio-gpu-pci.
>
> (Again, don't confuse virtio-gpu-pci with virtio-vga. The latter is
> virtio-gpu-pci PLUS the traditional VGA stuff. No good for aarch64.)
>
> The issue also goes away with the "ramfb" device model that Gerd wrote
> for QEMU. It is a (comparatively) simple platform device that uses DRAM
> (WB) as the framebuffer, so again, both sides' mappings agree wrt.
> cacheability. The UEFI driver (QemuRamfbDxe, also by Gerd) exposes the
> direct-access framebuffer to the Windows boot loader, so the framebuffer
> requirement is satisfied, while also side-stepping the non-coherence
> issue on ARM64. Ramfb is not a PCI device, but it doesn't need to be.
> UEFI-booted Windows is expected to work just with the UEFI framebuffer
> that it inherits. Graphics mode changes and such are not expected to
> work, of course, but basic display is expected to work.
>
> Please introduce the "ramfb" display device model in the domain XML.
> Thanks!

So, are you suggesting that this ramfb display device will be used as a primary display in these situations? In Gerd's blog post, he suggests that this is possible (via the standalone ramfb device) but not a good idea.
Or are you suggesting that ramfb should be used in conjunction with virtio-gpu-pci somehow? In other words, if we add support in the libvirt domain XML for ramfb, what is the expected configuration we'd like to support? And how should that translate to the qemu command line? Gerd's suggestion in comment #4 seems to be a slightly separate issue.

> So, are you suggesting that this ramfb display device will be used as a
> primary display in these situations? In Gerd's blog post, he suggests that
> this is possible (via the ramfb standalone device) but not a good idea.

It's not very efficient, due to the lack of dirty tracking. So, if you have something else (better), you should use that instead. Problem is, on aarch64 we don't have something else. So, yes, supporting ramfb makes sense.

> Or are you suggesting that ramfb should be used in conjunction with
> virtio-gpu-pci somehow?

That is another use case (you might want to make that a separate bug).

> In other words, if we add support in libvirt domain xml for ramfb, what is
> the expected configuration we'd like to support? and how should that
> translate to the qemu command?

(1) Standalone ramfb:

  <video>
    <model type='ramfb'/>
  </video>

Translates to '-device ramfb'.

(2) vgpu+ramfb:

  <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on' ramfb='on'>

Translates to '-device vfio-pci-nohotplug,ramfb=on,...'.

Gerd replied in my place, better than I could have -- thanks. Clearing needinfo.

(In reply to Gerd Hoffmann from comment #6)

Thanks Gerd, I thought that this was what was requested, but so far in my experiments, I have not been able to successfully use the standalone ramfb device as a primary display device. For example, early in the boot process, it shows a boot menu; but after booting the OS, the display does not update. What am I doing wrong here?
(In reply to Jonathon Jongsma from comment #8)
> Thanks Gerd, I thought that this was what was requested, but so far in my
> experiments, I have not been able to successfully use the standalone ramfb
> device as a primary display device. For example, early in the boot process,
> it shows a boot menu. But after booting the OS, the display does not update.
> What am I doing wrong here?

Linux on x86 with BIOS firmware, I guess? You need vesafb in the guest then (vga=ask on the Linux command line, then pick a VESA mode). VGA text mode (which is used by vgacon) doesn't work because there is no VGA device in the first place.

Alternatively, use something more modern: UEFI. edk2 comes with a ramfb driver and will set up a GOP on the ramfb device. This should work fine with all guests on both x86 and ARM.

So, apparently I neglected to update this bug after implementing the feature. The ramfb display device model made it into upstream libvirt as of version 5.9.0. In addition, the vgpu boot display feature (see comment #4) made it into upstream 5.10.0. rhel-av 8.3 includes version 6.2.0 of libvirt, so this feature should already be implemented.

This feature was implemented in RHEL-AV 8.2.0.

Hi Gerd,
When I tried to add a 'ramfb' video device to the guest XML, it reported the error "unsupported configuration: domain configuration does not support 'video model' value 'ramfb'". Is there any other setting needed for the 'ramfb' video device?
#virsh edit vm1
...
<device>
<video>
<model type='ramfb'/>
</video>
...
</device>
...
The same error is also reported if I add the qemu command line to the XML:
#virsh edit vm1
<domain>
...
<device>
<video>
<model type='ramfb'/>
</video>
...
</device>
<qemu:commandline>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.driver=vfio-pci-nohotplug'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.ramfb=on'/>
</qemu:commandline>
</domain>
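Independent of the reported error (which, per the next comment, comes from a downstream patch), two details in the snippet above are worth noting: libvirt's device container element is `<devices>` (plural, not `<device>`), and `<qemu:commandline>` is only honored when the QEMU driver namespace is declared on the `<domain>` element. A sketch of the corrected shape (the `device.hostdev0.*` arguments are kept as quoted, but they assume a hostdev with alias `hostdev0` exists in the domain):

```xml
<!-- Sketch only: device.hostdev0.* assumes a hostdev aliased hostdev0. -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <devices>
    <video>
      <model type='ramfb'/>
    </video>
    ...
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.driver=vfio-pci-nohotplug'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.ramfb=on'/>
  </qemu:commandline>
</domain>
```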
Hmm, ramfb was patched out downstream (commit 67511676246cce57becbd2dcf5abccf08d9ef737, even though this is not mentioned in the commit message). So I guess we need a qemu-kvm bug to revert that.

Tested with libvirt-6.0.0-24.module+el8.2.1+6997+c666f621.x86_64 and qemu-kvm-4.2.0-25.module+el8.2.1+6985+9fd9d514.x86_64.
Test steps:

1. Start a guest with a standalone 'ramfb' video device:

# virsh start test1
# virsh dumpxml test1 | grep -A5 video
    <video>
      <model type='ramfb' heads='1' primary='yes'/>
      <alias name='video0'/>
    </video>

Check the qemu command line:

# ps aux | grep -i qemu-kvm
.. -device ramfb,id=video0 ..

2. For checking the display in the guest OS, please refer to:
https://bugzilla.redhat.com/show_bug.cgi?id=1841068#c15 - https://bugzilla.redhat.com/show_bug.cgi?id=1841068#c18
Moving the bug to verified according to comments 14 - 22, and tracking 'vgpu+ramfb' in bug 1847791.

Changing this TestOnly BZ to CLOSED CURRENTRELEASE. Please reopen if the issue is not resolved.