Bug 894350 - [RFE] spice qxl device should support multiple monitors
Status: CLOSED DUPLICATE of bug 787578
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: RFEs
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.3.0
Assigned To: David Blechter
QA Contact: Marian Krcmarik
Whiteboard: spice
Keywords: FutureFeature, Tracking, Triaged
Depends On: 896604 905898 921128
Blocks: 787578 977213 978877 978878 978879 978880 978883 978884 978885 978887 978888 978889 978892 978893 978895 979217 979218 979221 1088390
Reported: 2013-01-11 09:16 EST by Marc-Andre Lureau
Modified: 2016-02-10 15:20 EST
CC: 8 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-06-23 05:07:08 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Spice
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Marc-Andre Lureau 2013-01-11 09:16:13 EST
To get n monitors emulated in the guest currently requires n PCI qxl devices. This has several problems:
 - needless waste of memory
 - hard to support for Linux guests (Xinerama is required, and Xrandr doesn't span different cards)

Solution:
The X device driver can use multiple outputs of a single card and provide Xrandr for monitor addition and removal.
Comment 1 Marc-Andre Lureau 2013-01-11 09:23:28 EST
Components involved: spice-server, qemu-kvm, xorg-qxl, spice-gtk, virt-viewer, spice-vdagent-x
Comment 2 Itamar Heim 2013-01-12 17:06:55 EST
so no change needed in ovirt-engine/vdsm to, say, configure more RAM for this single qxl device?
Comment 3 Marian Krcmarik 2013-01-14 07:07:00 EST
(In reply to comment #2)
> so no change needed in ovirt-engine/vdsm to, say, configure more RAM for
> this single qxl device?

I do believe we should change the logic of creating multimonitor Linux guests. While Windows multimonitor guests still require more qxl devices to be assigned, Linux guests should have only one qxl device (assigning more qxl devices causes multimonitor misbehavior, according to our testing). Moreover, the existing selectbox for choosing the number of monitors loses its meaning for Linux guests, unless we want to assign a different amount of video memory to the single qxl device according to the number of monitors, which I would recommend.

I would ack this RFE once this request is recognized and it's decided what changes will be made.
Comment 4 Alon Levy 2013-01-16 11:31:13 EST
I agree with Marian; it was an oversight in the original bug that we should correct: both libvirt and ovirt need to be updated.

libvirt currently only allows setting the size of the vram bar, and both bars need to be settable to achieve the same performance as with multiple PCI cards (in the sense that lack of memory is bad for performance).
 - libvirt update: add a ram property to the qxl vga device
   amount of work: minor

ovirt needs to set the amount of memory on the single PCI device according to the expected number of monitors. We don't have any benchmarks to guide this decision, so we should be pessimistic and allocate the equivalent of what would have been in n PCI cards: 64*n for the ram bar and 64*n for the vram bar (the other bars, rom and io, are fixed size).
 - (this is just a suggestion on how to implement it; maybe it is better to let ovirt decide on the amount of memory for a reason I'm not aware of) Since vdsm already knows the number of monitors (it has to, in order to instantiate the correct libvirt xml), it can use that same information to set the existing vram_size property and the new ram_size property.
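For illustration, a vdsm-generated video element of the kind described above might look like the following libvirt domain XML fragment. This is a hypothetical example, not actual vdsm output; it is sized for two monitors (128 MB per bar, expressed in KiB), with the ram attribute being the new property proposed above:

```
<!-- hypothetical vdsm-generated video element, sized for two monitors:
     128 MB ram bar and 128 MB vram bar, values in KiB -->
<video>
  <model type='qxl' ram='131072' vram='131072' heads='2'/>
</video>
```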

I've tested that we can set up to 256MB (all we require for the 4 max supported monitors) on both bars, and seabios / win7 64 (the single guest I used for testing) handled it fine. Of course, if I used enough other PCI devices in the VM, I would reach a point where the qxl device would conflict with the other devices. The conflict could also be harder to satisfy when having to allocate [256, 256] instead of [64, 64, 64, 64, 64, 64, 64, 64] (i.e. one PCI card with two 256MB bars instead of 4 cards each with two 64MB bars), but I think not, since qxl already requires by far the largest contiguous allocations.

The tested qemu versions were upstream and rhel, using upstream seabios.
 - rhel seabios doesn't like 128/128 much and doesn't handle 256/256 at all. I have asked Gerd about this offline, but since upstream seabios handles it well with rhel qemu, I think this comes down to backporting the requirements.

Note that we round the allocation up to a power of two (I think it's a PCI requirement, not sure), so in effect the options are:

64, 64 - single monitor
128, 128 - two monitors
256, 256 - three and four monitors
(i.e. there is no 192)

We could try in the future to see if:
1. we can reduce these requirements thanks to more efficient allocation when using a single large bar.
2. we can reduce only one of them and not the other (i.e. 128/256 instead of 256/256).
Comment 6 Itamar Heim 2013-03-07 14:29:04 EST
(In reply to comment #4)
> The tested qemu versions were upstream and rhel, using upstream seabios.
>  - rhel seabios doesn't like 128/128 much and doesn't handle 256/256 at
> all. I have asked Gerd about this offline, but since upstream seabios
> handles it well with rhel qemu, I think this comes down to backporting the
> requirements.

Alon - can you please explain the impact of this comment for rhel?
Comment 7 Alon Levy 2013-03-10 11:19:39 EDT
(In reply to comment #6)
> (In reply to comment #4)
> > The tested qemu versions were upstream and rhel, using upstream seabios.
> >  - rhel seabios doesn't like 128/128 much and doesn't handle 256/256 at
> > all. I have asked Gerd about this offline, but since upstream seabios
> > handles it well with rhel qemu, I think this comes down to backporting
> > the requirements.
> 
> Alon - can you please explain the impact of this comment for rhel?

Basically, we need to fix RHEL's seabios, but there is a limit to how much we can fix easily without backporting a lot from upstream seabios. Here are the two patches Gerd made and I tested to allow 128/128; see the comments:

commit 0be03c098c46f0eeb8a3cbdefc7b3fb42e37fc66
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 17 11:13:07 2013 +0100

    Start allocating pci bars at 0xe000000
    
    FOR TESTING PURPOSES ONLY
    
    FIXME: rhel machine types older than rhel6.1.0 (or rhel6.2.0, not
           fully sure) map the vesa framebuffer at 0xe0000000 with
           stdvga + cirrus, so we need a somewhat more sophisticated
           approach here to not break those.
    
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

diff --git a/src/config.h b/src/config.h
index 01e827f..1abbd89 100644
--- a/src/config.h
+++ b/src/config.h
@@ -167,7 +167,7 @@
 // Support old pci mem assignment behaviour
 #define CONFIG_OLD_PCIMEM_ASSIGNMENT    1
 #if CONFIG_OLD_PCIMEM_ASSIGNMENT
-#define BUILD_PCIMEM_START        0xf0000000
+#define BUILD_PCIMEM_START        0xe0000000
 #define BUILD_PCIMEM_SIZE         (BUILD_PCIMEM_END - BUILD_PCIMEM_START)
 #define BUILD_PCIMEM_END          0xfec00000    /* IOAPIC is mapped at */
 #define BUILD_PCIPREFMEM_START    0

commit 8acd3257b0460fed99e5021dc201a90bd0a5ebb7
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 17 11:13:06 2013 +0100

    set CONFIG_OLD_PCIMEM_ASSIGNMENT
    
    This removes the split for prefmem and non-prefmem memory regions
    in seabios.  The code uses the prefmem region only for bus != 0,
    i.e. devices behind bridges.  RHEL-6 qemu has no PCI bridge support,
    so the prefmem region unused and useless.
    
    Net effect of the patch is that the address space used by seabios for
    PCI bar allocation becomes larger:
    
      old: 0xf0000000 -> 0xf7ffffff (128M)
      new: 0xf0000000 -> 0xfec00000 (236M)
    
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

diff --git a/src/config.h b/src/config.h
index 634c9f4..01e827f 100644
--- a/src/config.h
+++ b/src/config.h
@@ -165,7 +165,7 @@
 #define BUILD_MAX_HIGHMEM         0xe0000000
 
 // Support old pci mem assignment behaviour
-//#define CONFIG_OLD_PCIMEM_ASSIGNMENT    1
+#define CONFIG_OLD_PCIMEM_ASSIGNMENT    1
 #if CONFIG_OLD_PCIMEM_ASSIGNMENT
 #define BUILD_PCIMEM_START        0xf0000000
 #define BUILD_PCIMEM_SIZE         (BUILD_PCIMEM_END - BUILD_PCIMEM_START)
diff --git a/src/pciinit.c b/src/pciinit.c
index f517243..9960709 100644
--- a/src/pciinit.c
+++ b/src/pciinit.c
@@ -82,6 +82,7 @@ static int pci_bios_allocate_region(u16 bdf, int region_num)
                         bdf, region_num);
                 size = 0;
             }
+#ifndef CONFIG_OLD_PCIMEM_ASSIGNMENT
         } else if ((val & PCI_BASE_ADDRESS_MEM_PREFETCH) &&
                  /* keep behaviour on bus = 0 */
                  pci_bdf_to_bus(bdf) != 0 &&
@@ -95,6 +96,7 @@ static int pci_bios_allocate_region(u16 bdf, int region_num)
                         bdf, region_num, BUILD_PCIPREFMEM_SIZE);
                 size = 0;
             }
+#endif
         } else {
             paddr = &pci_bios_mem_addr;
             if (ALIGN(*paddr, size) + size >= BUILD_PCIMEM_END) {
Comment 13 Andrew Cathrow 2013-06-23 05:07:08 EDT

*** This bug has been marked as a duplicate of bug 787578 ***
