Bug 770842 - RFE: qemu-kvm: qxl device should support multiple monitors
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Alon Levy
Virtualization Bugs
Keywords: FutureFeature
Duplicates: 754103 (view as bug list)
Depends On:
Blocks: 787160 787569 787578 842298 842305 842310 842411 977213 978877 978878 978879 978880 978883 978884 978885 978887 978888 978889 978892 978893 978895 979217 979218 979221 1088390
Reported: 2011-12-29 09:28 EST by Alon Levy
Modified: 2014-04-16 10:21 EDT (History)
CC: 13 users

See Also:
Fixed In Version: qemu-kvm-
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Clones: 842298 842305 842310 (view as bug list)
Last Closed: 2013-02-21 02:31:44 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Alon Levy 2011-12-29 09:28:31 EST
Description of problem:
(Note: this could be made into a Fedora feature for F17/F18 and edited there, or at least put on the spice-space wiki.)

Getting n monitors emulated in the guest currently requires n qxl PCI devices. This has several problems:
 - needless waste of memory.
 - hard to support for Linux guests (Xinerama required; Xrandr doesn't allow different cards).

Suggested feature: Let a single pci card support several outputs.

Suggested usage:
qemu-kvm -vga qxl -global qxl-vga.num_displays=2

Suggested Implementation:
A single card with the same number and function of BARs as today.
Backward compatible: several cards can still be defined.
Bump the revision to notify the driver. Possibly add a cap mechanism to avoid coarse version bumping.
New driver initialization sees the new revision/cap and knows the device supports multiple monitors.
New parameter in ram_header: num_monitors, set by the above command line parameter.
The driver is in charge of PCI memory allocation. Today the first RAM BAR looks like this:
0000: monitor 0 framebuffer
16MB: mspace start
End-sizeof(ram_header): ram header start

The device will create as many spice display channels as there are monitors. VGA is still just the first monitor, so it keeps a fixed, device-determined location and size (start of the RAM BAR, size in ROM).

The driver will instead partition the RAM BAR itself.
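The proposed driver-side partitioning can be sketched roughly as follows. This is an illustrative sketch only; the function and parameter names are hypothetical, not the real qxl headers. Monitor 0 keeps the fixed framebuffer at offset 0, extra framebuffers follow back to back, the mspace heap begins after them, and the ram header stays at the end of the BAR as today:

```c
#include <stdint.h>

/* Hypothetical sketch: offsets into the first RAM BAR when the driver
 * owns the allocation. fb_size is the per-monitor framebuffer size. */
static uint64_t monitor_fb_offset(uint64_t fb_size, uint32_t monitor_id)
{
    /* Equal-size framebuffers laid out back to back from offset 0;
     * monitor 0 stays at the fixed VGA location. */
    return (uint64_t)monitor_id * fb_size;
}

static uint64_t mspace_start(uint64_t fb_size, uint32_t num_monitors)
{
    /* The allocator heap begins right after the last framebuffer;
     * the ram header remains at End - sizeof(ram_header). */
    return (uint64_t)num_monitors * fb_size;
}
```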

Primary surface creation: the QXLCreateSurface struct in RAM will be extended with an additional parameter. This means the device needs to know that the driver is aware of multiple monitor support, which again suggests the cap mechanism proposed above, made two-way. It can be implemented with an additional IO (QXL_IO_CAPS).
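The two-way cap handshake described above could look roughly like this. Note that QXL_IO_CAPS is only the suggested name from this report, not an existing IO port, and the cap bit name is illustrative:

```c
#include <stdint.h>

/* Hypothetical capability bits; CAP_MULTIPLE_MONITORS is illustrative. */
enum { CAP_MULTIPLE_MONITORS = 1u << 0 };

/* Two-way negotiation: the device exposes its caps (e.g. via ROM), the
 * driver writes back the subset it understands (e.g. via QXL_IO_CAPS);
 * only mutually understood caps take effect. */
static uint32_t negotiate_caps(uint32_t device_caps, uint32_t driver_caps)
{
    return device_caps & driver_caps;
}
```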

Command and cursor commands, rings, interface_get_command:
The existing padding in QXLCommand can be used as the monitor number, or QXLCommandExt can be extended (protected by CAPS_DRIVER_MULTIPLE_MONITORS) to add a uint32_t monitor_id. qemu uses this to pass the command to the correct queue.
Internally, interface_get_command knows which queue to read from; we will have multiple rings in the single ram_header, so the header will be updated as well.
The spice server doesn't need any update.
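The command routing idea above can be sketched as follows. The struct is a simplified stand-in, not the real QXLCommand/QXLCommandExt layout, and route_command is a hypothetical helper for the qemu side:

```c
#include <stdint.h>

/* Simplified stand-in for a command descriptor; in the real headers the
 * monitor_id would reuse existing padding or extend QXLCommandExt, and
 * would only be valid when CAPS_DRIVER_MULTIPLE_MONITORS was negotiated. */
typedef struct {
    uint64_t data;       /* guest physical address of the command */
    uint32_t type;       /* draw, cursor, ... */
    uint32_t monitor_id; /* which monitor's queue this belongs to */
} QXLCommandSketch;

/* qemu side: pick the display channel for a fetched command, falling
 * back to monitor 0 for out-of-range ids. */
static uint32_t route_command(const QXLCommandSketch *cmd,
                              uint32_t num_monitors)
{
    return cmd->monitor_id < num_monitors ? cmd->monitor_id : 0;
}
```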

Expected benefits:
The X device driver can use multiple outputs of a single card and provide Xrandr for addition/removal.

Additional considerations:
Monitor addition/removal should reflect the client's configuration. The current implementation is suboptimal: it starts the VM with the maximum conceivable number of monitors, and the guest agent enables/disables them based on client commands.
This means the guest always thinks all monitors are connected; only the guest agent knows for sure. A better implementation would actually notify the guest of monitor addition/removal through the normal mechanisms inherent to CRTCs.
Comment 2 Alon Levy 2012-02-03 06:37:47 EST
Moving to 6.4 because the work on xf86-video-qxl (bug 787160) was not estimated in time for 6.3.
Comment 3 Alon Levy 2012-02-23 09:53:42 EST
*** Bug 754103 has been marked as a duplicate of this bug. ***
Comment 5 Alon Levy 2012-07-23 08:12:39 EDT
The original description is now totally incorrect about the implementation. The new device acts very much like the old one, i.e. it has a single command ring, a single cursor ring, and two BARs. The additional monitors are implemented by passing rectangles to the client to let it know which parts of the primary surface correspond to which monitor. The qemu-kvm changes are minor: the addition of a single async IO and bumping the revision number to 4.
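The rectangle-based approach can be sketched like this. The struct and layout helper below are simplified illustrations of the monitors-config idea (one primary surface, per-monitor rectangles), not the actual spice protocol definitions:

```c
#include <stdint.h>

/* Simplified per-monitor rectangle on the single primary surface;
 * the spice protocol's real structures carry more fields. */
typedef struct {
    uint32_t x, y, width, height;
} HeadRect;

/* Hypothetical side-by-side layout: monitor i gets an equally wide
 * vertical strip of the primary surface. */
static HeadRect head_rect(uint32_t i, uint32_t total_width,
                          uint32_t height, uint32_t count)
{
    HeadRect r = { i * (total_width / count), 0,
                   total_width / count, height };
    return r;
}
```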
Comment 14 mazhang 2012-11-19 01:16:06 EST
Hi Alon,

I will try your suggested steps this week and give you the result as soon as possible.
Comment 19 mazhang 2012-11-20 23:01:20 EST
verified this bug on:
kernel 2.6.32-338.el6.x86_64

1. Boot up a Linux guest with a single qxl PCI device, command line:
#/usr/libexec/qemu-kvm -M rhel6.4.0 -cpu SandyBridge -m 2048 \
 -smp 2,sockets=1,cores=1,threads=1 -enable-kvm -name rhel64 \
 -uuid 990ea161-6b67-47b2-b803-19fb01d30d12 \
 -smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 \
 -k en-us -rtc base=utc,clock=host,driftfix=slew -no-kvm-pit-reinjection \
 -monitor stdio -qmp tcp:0:6666,server,nowait -boot c \
 -drive file=/home/rhel.qcow2,if=none,id=drive-scsi-disk,format=qcow2,cache=none,werror=stop,rerror=stop \
 -device virtio-scsi-pci,id=scsi0,addr=0x5 \
 -device scsi-disk,drive=drive-scsi-disk,bus=scsi0.0,scsi-id=0,lun=0,id=scsi-disk,bootindex=1 \
 -netdev tap,id=hostnet0,downscript=no \
 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:2e:28:1c,bus=pci.0,addr=0x4,bootindex=2 \
 -chardev socket,path=/tmp/isa-serial,server,nowait,id=isa1 \
 -device isa-serial,chardev=isa1,id=isa-serial1 \
 -spice port=5900,disable-ticketing,image-compression=off -vga qxl \
 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x7 \
 -chardev socket,id=channel0,host=,port=12345,server,nowait \
 -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=channel0,id=channel0,name=com.redhat.rhevm.vdsm \
 -chardev spicevmc,id=charchannel1,name=vdagent \
 -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0

2. Install spice-vdagent inside the guest and run the spice-vdagentd service.

3. Connect to the guest in full-screen mode with remote-viewer (the client machine should have two or more monitors attached):
# remote-viewer spice://ip_address:5900 --full-screen=auto-conf

4. The guest desktop fills all monitors.

thank you Alon!
Comment 21 errata-xmlrpc 2013-02-21 02:31:44 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

