Red Hat Bugzilla – Bug 770842
RFE: qemu-kvm: qxl device should support multiple monitors
Last modified: 2014-04-16 10:21:19 EDT
Description of problem:
(note: this could be made into a Fedora feature for F17/F18 and edited there, or into spice-space wiki at the least).
To emulate n monitors in the guest currently requires n PCI QXL devices. This has several problems:
needless waste of memory.
hard to support for Linux guests (Xinerama is required; Xrandr doesn't allow spanning different cards).
Suggested feature: Let a single pci card support several outputs.
qemu-kvm -vga qxl -global qxl-vga.num_displays=2
single card with same number and function of BARs as today.
backward compatible - can still define several cards.
bump the revision to notify the driver. Possibly add a cap mechanism to avoid the coarse version bumping.
a new driver sees the new revision/cap at initialization and knows the device supports multiple monitors.
new parameter in ram_header: num_monitors, set by the above command line parameter.
driver in charge of PCI memory allocation. Today the first RAM BAR is laid out like this:
0000: monitor 0 framebuffer
16MB: mspace start
End-sizeof(ram_header): ram header start
The device will create as many spice display channels as there are monitors. VGA is still just the first monitor, so it keeps a fixed, device-determined location and size (start of the RAM BAR, size in the ROM).
The driver will instead partition the rest of the BAR among the monitors.
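The partitioning described above can be sketched as follows. This is a minimal sketch, not qemu-kvm code: the BAR size, per-monitor framebuffer size, header size, and the partition_bar helper are all illustrative assumptions.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sizes -- the real values come from the ROM BAR. */
#define RAM_BAR_SIZE    (64u * 1024 * 1024)  /* 64 MB RAM BAR */
#define FB_SIZE         (16u * 1024 * 1024)  /* per-monitor framebuffer */
#define RAM_HEADER_SIZE 4096u                /* ram header, rounded up */

/* Partition the first RAM BAR for num_monitors displays:
 *   [0, num_monitors * FB_SIZE)                       framebuffers
 *   [num_monitors * FB_SIZE, size - RAM_HEADER_SIZE)  mspace
 *   [size - RAM_HEADER_SIZE, size)                    ram header
 */
static int partition_bar(uint32_t num_monitors,
                         uint32_t *fb_end, uint32_t *mspace_start,
                         uint32_t *header_start)
{
    uint64_t fb_total = (uint64_t)num_monitors * FB_SIZE;
    if (fb_total + RAM_HEADER_SIZE > RAM_BAR_SIZE)
        return -1;                       /* BAR too small for this layout */
    *fb_end       = (uint32_t)fb_total;
    *mspace_start = (uint32_t)fb_total;  /* mspace follows the framebuffers */
    *header_start = RAM_BAR_SIZE - RAM_HEADER_SIZE;
    return 0;
}
```

With one monitor this reproduces the current layout (mspace at 16MB); with two, mspace moves to 32MB and the header stays pinned to the end of the BAR.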
Primary surface creation: QXLCreateSurface in the ram header will be extended with an additional parameter. This means the device needs to know that the driver is aware of the multiple-monitor support, which again suggests the two-way cap mechanism mentioned above. It can be implemented with an additional IO (QXL_IO_CAPS).
Command and Cursor commands, Rings, interface_get_command:
The existing padding in QXLCommand can be used as the monitor number, or QXLCommandExt can be extended (protected by CAPS_DRIVER_MULTIPLE_MONITORS) with a uint32_t monitor_id. qemu uses this to pass the command to the correct queue.
Internally, interface_get_command knows which queue to read from. We will have multiple rings in the single ram_header, so the header will be updated as well.
spice server doesn't need any update.
The X device driver can use multiple outputs of a single card and provide Xrandr support for addition/removal.
Monitor addition/removal should reflect the client's monitors. The current implementation is suboptimal: it starts the VM with the maximum conceivable number of monitors, and the guest agent enables/disables them based on client commands.
This means the guest always thinks all monitors are connected; only the guest agent knows otherwise. A better implementation would actually notify the guest of monitor addition/removal through the normal mechanisms inherent to CRTCs.
Moving to 6.4 because the work on xf86-video-qxl (bug 787160) was not estimated in time for 6.3.
*** Bug 754103 has been marked as a duplicate of this bug. ***
The original description is now totally incorrect about the implementation. The new device acts very much like the old one, i.e. it has a single command ring, a single cursor ring, and two BARs. The additional monitors are implemented by passing rectangles to the client to let it know which parts of the primary surface correspond to which monitor. The qemu-kvm changes are minor: the addition of a single async IO and bumping the revision number to 4.
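The rectangle scheme can be sketched like this. MonitorRect and required_width are illustrative, loosely modeled on the monitors-config idea; they are not the actual device structures.

```c
#include <assert.h>
#include <stdint.h>

/* Each monitor is a rectangular region of the single primary surface.
 * Field names are illustrative, not the real qxl_dev.h definitions. */
typedef struct MonitorRect {
    uint32_t id;
    uint32_t x, y;          /* top-left corner on the primary surface */
    uint32_t width, height;
} MonitorRect;

/* Two 1024x768 monitors side by side on a 2048x768 primary surface. */
static const MonitorRect heads[] = {
    { 0,    0, 0, 1024, 768 },
    { 1, 1024, 0, 1024, 768 },
};

/* The primary surface must cover the bounding box of all heads. */
static uint32_t required_width(const MonitorRect *h, int n)
{
    uint32_t w = 0;
    for (int i = 0; i < n; i++)
        if (h[i].x + h[i].width > w)
            w = h[i].x + h[i].width;
    return w;
}
```

The device only has to ship this small table to the client (via the new async IO); no extra rings, BARs, or surfaces are needed, which is why the qemu-kvm changes stayed minor.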
I will try the suggested steps this week and report the results as soon as possible.
Verified this bug as follows:
1. Boot a Linux guest with a single QXL PCI device, command line:
#/usr/libexec/qemu-kvm -M rhel6.4.0 -cpu SandyBridge -m 2048 \
 -smp 2,sockets=1,cores=1,threads=1 -enable-kvm -name rhel64 \
 -uuid 990ea161-6b67-47b2-b803-19fb01d30d12 \
 -smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 \
 -k en-us -rtc base=utc,clock=host,driftfix=slew -no-kvm-pit-reinjection \
 -monitor stdio -qmp tcp:0:6666,server,nowait -boot c \
 -drive file=/home/rhel.qcow2,if=none,id=drive-scsi-disk,format=qcow2,cache=none,werror=stop,rerror=stop \
 -device virtio-scsi-pci,id=scsi0,addr=0x5 \
 -device scsi-disk,drive=drive-scsi-disk,bus=scsi0.0,scsi-id=0,lun=0,id=scsi-disk,bootindex=1 \
 -netdev tap,id=hostnet0,downscript=no \
 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:2e:28:1c,bus=pci.0,addr=0x4,bootindex=2 \
 -chardev socket,path=/tmp/isa-serial,server,nowait,id=isa1 \
 -device isa-serial,chardev=isa1,id=isa-serial1 \
 -spice port=5900,disable-ticketing,image-compression=off -vga qxl \
 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x7 \
 -chardev socket,id=channel0,host=127.0.0.1,port=12345,server,nowait \
 -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=channel0,id=channel0,name=com.redhat.rhevm.vdsm \
 -chardev spicevmc,id=charchannel1,name=vdagent \
 -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0
2. Install spice-vdagent inside the guest and start the spice-vdagentd and spice-vdagent services.
3. Connect to the guest in full-screen mode with remote-viewer (the client machine should have two or more monitors attached):
# remote-viewer spice://ip_address:5900 --full-screen=auto-conf
4. The guest desktop fills all monitors.
thank you Alon!
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.