Description of problem: To get n monitors emulated in the guest currently requires n PCI QXL devices. This has several problems: needless waste of memory, and it is hard to support for Linux guests (Xinerama is required; XRandR doesn't work across different cards). See planned libvirt support in bug #787569.
When this is implemented in vdsm, we also need to push a change to the engine to allow choosing between "multi-head" and "multi-monitors" (or XRandR vs. Xinerama, or any other way to not confuse the user too much...)
*** Bug 838504 has been marked as a duplicate of this bug. ***
Discussing this with David, the following came up for consideration:
- This feature allows using "Multi Head" instead of "Xinerama"
- This is only supported for RHEL 6.4 guests (on RHEL 6.4 hosts / cluster > 3.2)
- We probably need to preserve backward-compatible behaviour, so the default should be Xinerama on upgrade
- Since the guest should be RHEL 6.4 and our granularity is at major versions, we need to let the user choose the mode they want, probably only for RHEL 6 and Other Linux (at the UI level; via the API we don't validate this)
Andrew - thoughts/comments?
(In reply to Itamar Heim from comment #4)
> - This feature allows using "Multi Head" instead of "Xinerama"

I'm a bit confused that this isn't happening until 3.3, because I was able to get dual monitors working in a RHEL 6.4 guest on RHEV 3.2. Am I missing something here? What does this RFE give us that we don't already have?

> - This is only supported for RHEL 6.4 guests (on RHEL 6.4 hosts / cluster > 3.2)

Ah, so for this to work properly, the cluster compatibility level has to be 3.2 or greater? I was helping a customer that upgraded from 3.1 and he's having an issue getting his display to work properly when he enables the 2nd monitor.
The comments above were for the 3.2 version, but this didn't make 3.2. It could be that the default is enough to cover 2 monitors, but this is needed to configure a larger buffer for more monitors.
Ah, thank you for the clarification! That makes sense. I will update the kbase.
*** Bug 894350 has been marked as a duplicate of this bug. ***
Implementation: the UI will have a checkbox that will only be enabled for cluster 3.3 or higher, guest RHEL 6.4 or higher, and SPICE as the display type. The checkbox text will be "Use single device" and the default will be yes. The engine will have a new Boolean field in VmBase (next to the number of monitors), and in the database as well. The information will be passed to VDSM via the device custom properties (the same way the VRAM is passed for the display device today).
Itamar: why shouldn't the API validate for RHEL 6.4? Is this solution acceptable?
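For the record, a minimal sketch of how such a flag could ride along with the display device's properties; the class, method, and key names below are illustrative assumptions, not the actual engine/VDSM contract:

import java.util.HashMap;
import java.util.Map;

public class SingleQxlSketch {
    // Hypothetical illustration only: the single-PCI flag sits next to the
    // memory sizing in the video device's custom/spec properties, so it reaches
    // VDSM over the existing device channel without a new verb or field.
    public static Map<String, Object> buildVideoDeviceProps(String vramKb, boolean singleQxlPci) {
        Map<String, Object> props = new HashMap<>();
        props.put("vram", vramKb);               // memory size, already passed today
        props.put("singleQxlPci", singleQxlPci); // hypothetical new key
        return props;
    }
}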
Guest OSs are at RHEL major-level granularity, not minor level, so it's for "RHEL 6", not 6.4. Note it should also be available for the Other Linux OS (also note any such definitions are done today via the new OsInfo external config; sync with Roy).
Restricting this to the UI level only is to allow users to do via the API things we don't think make sense, i.e. to nanny them less (in general, we are less protective via the API in some areas, assuming API users are more familiar with what they want to do).
(In reply to Itamar Heim from comment #10)
> Guest OSs are at RHEL major-level granularity, not minor level, so it's for
> "RHEL 6", not 6.4. Note it should also be available for the Other Linux OS
> (also note any such definitions are done today via the new OsInfo external
> config; sync with Roy).
> Restricting this to the UI level only is to allow users to do via the API
> things we don't think make sense, i.e. to nanny them less (in general, we
> are less protective via the API in some areas, assuming API users are more
> familiar with what they want to do).

ACK.
Hi Shahar, what particular VRAM values will be used? n x 32MB when checked and 64 when not? Thanks.
(In reply to David Jaša from comment #12)
> Hi Shahar, what particular VRAM values will be used? n x 32MB when checked
> and 64 when not? Thanks.

For now it will be the same as we do for regular multiple monitors with a single head:

String mem = (numOfMonitors > 2 ? VmDeviceCommonUtils.LOW_VIDEO_MEM : VmDeviceCommonUtils.HIGH_VIDEO_MEM);
I would recommend 64 * <number of monitors>, rounded up to a power of two, so:

#M | VRAM
---|-----
 1 |   64
 2 |  128
 3 |  256
 4 |  256
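For clarity, a minimal sketch of that sizing rule; the class and method names and the MB units are mine, not engine code:

public class QxlRamSizing {
    // 64MB per monitor, rounded up to the next power of two
    // (1 -> 64, 2 -> 128, 3 -> 256, 4 -> 256).
    public static int recommendedMb(int numOfMonitors) {
        int mb = 64 * numOfMonitors;
        int floor = Integer.highestOneBit(mb);
        return (floor == mb) ? mb : floor << 1;
    }
}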
(In reply to Alon Levy from comment #14) I guess it's good to apply this for multiple heads as well, instead of the current 32MB for a single monitor and 64MB for 2, 3, or 4 monitors.
Shahar, after reading bug 896604 and the explanations of what the respective *ram attributes tune (ML links below), it's clear that the "ram" attribute is the one to be set:
https://www.redhat.com/archives/libvir-list/2013-January/msg01320.html
https://www.redhat.com/archives/libvir-list/2013-January/msg01330.html

The "vram" attribute should be kept at the default value or lower because:
1) the memory for it is always allocated
2) the memory is unlikely to be used for the time being (as long as off-screen surfaces are disabled by default)

I'll need to figure out good values for vram memory, as a setting that is too low will harm users of guests with surfaces enabled and a setting that is too high will waste memory for everybody else...

Michal, the total host RAM taken by qxl devices should be number_of_qxl_devices * (ram + vram). The memory controlled by the "ram" attribute won't exactly be a KSM candidate; the memory controlled by "vram" should be, as long as surfaces are indeed disabled in the guest and qemu doesn't prevent it in some other way...
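To put that formula in concrete terms (assuming, purely for illustration, the pre-single-PCI layout of one qxl device per monitor with 64MB ram and 64MB vram each): four monitors would take 4 * (64MB + 64MB) = 512MB of host RAM, versus roughly 256MB ram + 32MB vram = 288MB for a single device sized per comment #14.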
Alon's recommendations are for the "ram" attribute, not "vram".
Just to reiterate David Jaša's last comment: I meant VRAM in the libvirt sense, which (unfortunately) translates to the qxl device attribute called 'ram'.
OK - sorry for the confusion I've been causing, I was completely wrong, ram and vram are identical in meaning in libvirt and in qemu's command line. I'll just disappear off this thread for now.
(In reply to Alon Levy from comment #19) Alon, Thanks for the help, David - I need to check it more deeply and thanks for the pointers!
VDSM patch: http://gerrit.ovirt.org/#/c/16399/
Engine patch: http://gerrit.ovirt.org/#/c/16803/
(In reply to Shahar Havivi from comment #21)
> VDSM patch: http://gerrit.ovirt.org/#/c/16399/
> Engine patch: http://gerrit.ovirt.org/#/c/16803/

Sorry, the VDSM patch is here: http://gerrit.ovirt.org/#/c/17067/
Engine git commit: http://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=commit;h=0c47b30468b855e5c52b120f476d97cc754852ee
VDSM git commit: http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=commit;h=9b1f03172587586b294317249d563f87f6a546ad
Anyone know the maximum resolution supported in RHEV 3.2 with the current 64 meg video device? This would be useful to document in the kbase and/or in documentation.
I cannot verify that the agreed logic (as far as I understand it) happens on IS10 with a 6.4 host (vdsm-4.12.0-61.git8178ec2, libvirt-0.10.2-18.el6_4.9). Even though I choose more monitors in the select box of the Console tab for a RHEL VM, I still see the same amount of ram assigned to the VM (64MB). As far as I understand, the essential recommendation is based on the table in comment #14.
Btw. I'm not sure how much this is supported, but once you play with the console dialog, switch between OS types (for example from Windows to RHEL), and choose various combinations of the number of monitors, it behaves weirdly, i.e.:
- Create a VM and choose the Windows OS type and 2 monitors, start and stop the VM, change the OS type to RHEL and choose 4 monitors and Single PCI, start and stop the VM, change the OS type to Windows and one monitor without the Single PCI device checkbox ticked -> the VM is started with the cirrus driver set and 8MB of memory.
- Create a VM and choose the Windows OS type and 2 monitors, start and stop the VM, change the OS type to RHEL and choose 4 monitors and Single PCI, start the VM - sometimes I observed more qxl devices being set even though the Single PCI checkbox was ticked.
Comment #14 is not for VRAM but for RAM, as Alon wrote in comments #18 and #19. Currently we are using 32M for a single monitor and 64M for more than one monitor.
Patch sent to fix the video device update issues: http://gerrit.ovirt.org/#/c/18786/
merged at: http://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=commit;h=6074557f27da95c587450521150c7f0e80ec2cce
There is a misunderstanding. Current situation (is14):

                                   ram    vram
1 Monitor,  Single PCI checked    64MB    32MB
2 Monitors, Single PCI checked    64MB    64MB
4 Monitors, Single PCI checked    64MB   132MB

Afaik the desired situation is:

                                   ram    vram
1 Monitor,  Single PCI checked    64MB    32MB
2 Monitors, Single PCI checked   132MB    32MB
4 Monitors, Single PCI checked   256MB    32MB

What on the qemu level (and I believe on the libvirt level) is called "ram" holds the frame buffer, the command and release rings, and device bitmaps; "vram" holds only off-screen surfaces, which are disabled by default right now. So it's important to increase the "ram" size as more monitors are requested and to keep vram low.
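In code form, a minimal sketch of the desired behaviour as I read the table above (the class and method names are mine; the values are taken directly from the table):

public class DesiredQxlSizing {
    // "vram" stays low because off-screen surfaces are disabled by default.
    public static int desiredVramMb() {
        return 32;
    }

    // Only "ram" (frame buffer, rings, bitmaps) grows with the number of monitors.
    public static int desiredRamMb(int numOfMonitors) {
        if (numOfMonitors <= 1) {
            return 64;
        } else if (numOfMonitors == 2) {
            return 132;
        } else {
            return 256; // value given for 4 monitors in the table above
        }
    }
}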
(In reply to Marian Krcmarik from comment #31)
What is the case for the VNC display (regarding the ram)? I think it's auto-configured by libvirt (because vdsm is not setting this value).
(In reply to Shahar Havivi from comment #32)
> What is the case for the VNC display (regarding the ram)? I think it's
> auto-configured by libvirt (because vdsm is not setting this value).

It's ok, afaik we do not support multiple monitors with VNC (is it even possible?), so the default qemu value for the framebuffer size is okay (ram and vram are terms specific to the qxl device).
Patches sent:
engine: http://gerrit.ovirt.org/19360
vdsm: http://gerrit.ovirt.org/19361
pending backport of engine patch to ovirt-engine-3.3
Engine patch backported to ovirt-engine-3.3 as http://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=commit;h=2e8b3e8fdc99b48feaa7de64d0722bfdfcabac9d
The vdsm patch does not need a backport -> moving to MODIFIED.
Verified on is18.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2014-0038.html