Bug 1368406 - Virtual display of virtio-gpu should behave like qxl device when using rhel7.3 guest
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Gerd Hoffmann
QA Contact: Guo, Zhiyi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-19 10:02 UTC by Guo, Zhiyi
Modified: 2017-08-02 03:29 UTC (History)
CC List: 8 users

Fixed In Version: qemu-kvm-rhev-2.9.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 23:34:44 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:2392 0 normal SHIPPED_LIVE Important: qemu-kvm-rhev security, bug fix, and enhancement update 2017-08-01 20:04:36 UTC

Description Guo, Zhiyi 2016-08-19 10:02:04 UTC
Description of problem:
Virtual display of virtio-gpu should behave like qxl device when using rhel7.3 guest

Version-Release number of selected component (if applicable):
qemu-kvm-rhev package:qemu-kvm-rhev-2.6.0-21.el7.x86_64
host & guest kernel:
3.10.0-489.el7.x86_64
virt-viewer:
virt-viewer-2.0-11.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot a RHEL 7.3 guest with the following qemu command line:
/usr/libexec/qemu-kvm -name rhel7.3 -m 2048 \
    -cpu Haswell-noTSX \
    -smp 6,threads=2,cores=1,sockets=3,maxcpus=6 \
    -device virtio-vga \
    -device virtio-gpu \
    -spice port=5901,disable-ticketing \
    -device virtio-serial -chardev spicevmc,id=vdagent,debug=0,name=vdagent \
    -serial unix:/tmp/m,server,nowait \
    -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 \
    -drive file=rhel73.qcow2,if=none,id=drive-scsi-disk0,format=qcow2,cache=none,werror=stop,rerror=stop \
    -device virtio-scsi-pci,id=scsi0,disable-modern=off,disable-legacy=off \
    -device scsi-hd,drive=drive-scsi-disk0,bus=scsi0.0,scsi-id=0,lun=0,id=scsi-disk0,bootindex=1 \
    -monitor stdio \
    -usb -device usb-kbd,id=input0 \
    -netdev tap,id=idinWyYp -device virtio-net-pci,mac=42:ce:a9:d2:4d:d7,id=idlbq7eA,netdev=idinWyYp \
    -qmp tcp:localhost:4444,server,nowait \
    -device ich9-intel-hda -device hda-duplex
2. Connect to the guest display: remote-viewer spice://127.0.0.1:5901
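The device-property dumps quoted later in this bug come from the QEMU monitor. With `-monitor stdio` as in the command line above, the display devices the guest sees can be inspected directly; a minimal sketch (monitor commands typed at the `(qemu)` prompt, not in a shell):

```shell
# At the HMP monitor prompt of the running guest:
(qemu) info qtree    # dumps the device tree, including virtio-vga,
                     # virtio-gpu-pci, and each device's properties
(qemu) info spice    # shows the spice server state and connected channels
```

`info qtree` is where values such as `max_outputs` in the dumps below can be read off for each display device.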

Actual results:
The guest has two displays after remote-viewer connects, and the guest cannot use the virtio-gpu display to show the desktop.

Expected results:
The guest has one display after remote-viewer connects.

Additional info:
Tested against qxl-vga with multiple qxl devices: only one display is present.
A screenshot is attached showing the behavior of virtio-vga with a virtio-gpu device.

Comment 2 Gerd Hoffmann 2016-09-13 07:35:26 UTC
multihead + multiseat is beyond the scope for 7.3, moving to 7.4.

Comment 3 Gerd Hoffmann 2017-01-09 15:05:35 UTC
(In reply to Gerd Hoffmann from comment #2)
> multihead + multiseat is beyond the scope for 7.3, moving to 7.4.

host side: qemu 2.8 needed.
guest side: kernel with drm driver rebase needed (will probably happen late in the devel cycle).

With those multihead should work fine.

Comment 4 Ademar Reis 2017-05-09 12:25:53 UTC
(In reply to Gerd Hoffmann from comment #3)
> (In reply to Gerd Hoffmann from comment #2)
> > multihead + multiseat is beyond the scope for 7.3, moving to 7.4.
> 
> host side: qemu 2.8 needed.
> guest side: kernel with drm driver rebase needed (will probably happen late
> in the devel cycle).
> 
> With those multihead should work fine.

Please retest with qemu-2.9 and a fresh up to date RHEL-7.4 kernel.

Comment 6 Guo, Zhiyi 2017-06-12 08:48:05 UTC
Virtio-vga multihead support has been verified in bug 1368406 (Virtual display of virtio-gpu should behave like qxl device when using rhel7.3 guest). For this bug, with the default options of virtio-gpu the behavior is the same as in the original report, but as of qemu-kvm-rhev 2.9 I can override the default options of virtio-gpu to make it behave like qxl:
-device virtio-gpu,max_outputs=0
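The available properties of a device, including `max_outputs` and its default, can also be listed without booting a guest. A sketch, assuming the qemu-kvm-rhev 2.9 binary at the path used above:

```shell
# List configurable properties of the virtio-gpu device,
# including max_outputs and its default value:
/usr/libexec/qemu-kvm -device virtio-gpu,help

# Boot with the virtio-gpu scanout disabled so that only the
# virtio-vga head remains, matching the qxl default of max_outputs=0
# (other options elided for brevity):
/usr/libexec/qemu-kvm -device virtio-vga -device virtio-gpu,max_outputs=0 ...
```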

The default max_outputs of qxl is 0:
      dev: qxl, id ""
        ram_size = 67108864 (0x4000000)
        vram_size = 67108864 (0x4000000)
        revision = 4 (0x4)
        debug = 0 (0x0)
        guestdebug = 0 (0x0)
        cmdlog = 0 (0x0)
        ram_size_mb = 4294967295 (0xffffffff)
        vram_size_mb = 4294967295 (0xffffffff)
        vram64_size_mb = 4294967295 (0xffffffff)
        vgamem_mb = 16 (0x10)
        surfaces = 1024 (0x400)
        max_outputs = 0 (0x0)
        addr = 03.0
        romfile = ""
        rombar = 1 (0x1)
        multifunction = false
        command_serr_enable = true
        x-pcie-lnksta-dllla = true
        x-pcie-extcap-init = true
        class Display controller, addr 00:03.0, pci id 1b36:0100 (sub 1af4:1100)
        bar 0: mem at 0xffffffffffffffff [0x3fffffe]
        bar 1: mem at 0xffffffffffffffff [0x3fffffe]
        bar 2: mem at 0xffffffffffffffff [0x1ffe]
        bar 3: i/o at 0xffffffffffffffff [0x1e]

The default max_outputs of virtio-gpu is 1 but can be overridden to 0:
     dev: virtio-gpu-pci, id ""
        ioeventfd = false
        vectors = 3 (0x3)
        virtio-pci-bus-master-bug-migration = false
        disable-legacy = "on"
        disable-modern = false
        migrate-extra = true
        modern-pio-notify = false
        x-disable-pcie = false
        page-per-vq = false
        x-ignore-backend-features = false
        ats = false
        x-pcie-deverr-init = true
        x-pcie-lnkctl-init = true
        x-pcie-pm-init = true
        addr = 03.0
        romfile = ""
        rombar = 1 (0x1)
        multifunction = false
        command_serr_enable = true
        x-pcie-lnksta-dllla = true
        x-pcie-extcap-init = true
        class Display controller, addr 00:03.0, pci id 1af4:1050 (sub 1af4:1100)
        bar 1: mem at 0xffffffffffffffff [0xffe]
        bar 4: mem at 0xffffffffffffffff [0x3ffe]
        bus: virtio-bus
          type virtio-pci-bus
          dev: virtio-gpu-device, id ""
            max_outputs = 1 (0x1)
            max_hostmem = 268435456 (256 MiB)
            indirect_desc = true
            event_idx = true
            notify_on_empty = true
            any_layout = true
            iommu_platform = false
            __com.redhat_rhel6_ctrl_guest_workaround = false

So the bug is verified.

Comment 8 errata-xmlrpc 2017-08-01 23:34:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:2392


