Bug 1792515 - kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:37:crtc-0] flip_done timed out
Summary: kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR...
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kernel
Version: 7.7
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: X/OpenGL Maintenance List
QA Contact: Desktop QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-01-17 19:21 UTC by Joe Wright
Modified: 2024-03-25 15:38 UTC
CC: 47 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-27 19:24:01 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 4490391 0 None None None 2021-04-02 03:04:33 UTC

Internal Links: 1884401

Description Joe Wright 2020-01-17 19:21:55 UTC
Description of problem:
- DRM errors at boot

Jan 17 06:17:00 localhost kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:37:crtc-0] flip_done timed out
Jan 17 06:17:10 localhost kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:33:plane-0] flip_done timed out

Version-Release number of selected component (if applicable):
- kernel 3.10.0-1062.9.1.el7.x86_64

How reproducible:
- appears randomly at boot

Steps to Reproduce:
1. Boot the system

Actual results:


Expected results:


Additional info:
[root@localhost~]# uname -r 
3.10.0-1062.9.1.el7.x86_64
[root@localhost ~]# dmidecode -t 1
# dmidecode 3.21, 27 bytes
System Information
        Manufacturer: VMware, Inc.
        Product Name: VMware Virtual Platform
        Version: None
        Serial Number: VMware-42 37 33 8e c5 82 b6 a0-e8 98 e4 c8 f6 8f 84 f3
        UUID: 4237338e-c582-b6a0-e898-e4c8f68f84f3
        Wake-up Type: Power Switch
        SKU Number: Not Specified
        Family: Not Specified

[root@localhost ~]# lspci | grep SVGA
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
[root@localhost ~]# grep drm_atomic_helper_wait_for_dependencies /var/log/messages
Jan 17 06:17:00 localhost kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:37:crtc-0] flip_done timed out
Jan 17 06:17:10 localhost kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:33:plane-0] flip_done timed out
Jan 17 09:32:20 localhost kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:37:crtc-0] flip_done timed out
Jan 17 09:32:30 localhost kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:33:plane-0] flip_done timed out
[root@localhost ~]#


[root@localhost tmp]# grep drm /var/log/messages
Jan 17 06:16:35 localhost kernel: [drm] DMA map mode: Keeping DMA mappings.
Jan 17 06:16:35 localhost kernel: [drm] Capabilities:
Jan 17 06:16:35 localhost kernel: [drm]   Rect copy.
Jan 17 06:16:35 localhost kernel: [drm]   Cursor.
Jan 17 06:16:35 localhost kernel: [drm]   Cursor bypass.
Jan 17 06:16:35 localhost kernel: [drm]   Cursor bypass 2.
Jan 17 06:16:35 localhost kernel: [drm]   8bit emulation.
Jan 17 06:16:35 localhost kernel: [drm]   Alpha cursor.
Jan 17 06:16:35 localhost kernel: [drm]   Extended Fifo.
Jan 17 06:16:35 localhost kernel: [drm]   Multimon.
Jan 17 06:16:35 localhost kernel: [drm]   Pitchlock.
Jan 17 06:16:35 localhost kernel: [drm]   Irq mask.
Jan 17 06:16:35 localhost kernel: [drm]   Display Topology.
Jan 17 06:16:35 localhost kernel: [drm]   GMR.
Jan 17 06:16:35 localhost kernel: [drm]   Traces.
Jan 17 06:16:35 localhost kernel: [drm]   GMR2.
Jan 17 06:16:35 localhost kernel: [drm]   Screen Object 2.
Jan 17 06:16:35 localhost kernel: [drm]   Command Buffers.
Jan 17 06:16:35 localhost kernel: [drm]   Command Buffers 2.
Jan 17 06:16:35 localhost kernel: [drm]   Guest Backed Resources.
Jan 17 06:16:35 localhost kernel: [drm] Max GMR ids is 64
Jan 17 06:16:35 localhost kernel: [drm] Max number of GMR pages is 65536
Jan 17 06:16:35 localhost kernel: [drm] Max dedicated hypervisor surface memory is 0 kiB
Jan 17 06:16:35 localhost kernel: [drm] Maximum display memory size is 8192 kiB
Jan 17 06:16:35 localhost kernel: [drm] VRAM at 0xe8000000 size is 8192 kiB
Jan 17 06:16:35 localhost kernel: [drm] MMIO at 0xfe000000 size is 256 kiB
Jan 17 06:16:35 localhost kernel: [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
Jan 17 06:16:35 localhost kernel: [drm] No driver support for vblank timestamp query.
Jan 17 06:16:35 localhost kernel: [drm] Screen Target Display device initialized
Jan 17 06:16:35 localhost kernel: [drm] width 1280
Jan 17 06:16:35 localhost kernel: [drm] height 768
Jan 17 06:16:35 localhost kernel: [drm] bpp 32
Jan 17 06:16:35 localhost kernel: [drm] Fifo max 0x00040000 min 0x00001000 cap 0x0000077f
Jan 17 06:16:35 localhost kernel: [drm] Using command buffers with DMA pool.
Jan 17 06:16:35 localhost kernel: [drm] DX: no.
Jan 17 06:16:35 localhost kernel: [drm] Atomic: yes.
Jan 17 06:16:35 localhost kernel: [drm] SM4_1: no.
Jan 17 06:16:35 localhost kernel: fbcon: svgadrmfb (fb0) is primary device
Jan 17 06:16:35 localhost kernel: [drm] Initialized vmwgfx 2.15.0 20180704 for 0000:00:0f.0 on minor 0
Jan 17 06:17:00 localhost kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:37:crtc-0] flip_done timed out
Jan 17 06:17:10 localhost kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:33:plane-0] flip_done timed out
Jan 17 09:32:20 localhost kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:37:crtc-0] flip_done timed out
Jan 17 09:32:30 localhost kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:33:plane-0] flip_done timed out
Jan 17 03:58:29 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-1062.9.1.el7.x86_64 root=/dev/mapper/system-root ro crashkernel=256M selinux=0 nodmraid rd.lvm.lv=system/root rd.lvm.lv=system/swap net.ifnames=0 biosdevname=0 drm.debug=0x1bf
Jan 17 03:58:29 localhost kernel: Kernel command line: BOOT_IMAGE=/vmlinuz-3.10.0-1062.9.1.el7.x86_64 root=/dev/mapper/system-root ro crashkernel=256M selinux=0 nodmraid rd.lvm.lv=system/root rd.lvm.lv=system/swap net.ifnames=0 biosdevname=0 drm.debug=0x1bf
Jan 17 09:58:32 localhost kernel: [drm] DMA map mode: Keeping DMA mappings.
Jan 17 09:58:32 localhost kernel: [drm] Capabilities:
Jan 17 09:58:32 localhost kernel: [drm]   Rect copy.
Jan 17 09:58:32 localhost kernel: [drm]   Cursor.
Jan 17 09:58:32 localhost kernel: [drm]   Cursor bypass.
Jan 17 09:58:32 localhost kernel: [drm]   Cursor bypass 2.
Jan 17 09:58:32 localhost kernel: [drm]   8bit emulation.
Jan 17 09:58:32 localhost kernel: [drm]   Alpha cursor.
Jan 17 09:58:32 localhost kernel: [drm]   Extended Fifo.
Jan 17 09:58:32 localhost kernel: [drm]   Multimon.
Jan 17 09:58:32 localhost kernel: [drm]   Pitchlock.
Jan 17 09:58:32 localhost kernel: [drm]   Irq mask.
Jan 17 09:58:32 localhost kernel: [drm]   Display Topology.
Jan 17 09:58:32 localhost kernel: [drm]   GMR.
Jan 17 09:58:32 localhost kernel: [drm]   Traces.
Jan 17 09:58:32 localhost kernel: [drm]   GMR2.
Jan 17 09:58:32 localhost kernel: [drm]   Screen Object 2.
Jan 17 09:58:32 localhost kernel: [drm]   Command Buffers.
Jan 17 09:58:32 localhost kernel: [drm]   Command Buffers 2.
Jan 17 09:58:32 localhost kernel: [drm]   Guest Backed Resources.
Jan 17 09:58:32 localhost kernel: [drm] Max GMR ids is 64
Jan 17 09:58:32 localhost kernel: [drm] Max number of GMR pages is 65536
Jan 17 09:58:32 localhost kernel: [drm] Max dedicated hypervisor surface memory is 0 kiB
Jan 17 09:58:32 localhost kernel: [drm] Maximum display memory size is 8192 kiB
Jan 17 09:58:32 localhost kernel: [drm] VRAM at 0xe8000000 size is 8192 kiB
Jan 17 09:58:32 localhost kernel: [drm] MMIO at 0xfe000000 size is 256 kiB
Jan 17 09:58:32 localhost kernel: [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
Jan 17 09:58:32 localhost kernel: [drm] No driver support for vblank timestamp query.
Jan 17 09:58:32 localhost kernel: [drm] Screen Target Display device initialized
Jan 17 09:58:32 localhost kernel: [drm] width 1280
Jan 17 09:58:32 localhost kernel: [drm] height 768
Jan 17 09:58:32 localhost kernel: [drm] bpp 32
Jan 17 09:58:32 localhost kernel: [drm] Fifo max 0x00040000 min 0x00001000 cap 0x0000077f
Jan 17 09:58:32 localhost kernel: [drm] Using command buffers with DMA pool.
Jan 17 09:58:32 localhost kernel: [drm] DX: no.
Jan 17 09:58:32 localhost kernel: [drm] Atomic: yes.
Jan 17 09:58:32 localhost kernel: [drm] SM4_1: no.
Jan 17 09:58:32 localhost kernel: fbcon: svgadrmfb (fb0) is primary device
Jan 17 09:58:32 localhost kernel: [drm] Initialized vmwgfx 2.15.0 20180704 for 0000:00:0f.0 on minor 0
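When triaging reports like the one above, it can help to know how often the timeouts fire and against which DRM objects. A small sketch (the helper name is mine; it just builds on the same `grep` of `/var/log/messages` shown above):

```shell
# Summarize flip_done timeouts per CRTC/PLANE object from syslog-format input.
flip_done_summary() {
    grep 'flip_done timed out' \
      | grep -o '\[\(CRTC\|PLANE\):[0-9]*:[^]]*\]' \
      | sort | uniq -c
}
# usage: flip_done_summary < /var/log/messages
```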

Comment 9 Steve Barcomb 2020-06-03 12:18:05 UTC
Hey Joe,
Can you validate whether the customer is seeing impact from this, or whether the issue is just that we are logging the error? Does the customer have any RHEL 8 systems that are seeing this error as well?

-Steve

Comment 12 Steve Barcomb 2020-06-15 20:18:40 UTC
When Red Hat shipped 7.7 on Aug 6, 2019, Red Hat Enterprise Linux 7 entered the Maintenance Support 1 Phase.

    https://access.redhat.com/support/policy/updates/errata#Maintenance_Support_1_Phase

That means only "Critical and Important Security errata advisories (RHSAs) and Urgent Priority Bug Fix errata advisories (RHBAs) may be released". This BZ does not appear to meet the Maintenance Support 1 Phase criteria, so it is being closed WONTFIX. If this is critical for your environment, please open a case in the Red Hat Customer Portal, https://access.redhat.com, provide a thorough business justification, and ask that the BZ be re-opened for consideration in the next minor release.

Comment 24 Divya 2020-08-23 09:46:23 UTC
https://bugs.freedesktop.org/show_bug.cgi?id=103713 seems relevant to this issue, and there is already an upstream fix at https://lkml.org/lkml/2019/5/15/810. Can anyone check, confirm, and take the required action here?

Comment 33 Paul B. Henson 2020-11-02 19:36:42 UTC
I'm seeing this on an RHEL 8 box:

Linux ldap-dev-vmc-02 4.18.0-193.19.1.el8_2.x86_64 #1 SMP Mon Sep 14 14:37:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Nov  1 20:30:28 ldap-dev-vmc-02 kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] flip_done timed out
Nov  1 20:30:38 ldap-dev-vmc-02 kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] flip_done timed out
Nov  2 08:14:32 ldap-dev-vmc-02 kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] flip_done timed out
Nov  2 08:14:42 ldap-dev-vmc-02 kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] flip_done timed out

Comment 36 Jeremiah Buckley 2020-11-17 20:25:47 UTC
Hi, this is a repeat of what is posted on 1884401 (the RHEL 8 version of this bug), but IHAC who is also interested in these 3 questions, so please ignore this if you already noticed them on 1884401. I don't mean to pester; I just don't know how information flows on these tickets, so I want to be sure it gets to the right people:

1. Also, what is the cause of the bug?

2. What is the impact of the bug?
 
3. Is there any workaround for the bug?


Thanks,

Comment 37 Ryan Stasel 2020-11-18 16:27:31 UTC
I cannot see bug 1884401. 

I see this only on about half of my RHEL 8 installs. It seems to occur after boot (I'll go to log in to the console and see the screen full of these messages). dmesg also fills with them.

Comment 38 Dave Airlie 2020-11-23 00:16:18 UTC
(In reply to Jeremiah Buckley from comment #36)
> Hi, this is a repeat of what is posted on 1884401 (RHEL 8 version of this
> bug) but, IHAC who is also interested in these 3 questions, please ignore if
> you already noticed these questions on 1884401. Don't mean to pester, I just
> don't know how info flows on these tickets, so want to be sure it gets to
> the right people:
> 
> 1. Also, what is the cause of the bug?

The cause is likely the hypervisor being under load and not scheduling some guest work in a fast enough time. This could be because the guest is generating a lot of dmesg traffic (a guess), or a lot of X.org/desktop rendering, or because the host is overloaded with other VMs.

> 
> 2. What is the impact of the bug?

In theory none, we miss a flip, another will happen later.

>  
> 3. Is there any workaround for the bug?

There isn't really a bug in the guest, it just reports some info, maybe we should just drop the severity of this to a warning so people stop flagging it as the canary for the hypervisor being overloaded.

Dave.

Comment 39 Ryan Stasel 2020-11-23 00:45:10 UTC
(In reply to Dave Airlie from comment #38)

> > 1. Also, what is the cause of the bug?
> 
> The cause is likely the hypervisor being under load and not scheduling some
> guest work in a fast enough time. This could be because the guest is
> generating a lot of dmesg traffic (a guess), or a lot of X.org/desktop
> rendering, or because the host is overloaded with other VMs.

I see this in headless non-X running rhel8 installs. I don't see a ton of rapid dmesg traffic (just a lot when I look after months of uptime and dmesg is full of nothing but these errors). Our vmware infrastructure isn't overloaded. I suppose it could be the hypervisor being slow, but I see no evidence of that in any of our monitoring. The hosts are not heavily loaded, nor is the storage. 


> There isn't really a bug in the guest, it just reports some info, maybe we
> should just drop the severity of this to a warning so people stop flagging
> it as the canary for the hypervisor being overloaded.

I have only seen this issue with rhel8, I have run rhel6.x and 7.x in identical environments and not seen this issue. I mainly care because it fills up dmesg and console with messages and makes it difficult to see other important messages. 

Thanks.
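Since the practical complaint here is the console and dmesg noise, a stopgap (my suggestion, not a Red Hat or VMware recommendation) is to keep err-level kernel messages off the console and filter the known-benign lines when reviewing logs; `dmesg -n` is a standard kernel facility, and the filter helper below (name is mine) is plain grep:

```shell
# Keep KERN_ERR kernel messages off the console; they are still recorded
# in the ring buffer and journald. Run as root:
#   dmesg -n 3        # console prints only crit/alert/emerg
# When reviewing logs, drop the flip_done noise:
drop_flip_noise() {
    grep -v 'flip_done timed out'
}
# usage: dmesg | drop_flip_noise
```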

Comment 41 Jeremiah Buckley 2020-11-24 17:57:28 UTC
Dave, thanks for the detail. Can you explain more about flips? If you try googling for VMware and flips, well... it's just one of those garbage searches that returns 1000 different ways that vmware and flip can be used in a sentence. We have clients who might be able to think up reproduction steps if they knew more about what the flip timeout really involves.

Comment 45 John Savanyo 2020-12-02 19:03:37 UTC
VMware internal PR on this item is 2685159

Comment 46 Kodiak Firesmith 2020-12-04 11:03:51 UTC
FYSA this appears to happen on fully headless VMware guests at random. In my case right now it's CentOS 8, but the behavior is the same. The vSphere environment is not under much load at all. I'm guessing our RHEL 8 hosts are doing this as well. Never saw this with RHEL 7 guests.

So essentially, +1

Comment 51 Bob Knau 2021-02-11 21:10:33 UTC
I am at 7.8 and saw this for the first time. The virtual guest was having a VMware snapshot created, and the messages appeared 16 seconds later. From 2:05:04 to 2:05:38 the machine paused for the snapshot, as seen in the vmstat output; when it returns, the run queue is stacked with everything that built up while it was paused. I did not notice any issues caused by the messages. -bk

Feb 11 02:05:54 XXXXXXXXX kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:37:crtc-0] flip_done timed out
Feb 11 02:06:04 XXXXXXXXX kernel: [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:33:plane-0] flip_done timed out

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- -----timestamp-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st                 EST
 3  0  30464 1976088 1883376 21966432    0    0     0   940 7394 5370 59 23 17  1  0 2021-02-11 02:05:01
10  0  30464 1974680 1883388 21966608    0    0     0  1444 7548 6682 58 31 11  0  0 2021-02-11 02:05:02
12  0  30464 1967312 1883404 21966856    0    0   124   624 7820 8585 61 39  0  0  0 2021-02-11 02:05:03
15  0  30464 1954472 1883428 21967324    0    0   408  2708 7709 6555 58 41  0  0  0 2021-02-11 02:05:04
315  0  30464 1958988 1883476 21967244    0    0     0 13615 2901 2462  2  1 97  0  0 2021-02-11 02:05:38
 6  2  30464 1927740 1883556 21967504    0    0   428    40 5073 3674 55 24 12  9  0 2021-02-11 02:05:39
 7  2  30464 1936796 1883556 21967540    0    0     0   264 7546 13297 61 36  2  1  0 2021-02-11 02:05:40
 3  2  30464 1943736 1883556 21967388    0    0     0   232 7339 7391 58 31  7  4  0 2021-02-11 02:05:41
 9  1  30464 1929884 1883568 21968000    0    0    12   428 7308 7828 59 28  6  7  0 2021-02-11 02:05:42
 5  1  30464 1937560 1883572 21968520    0    0   292  1548 12799 15582 61 37  2  0  0 2021-02-11 02:05:43
 5  1  30464 1943216 1883580 21969028    0    0     0   284 4979 2488 62 23  2 14  0 2021-02-11 02:05:44
 8  1  30464 1924208 1883584 21969364    0    0     0   952 11855 11897 72 28  1  0  0 2021-02-11 02:05:45
 7  2  30464 1917876 1883616 21969668    0    0     0  1292 10167 9422 66 31  2  1  0 2021-02-11 02:05:46
11  0  30464 1910088 1883652 21970656    0    0    44  1348 14506 16712 65 31  2  2  0 2021-02-11 02:05:47
 7  1  30464 1910932 1883656 21972752    0    0   324  3724 21830 32997 66 31  3  1  0 2021-02-11 02:05:48
10  1  30464 1906400 1883664 21974252    0    0     0  1704 22597 31639 63 34  2  0  0 2021-02-11 02:05:49
 5  1  30464 1911660 1883668 21974920    0    0    16   820 13614 16947 65 33  2  1  0 2021-02-11 02:05:50
 9  1  30464 1921912 1883696 21975824    0    0    12  1316 13320 16221 66 31  2  1  0 2021-02-11 02:05:51
11  1  30464 1931288 1883712 21977056    0    0     4  2148 14122 18808 66 32  1  1  0 2021-02-11 02:05:52
 7  1  30464 1922904 1883724 21978768    0    0     8  2316 20088 31175 64 33  2  1  0 2021-02-11 02:05:53
 6  1  30464 1920372 1883744 21980252    0    0     0  2084 17414 26949 65 31  4  0  0 2021-02-11 02:05:54
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- -----timestamp-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st                 EST
10  1  30464 1914660 1883744 21982052    0    0     0  1700 17415 24349 68 28  3  1  0 2021-02-11 02:05:55
 4  1  30464 1922244 1883768 21983600    0    0    20  2652 20243 29836 65 28  6  1  0 2021-02-11 02:05:56
 4  1  30464 1914044 1883784 21984588    0    0     0  1424 12525 16037 59 26  9  6  0 2021-02-11 02:05:57
 5  1  30464 1907492 1883796 21986180    0    0     4  7232 18841 24495 59 24 11  6  0 2021-02-11 02:05:58
 6  0  30464 1911812 1883804 21988116    0    0     0  2844 21422 29487 56 25 14  4  0 2021-02-11 02:05:59
11  1  30464 1905572 1883812 21990836    0    0    32  2060 16826 21911 61 28  8  2  0 2021-02-11 02:06:00

Comment 55 Rick Barry 2021-03-09 18:18:51 UTC
Zack, please see https://bugzilla.redhat.com/show_bug.cgi?id=1884401#c41. The customer in this BZ has run successfully with the patches you proposed in bug 1884401. They are no longer seeing any messages. Is this information enough to move forward with your fix?

Comment 56 Zack Rusin 2021-03-31 02:17:48 UTC
Yes, thank you. I'll get this upstreamed as a module parameter for the next major kernel release. I'll update this bug once it's been merged.

Comment 57 henson 2021-03-31 02:32:53 UTC
bug 1884401 is private. Could somebody copy the referenced patches into this one?

Comment 58 Michel Dänzer 2021-03-31 10:38:08 UTC
(In reply to Zack Rusin from comment #56)
> I'll get this upstreamed as a module parameter for the next major kernel release.

A module parameter which disables so much functionality seems like a pretty big hammer?

Comment 59 Luca Maranzano 2021-04-08 16:36:59 UTC
Hello all,

I'm getting these kinds of messages also on Ubuntu 18.04 and 20.04 (kernel 5.4) running on vSphere 6.7 with the latest patches.
It seems strictly related to the vSphere version, since I have the same guest OS on a less recent version of vSphere and the message does not appear.

It does NOT seem to have any impact on the VMs for the moment.

In my opinion it may be related to the "vmwgfx" driver provided by VMware Tools.

For server environments, my workaround for the moment is to boot the VM with these parameters:

GRUB_CMDLINE_LINUX_DEFAULT="vga=normal nomodeset"

I think we'll open a SR to VMware.

Regards
Luca
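For anyone wanting to try Luca's workaround on a RHEL 7 guest, the mechanics would look roughly like this (a sketch; the helper name is mine, and the grub2-mkconfig output path assumes BIOS boot, as EFI systems use a different path). Note that nomodeset disables kernel modesetting entirely, so it trades the messages for an unaccelerated console:

```shell
# Append "vga=normal nomodeset" to GRUB_CMDLINE_LINUX_DEFAULT in a grub
# defaults file (normally /etc/default/grub).
add_nomodeset() {
    sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 vga=normal nomodeset"/' "$1"
}
# On a real system, after inspecting the change:
#   add_nomodeset /etc/default/grub
#   grub2-mkconfig -o /boot/grub2/grub.cfg    # BIOS layout; then reboot
```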

Comment 60 Zack Rusin 2021-04-27 22:18:11 UTC
(In reply to Michel Dänzer from comment #58)
> (In reply to Zack Rusin from comment #56)
> > I'll get this upstreamed as a module parameter for the next major kernel release.
> 
> A module parameter which disables so much functionality seems like a pretty
> big hammer?

This entire patch is basically a cut-down version of a feature we'll be adding in the next release, so it will basically end up as a five-line patch once the other stuff lands. It's big here because I had to extract all the support code from the other features without exposing them.

Comment 61 Michel Dänzer 2021-04-28 13:57:53 UTC
(In reply to Zack Rusin from comment #60)
> This entire patch is basically cut down version of a feature we'll be adding
> in the next release, so it will basically endup as a five line patch once
> the other stuff lands. It's big here because I had to extract all the
> support code from the other features without exposing them.

FWIW, my concern isn't that the patch attached to bug 1884401 was big (it really wasn't), but that AFAICT it disabled 3D acceleration and related features in guests (SVGA_CAP_3D, SVGA_CAP_SCREEN_OBJECT_2 and SVGA_CAP_GBOBJECTS).

(What you wrote above sounds very different from that, which makes me wonder if you're referring to another patch I haven't seen yet :)

Also, a module parameter which needs to be set to avoid bad behaviour isn't ideal, there should be no bad behaviour by default.

Comment 62 Zack Rusin 2021-04-28 14:42:30 UTC
(In reply to Michel Dänzer from comment #61)
> (In reply to Zack Rusin from comment #60)
> > This entire patch is basically cut down version of a feature we'll be adding
> > in the next release, so it will basically endup as a five line patch once
> > the other stuff lands. It's big here because I had to extract all the
> > support code from the other features without exposing them.
> 
> FWIW, my concern isn't that the patch attached to bug 1884401 was big (it
> really wasn't), but that AFAICT it disabled 3D acceleration and related
> features in guests (SVGA_CAP_3D, SVGA_CAP_SCREEN_OBJECT_2 and
> SVGA_CAP_GBOBJECTS).

iirc none of those are used. The guest is running without 3D. We're not disabling anything that's actually used; we're just limiting the guest to the feature set it actually needs. It's heavy-handed in the patch because currently we do not have a "pre svga v2 feature set" config.

> (What you wrote above sounds very different from that, which makes me wonder
> if you're referring to another patch I haven't seen yet :)
> 
> Also, a module parameter which needs to be set to avoid bad behaviour isn't
> ideal, there should be no bad behaviour by default.

There's no bad behavior. On configs without 3D support, flips might take a bit; missing a frame isn't a problem. We're talking about preventing error messages on those missed flips. Moving the flips to the host doesn't change anything: they still take the same amount of time, it's just that they're no longer blocking in drm, which prevents the error messages.

Comment 63 Zack Rusin 2021-04-28 14:52:17 UTC
(In reply to Zack Rusin from comment #62)
> (In reply to Michel Dänzer from comment #61)
> > (In reply to Zack Rusin from comment #60)
> > > This entire patch is basically cut down version of a feature we'll be adding
> > > in the next release, so it will basically endup as a five line patch once
> > > the other stuff lands. It's big here because I had to extract all the
> > > support code from the other features without exposing them.
> > 
> > FWIW, my concern isn't that the patch attached to bug 1884401 was big (it
> > really wasn't), but that AFAICT it disabled 3D acceleration and related
> > features in guests (SVGA_CAP_3D, SVGA_CAP_SCREEN_OBJECT_2 and
> > SVGA_CAP_GBOBJECTS).
> 
> iirc none of those are used. The guest is running without 3d. We're not
> disabling anything that's actually used, we're just limiting the guest to
> the feature set it actually needs. It's heavy handed in the patch because
> currently we do not have a "pre svga v2 feature set" config.
> 
> > (What you wrote above sounds very different from that, which makes me wonder
> > if you're referring to another patch I haven't seen yet :)
> > 
> > Also, a module parameter which needs to be set to avoid bad behaviour isn't
> > ideal, there should be no bad behaviour by default.
> 
> There's no bad behavior. On configs without 3D support flips might take a
> bit, missing a frame isn't a problem. We're talking about preventing error
> messages on those missing flips. Moving the flips to the host isn't changing
> anything, they still take the same amount of time it's just that they're not
> blocking in drm preventing the error messages.

Also, I just wanted to mention that we are working on asynchronous presentation in the host to fix this properly; we just don't have a timeline for it, so we want something people can do in the meantime if they don't want those warnings in the log.

Comment 66 Steve Johnston 2021-07-20 19:19:43 UTC
We are closing this BZ. Red Hat Enterprise Linux 7 shipped its final minor release on September 29th, 2020. 7.9 was the last minor release scheduled for RHEL 7.

From the RHEL life cycle page:

https://access.redhat.com/support/policy/updates/errata#Maintenance_Support_2_Phase

"During Maintenance Support 2 Phase for Red Hat Enterprise Linux version 7, Red Hat defined Critical and Important impact Security Advisories (RHSAs) and selected (at Red Hat discretion) Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available."

If you feel this BZ should be addressed in RHEL 7.9.z, please re-open and provide suitable business and technical justifications, and follow the process for Accelerated Fixes:

https://source.redhat.com/groups/public/pnt-cxno/pnt_customer_experience_and_operations_wiki/support_delivery_accelerated_fix_release_handbook

Comment 67 Ryan Stasel 2021-07-20 23:05:24 UTC
So, this issue affects RHEL8 as well...

Comment 69 Steve Barcomb 2021-07-21 13:58:25 UTC
For people following this issue, the RHEL 8 version of this bug is https://bugzilla.redhat.com/show_bug.cgi?id=1884401. I am reopening the RHEL 7 version of this bug due to the many customer cases attached to this issue. There is a dialog with the VMware team to resolve this, though I do not have a current ETA.

Comment 71 Paul B. Henson 2021-07-21 19:07:22 UTC
(In reply to Steve Barcomb from comment #69)
> For people following this issue the RHEL8 version of this bug is
> https://bugzilla.redhat.com/show_bug.cgi?id=1884401 .   I am reopening the
> RHEL7 version of this bug due to the many customer cases attached to this
> issue.  There is a dialog with the VMWare team to resolve this, though I do
> not have a current ETA.

That bugid appears to be marked private? It won't let me see it. I've asked multiple times about the patch to fix this issue without response, so I guess that's private too?

Comment 72 Steve Barcomb 2021-07-21 20:03:22 UTC
Hey Paul,
That bug is marked private because there is customer data contained in it, which we obviously do not want to make public. I see you have an account, but at a quick glance I do not see a support case linked to this bug. Feel free to open a support case to get better information on the status. A quick note: there is no submitted patch for this from VMware yet.

Comment 73 Paul B. Henson 2021-07-22 00:41:02 UTC
I haven't bothered to open a case on it; it's not that critical, and I'm sure it would end up like my open case 02801614. I don't need somebody to tell me every couple of weeks that there's no progress :).

Comment 55 mentions a patch attached to 1884401 (which I can't see) submitted by Zack Rusin, who appears to work for VMware? I've asked multiple times if it could be copied here to the unrestricted bug, but nobody has replied. I guess I could indeed open a case to ask for it <sigh>, but it seems better to just post it here so everybody can see it. Unless VMware patches are secret customer information.

Thanks...

Comment 74 Steve Barcomb 2021-07-23 00:35:03 UTC
Hey Paul,
You can find the test kernel package here:  

  http://people.redhat.com/sbarcomb/bz1792515/

Be aware that this package does not resolve the issue, and this information was distributed to our support teams.  It was for gathering diagnostics for VMWare.  Pertinent information from VMWare:

"Patch which removes any guest flip stalls

So far we were unable to reproduce this bug. For anyone who is able to reproduce this bug, it would be wonderful if you could grab the small attached patch and see if it fixes it. It's on top of master but it's small enough that it should apply to basically any vmwgfx version.

The patch removes any possible stalls during flips from the vmwgfx driver."


As of now, Red Hat is still waiting on a final fix from VMWare.

Comment 75 John Savanyo 2021-08-16 21:36:49 UTC
We closed the internal VMware PR as won't-fix. We are unable to fix it because we can't reproduce it in house.

Comment 76 Ryan Stasel 2021-08-16 21:42:05 UTC
(In reply to John Savanyo from comment #75)
> We closed internal VMware PR as won't fix.  We are unable to fix because we
> can't reproduce in house.

I assume any one of us with this issue could provide logs via Skyline, etc.

Comment 79 Zack Rusin 2021-09-08 20:18:41 UTC
For anyone who's able to reproduce this bug, I'd love to see the kernel logs from a system booted with the parameter "drm.debug=0xf" (which can also be enabled via "echo 0xf > /sys/module/drm/parameters/debug" right after boot).
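To make that concrete, one way to set this up on a RHEL guest (a sketch; `grubby` is the stock RHEL boot-entry tool, and the journal query is standard systemd — the output filename is just an example):

```shell
# Persistent across reboots, for all installed kernels:
#   grubby --update-kernel=ALL --args="drm.debug=0xf"
#   reboot
# Or at runtime, as noted above:
#   echo 0xf > /sys/module/drm/parameters/debug
# Collect the DRM-related kernel messages from the current boot:
#   journalctl -k -b 0 | grep -i drm > drm-debug.log
```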

Of course, an easy fix for the log-spamming issue is to just replace the DRM_ERRORs related to the flip timeout with DRM_DEBUG_ATOMIC; e.g. for the 5.14 kernel that would be:

diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
index ff1416cd609a..4c0945881042 100644
--- a/drivers/gpu/drm/drm_atomic.c
+++ b/drivers/gpu/drm/drm_atomic.c
@@ -84,7 +84,7 @@ int drm_crtc_commit_wait(struct drm_crtc_commit *commit)
 	 */
 	ret = wait_for_completion_timeout(&commit->flip_done, timeout);
 	if (!ret) {
-		DRM_ERROR("flip_done timed out\n");
+		DRM_DEBUG_ATOMIC("flip_done timed out\n");
 		return -ETIMEDOUT;
 	}

diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 2c0c6ec92820..061689d8b9e0 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -1551,8 +1551,8 @@ void drm_atomic_helper_wait_for_flip_done(struct drm_device *dev,

 		ret = wait_for_completion_timeout(&commit->flip_done, 10 * HZ);
 		if (ret == 0)
-			DRM_ERROR("[CRTC:%d:%s] flip_done timed out\n",
-				  crtc->base.id, crtc->name);
+			DRM_DEBUG_ATOMIC("[CRTC:%d:%s] flip_done timed out\n",
+					 crtc->base.id, crtc->name);
 	}

 	if (old_state->fake_commit)
@@ -2218,22 +2218,22 @@ void drm_atomic_helper_wait_for_dependencies(struct drm_atomic_state *old_state)
 	for_each_old_crtc_in_state(old_state, crtc, old_crtc_state, i) {
 		ret = drm_crtc_commit_wait(old_crtc_state->commit);
 		if (ret)
-			DRM_ERROR("[CRTC:%d:%s] commit wait timed out\n",
-				  crtc->base.id, crtc->name);
+			DRM_DEBUG_ATOMIC("[CRTC:%d:%s] commit wait timed out\n",
+					 crtc->base.id, crtc->name);
 	}

 	for_each_old_connector_in_state(old_state, conn, old_conn_state, i) {
 		ret = drm_crtc_commit_wait(old_conn_state->commit);
 		if (ret)
-			DRM_ERROR("[CONNECTOR:%d:%s] commit wait timed out\n",
-				  conn->base.id, conn->name);
+			DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] commit wait timed out\n",
+					 conn->base.id, conn->name);
 	}

 	for_each_old_plane_in_state(old_state, plane, old_plane_state, i) {
 		ret = drm_crtc_commit_wait(old_plane_state->commit);
 		if (ret)
-			DRM_ERROR("[PLANE:%d:%s] commit wait timed out\n",
-				  plane->base.id, plane->name);
+			DRM_DEBUG_ATOMIC("[PLANE:%d:%s] commit wait timed out\n",
+					 plane->base.id, plane->name);
 	}
 }
 EXPORT_SYMBOL(drm_atomic_helper_wait_for_dependencies);
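To try the change above, the diff can be saved to a file and applied to a kernel source tree. This is a sketch only: the patch file name and source path are assumptions, and a dry run is performed first so nothing is modified unless the hunks apply cleanly.

```shell
# Hypothetical paths: adjust PATCH and SRC to your environment.
PATCH=drm-flip-timeout-debug.patch      # the diff from this comment, saved locally
SRC=${KERNEL_SRC:-/usr/src/kernels/$(uname -r)}

if [ -f "$PATCH" ] && [ -d "$SRC" ]; then
    # --dry-run first: verify the hunks apply before touching the tree.
    (cd "$SRC" && patch -p1 --dry-run) < "$PATCH" \
        && (cd "$SRC" && patch -p1) < "$PATCH"
else
    echo "patch file or kernel source tree not found; nothing applied"
fi
```

After applying, the drm and drm_kms_helper modules need to be rebuilt and reloaded (or the whole kernel rebuilt) for the change to take effect.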

Comment 84 Steve Barcomb 2021-09-22 18:40:18 UTC
A quick update on what is happening in the background on these bugs.  We are going to focus on the non-performance-impacting page flip messages in bz 1884401 and bz 1792515.  Since there are a multitude of symptoms in each of these bugs, these are the three paths customers and Red Hat support should use.

- If you are seeing page flip stalls and a performance issue on a VMware hypervisor: VMware requests that you open an SR with them to review your configuration and performance metrics.

- If you are seeing page flip messages but no performance issues on a VMware hypervisor: these BZs are for tracking changes to VMware's graphics driver to suppress the message.

- If you are seeing page flip messages and you are not using a VMware hypervisor: please open a Red Hat support case and request that they open a new bug for your specific graphics card.

Comment 85 Ryan Stasel 2021-09-22 20:02:16 UTC
@sbarcomb Thanks for the update. Is there any way to make a "public" bz for case 2? The first bug you listed is locked due to customer info in it. Or are we just going to focus on THIS bz for that?

Comment 86 Steve Barcomb 2021-09-23 15:38:01 UTC
Hey Ryan,
I asked about making that bug public; however, some of the bug content prevents us from making it publicly accessible.  What I will offer to everyone watching this bug who needs information on the RHEL 8 bug is to email me directly at sbarcomb.  I can tell everyone that I made the exact same public update in bz 1884401 yesterday, and the only difference of substance was the test build that was provided earlier (and, again, not the final fix).  I know that's not exactly ideal, but hopefully folks will be content working directly with me to get an update if they do not have an open support case.

Comment 90 Morten Stevens 2021-10-26 10:00:59 UTC
(In reply to Zack Rusin from comment #79)
> For anyone who's able to reproduce this bug, I'd love to see the kernel logs
> from a system booted with parameter: "drm.debug=0xf" (which can be also
> enabled via "echo 0xf > /sys/module/drm/parameters/debug" right after boot).

VMware vSphere 7.0 U3 + updates
Kernel: 4.18.0-305.19.1.el8_4.x86_64

dmesg:

[ 9470.440148] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] flip_done timed out
[ 9480.680039] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] flip_done timed out
[19817.456782] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] flip_done timed out
[19827.696869] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] flip_done timed out
[121847.366905] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] flip_done timed out
[121857.606922] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] flip_done timed out
[124657.225265] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] flip_done timed out
[124667.465252] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] flip_done timed out
[152521.824848] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] flip_done timed out
[152532.064752] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] flip_done timed out
[178068.086370] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] flip_done timed out
[178078.326359] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] flip_done timed out
[192274.562421] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] flip_done timed out
[192284.802409] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] flip_done timed out

echo 0xf > /sys/module/drm/parameters/debug

[204483.478696] [drm:drm_mode_object_get [drm]] OBJ ID: 74 (2)
[204483.478729] [drm:drm_mode_object_get [drm]] OBJ ID: 75 (1)
[204483.478757] [drm:drm_mode_object_get [drm]] OBJ ID: 76 (1)
[204483.478816] vmwgfx 0000:00:0f.0: [drm:drm_calc_timestamping_constants [drm]] crtc 38: hwmode: htotal 0, vtotal 0, vdisplay 0
[204483.478845] vmwgfx 0000:00:0f.0: [drm:drm_calc_timestamping_constants [drm]] crtc 38: clock 64662 kHz framedur 0 linedur 0
[204483.478940] [drm:drm_mode_object_put.part.4 [drm]] OBJ ID: 76 (2)
[204483.478969] [drm:drm_mode_object_put.part.4 [drm]] OBJ ID: 75 (2)
[204483.478996] [drm:drm_mode_object_put.part.4 [drm]] OBJ ID: 74 (3)
[204483.479021] [drm:drm_mode_object_put.part.4 [drm]] OBJ ID: 77 (1)
[204483.686722] [drm:drm_mode_object_get [drm]] OBJ ID: 74 (2)
[204483.686770] [drm:drm_mode_object_get [drm]] OBJ ID: 75 (1)
[204483.686798] [drm:drm_mode_object_get [drm]] OBJ ID: 77 (1)
[204483.686852] vmwgfx 0000:00:0f.0: [drm:drm_calc_timestamping_constants [drm]] crtc 38: hwmode: htotal 0, vtotal 0, vdisplay 0
[204483.686881] vmwgfx 0000:00:0f.0: [drm:drm_calc_timestamping_constants [drm]] crtc 38: clock 64662 kHz framedur 0 linedur 0
[204483.686978] [drm:drm_mode_object_put.part.4 [drm]] OBJ ID: 77 (2)
[204483.687005] [drm:drm_mode_object_put.part.4 [drm]] OBJ ID: 75 (2)
[204483.687031] [drm:drm_mode_object_put.part.4 [drm]] OBJ ID: 74 (3)
[204483.687056] [drm:drm_mode_object_put.part.4 [drm]] OBJ ID: 76 (1)

This issue is also reproducible with Fedora and the latest 5.14.x kernel.

Comment 94 Zack Rusin 2022-04-27 19:24:01 UTC
Morten, thanks for the update. I don't see the errors in the logs taken with "echo 0xf > /sys/module/drm/parameters/debug"; those logs will be noisy, but they need to contain the error in order to get any extra info out of them. Is this during suspend, and are there any issues apart from the message in the log? We don't seem to have a way to reproduce this problem. If there's no other issue apart from the message, it's harmless. Michel doesn't think guarding that message under one of the DRM debug flags is an option, so unfortunately all the bugs without any issue apart from the message itself will have to be closed as "won't fix". Please feel free to reopen if there's any issue that seems to be associated with those messages.
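When the message does recur with the debug flags active, a filtered capture keeps the noisy output manageable and shows whether the error actually landed in the collected log. A sketch using only standard journalctl/grep options; the output file name is arbitrary:

```shell
# Collect DRM/vmwgfx kernel messages from the current boot and check
# whether the flip_done error itself made it into the capture.
if command -v journalctl >/dev/null 2>&1; then
    journalctl -k -b --no-pager | grep -Ei 'drm|vmwgfx' > drm-debug-capture.log || true
    if grep -q 'flip_done timed out' drm-debug-capture.log; then
        echo "error captured; attach drm-debug-capture.log to the bug"
    else
        echo "flip_done error not in this boot's log yet; keep waiting"
    fi
else
    echo "journalctl not available; use dmesg instead"
fi
```

Note that the debug categories must already be enabled when the timeout happens; turning them on afterwards only affects subsequent events.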

Comment 95 Paul Xu 2023-06-12 07:22:12 UTC
(In reply to Rick Barry from comment #55)
> Zack, please see https://bugzilla.redhat.com/show_bug.cgi?id=1884401#c41.
> The customer in this BZ has run successfully with the patches you proposed
> in bug 1884401. They are no longer seeing any messages. Is this information
> enough to move forward with your fix?

Dear Expert,

Could you please help to provide the workaround details? That bug page is not public. As it is not convenient for us to run the upgrade, we may need to stop this error message before we get the fixed version.

Thank you in advance.

Comment 96 Guido Johannes Lorc 2023-06-23 08:12:52 UTC
(In reply to Paul Xu from comment #95)
> (In reply to Rick Barry from comment #55)
> > Zack, please see https://bugzilla.redhat.com/show_bug.cgi?id=1884401#c41.
> > The customer in this BZ has run successfully with the patches you proposed
> > in bug 1884401. They are no longer seeing any messages. Is this information
> > enough to move forward with your fix?
> 
> Dear Expert,
> 
> Could you please help to provide the workaround details? That bug page is
> not a public one, As it is not convenient for us to run the upgrade, we may
> need to stop this error message before we got the fixed version.
> 
> Thank you in advance.

Dear RHEL Expert,

I have the same issue on my servers, and it wasn't fixed by the regular updates released so far. Could you please deliver this fix in one of the rpm packages?

Comment 97 Mai Ling 2023-08-27 10:39:58 UTC
Interested in a follow-up. Meanwhile, the workaround in comment 59 works fine for CLI-only servers (EL9 on ESXi 7u3).

Comment 98 Mai Ling 2023-12-20 11:11:05 UTC
I am no longer seeing the error message with kernel 5.15.0-201.135.6.el9uek.x86_64 - maybe check with Oracle and see if/what they have done.

$ journalctl -kb --no-hostname --no-pager | grep -i drm
Dec 19 20:37:36 systemd[1]: Starting Load Kernel Module drm...
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] FIFO at 0x00000000fb800000 size is 8192 kiB
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] VRAM at 0x00000000f0000000 size is 131072 kiB
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] Running on SVGA version 2.
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] DMA map mode: Caching DMA mappings.
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] Legacy memory limits: VRAM = 4096 kB, FIFO = 256 kB, surface = 0 kB
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] MOB limits: max mob size = 16384 kB, max mob pages = 12288
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] Capabilities: rect copy, cursor, cursor bypass, cursor bypass 2, 8bit emulation, alpha cursor, extended fifo, multimon, pitchlock, irq mask, display topology, gmr, traces, gmr2, screen object 2, command buffers, command buffers 2, gbobject, dx, hp cmd queue, no bb restriction, cap2 register,
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] Capabilities2: grow otable, intra surface copy, dx2, gb memsize 2, screendma reg, otable ptdepth2, non ms to ms stretchblt, cursor mob, mshint, cb max size 4mb, dx3, frame type, trace full fb, extra regs,
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] Max GMR ids is 64
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] Max number of GMR pages is 65536
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] Maximum display memory size is 16384 kiB
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] Screen Target display unit initialized
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] Fifo max 0x00040000 min 0x00001000 cap 0x0000077f
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] Using command buffers with DMA pool.
Dec 19 20:37:37 kernel: vmwgfx 0000:00:0f.0: [drm] Available shader model: Legacy.
Dec 19 20:37:37 kernel: fbcon: svgadrmfb (fb0) is primary device
Dec 19 20:37:37 kernel: [drm] Initialized vmwgfx 2.19.0 20210722 for 0000:00:0f.0 on minor 0

Comment 99 Guido Johannes Lorc 2023-12-20 11:29:03 UTC
I am still seeing this problem. Here's some logging from kern.log:
Dec 15 06:22:45 xxxxxxxxxxxxx kernel: [3091393.238055] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 15 14:32:22 xxxxxxxxxxxxx kernel: [3120769.746080] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 15 14:32:32 xxxxxxxxxxxxx kernel: [3120779.985964] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 15 21:21:01 xxxxxxxxxxxxx kernel: [3145289.423403] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 15 21:21:12 xxxxxxxxxxxxx kernel: [3145299.663407] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 15 22:04:35 xxxxxxxxxxxxx kernel: [3147903.184899] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 15 22:04:45 xxxxxxxxxxxxx kernel: [3147913.423255] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 16 06:54:16 xxxxxxxxxxxxx kernel: [3179684.041135] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 16 06:54:26 xxxxxxxxxxxxx kernel: [3179694.288390] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 16 18:10:42 xxxxxxxxxxxxx kernel: [3220270.275367] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 16 18:10:52 xxxxxxxxxxxxx kernel: [3220280.515227] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 17 07:05:56 xxxxxxxxxxxxx kernel: [3266783.930670] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 17 07:06:06 xxxxxxxxxxxxx kernel: [3266794.170874] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 17 08:59:52 xxxxxxxxxxxxx kernel: [3273619.641953] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 17 09:00:02 xxxxxxxxxxxxx kernel: [3273629.883205] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 18 05:18:35 xxxxxxxxxxxxx kernel: [3346742.962233] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 18 05:18:45 xxxxxxxxxxxxx kernel: [3346753.202066] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 18 22:07:20 xxxxxxxxxxxxx kernel: [3407268.017068] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 18 22:07:30 xxxxxxxxxxxxx kernel: [3407278.257669] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 18 23:06:03 xxxxxxxxxxxxx kernel: [3410791.090223] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 18 23:06:13 xxxxxxxxxxxxx kernel: [3410801.329218] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 19 03:27:40 xxxxxxxxxxxxx kernel: [3426487.984088] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 19 03:27:50 xxxxxxxxxxxxx kernel: [3426498.224062] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 19 06:45:17 xxxxxxxxxxxxx kernel: [3438345.389877] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 19 06:45:28 xxxxxxxxxxxxx kernel: [3438355.629810] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out
Dec 19 08:54:15 xxxxxxxxxxxxx kernel: [3446082.733603] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:38:crtc-0] commit wait timed out
Dec 19 08:54:25 xxxxxxxxxxxxx kernel: [3446092.973201] [drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [PLANE:34:plane-0] commit wait timed out

Comment 100 Mai Ling 2023-12-20 12:00:14 UTC
(In reply to Guido Johannes Lorc from comment #99)
> I got this problem already:
> Here's some logging from the kern.log:

See the workarounds and explanations of the current situation earlier in this thread and in the KB article mentioned there.
