Bug 1386827

Summary: driver/firmware hang in nouveau
Product: Red Hat Enterprise Linux 7
Reporter: Joe Wright <jwright>
Component: xorg-x11-drv-nouveau
Assignee: Ben Skeggs <bskeggs>
Status: CLOSED WONTFIX
QA Contact: Desktop QE <desktop-qa-list>
Severity: urgent
Docs Contact:
Priority: urgent
Version: 7.2
CC: bskeggs, cww, jwright, kwalker, pablo.iranzo, riehecky, robert.hogue, tpelka, vagrawal, vanhoof
Target Milestone: rc
Flags: vagrawal: needinfo? (bskeggs)
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1504126 (view as bug list)
Environment:
Last Closed: 2020-11-11 21:40:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1298243, 1394840, 1594286

Description Joe Wright 2016-10-19 16:51:30 UTC
Description of problem:
- Firmware and driver lockup
- 02:00.0 VGA compatible controller: NVIDIA Corporation GF119 [NVS 310] (rev a1)


Version-Release number of selected component (if applicable):
- xorg-x11-drv-nouveau-1.0.11-2.el7.x86_64

How reproducible:
- intermittent

Steps to Reproduce:
1. Unsure; the hang occurs on its own.

Actual results:
- The driver crashes and the UI locks up.

Expected results:


Additional info:

[692885.180406] nouveau E[gnome-shell[109462]] nv50cal_space: -16
[692885.288602] nouveau E[gnome-shell[109462]] nv50cal_space: -16
[692885.395573] nouveau E[gnome-shell[109462]] nv50cal_space: -16
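For reference, the "-16" in these messages looks like a negative errno value; on Linux, errno 16 is EBUSY ("Device or resource busy"). A minimal C sketch just to confirm that mapping:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* nv50cal_space: -16 would correspond to -EBUSY under this reading */
    printf("EBUSY = %d: %s\n", EBUSY, strerror(EBUSY));
    return 0;
}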

Comment 4 Joe Wright 2016-10-19 17:51:40 UTC
This issue does not occur with the NVIDIA proprietary driver.

Comment 5 Joe Wright 2016-10-19 18:12:42 UTC
I believe this is where it actually breaks:

[632908.432561] UDP: bad checksum. From 138.120.146.6:137 to 138.120.147.255:137 ulen 58
[632911.791776] UDP: bad checksum. From 138.120.146.6:137 to 138.120.147.255:137 ulen 58
[632913.292264] UDP: bad checksum. From 138.120.146.6:137 to 138.120.147.255:137 ulen 58
[637333.422873] UDP: bad checksum. From 138.120.147.30:137 to 138.120.147.255:137 ulen 58
[667655.046481] nouveau E[   PFIFO][0000:02:00.0] read fault at 0x00012e0000 [PAGE_NOT_PRESENT] from PGRAPH/GPC0/TEX on channel 0x003fcb1000 [gnome-shell[109462]]
[667655.046485] nouveau E[   PFIFO][0000:02:00.0] PGRAPH engine fault on channel 5, recovering...
[667655.046502] nouveau E[  PGRAPH][0000:02:00.0] TRAP ch 5 [0x003fcb1000 gnome-shell[109462]]
[667655.046511] nouveau E[  PGRAPH][0000:02:00.0] GPC0/TPC0/TEX: 0x80000049
[667741.006770] nouveau E[Xorg[2858]] nv50cal_space: -16
[667741.073426] nouveau E[Xorg[2858]] nv50cal_space: -16
[667741.139289] nouveau E[Xorg[2858]] nv50cal_space: -16
[667741.504754] nouveau E[Xorg[2858]] nv50cal_space: -16
[667741.570579] nouveau E[Xorg[2858]] nv50cal_space: -16
[667741.636396] nouveau E[Xorg[2858]] nv50cal_space: -16
[667741.702212] nouveau E[Xorg[2858]] nv50cal_space: -16
[667742.005291] nouveau E[Xorg[2858]] nv50cal_space: -16

Comment 26 Pat Riehecky 2018-04-17 16:19:20 UTC
I'm also seeing hangs with the following:

Apr 17 10:55:16 testhost.example.com kernel: nouveau 0000:04:00.0: gr: PGRAPH TLB flush idle timeout fail
Apr 17 10:55:16 testhost.example.com kernel: nouveau 0000:04:00.0: gr: PGRAPH_STATUS 00000503 [BUSY DISPATCH CTXPROG CCACHE_PREGEOM]
Apr 17 10:55:16 testhost.example.com kernel: nouveau 0000:04:00.0: gr: PGRAPH_VSTATUS0: 00000008 [CCACHE]
Apr 17 10:55:16 testhost.example.com kernel: nouveau 0000:04:00.0: gr: PGRAPH_VSTATUS1: 00000000 []
Apr 17 10:55:16 testhost.example.com kernel: nouveau 0000:04:00.0: gr: PGRAPH_VSTATUS2: 00000000 []
Apr 17 10:55:18 testhost.example.com kernel: nouveau 0000:04:00.0: gr: PGRAPH TLB flush idle timeout fail
Apr 17 10:55:18 testhost.example.com kernel: nouveau 0000:04:00.0: gr: PGRAPH_STATUS 00000503 [BUSY DISPATCH CTXPROG CCACHE_PREGEOM]
Apr 17 10:55:18 testhost.example.com kernel: nouveau 0000:04:00.0: gr: PGRAPH_VSTATUS0: 00000008 [CCACHE]
Apr 17 10:55:18 testhost.example.com kernel: nouveau 0000:04:00.0: gr: PGRAPH_VSTATUS1: 00000000 []
Apr 17 10:55:18 testhost.example.com kernel: nouveau 0000:04:00.0: gr: PGRAPH_VSTATUS2: 00000000 []
Apr 17 10:55:27 testhost.example.com kernel: INFO: task kworker/u16:4:307 blocked for more than 120 seconds.
Apr 17 10:55:27 testhost.example.com kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 17 10:55:27 testhost.example.com kernel: Call Trace:
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffc0513307>] ? nvkm_client_notify_get+0x27/0x40 [nouveau]
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffc0514b5a>] ? nvkm_ioctl_ntfy_get+0x6a/0xc0 [nouveau]
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa9512f49>] schedule+0x29/0x70
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa95108b9>] schedule_timeout+0x239/0x2c0
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffc05c5912>] ? nvkm_client_ioctl+0x12/0x20 [nouveau]
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffc0512048>] ? nvif_object_ioctl+0x48/0x60 [nouveau]
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffc05c866c>] ? nouveau_bo_rd32+0x2c/0x30 [nouveau]
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffc05e4a2e>] ? nv84_fence_read+0x2e/0x30 [nouveau]
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffc05e2bfc>] ? nouveau_fence_no_signaling+0x2c/0x90 [nouveau]
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa9295adc>] dma_fence_default_wait+0x1cc/0x220
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa92956a0>] ? dma_fence_release+0xa0/0xa0
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa92954df>] dma_fence_wait_timeout+0x3f/0xe0
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffc0448869>] drm_atomic_helper_wait_for_fences+0x69/0xe0 [drm_kms_helper]
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffc05d87b5>] nv50_disp_atomic_commit_tail+0x55/0x1200 [nouveau]
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa951291c>] ? __schedule+0x41c/0xa20
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffc05d9972>] nv50_disp_atomic_commit_work+0x12/0x20 [nouveau]
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa8eb2dff>] process_one_work+0x17f/0x440
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa8eb3ac6>] worker_thread+0x126/0x3c0
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa8eb39a0>] ? manage_workers.isra.24+0x2a0/0x2a0
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa8ebae31>] kthread+0xd1/0xe0
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa8ebad60>] ? insert_kthread_work+0x40/0x40
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa951f637>] ret_from_fork_nospec_begin+0x21/0x21
Apr 17 10:55:27 testhost.example.com kernel: [<ffffffffa8ebad60>] ? insert_kthread_work+0x40/0x40
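For what it's worth, the 120-second threshold mentioned in the hung-task message can be checked (or temporarily disabled, as the log itself suggests) via the /proc path quoted above. A minimal C sketch that only reads the current value:

#include <stdio.h>

int main(void)
{
    /* Path taken from the hung-task watchdog message above. */
    FILE *f = fopen("/proc/sys/kernel/hung_task_timeout_secs", "r");
    long secs = -1;

    if (f != NULL && fscanf(f, "%ld", &secs) == 1)
        printf("hung_task_timeout_secs = %ld (0 disables the warning)\n", secs);
    if (f != NULL)
        fclose(f);
    return 0;
}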

Comment 28 Vishal Agrawal 2018-09-27 14:06:23 UTC
Hi Ben,

I have a customer facing the same issue.

Comment 31 Chris Williams 2020-11-11 21:40:27 UTC
Red Hat Enterprise Linux 7 shipped its final minor release on September 29th, 2020. 7.9 was the last minor release scheduled for RHEL 7.
From initial triage, it does not appear the remaining Bugzillas meet the inclusion criteria for Maintenance Phase 2, so they will now be closed.

From the RHEL life cycle page:
https://access.redhat.com/support/policy/updates/errata#Maintenance_Support_2_Phase
"During Maintenance Support 2 Phase for Red Hat Enterprise Linux version 7,Red Hat defined Critical and Important impact Security Advisories (RHSAs) and selected (at Red Hat discretion) Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available."

If this BZ was closed in error and meets the above criteria, please re-open it, flag it for 7.9.z, provide suitable business and technical justifications, and follow the process for Accelerated Fixes:
https://source.redhat.com/groups/public/pnt-cxno/pnt_customer_experience_and_operations_wiki/support_delivery_accelerated_fix_release_handbook  

Feature Requests can be re-opened and moved to RHEL 8 if the desired functionality is not already present in the product.

Please reach out to the applicable Product Experience Engineer[0] if you have any questions or concerns.  

[0] https://bugzilla.redhat.com/page.cgi?id=agile_component_mapping.html&product=Red+Hat+Enterprise+Linux+7