Red Hat Bugzilla – Bug 990986
mesa 9.2 is causing GPU to hang on AMD Cape Verde (Radeon HD 7750)
Last modified: 2015-02-17 11:30:03 EST
Description of problem:
Changing the selected option on a combo box twice causes the GPU to stop responding to Mesa.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Add two identities for sending emails in claws-mail.
2. Start claws-mail.
3. Click to compose a new email.
4. Click the "From" combo box and select the alternate email address.
5. Click the "From" combo box again and try to switch back to the first address.
Note: the same kind of issue also happens with other combo boxes on other programs.
Xorg stops working or hangs. dmesg shows:
[15183.080162] radeon 0000:01:00.0: GPU lockup CP stall for more than 10000msec
[15183.080167] radeon 0000:01:00.0: GPU lockup (waiting for 0x000000000003535d last fence id 0x0000000000035359)
[15183.080419] radeon 0000:01:00.0: sa_manager is not empty, clearing anyway
[15183.156028] radeon 0000:01:00.0: fence driver on ring 5 use gpu addr 0x0000000000178a18 and cpu addr 0xffffc900133eca18
[15183.157053] radeon 0000:01:00.0: Saved 509 dwords of commands on ring 0.
[15183.157121] radeon 0000:01:00.0: GPU softreset: 0x00000049
[15183.157122] radeon 0000:01:00.0: GRBM_STATUS = 0xF4403028
[15183.157124] radeon 0000:01:00.0: GRBM_STATUS_SE0 = 0xC8000006
[15183.157125] radeon 0000:01:00.0: GRBM_STATUS_SE1 = 0x00000006
[15183.157126] radeon 0000:01:00.0: SRBM_STATUS = 0x20000AC0
[15183.157186] radeon 0000:01:00.0: SRBM_STATUS2 = 0x00000000
[15183.157188] radeon 0000:01:00.0: R_008674_CP_STALLED_STAT1 = 0x00000000
[15183.157189] radeon 0000:01:00.0: R_008678_CP_STALLED_STAT2 = 0x40000000
[15183.157191] radeon 0000:01:00.0: R_00867C_CP_BUSY_STAT = 0x00408006
[15183.157192] radeon 0000:01:00.0: R_008680_CP_STAT = 0x84228647
[15183.157193] radeon 0000:01:00.0: R_00D034_DMA_STATUS_REG = 0x44C83D57
[15183.157195] radeon 0000:01:00.0: R_00D834_DMA_STATUS_REG = 0x44C83D57
[15183.157196] radeon 0000:01:00.0: VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x00000000
[15183.157198] radeon 0000:01:00.0: VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x00000000
[15183.182056] radeon 0000:01:00.0: GRBM_SOFT_RESET=0x0000DDFF
[15183.182108] radeon 0000:01:00.0: SRBM_SOFT_RESET=0x00000100
[15183.183264] radeon 0000:01:00.0: GRBM_STATUS = 0x00003028
[15183.183265] radeon 0000:01:00.0: GRBM_STATUS_SE0 = 0x00000006
[15183.183266] radeon 0000:01:00.0: GRBM_STATUS_SE1 = 0x00000006
[15183.183267] radeon 0000:01:00.0: SRBM_STATUS = 0x200000C0
[15183.183323] radeon 0000:01:00.0: SRBM_STATUS2 = 0x00000000
[15183.183324] radeon 0000:01:00.0: R_008674_CP_STALLED_STAT1 = 0x00000000
[15183.183326] radeon 0000:01:00.0: R_008678_CP_STALLED_STAT2 = 0x00000000
[15183.183327] radeon 0000:01:00.0: R_00867C_CP_BUSY_STAT = 0x00000000
[15183.183328] radeon 0000:01:00.0: R_008680_CP_STAT = 0x00000000
[15183.183329] radeon 0000:01:00.0: R_00D034_DMA_STATUS_REG = 0x44C83D57
[15183.183330] radeon 0000:01:00.0: R_00D834_DMA_STATUS_REG = 0x44C83D57
[15183.183393] radeon 0000:01:00.0: GPU reset succeeded, trying to resume
[15183.206370] [drm] PCIE GART of 512M enabled (table at 0x0000000000040000).
[15183.206440] radeon 0000:01:00.0: WB enabled
[15183.206441] radeon 0000:01:00.0: fence driver on ring 0 use gpu addr 0x0000000040000c00 and cpu addr 0xffff8804034d3c00
[15183.206442] radeon 0000:01:00.0: fence driver on ring 1 use gpu addr 0x0000000040000c04 and cpu addr 0xffff8804034d3c04
[15183.206443] radeon 0000:01:00.0: fence driver on ring 2 use gpu addr 0x0000000040000c08 and cpu addr 0xffff8804034d3c08
[15183.206444] radeon 0000:01:00.0: fence driver on ring 3 use gpu addr 0x0000000040000c0c and cpu addr 0xffff8804034d3c0c
[15183.206445] radeon 0000:01:00.0: fence driver on ring 4 use gpu addr 0x0000000040000c10 and cpu addr 0xffff8804034d3c10
[15183.207455] radeon 0000:01:00.0: fence driver on ring 5 use gpu addr 0x0000000000178a18 and cpu addr 0xffffc90013a35a18
[15183.225989] [drm] ring test on 0 succeeded in 1 usecs
[15183.225992] [drm] ring test on 1 succeeded in 1 usecs
[15183.225995] [drm] ring test on 2 succeeded in 1 usecs
[15183.226054] [drm] ring test on 3 succeeded in 2 usecs
[15183.226060] [drm] ring test on 4 succeeded in 1 usecs
[15183.402360] [drm] ring test on 5 succeeded in 1 usecs
[15183.402363] [drm] UVD initialized successfully.
[15183.403981] radeon 0000:01:00.0: GPU fault detected: 146 0x0e1a480c
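When reproducing the hang, the relevant radeon lines can be pulled out of the kernel log with a filter like the one below. The grep pattern is only a suggestion for isolating the lockup/reset/fault messages quoted above; widen or narrow it as needed:

```shell
# Filter the kernel log for the radeon lockup/reset/fault messages.
# The pattern is an assumption about which lines matter for this bug.
dmesg | grep -iE 'gpu (lockup|fault|softreset)|gpu reset'
```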
This bug has also been reported upstream.
Replacing mesa with version 9.1-5 fixes the issue.
Recompiling mesa without --enable-glx-tls also fixes the issue.
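The rebuild workaround can be sketched as a standard Fedora source-RPM rebuild with the flag dropped. This is a sketch, not the exact procedure used in this report: the yumdownloader/rpmbuild workflow and the ~/rpmbuild layout are assumptions about a typical Fedora 19 setup, and the sed edit assumes the flag appears in the spec's configure invocation:

```shell
# Sketch: rebuild the mesa package without --enable-glx-tls
# (assumes the standard ~/rpmbuild layout and an installed rpm-build toolchain).
yumdownloader --source mesa              # fetch the mesa source RPM
rpm -ivh mesa-*.src.rpm                  # unpack into ~/rpmbuild
# Drop the --enable-glx-tls flag from the configure invocation in the spec:
sed -i 's/ --enable-glx-tls//' ~/rpmbuild/SPECS/mesa.spec
rpmbuild -ba ~/rpmbuild/SPECS/mesa.spec  # rebuild the package
```

Alternatively, downgrading to the mesa 9.1-5 packages mentioned above avoids the rebuild entirely.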
This message is a notice that Fedora 19 is now at end of life. Fedora
has stopped maintaining and issuing updates for Fedora 19. It is
Fedora's policy to close all bug reports from releases that are no
longer maintained. Approximately four weeks from now this bug will
be closed as EOL if it remains open with a Fedora 'version' of '19'.
Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.
Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 19 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.
Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.
Fedora 19 changed to end-of-life (EOL) status on 2015-01-06. Fedora 19 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.
If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this bug.
Thank you for reporting this bug and we are sorry it could not be fixed.