Description of problem:
When I launch glxinfo against an X server running on the integrated graphics hardware of a VIA EPIA chipset, the X11 server kills itself after a failed assertion. All the relevant configuration and log files are attached (hopefully).

Version-Release number of selected component (if applicable):
xorg-x11-drv-via-0.2.2-4.fc8
xorg-x11-server-Xorg-1.3.0.0-23.fc8
mesa-libGL-7.0.1-5.fc8
glx-utils-7.0.1-5.fc8

How reproducible:
Always

Steps to Reproduce:
1. Run the X11 server, configured with the attached xorg.conf, on the problematic hardware.
2. Run glxinfo.

Actual results:
Program received signal SIGABRT, Aborted.
[Switching to Thread -1211046128 (LWP 2346)]
0xb7dd83e6 in raise () from /lib/libc.so.6
(gdb) bt
#0  0xb7dd83e6 in raise () from /lib/libc.so.6
#1  0xb7dd9da1 in abort () from /lib/libc.so.6
#2  0xb7dd1740 in __assert_fail () from /lib/libc.so.6
#3  0xb3abdf3e in _mesa_reference_renderbuffer (ptr=0x83beb00, rb=0x0) at main/renderbuffer.c:2155
#4  0xb3a9970d in _mesa_free_framebuffer_data (fb=0x83be9d8) at main/framebuffer.c:191
#5  0xb3a997f6 in _mesa_destroy_framebuffer (fb=0x83be9d8) at main/framebuffer.c:168
#6  0xb3a9958b in _mesa_unreference_framebuffer (fb=0x83be974) at main/framebuffer.c:251
#7  0xb3a591c0 in viaDestroyBuffer (driDrawPriv=0x83be970) at via_screen.c:323
#8  0xb3a4e970 in driDestroyDrawable (dpy=0x0, drawablePrivate=0x83be970) at ../common/dri_util.c:714
#9  0xb3a4e35e in __driGarbageCollectDrawables (drawHash=0x8269090) at ../common/dri_util.c:138
#10 0xb3a4e575 in driDestroyContext (dpy=0x0, scrn=0, contextPrivate=0x829a140) at ../common/dri_util.c:756
#11 0xb7c3fa14 in __glXDRIcontextDestroy (baseContext=0x829a0b0) at glxdri.c:284
#12 0xb7c078b4 in __glXFreeContext (cx=0x829a0b0) at glxext.c:249
#13 0xb7c07cf5 in ClientGone (clientIndex=1, id=1075838976) at glxext.c:151
#14 0x08072ec5 in FreeClientResources (client=0x8292018) at resource.c:783
#15 0x08083aa8 in CloseDownClient (client=0x8292018) at dispatch.c:3567
#16 0x08089b8d in Dispatch () at dispatch.c:440
#17 0x08071795 in main (argc=2, argv=0xbf9ab814, envp=Cannot access memory at address 0x932) at main.c:445
(gdb) up 3
#3  0xb3abdf3e in _mesa_reference_renderbuffer (ptr=0x83beb00, rb=0x0) at main/renderbuffer.c:2155
2155            assert(oldRb->Magic == RB_MAGIC);

Additional info:
With this configuration the X server crashes on numerous other occasions -- I'll look at them shortly and either add to this bug or file others. I tried some of them on FC6 and they were reproducible there as well. I will provide core dumps upon request.
Created attachment 196461 [details] /var/log/Xorg.0.log
Created attachment 196471 [details] /etc/X11/xorg.conf
Created attachment 196481 [details] Output of the glxinfo command
Running glxgears with the very same configuration seems to lock the X11 server in a loop, so it no longer accepts any clients (nothing appears in the log file). The image of the gears appears on the screen, but they do not move, and the whole image is corrupted with horizontal lines of randomly colored pixels. Would gcov coverage results be helpful to find out where the loop happens?
(In reply to comment #4) > Running glxgears with the very same configuration seems to lock X11 server in a > loop, not accepting any clients. Killing the server afterwards results in a complete lockup of the machine (it doesn't respond on the console, over the network, ...).
Ping on this. Is there anything I can do to help solve this issue? Did I provide enough information?
Does turning off AIGLX help this any (I see you have it commented out in your xorg.conf)? I'll need to find someone with a VIA machine to figure this out. It might also be worth reporting it upstream on bugs.freedesktop.org.
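For anyone trying this: AIGLX can be disabled explicitly in the ServerFlags section of xorg.conf instead of leaving the option commented out. A minimal sketch (the exact section layout of the attached xorg.conf may differ):

```
Section "ServerFlags"
    Option "AIGLX" "off"
EndSection
```

With AIGLX off, direct rendering through the DRI driver is bypassed on the server side, which helps isolate whether the crash lives in the AIGLX/DRI teardown path.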
glxinfo doesn't crash the server anymore; glxgears triggered a segmentation fault. I didn't grab the core file this time.
Based on the date this bug was created, it appears to have been reported during the development of Fedora 8. In order to refocus our efforts as a project we are changing the version of this bug to '8'.

If this bug still exists in rawhide, please change the version back to rawhide. (If you're unable to change the bug's version, add a comment to the bug and someone will change it for you.)

Thanks for your help and we apologize for the interruption. The process we're following is outlined here: http://fedoraproject.org/wiki/BugZappers/F9CleanUp We will be following the process here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this doesn't happen again.
This message is a reminder that Fedora 8 is nearing its end of life. Approximately 30 (thirty) days from now Fedora will stop maintaining and issuing updates for Fedora 8. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '8'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 8's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 8 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora please change the 'version' of this bug to the applicable version. If you are unable to change the version, please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete. The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Fedora 8 changed to end-of-life (EOL) status on 2009-01-07. Fedora 8 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug. If you can reproduce this bug against a currently maintained version of Fedora please feel free to reopen this bug against that version. Thank you for reporting this bug and we are sorry it could not be fixed.