Bug 172091 - Memory leak in the vesa driver
Status: CLOSED WORKSFORME
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: xorg-x11
Version: 4.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Søren Sandmann Pedersen
Reported: 2005-10-31 05:49 EST by Bastien Nocera
Modified: 2014-06-18 05:07 EDT (History)

Doc Type: Bug Fix
Last Closed: 2007-01-08 15:04:15 EST


Attachments
VESACloseScreen.patch (634 bytes, patch)
2005-10-31 05:49 EST, Bastien Nocera

Description Bastien Nocera 2005-10-31 05:49:54 EST
+++ This bug was initially created as a clone of Bug #172090 +++

The current RHEL3 XFree86 will leak memory for each new client connecting to,
and disconnecting from, the X server.

Upstream patch available at:
http://cvsweb.xfree86.org/cvsweb/xc/programs/Xserver/hw/xfree86/drivers/vesa/vesa.c.diff?r1=1.45&r2=1.46

Steps to Reproduce:
while : ; do xlogo & sleep 3 ; kill -9 `ps aux | grep xlogo | awk '{print $2}'` ; done
while : ; do ps aux | grep "X :0" | grep -v grep ; sleep 5 ; done
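
A minimal sketch of a slightly more robust variant of the same reproduction is
given below. It assumes an X server on display :0, the xlogo client, and that
pgrep is available; the server's process name (X rather than Xorg) may need
adjusting on other setups:

# Terminal 1: repeatedly start and kill an X client.
while : ; do
    DISPLAY=:0 xlogo &
    sleep 3
    pid=$(pgrep -x xlogo) && kill -9 $pid
done

# Terminal 2 (or a remote shell): sample the X server's VSZ/RSS every 5 seconds.
while : ; do
    ps -o pid,vsz,rss,cmd -C X | grep ":0"
    sleep 5
done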

Actual Results:

# while : ; do ps aux | grep "X :0" | grep -v grep ; sleep 5 ;done
root  28283  1.6  0.9 35972 9732 ?   S 11:26 0:02 /usr/X11R6/bin/X :0
root  28283  1.6  0.9 35972 9732 ?   S 11:26 0:02 /usr/X11R6/bin/X :0
root  28283  1.5  1.2 38772 12532 ?  S 11:26 0:02 /usr/X11R6/bin/X :0 <- starting xterm
root  28283  1.5  1.2 38772 12532 ?  S 11:26 0:02 /usr/X11R6/bin/X :0
root  28283  1.5  1.2 38772 12532 ?  S 11:26 0:02 /usr/X11R6/bin/X :0
root  28283  1.5  0.9 36612 9736 ?   S 11:26 0:03 /usr/X11R6/bin/X :0 <- exiting xterm

Expected Results:

# while : ; do ps aux | grep "X :0" | grep -v grep ; sleep 5 ;done
root  28283  1.6  0.9 35972 9732 ?   S 11:26 0:02 /usr/X11R6/bin/X :0
root  28283  1.6  0.9 35972 9732 ?   S 11:26 0:02 /usr/X11R6/bin/X :0
root  28283  1.5  1.2 38772 12532 ?  S 11:26 0:02 /usr/X11R6/bin/X :0 <- starting xterm
root  28283  1.5  1.2 38772 12532 ?  S 11:26 0:02 /usr/X11R6/bin/X :0
root  28283  1.5  1.2 38772 12532 ?  S 11:26 0:02 /usr/X11R6/bin/X :0
root  28283  1.5  0.9 36612 9732 ?   S 11:26 0:03 /usr/X11R6/bin/X :0 <- exiting xterm

-- Additional comment from bnocera@redhat.com on 2005-10-31 05:46 EST --
Created an attachment (id=120557)
VESACloseScreen.patch
Comment 1 Bastien Nocera 2005-10-31 05:49:54 EST
Created attachment 120558
VESACloseScreen.patch
Comment 22 Mike A. Harris 2006-04-25 13:43:39 EDT
Using the test case provided above, I am unable to reproduce a significant leak
with the stock U3 vesa driver; however, a very slight increase in memory usage
of about 8KB was observed over roughly 5 minutes. This is under a full GNOME
desktop with no applications running other than Firefox with a single Bugzilla
page open, gnome-terminal, and the two test-case lines above running.

I modified the test case, changing the first line's delay to 1s and the second
line's to 2s to speed up the effect, and have left it running for quite some
time now without observing any further increase in memory usage.

In other words, it appears that I am not seeing the memory leak occur using
the U3 xorg-x11 vesa driver on a Radeon 9800 Pro. The 8KB mentioned above I
write off to something running in the background, as the server has gone about
15 minutes now with no further increase in size. I'll leave it running all day
and see what I observe. Right now I get 28420/23800 for SIZE/RSS.

I've tested the new driver as well, and see no regression in brief testing;
however, this neither confirms nor denies whether it fixes the leak, as I am
not observing any leak in either case on this particular RHEL4 x86 system.

Comment 23 Mike A. Harris 2006-05-02 07:32:54 EDT
As mentioned in the RHEL3 report:

I was able to reproduce the leak now with the vesa driver as reported,
using an X server started without any clients, and performing
the test indicated in the first comment via remote shell.

Issue confirmed resolved with:

xorg-x11-6.8.2-vesa-driver-memory-leak-bug172091.patch
Comment 26 Mike A. Harris 2006-05-05 11:27:13 EDT
Devel ACK.
Comment 28 Mike A. Harris 2006-05-05 12:01:31 EDT
Patch present in rpm since:

* Tue Apr 25 2006 Mike A. Harris <mharris@redhat.com> 6.8.2-1.EL.13.29
- Added xorg-x11-6.8.2-vesa-driver-memory-leak-bug172091.patch to fix a memory
  leak in the "vesa" driver. (#172091)

Setting state to MODIFIED pending QA testing.
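
One way to verify that an installed build includes this change is to look for
the bug number in the package changelog; this is a minimal sketch, assuming the
xorg-x11 package name used in this report:

rpm -q --changelog xorg-x11 | grep -i 172091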
Comment 38 Søren Sandmann Pedersen 2006-12-19 18:22:43 EST
There is a lot of confusion going on here. As far as I can tell from reading the
various issue trackers,

- There was a memory leak in the VESA driver which we fixed in 4.4 by applying
  the patch that is mentioned several times:

+    if (pVesa->pVbe) {
+       vbeFree(pVesa->pVbe);
+       pVesa->pVbe = NULL;
+    }

- Fuchi Hideshi was not convinced this was a full fix, since vesaCloseScreen()
  is not called every time a client exits. I don't believe we have seen any
  indication that a memory leak exists after this patch was applied, and I
  don't believe vesaCloseScreen() is expected to be called every time a client
  exits.

- A different memory leak was observed with one of TI's applications. This was,
  according to comments in issue 83772, resolved after upgrading to a later
  version of that application, which would indicate some resource leak in the
  application, rather than the X server.

So, as best I can tell, there is nothing here that needs fixing in X.

I could be wrong though, and if anyone disagrees, feel free to post information
about what specifically the problem is.

Otherwise, I am going to close this bug.
Comment 39 Søren Sandmann Pedersen 2007-01-08 15:05:20 EST
Closing this bug since both ITs associated with it are closed and it's not
clear what needs to be fixed.
