Bug 151080 - sftp over a persistent connection (days/weeks) develops a memory leak.
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: openssh
All Linux
medium Severity medium
: ---
: ---
Assigned To: Tomas Mraz
Brian Brock
Depends On:
Blocks: 156320
Reported: 2005-03-14 12:51 EST by Ken Snider
Modified: 2007-11-30 17:07 EST (History)
0 users

See Also:
Fixed In Version: RHSA-2005-550
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2005-09-28 10:31:16 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments (Terms of Use)
strace of the problem (14.10 KB, text/plain)
2005-03-14 12:51 EST, Ken Snider
Patch (1.50 KB, patch)
2005-05-25 08:01 EDT, Tomas Mraz

Description Ken Snider 2005-03-14 12:51:37 EST
Description of problem:
We make use of sftp over a persistent ssh tunnel to move files from one node to
another. We've noticed that, over time, the sftp client process will grow to
several hundred megabytes in size, meaning we need to restart this service about
weekly to keep it from consuming all available memory.

Version-Release number of selected component (if applicable):

How reproducible:

I've attached an strace of one iteration of the daemon.
Comment 1 Ken Snider 2005-03-14 12:51:37 EST
Created attachment 111994 [details]
strace of the problem
Comment 2 Tomas Mraz 2005-05-25 08:00:27 EDT
I cannot reproduce this memory leak here. But it's possible that I don't use the
same commands as you do.

Comment 3 Tomas Mraz 2005-05-25 08:01:44 EDT
Created attachment 114822 [details]

Can you try this patch and see if it helps? It removes some memory leaks on some error paths.
Comment 4 Ken Snider 2005-05-25 08:11:18 EDT
We'll roll this into our server today; it'll be a few days before we know whether
the problem goes away, of course - there's no easy way to accelerate the "leak
rate" on our end.

As for what commands we use, one entire iteration of what the daemon does is in
the strace - that exact command set is run, once per minute. The only thing that
changes is the quantity of files received.
Comment 5 Tomas Mraz 2005-05-25 08:34:59 EDT
I can see the commands in the strace, but the command lines aren't complete
because strace truncates them. There also seem to be many failed commands in the
strace, which I assume is not the case in your real environment.
The patch is backported from 4.0p1 and only fixes a few error paths in the code,
so I wouldn't be very surprised if it didn't help.
Comment 6 Ken Snider 2005-05-25 08:45:57 EDT
Actually, it's a capture from our production environment. :) A lot of distinct
files are inserted here, so there are a lot of gets that simply find no file
(because of the directory size, it's not feasible to stat the directory; the
directory contents can be more than a megabyte when transferred, even though
it's rotated every 60 seconds or so).

So it's quite possible that the leaks *are* occurring only on errors. Either
way, we should know definitively after I've given the patch a few days to run.
Comment 7 Tomas Mraz 2005-06-06 16:23:13 EDT
So did the patch help?
Comment 8 Ken Snider 2005-06-06 17:47:54 EDT
Actually, yes - significantly! The memory footprints of the instances are now
within a megabyte of each other (most within a few kilobytes). Compare that to
the several hundred megabytes of discrepancy prior to this patch, and it's
obviously a significant improvement.

I can't say for certain that *all* leaks have been caught, but certainly 99% of
them were covered by the patch.
Comment 12 Red Hat Bugzilla 2005-09-28 10:31:16 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

