Bug 441089 - curlftpfs eating all memory and crashes on large uploads
Summary: curlftpfs eating all memory and crashes on large uploads
Alias: None
Product: Fedora
Classification: Fedora
Component: curlftpfs
Version: 8
Hardware: All
OS: Linux
Target Milestone: ---
Assignee: David Anderson
QA Contact: Fedora Extras Quality Assurance
Keywords: Reopened
Depends On:
Reported: 2008-04-06 02:41 UTC by Dag
Modified: 2008-05-01 12:55 UTC (History)
0 users

Clone Of:
Last Closed: 2008-05-01 12:55:26 UTC


Description Dag 2008-04-06 02:41:40 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; fr; rv: Gecko/20080208 Fedora/ Firefox/

Description of problem:
When you mount a directory via curlftpfs:
curlftpfs ftp://user:password@host.com/ /mnt/myftp
then try to upload a large file (e.g. 900MB):
cp my_large_file.tgz /mnt/myftp/
curlftpfs tries to load the entire file into memory before sending it, so it "eats" all of the computer's memory and crashes before the transfer completes.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Mount a directory from an FTP server on a local mount point
2. Try to put a large file in the mount point
3. curlftpfs takes all the available memory and crashes

Actual Results:
curlftpfs crashed, and only part of the file was transferred to the FTP server.

Expected Results:
curlftpfs should transfer the file without loading it all into memory, just as any other FTP transfer software does.
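The expected behaviour can be sketched as a plain chunked copy, where memory use is bounded by one small buffer regardless of file size. This is only an illustration of the streaming idea the reporter asks for, not curlftpfs code; all names here are hypothetical:

```c
#include <stdio.h>

/* Copy src to dst in fixed-size chunks so memory use stays constant
 * no matter how large the file is. A minimal sketch of streaming
 * transfer; curlftpfs itself would write to an FTP data connection
 * instead of a local file. */
static int stream_copy(const char *src, const char *dst)
{
    char buf[64 * 1024];            /* 64 KiB window: the only buffer */
    FILE *in = fopen(src, "rb");
    FILE *out = fopen(dst, "wb");
    size_t n;
    int rc = -1;

    if (!in || !out)
        goto done;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        if (fwrite(buf, 1, n, out) != n)
            goto done;              /* short write: give up */
    }
    rc = ferror(in) ? -1 : 0;       /* 0 only if the read side is clean */
done:
    if (in) fclose(in);
    if (out) fclose(out);
    return rc;
}
```

A 900MB file copied this way never needs more than the 64 KiB window in memory.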

Additional info:
There is a bug filed on the author's web site dated 2006-06-03 15:55 (so it is 2 years old now), and there has been no news or any sign of activity for a year.
Here is the link:

As this bug blocks any large file transfer, it makes curlftpfs unusable, and the runaway memory use can bring the whole system down.

Comment 1 David Anderson 2008-04-07 14:53:47 UTC
Hello Dag,  I'm afraid I have no relevant coding skills, so where there's no patch or upstream fix available there's nothing I can do if upstream isn't willing to fix it. Maybe you could try asking on the Fedora mailing lists (e.g. fedora-devel) to find someone who might be willing to have a look.  David

Comment 2 Dag 2008-04-15 16:15:36 UTC
Hello David, I found something interesting:
the main site for curlftpfs seems dead, but I found this:

with this post:
Actually the code in the google svn fixes this problem: 

So can anyone do something to update curlftpfs? The bug described here is severe, as it eats all the available memory and
swap, then crashes, possibly causing data loss.

Comment 3 Dag 2008-04-15 16:51:44 UTC
I compiled this svn version:
1. svn checkout http://curlftpfs.googlecode.com/svn/trunk/ curlftpfs-read-only
2. cd curlftpfs-read-only
3. aclocal
4. libtoolize
5. autoheader
6. automake --add-missing
7. autoconf
8. ./configure
9. make
10. make install

After trying it, uploads of large files work perfectly now, so I will try to
persuade them to make a new release with this bug fixed.
Bye for now

Comment 4 David Anderson 2008-04-16 09:26:26 UTC
Thank you.

I downloaded the SVN code to see how much it differed from the present release 
(0.9.1). The answer was: quite a lot. The main file ftp.c has grown 25% larger 
and has new features. As I'm not a C coder I wouldn't feel confident packaging 
the SVN version, as I wouldn't be able to support it if it had 
regressions/further bugs, so I'd prefer to wait for a new release to be made. 
Alternatively, if you were able to isolate a patch that implemented just the 
bug-fix you're looking for, I'd be willing to look at that.


Comment 5 Dag 2008-04-18 08:02:49 UTC
Hello David,
I posted on the author's forum and the author answered:
I'm the original author of curlftpfs. Norbert volunteered to solve the problem
with file uploading. He took a patch from Miklos Szeredi, applied and fixed some
bugs. I did not make a new release because I felt that this is not a
satisfactory solution. It seems to work, but it really doesn't when you're doing
reading and writing from the same handle or if you're writing to arbitrary
positions. I felt it would fail in many obscure ways and leave the user wondering
what happened. 
If a lot of you have tested it extensively and feel that it's a good solution I
can make a release. 

So, as I'm not a good C coder and don't have time, I can't debug/patch/fix
anything in this code. The author is clearly asking for help to make this
project more reliable and fix bugs, so the question is: what can we do to help him?
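The author's concern about "writing to arbitrary positions" can be made concrete: a forward-only streaming uploader can accept data only at its current offset, so an out-of-order write has nowhere to go on the open FTP data connection. A minimal sketch under that assumption; the names and structure are hypothetical, not from curlftpfs:

```c
/* Sketch of why a streaming upload struggles with random-access
 * writes. The writer tracks the only offset it can accept; a write
 * anywhere else would require rewinding data already sent on the
 * wire, which a plain FTP STOR stream cannot do. */
struct stream_writer {
    long long next_offset;      /* only position we can accept */
};

static int stream_write(struct stream_writer *w, long long offset,
                        long long len)
{
    if (offset != w->next_offset)
        return -1;              /* out-of-order write: cannot stream it */
    w->next_offset += len;      /* pretend the bytes went on the wire */
    return 0;
}
```

Sequential writes succeed; rewriting an earlier region fails, which is roughly the "obscure ways" failure mode the author describes.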

Comment 6 David Anderson 2008-05-01 12:50:06 UTC
I'm afraid I can't do anything to help him beyond what I already did by 
packaging it for Fedora.

It seems to me that at the moment the options are to keep the present release 
version with its known bug, or switch to an SVN version with other bugs, known 
and unknown. Hmmm. I'm inclined to close the bug, because I don't want to annoy 
other users who won't thank me for introducing new bugs from a pre-release.

Having looked at the thread, I'd suggest that maybe the code could use the old 
behaviour if the file is below a certain size (related to the amount of free 
memory), and the new behaviour otherwise. Or it could switch behaviour when no 
more memory is available. But it would need someone who knows the code better 
to make a decision and produce a patch.
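The size-threshold idea could look roughly like the policy helper below. This is purely illustrative: the function name, the halve-the-budget rule, and the types are assumptions, not anything present in curlftpfs:

```c
/* Hypothetical policy for choosing between the old behaviour
 * (buffer the whole file in memory) and the new one (stream it):
 * keep the buffered path only when the file fits comfortably
 * inside the memory we are willing to spend. */
enum upload_mode { UPLOAD_BUFFERED, UPLOAD_STREAMED };

static enum upload_mode choose_upload_mode(long long file_size,
                                           long long mem_budget)
{
    /* Unknown size (negative) or anything near the budget is
     * streamed; only clearly small files take the buffered path. */
    if (file_size >= 0 && file_size < mem_budget / 2)
        return UPLOAD_BUFFERED;
    return UPLOAD_STREAMED;
}
```

With a 256MB budget, a 1MB file would take the buffered path while the reporter's 900MB upload would be streamed.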
