Red Hat Bugzilla – Bug 441089
curlftpfs eating all memory and crashes on large uploads
Last modified: 2008-05-01 08:55:26 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; fr; rv:1.8.1.12) Gecko/20080208 Fedora/2.0.0.12-1.fc8 Firefox/2.0.0.12
Description of problem:
when you mount a directory via curlftpfs:
curlftpfs ftp://user:password@example.com/ /mnt/myftp
then try to upload a large file (e.g. 900 MB):
cp my_large_file.tgz /mnt/myftp/
curlftpfs will try to load the entire file into memory before sending it... so it "eats" all of the machine's memory and crashes before the file is sent.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Mount a directory from an FTP server on a local mount point
2. Try to copy a large file into the mount point
3. curlftpfs will take all the available memory and crash
Actual results:
curlftpfs crashed, and only part of the file was transferred to the FTP server.
Expected results:
curlftpfs should transfer the file without loading it all into memory, just as any other FTP file transfer software does.
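For comparison, a plain libcurl client can stream an upload straight from disk through a read callback, so the whole file never sits in memory at once. This is only a rough standalone sketch (the URL and filename are placeholders, and it is not the curlftpfs code itself):

#include <stdio.h>
#include <curl/curl.h>

/* libcurl pulls the data in small chunks through this callback,
 * so the whole file never has to be held in memory */
static size_t read_cb(char *buffer, size_t size, size_t nitems, void *userdata)
{
    return fread(buffer, size, nitems, (FILE *)userdata);
}

int main(void)
{
    FILE *in = fopen("my_large_file.tgz", "rb");
    if (!in)
        return 1;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();

    /* placeholder URL, same shape as the mount command above */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "ftp://user:password@example.com/my_large_file.tgz");
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
    curl_easy_setopt(curl, CURLOPT_READDATA, in);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "upload failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    fclose(in);
    return res == CURLE_OK ? 0 : 1;
}

A fix inside curlftpfs would presumably have to do something similar from its FUSE write path instead of buffering the whole file first.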
There is a bug filed on the author's web site dated 2006-06-03 15:55 (so it's 2 years old now), and there has been no news or any sign of activity for about a year.
Here is the link:
As this bug blocks any large file transfer it makes curlftpfs unusable, and the excessive memory use can bring the whole system down.
Hello Dag, I'm afraid I have no relevant coding skills, so if upstream isn't willing to fix it, there's nothing I can do. Maybe you could try asking on the Fedora mailing lists (e.g. fedora-devel) to find someone who might be willing to have a look. But I'm afraid I don't have the skills to fix bugs in the code when there's no patch or upstream fix available. David
Hello David, I found something interesting:
The main site for curlftpfs seems dead, but I found this:
with this post:
Actually the code in the Google SVN fixes this problem:
So can anyone do something to update curlftpfs? The bug described here is severe, as it eats all the available memory and swap, then crashes, possibly causing data loss.
I compiled this SVN version:
1. svn checkout http://curlftpfs.googlecode.com/svn/trunk/ curlftpfs-read-only
2. cd curlftpfs-read-only
6. automake --add-missing
10. make install
After trying it, uploads of large files work perfectly now, so I will try to persuade them to make a new release with this bug fixed.
Bye for now
I downloaded the SVN code to see how much it differed from the present release
(0.9.1). The answer was quite a lot; the main file ftp.c has got 25% bigger,
and has new features. As I'm not a C coder I wouldn't feel confident packaging
the SVN version, since I wouldn't then be able to support it if it had
regressions or further bugs, so I'd prefer to wait for a new release to be made.
Alternatively if you were able to isolate a patch that just implemented the
bug-fix you're looking for, I'd be willing to look at that.
I posted on the author's forum and the author answered:
I'm the original author of curlftpfs. Norbert volunteered to solve the problem
with file uploading. He took a patch from Miklos Szeredi, applied it and fixed some
bugs. I did not make a new release because I felt that this is not a
satisfactory solution. It seems to work, but it really doesn't when you're doing
reading and writing from the same handle or if you're writing to arbitrary
positions. I felt it would fail in many obscure ways and leave the user wondering.
If a lot of you have tested it extensively and feel that it's a good solution I
can make a release.
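To give an idea of what he means by reading and writing from the same handle or writing to arbitrary positions, here is a small standalone example (the path is only illustrative). A program like this is perfectly legal against a FUSE mount, but a plain sequential FTP STOR upload can't honour it without buffering or re-uploading, which is presumably where the streaming patch gets into trouble:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* the path is just an example of a file on the curlftpfs mount */
    int fd = open("/mnt/myftp/some_file", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[16];
    /* read back from the same handle while an upload may be in progress ... */
    if (pread(fd, buf, sizeof buf, 0) < 0)
        perror("pread");

    /* ... and write at an offset far away from the current upload position */
    if (pwrite(fd, "hello", 5, 100 * 1024 * 1024) < 0)
        perror("pwrite");

    close(fd);
    return 0;
}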
So, as I'm not a good C coder and don't have time, I can't debug/patch/fix
anything in this code. The author here is clearly asking for help to make this
project more reliable and fix bugs, so the question is: what can we do to help him?
I'm afraid I can't do anything to help him beyond what I already did, which was
to package the thing and get it into Fedora.
It seems to me that at the moment the options are to keep the present release
version with the known bug, or switch to an SVN version with other bugs, known
and unknown. Hmmm. I'm inclined to close the bug, because I don't want to annoy
other users who won't thank me for introducing new bugs from a pre-release.
Having looked at the thread, I'd suggest that maybe the code could use the old
behaviour if the file is below a certain size (related to the amount of free
memory), and the new behaviour if not. Or it could switch behaviour when no more
memory is available. But it would need someone who knows the code better
to make a decision and produce a patch.
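Just to sketch the idea (this is only a rough illustration, not a patch against the real ftp.c, and the half-of-free-RAM threshold is completely arbitrary): the code could ask the kernel how much memory is free and only take the old buffer-in-memory path when the file comfortably fits:

#include <stdbool.h>
#include <stdio.h>
#include <sys/sysinfo.h>
#include <sys/types.h>

/* hypothetical helper, not taken from the curlftpfs sources: decide whether
 * the old buffer-the-whole-file behaviour is safe for a file of this size */
static bool can_buffer_in_memory(off_t file_size)
{
    struct sysinfo si;

    if (sysinfo(&si) != 0)
        return false;                 /* unknown memory state: play it safe */

    /* free RAM in bytes; arbitrarily keep at least half of it untouched */
    unsigned long long free_bytes =
        (unsigned long long)si.freeram * si.mem_unit;

    return (unsigned long long)file_size < free_bytes / 2;
}

int main(void)
{
    /* e.g. a 900 MB file, like the one in the original report */
    off_t size = 900LL * 1024 * 1024;
    printf("buffer in memory: %s\n", can_buffer_in_memory(size) ? "yes" : "no");
    return 0;
}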