Created attachment 480395 [details]
anaconda.log

Description of problem:
Booted the F15 Alpha rc1 DVD and proceeded to the package repo step. Failed to add an ftp repo directly; had to add an http repo first, after which adding the ftp repo worked.

Version-Release number of selected component (if applicable):
anaconda 15.20.1

How reproducible:
always

Steps to Reproduce:
1. Boot the F15-alpha-rc1 DVD iso
2. https://fedoraproject.org/wiki/QA:Testcase_Additional_Ftp_Repository

Actual results:
Failed to retrieve the ftp repository unless an http repo was added first
Created attachment 480396 [details]
tmpfiles.tar.gz
Created attachment 481010 [details]
anaconda.log with failing added ftp:// repo

According to the description this seems like a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=623067#c10. For me, the ftp:// repo just doesn't work (adding an http repo first doesn't help). The logs from comment #1 don't contain info about the failing repo addition (they seem to have been gathered too soon).
Also seems related duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=624431 (leading to https://bugzilla.redhat.com/show_bug.cgi?id=643656). But the fact that adding an http:// repo in advance fixes it is confusing.
(In reply to comment #3)
> Also seems related duplicate of
'related to', I mean
It also appears on Fedora15-RC1-i386.
Created attachment 498443 [details]
anaconda.log

Also reproduced on F-15-final-RC1-x86_64 (anaconda 15.31).
Adding CommonBugs so we can document this issue
Created attachment 502042 [details]
urlgrabber log

After more investigation, my suspicion from comments #2 and #3 seems to be a red herring. The attached logs are from an installation of F15 with an http base repository, adding an ftp repository in the UI.
Created attachment 502043 [details]
urlgrabber stderr log part 1 (successful http downloads)
Created attachment 502044 [details]
urlgrabber stderr log part 2 (failing ftp download)

Mixed up http: and ftp: connections?
From the log in comment #10 it seems that curl, although it is not reusing any connection for the added ftp repository (there is no existing ftp connection), is checking the connection speed of the previously used - now reported as dead - http connection (perhaps #0) that was used to fetch the base repository. As I checked, if you manage to add the new ftp repository within 30 s (see the timeout value in the urlgrabber log) of the last data transfer on the http connection (installation repository setup), it works OK.

Kamil, do you think that what is happening here is OK from the curl point of view? (Please see comments #8 - #10.)

Note: adding urlgrabber.grabber.reset_curl_obj() before the setup of the added repository fixes the problem in a brute-force way.
The current_speed value seems to be wrongly computed in that case, maybe some stale state info is being used from the already closed connection. Which libcurl options are used to set the speed limit? Which values are given to them?
Yes, that was my guess too. See the timeout values in the log from comment #9, which are 300 and then 30.0 (the change seems strange to me too). Logging of a file transfer starts with a "combined options" line (or a preceding "MIRROR:" line if it is there).

The value should be passed to these libcurl options by urlgrabber:

        # timeouts
        timeout = 300
        if hasattr(opts, 'timeout'):
            timeout = int(opts.timeout or 0)
        self.curl_obj.setopt(pycurl.CONNECTTIMEOUT, timeout)
        self.curl_obj.setopt(pycurl.LOW_SPEED_LIMIT, 1)
        self.curl_obj.setopt(pycurl.LOW_SPEED_TIME, timeout)
Created attachment 502543 [details]
an unsuccessful attempt to reproduce the anaconda issue out of anaconda

I was unable to reproduce the issue. What am I doing differently?
(In reply to comment #14)
> Created attachment 502543 [details]
> an unsuccessful attempt to reproduce the anaconda issue out of anaconda
>
> I was unable to reproduce the issue. What am I doing differently?

Hm, I can't say; thanks for the test. I ran it in the anaconda environment with the same results (not reproduced). I think I'll have to look higher up - at urlgrabber or yum.

Just a note which doesn't invalidate your test: it is OK to use timeout 30 for the http connection, as that is the value before the failing ftp connection in the urlgrabber log, and roughly the limit I observed to be decisive for success or failure of the added ftp: repo.
CURLOPT_CONNECTTIMEOUT [1] limits only the time spent initiating a new connection. It does not in any way limit the subsequent transfer once the connection is established. The overall timeout can be set with the CURLOPT_TIMEOUT [2] option. There are several other *TIMEOUT options to control corner cases, some of them protocol-specific. AFAIK, libcurl sets no timeouts by default.

[1] http://curl.haxx.se/libcurl/c/curl_easy_setopt.html#CURLOPTCONNECTTIMEOUT
[2] http://curl.haxx.se/libcurl/c/curl_easy_setopt.html#CURLOPTTIMEOUT
I meant 30 s of timeout as the urlgrabber value which is used to set LOW_SPEED_TIME. It is LOW_SPEED_TIME, not CONNECTTIMEOUT, which seems important here, because it is this function that fails for the ftp connection (probably not from the exact libcurl version):

CURLcode Curl_speedcheck(struct SessionHandle *data,
                         struct timeval now)
{
  if((data->progress.current_speed >= 0) &&
     data->set.low_speed_time &&
     (Curl_tvlong(data->state.keeps_speed) != 0) &&
     (data->progress.current_speed < data->set.low_speed_limit)) {
    long howlong = Curl_tvdiff(now, data->state.keeps_speed);

    /* We are now below the "low speed limit". If we are below it for
       "low speed time" seconds we consider that enough reason to abort the
       download. */
    if( (howlong/1000) > data->set.low_speed_time) {
      /* we have been this slow for long enough, now die */
      failf(data,
            "Operation too slow. "
            "Less than %ld bytes/sec transfered the last %ld seconds",
            data->set.low_speed_limit,
            data->set.low_speed_time);
      return CURLE_OPERATION_TIMEDOUT;
    }
    Curl_expire(data, howlong);
  }
  else {
    /* we keep up the required speed all right */
    data->state.keeps_speed = now;

    if(data->set.low_speed_limit)
      /* if there is a low speed limit enabled, we set the expire timer to
         make this connection's speed get checked again no later than when
         this time is up */
      Curl_expire(data, data->set.low_speed_time*1000);
  }
  return CURLE_OK;
}

In my observation its result:
"Less than 1 bytes/sec transfered the last 30 seconds"
seems true only for the previous http: connection. If I add the ftp: repo less than 30 s after the last http transfer (that is, more than 1 byte was transferred in the last 30 seconds on the previous http connection!), it succeeds. This was the motivation of my guess.
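For illustration only, the abort condition of Curl_speedcheck() above can be modeled in a few lines of plain Python (a simplified re-implementation, not libcurl itself; it works in seconds instead of libcurl's millisecond timeval arithmetic). It shows how a keeps_speed timestamp left over from a previous connection makes the check fire for a brand-new transfer that has not yet moved any data:

```python
# Simplified Python model of libcurl's Curl_speedcheck() abort condition
# (illustrative only, not the real implementation). Times are in seconds.

def speedcheck(current_speed, low_speed_limit, low_speed_time,
               keeps_speed, now):
    """Return True if the transfer should be aborted as 'too slow'."""
    if (current_speed >= 0 and low_speed_time
            and keeps_speed is not None
            and current_speed < low_speed_limit):
        # Below the speed limit: abort once we have been slow for longer
        # than low_speed_time.
        return (now - keeps_speed) > low_speed_time
    # Fast enough; real libcurl would reset keeps_speed to `now` here.
    return False

# A fresh ftp transfer (current_speed still 0), but keeps_speed is stale
# state from an http transfer that last moved data 35 s ago: the 30 s
# window is already exceeded, so the new transfer aborts immediately.
assert speedcheck(current_speed=0, low_speed_limit=1, low_speed_time=30,
                  keeps_speed=0.0, now=35.0) is True

# Started within 30 s of the previous transfer's last activity, the
# check still passes - matching the behavior observed above.
assert speedcheck(current_speed=0, low_speed_limit=1, low_speed_time=30,
                  keeps_speed=0.0, now=20.0) is False
```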
(In reply to comment #17)
> I meant 30s of timeout as urlgrabber value which is used to set LOW_SPEED_TIME.

I have never used the LOW_SPEED_TIME option myself, except in the testing example above.

> In my observation its result:
> "Less than 1 bytes/sec transfered the last 30 seconds"
> seems true just for the previous http: connection. If I add the ftp: repo in
> less than 30s from the last http transfer (that is more than 1 b was transfered
> the last 30 seconds for the previous http connection!), it succeeds. This was
> the motivation of my guess.

It should measure only the transfer speed. If it included the idle time in the computation of current_speed, that would be a bug. Nevertheless, I have not been able to reproduce the issue, so I cannot confirm any such bug in libcurl so far.
Never mind, thanks for your help. I'll try to find a reproducer at some level above libcurl.
Created attachment 503212 [details]
urlgrabber reproducer

I can reproduce the bug with the attached script in Fedora 15. It basically does this:
1. grab a file using http or ftp
2. sleep N seconds
3. try to grab a file using a different ftp connection with timeout < N seconds
4. this fails with "Operation too slow" (the speed check seems to be incorrectly applied to the connection from step 1)

The urlgrabber settings/methods used in the script are those that yum uses in anaconda. Details are in the script or comment #11.

Calling urlgrabber.grabber.reset_curl_obj() before step 3 fixes the issue, but it seems the proper fix should be at a lower level.
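The failure timeline in steps 1-4 can be sketched with a toy model in plain Python (illustrative only: CachedHandle and its fields are invented stand-ins for urlgrabber's cached pycurl object, not a real API). A last-activity timestamp kept on the cached handle survives across grabs, so a second grab started more than `timeout` seconds later is judged too slow immediately; discarding the handle - roughly what reset_curl_obj() does - avoids it:

```python
# Toy model (not real urlgrabber/libcurl code) of the reproducer's
# timeline: the cached handle remembers when it last moved data, and a
# later grab is aborted if that stale timestamp exceeds the timeout.

class CachedHandle:
    def __init__(self):
        self.keeps_speed = None  # stale across grabs: this models the bug

    def grab(self, timeout, now):
        if self.keeps_speed is not None and now - self.keeps_speed > timeout:
            raise TimeoutError("Operation too slow")
        self.keeps_speed = now  # record last activity

handle = CachedHandle()
handle.grab(timeout=10, now=0)       # step 1: first grab succeeds
try:
    handle.grab(timeout=10, now=13)  # steps 2-3: started 13 s later, timeout 10 s
except TimeoutError as e:
    print(e)                         # step 4: aborted as "too slow"

fresh = CachedHandle()               # models reset_curl_obj(): new handle
fresh.grab(timeout=10, now=13)       # succeeds with no stale state
```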
Can be related to bug #633244. I'd like to keep this open, though, to consider a fix at the anaconda level (adding urlgrabber.grabber.reset_curl_obj()).
(In reply to comment #20)
> Created attachment 503212 [details]
> urlgrabber reproducer

There is a default delay == 7 in the script, which is a value that makes it pass. To make it fail (reproduce the bug), just run it with a value greater than 10:

679709_reproducer.py 13
Created attachment 503249 [details]
proposed fix

This indeed appears to be a bug in libcurl. The attached patch solves the problem for me, but I will need some time to discuss the fix with upstream.
Created attachment 503481 [details]
a reliable libcurl-based reproducer
proposed upstream: http://thread.gmane.org/gmane.comp.web.curl.library/32076
upstream commit: https://github.com/bagder/curl/commit/f551aa5
fixed in curl-7.21.6-2.fc16
curl-7.21.3-7.fc15 has been submitted as an update for Fedora 15. https://admin.fedoraproject.org/updates/curl-7.21.3-7.fc15
Package curl-7.21.3-7.fc15:
* should fix your issue,
* was pushed to the Fedora 15 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing curl-7.21.3-7.fc15'
as soon as you are able to.

Please go to the following url:
https://admin.fedoraproject.org/updates/curl-7.21.3-7.fc15
then log in and leave karma (feedback).
curl-7.21.3-7.fc15 has been pushed to the Fedora 15 stable repository. If problems still persist, please make note of it in this bug report.
*** Bug 695054 has been marked as a duplicate of this bug. ***