Bug 679709 - Unable to retrieve ftp repository unless adding a http repo first
Summary: Unable to retrieve ftp repository unless adding a http repo first
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: curl
Version: 15
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Kamil Dudka
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: https://fedoraproject.org/wiki/Common...
Duplicates: 695054 (view as bug list)
Depends On:
Blocks:
 
Reported: 2011-02-23 10:00 UTC by He Rui
Modified: 2011-06-23 09:58 UTC (History)
CC: 7 users

Fixed In Version: curl-7.21.3-7.fc15
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-06-15 23:54:02 UTC
Type: ---
Embargoed:


Attachments
anaconda.log (12.10 KB, text/plain)
2011-02-23 10:00 UTC, He Rui
no flags Details
tmpfiles.tar.gz (56.68 KB, application/x-gzip)
2011-02-23 10:01 UTC, He Rui
no flags Details
anaconda.log with failing added ftp:// repo (10.57 KB, text/plain)
2011-02-25 14:51 UTC, Radek Vykydal
no flags Details
anaconda.log (9.85 KB, application/octet-stream)
2011-05-12 07:04 UTC, He Rui
no flags Details
urlgrabber log (46.37 KB, text/plain)
2011-05-31 16:38 UTC, Radek Vykydal
no flags Details
urlgrabber stderr log part 1 (successful http downloads) (12.28 KB, image/png)
2011-05-31 16:41 UTC, Radek Vykydal
no flags Details
urlgrabber stderr log part 2 (failing ftp download) (12.80 KB, image/png)
2011-05-31 16:43 UTC, Radek Vykydal
no flags Details
an unsuccessful attempt to reproduce the anaconda issue out of anaconda (1.60 KB, text/plain)
2011-06-02 14:05 UTC, Kamil Dudka
no flags Details
urlgrabber reproducer (2.54 KB, text/plain)
2011-06-06 12:52 UTC, Radek Vykydal
no flags Details
proposed fix (570 bytes, patch)
2011-06-06 14:44 UTC, Kamil Dudka
no flags Details | Diff
a reliable libcurl-based reproducer (1.58 KB, text/plain)
2011-06-07 13:41 UTC, Kamil Dudka
no flags Details

Description He Rui 2011-02-23 10:00:36 UTC
Created attachment 480395 [details]
anaconda.log

Description of problem:
Booted F15 Alpha rc1 DVD and proceeded to the package repo step. Failed to add an ftp repo directly. I had to add an http repo first; after that, adding the ftp repo worked.

Version-Release number of selected component (if applicable):
anaconda 15.20.1

How reproducible:
always

Steps to Reproduce:
1. Boot F15-alpha-rc1 dvd iso
2. https://fedoraproject.org/wiki/QA:Testcase_Additional_Ftp_Repository

Actual results:
Failed to retrieve the ftp repository unless an http repo was added in advance.

Comment 1 He Rui 2011-02-23 10:01:18 UTC
Created attachment 480396 [details]
tmpfiles.tar.gz

Comment 2 Radek Vykydal 2011-02-25 14:51:37 UTC
Created attachment 481010 [details]
anaconda.log with failing added ftp:// repo

According to the description it seems like a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=623067#c10. For me, ftp:// just doesn't work (adding an http repo first doesn't help). The logs from comment #1 don't contain info about the failing repo add (they seem to have been gathered too soon).

Comment 3 Radek Vykydal 2011-02-25 15:48:43 UTC
Also seems related duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=624431 (leading to https://bugzilla.redhat.com/show_bug.cgi?id=643656). But the fact that adding an http:// repo in advance fixes it is confusing.

Comment 4 Radek Vykydal 2011-02-25 15:49:22 UTC
(In reply to comment #3)
> Also seems related duplicate of
'related to' I mean

Comment 5 Tao Wu 2011-05-12 06:35:40 UTC
It also appears on Fedora 15 RC1 i386.

Comment 6 He Rui 2011-05-12 07:04:49 UTC
Created attachment 498443 [details]
anaconda.log

Also reproduced on F-15-final-RC1-x86_64 (anaconda 15.31).

Comment 7 James Laska 2011-05-13 16:34:30 UTC
Adding CommonBugs so we can document this issue

Comment 8 Radek Vykydal 2011-05-31 16:38:51 UTC
Created attachment 502042 [details]
urlgrabber log

After more investigation, my suspicion from comment #2 and comment #3 seems to have been a red herring. The logs I am attaching are from an installation of F15 with an http base repository, adding an ftp repository in the UI.

Comment 9 Radek Vykydal 2011-05-31 16:41:41 UTC
Created attachment 502043 [details]
urlgrabber stderr log part 1 (successful http downloads)

Comment 10 Radek Vykydal 2011-05-31 16:43:30 UTC
Created attachment 502044 [details]
urlgrabber stderr log part 2 (failing ftp download)

Mixed up http: and ftp: connections?

Comment 11 Radek Vykydal 2011-06-01 12:17:57 UTC
From the log in comment #10 it seems that curl, although it is not reusing any connection for the added ftp repository (there is no existing ftp connection), is checking the connection speed of the previously used - now claimed as dead - http connection (perhaps #0) that was used to fetch the base repository. As I checked, if you manage to add the new ftp repository within 30 s (see the timeout value in the urlgrabber log) of the last data transfer on the http connection (installation repository setup), it works OK.

Kamil, do you think that what is happening here is OK from the curl point of view? (please see comments #8 - #10)

Note:
Adding urlgrabber.grabber.reset_curl_obj() before the setup of the added repository fixes the problem in a brute-force way.
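
For illustration, a minimal sketch of that brute-force workaround, assuming it is placed right before the added repo's setup; repo_setup() is a hypothetical placeholder, not anaconda's actual call:

    import urlgrabber.grabber

    # Throw away the cached pycurl handle so the next transfer starts with a
    # clean state instead of inheriting the stale speed-check data left over
    # from the previous (now dead) http connection.
    urlgrabber.grabber.reset_curl_obj()

    repo_setup()  # hypothetical placeholder for the added repo's metadata download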

Comment 12 Kamil Dudka 2011-06-02 13:02:54 UTC
The current_speed value seems to be computed incorrectly in that case; maybe some stale state info from the already closed connection is being used.  Which libcurl options are used to set the speed limit?  Which values are given to them?

Comment 13 Radek Vykydal 2011-06-02 13:30:35 UTC
Yes, that was my guess too. See the timeout values in the log from comment #9, which are 300, then 30.0 (the change seems strange to me too). Logging of a file transfer starts with the "combined options" line (or the preceding "MIRROR:" line if it is there).

The value should be passed to these libcurl options by urlgrabber:

        # timeouts
        timeout = 300
        if hasattr(opts, 'timeout'):
            timeout = int(opts.timeout or 0)
        self.curl_obj.setopt(pycurl.CONNECTTIMEOUT, timeout)
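        # LOW_SPEED_LIMIT/LOW_SPEED_TIME: abort the transfer if it stays
        # below 1 byte/s for `timeout` seconds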
        self.curl_obj.setopt(pycurl.LOW_SPEED_LIMIT, 1)
        self.curl_obj.setopt(pycurl.LOW_SPEED_TIME, timeout)

Comment 14 Kamil Dudka 2011-06-02 14:05:31 UTC
Created attachment 502543 [details]
an unsuccessful attempt to reproduce the anaconda issue out of anaconda

I was unable to reproduce the issue.  What am I doing differently?

Comment 15 Radek Vykydal 2011-06-03 13:33:41 UTC
(In reply to comment #14)
> Created attachment 502543 [details]
> an unsuccessful attempt to reproduce the anaconda issue out of anaconda
> 
> I was unable to reproduce the issue.  What am I doing differently?

Hm, I can't say. Thanks for the test; I ran it in the anaconda environment with the same results (not reproduced). I think I'll have to look higher - at urlgrabber or yum.

Just a note which doesn't invalidate your test - it is OK to use a timeout of 30 for the http connection, as that is the value used before the failing ftp connection in the urlgrabber log, and roughly the limit I observed to be decisive for success or failure of the added ftp: repo.

Comment 16 Kamil Dudka 2011-06-03 13:47:38 UTC
CURLOPT_CONNECTTIMEOUT [1] limits only the time spent initiating a new connection.  It does not in any way limit the subsequent transfer once the connection is established.  The overall timeout can be set with the CURLOPT_TIMEOUT [2] option.  There are several other *TIMEOUT options to control corner cases, some of which are protocol-specific.  AFAIK, libcurl sets no timeouts by default.

[1] http://curl.haxx.se/libcurl/c/curl_easy_setopt.html#CURLOPTCONNECTTIMEOUT
[2] http://curl.haxx.se/libcurl/c/curl_easy_setopt.html#CURLOPTTIMEOUT
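
For reference, a minimal pycurl sketch contrasting the two options (urlgrabber drives libcurl through pycurl); the URL and timeout values here are illustrative only:

    import pycurl

    c = pycurl.Curl()
    c.setopt(pycurl.URL, "ftp://example.org/pub/somefile")  # illustrative URL
    # Limits only the connection setup phase:
    c.setopt(pycurl.CONNECTTIMEOUT, 30)
    # Limits the whole operation, transfer included:
    c.setopt(pycurl.TIMEOUT, 300)
    c.perform()
    c.close()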

Comment 17 Radek Vykydal 2011-06-03 14:13:50 UTC
I meant the 30 s timeout as the urlgrabber value that is used to set LOW_SPEED_TIME. It is LOW_SPEED_TIME, not CONNECTTIMEOUT, that seems important here, because it is this function that fails for the ftp connection:

(this is probably not the exact libcurl version)

CURLcode Curl_speedcheck(struct SessionHandle *data,
                         struct timeval now)
{
  if((data->progress.current_speed >= 0) &&
     data->set.low_speed_time &&
     (Curl_tvlong(data->state.keeps_speed) != 0) &&
     (data->progress.current_speed < data->set.low_speed_limit)) {
    long howlong = Curl_tvdiff(now, data->state.keeps_speed);
  
    /* We are now below the "low speed limit". If we are below it
       for "low speed time" seconds we consider that enough reason
       to abort the download. */
                         
    if( (howlong/1000) > data->set.low_speed_time) {
      /* we have been this slow for long enough, now die */
      failf(data,
            "Operation too slow. "
            "Less than %ld bytes/sec transfered the last %ld seconds",
            data->set.low_speed_limit,
            data->set.low_speed_time);
      return CURLE_OPERATION_TIMEDOUT;
    }
    Curl_expire(data, howlong);
  }
  else {
    /* we keep up the required speed all right */
    data->state.keeps_speed = now;

    if(data->set.low_speed_limit)
      /* if there is a low speed limit enabled, we set the expire timer to
         make this connection's speed get checked again no later than when
         this time is up */
      Curl_expire(data, data->set.low_speed_time*1000);
  }
  return CURLE_OK;
}


In my observation, its result:
"Less than 1 bytes/sec transfered the last 30 seconds"
seems to be true only for the previous http: connection. If I add the ftp: repo less than 30 s after the last http transfer (that is, more than 1 byte was transferred during the last 30 seconds on the previous http connection!), it succeeds. This was the motivation for my guess.

Comment 18 Kamil Dudka 2011-06-03 14:26:39 UTC
(In reply to comment #17)
> I meant 30s of timeout as urlgrabber value which is used to set LOW_SPEED_TIME.

I have never used the LOW_SPEED_TIME option myself, except in the testing example above.

> In my observation its result:
> "Less than 1 bytes/sec transfered the last 30 seconds"
> seems true just for the previous http: connection. If I add the ftp: repo in
> less than 30s from the last http transfer (that is more than 1 b was transfered
> the last 30 seconds for the previous http connection!), it succeeds. This was
> the motivation of my guess.

It should measure only the transfer speed.  If it included the idle time in the computation of current_speed, that would be a bug.  Nevertheless, I was not successful in reproducing the issue, so I cannot confirm any such bug in libcurl so far.

Comment 19 Radek Vykydal 2011-06-03 14:37:16 UTC
Never mind, thanks for your help. I'll try to find a reproducer at some level above libcurl.

Comment 20 Radek Vykydal 2011-06-06 12:52:24 UTC
Created attachment 503212 [details]
urlgrabber reproducer

I can reproduce the bug with the attached script on Fedora 15. It does basically this (a rough sketch follows below):

1. grab a file using http or ftp
2. sleep N seconds
3. try to grab a file using a different ftp connection with a timeout < N seconds
4. this fails with "Operation too slow" (the speed check seems to be evaluated incorrectly against the connection from step 1)

The urlgrabber settings/methods used in the script are those that yum uses in anaconda.

Details are in the script or in comment #11.

Calling urlgrabber.grabber.reset_curl_obj() before step 3 fixes the issue, but it seems the proper fix should be at a lower level.
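
The attached reproducer is not inlined here; the following is only a rough sketch of the same sequence, with illustrative URLs, sleep and timeout values (not necessarily those used by the attachment):

    import time
    import urlgrabber
    import urlgrabber.grabber

    # 1. grab a file over http (or ftp)
    urlgrabber.urlgrab("http://example.org/repo/repomd.xml", timeout=30)

    # 2. stay idle for longer than the low-speed timeout
    time.sleep(40)

    # uncommenting the next line (the brute-force workaround) makes step 3 pass
    # urlgrabber.grabber.reset_curl_obj()

    # 3. grab a file over a different ftp connection with timeout < the sleep;
    #    with the buggy libcurl this fails with "Operation too slow" because
    #    the speed check is evaluated against the idle, already dead http
    #    connection
    urlgrabber.urlgrab("ftp://example.org/repo/repomd.xml", timeout=30)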

Comment 21 Radek Vykydal 2011-06-06 13:06:52 UTC
Can be related to bug #633244. I'd like to keep this open, though, to consider a fix at the anaconda level (adding urlgrabber.grabber.reset_curl_obj()).

Comment 22 Radek Vykydal 2011-06-06 13:44:41 UTC
(In reply to comment #20)
> Created attachment 503212 [details]
> urlgrabber reproducer
> 

There is a default delay == 7 in the script, which is a value that makes it pass; to make it fail (reproduce the bug), just run it with a value greater than 10:

679709_reproducer.py 13

Comment 23 Kamil Dudka 2011-06-06 14:44:35 UTC
Created attachment 503249 [details]
proposed fix

This indeed appears to be a bug in libcurl.  The attached patch solves the problem for me, but I will need some time to discuss the fix with upstream.

Comment 24 Kamil Dudka 2011-06-07 13:41:03 UTC
Created attachment 503481 [details]
a reliable libcurl-based reproducer

Comment 25 Kamil Dudka 2011-06-07 20:32:32 UTC
proposed upstream:

http://thread.gmane.org/gmane.comp.web.curl.library/32076

Comment 26 Kamil Dudka 2011-06-07 22:32:48 UTC
upstream commit:

https://github.com/bagder/curl/commit/f551aa5

Comment 27 Kamil Dudka 2011-06-07 23:24:54 UTC
fixed in curl-7.21.6-2.fc16

Comment 28 Fedora Update System 2011-06-08 15:14:11 UTC
curl-7.21.3-7.fc15 has been submitted as an update for Fedora 15.
https://admin.fedoraproject.org/updates/curl-7.21.3-7.fc15

Comment 29 Fedora Update System 2011-06-10 13:30:55 UTC
Package curl-7.21.3-7.fc15:
* should fix your issue,
* was pushed to the Fedora 15 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing curl-7.21.3-7.fc15'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/curl-7.21.3-7.fc15
then log in and leave karma (feedback).

Comment 30 Fedora Update System 2011-06-15 23:53:53 UTC
curl-7.21.3-7.fc15 has been pushed to the Fedora 15 stable repository.  If problems still persist, please make note of it in this bug report.

Comment 31 Radek Vykydal 2011-06-23 09:58:21 UTC
*** Bug 695054 has been marked as a duplicate of this bug. ***

