Description of problem:
If I run wget with --limit-rate=10K and the HTTP connection gets closed, then when it retries it seems to exceed the limit for a while before dropping back to it.

Version-Release number of selected component (if applicable):
wget-1.9.1-22 on Fedora Core 4, which identifies itself as 'GNU Wget 1.9+cvs-stable (Red Hat modified)'

How reproducible:
Easily

Steps to Reproduce:
1. Run wget --limit-rate=10K http://some.site/some.large.file
2. After it has been running for a while, suspend it with Ctrl-Z for a few minutes
3. Resume wget with fg

Actual results:
wget says the connection has dropped and reports 'reconnecting'; it then uses all available bandwidth instead of staying within the limit.

Expected results:
wget should never use more bandwidth than the --limit-rate limit.

Additional info:
I have checked the Red Hat patches and the problem doesn't seem to be related to any of them. I had a look at the source code ... The code calculates the amount of time it expects a chunk to take at the limit rate, and sleeps to make up the difference. So the burst after reconnection appears to be the limiter "catching up" on the time it spent idle (while disconnected), rather than a failure to reset the limit: the limiter believes the transfer is behind schedule and stops sleeping until the byte count catches up with the elapsed time. This doesn't give the desired effect of keeping the bandwidth below a fixed value. Would it be possible to change this approach?
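To make the "catching up" concrete, here is a minimal C sketch of the scheme as described above, not wget's actual source; the function names, the single global window, and the use of usleep are illustrative assumptions:

```c
/* Toy expected-time rate limiter, assuming the scheme described in
 * this report: compare total elapsed wall-clock time against the time
 * the transfer *should* have taken at `limit` bytes/sec, and sleep off
 * any surplus.  Illustrative only; not wget's actual internals. */
#include <time.h>
#include <unistd.h>

static double start_time;      /* wall-clock time when transfer began */
static long long total_bytes;  /* bytes received so far */

static double now (void)
{
  struct timespec ts;
  clock_gettime (CLOCK_MONOTONIC, &ts);
  return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void limit_init (void)
{
  start_time = now ();
  total_bytes = 0;
}

/* Call after each chunk of `nbytes` arrives; `limit` is bytes/sec. */
static void limit_rate (long long nbytes, long long limit)
{
  total_bytes += nbytes;
  double expected = (double) total_bytes / limit; /* time budget */
  double elapsed  = now () - start_time;

  /* If we are ahead of schedule, sleep off the surplus.  But after a
   * multi-minute suspension or disconnection, `elapsed` is far larger
   * than `expected`, so this branch is never taken on resume: the
   * transfer runs at full speed until the byte count "catches up"
   * with the idle time -- the burst reported in this bug. */
  if (expected > elapsed)
    usleep ((useconds_t) ((expected - elapsed) * 1e6));
}
```

One possible remedy, under the same assumptions, would be to bound how far elapsed may run ahead of expected (i.e. cap the accumulated credit), so that idle time is forgotten rather than repaid at full speed.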
This report targets the FC3 or FC4 products, which have now been EOL'd. Could you please check that it still applies to a current Fedora release, and either update the target product or close it? Thanks.
Tested on FC5, and it behaves as I wanted it to: the actual transfer rate is still limited as specified after resuming. Marking as resolved.