Bug 998263 - "throttle" option does not work
Summary: "throttle" option does not work
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: yum
Version: 18
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Packaging Maintenance Team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-18 17:47 UTC by Jan Kratochvil
Modified: 2013-08-19 11:56 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-19 08:10:03 UTC
Type: Bug
Embargoed:



Description Jan Kratochvil 2013-08-18 17:47:46 UTC
Description of problem:
The "throttle" option does not limit the total download speed. It works as expected only with max_connections=1, but that is neither documented nor useful.

Version-Release number of selected component (if applicable):
yum-3.4.3-54.fc18.noarch

How reproducible:
Always.

Steps to Reproduce:
echo throttle=200k >>/etc/yum.conf
yum upgrade

Actual results:
250 KB/s (the physical limit of my link).

Expected results:
200 KB/s

Additional info:
With throttle=20k, the total speed is 20 KB/s multiplied by the number of concurrent downloads.

Comment 1 Zdeněk Pavlas 2013-08-19 08:10:03 UTC
Yes, throttle translates to per-connection speed limits.  I'll update the documentation.  Package downloads are mostly independent; there is no other way to throttle the total speed than using "throttle" and "max_connections" together.  What's wrong with max_connections=1?  I assume it solves your problem.
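
For illustration, the combined setup would look like this in /etc/yum.conf (the values are only an example):

throttle=200k
max_connections=1

With a single connection, the per-connection limit is also the total limit.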

Comment 2 Jan Kratochvil 2013-08-19 08:28:35 UTC
It is a bug; it can only be WONTFIXed if you wish so.

The user really does not care about the per-connection limit when the number of connections is unpredictable.  The user cares about the total data rate of all the connections together.

max_connections=1 is a poor workaround, as I commonly get only around 50 KB/s from a single site.

Comment 3 Zdeněk Pavlas 2013-08-19 08:56:56 UTC
I'm curious how you would want to "fix" this.  Yes, the code that estimates the total data rate exists (it does not run when the progress display is off, but that is not the biggest problem here).  The interface between the downloaders and the parent process is currently mostly one-directional: progress updates flow up to the parent process.  Assume we make it duplex and send a scale factor down whenever the total data rate exceeds the throttle option.  Assume further that curl does not break when the MAX_RECV_SPEED_LARGE option changes arbitrarily during a transfer, and that the feedback loop is stable.
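
A rough sketch of that scale-factor idea (purely hypothetical names, not actual yum/urlgrabber code), assuming each downloader object could be told a new per-connection limit:

def rescale_connection_limits(downloaders, throttle_total):
    """Shrink or grow per-connection limits so that the sum of the
    observed per-connection rates approaches the configured total."""
    total_rate = sum(d.observed_rate() for d in downloaders)  # bytes/s
    if total_rate <= 0:
        return
    # A factor < 1 slows the connections down, > 1 lets them recover.
    factor = throttle_total / float(total_rate)
    # Dampen the correction so the feedback loop does not oscillate.
    factor = max(0.5, min(2.0, factor))
    for d in downloaders:
        # In a real implementation this would have to change the curl
        # handle's MAX_RECV_SPEED_LARGE mid-transfer -- exactly the
        # risky assumption discussed above.
        d.set_speed_limit(int(d.current_limit() * factor))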

THEN it will work.  But there are too many assumptions, and the effort/gain ratio seems too high to me.  Using a properly tuned traffic control script is so much easier, and more appropriate.
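
For reference, a minimal ingress-policing sketch of that alternative (interface name and rate are placeholders; 1600kbit is roughly 200 KB/s, and it caps all inbound IP traffic on the interface, not just yum):

tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 \
    u32 match ip src 0.0.0.0/0 police rate 1600kbit burst 20k drop flowid :1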

Comment 4 Jan Kratochvil 2013-08-19 11:30:05 UTC
That may have implementation problems, but with the current state of the code it might be best, for example, to enforce max_connections=1 whenever throttle is set, and print a warning if max_connections=1 was not set explicitly.
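
A sketch of that proposed safeguard (hypothetical option-handling code, not the actual yum config parser):

def enforce_single_connection(conf, logger):
    # Hypothetical names; the real yum option handling differs.
    if conf.throttle and conf.max_connections != 1:
        logger.warning("throttle limits each connection separately; "
                       "forcing max_connections=1 so it caps the total rate")
        conf.max_connections = 1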

Comment 5 Zdeněk Pavlas 2013-08-19 11:56:33 UTC
IMO, explicit is better than implicit.  E.g. one might want to allow 1-2 connections, each throttled at 100kbps, and this change would make that setup impossible.
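
For illustration, that kind of setup would simply be written out explicitly in /etc/yum.conf, e.g.:

throttle=100k
max_connections=2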

