Bug 998263 - "throttle" option does not work
"throttle" option does not work
Product: Fedora
Classification: Fedora
Component: yum
Hardware: x86_64 Linux
Severity: unspecified
Assigned To: packaging-team-maint
QA Contact: Fedora Extras Quality Assurance
Reported: 2013-08-18 13:47 EDT by Jan Kratochvil
Modified: 2013-08-19 07:56 EDT
CC: 7 users

Doc Type: Bug Fix
Last Closed: 2013-08-19 04:10:03 EDT
Type: Bug

Description Jan Kratochvil 2013-08-18 13:47:46 EDT
Description of problem:
The "throttle" option does not limit the total download rate. It works as expected only with max_connections=1, but that is neither documented nor useful.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
echo throttle=200k >>/etc/yum.conf
yum upgrade

Actual results:
Downloads run at 250 KB/s (the physical limit of my link), ignoring throttle=200k.

Expected results:
Total download rate limited to 200 KB/s.
Additional info:
With throttle=20k, the aggregate speed is 20 KB/s multiplied by the number of concurrent downloads.
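The behaviour described above can be sketched with a short shell calculation; the variable names and the connection count of 5 are illustrative assumptions, not values from yum's code:

```shell
# Sketch only: illustrates the reported behaviour, not yum internals.
# With a per-connection throttle of 20 KB/s and 5 parallel downloads,
# the aggregate rate is the product of the two.
throttle_kb=20
connections=5
total_kb=$((throttle_kb * connections))
echo "${total_kb} KB/s aggregate"   # prints "100 KB/s aggregate"
```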
Comment 1 Zdeněk Pavlas 2013-08-19 04:10:03 EDT
Yes, "throttle" translates to a per-connection speed limit.  I'll update the documentation.  Package downloads are mostly independent; there is no other way to throttle the total speed than using "throttle" and "max_connections" simultaneously.  What's wrong with max_connections=1?  It solves your problem, I assume.
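Combining the two options as suggested above could look like the following yum.conf fragment; the specific values are an illustrative assumption for a ~200 KB/s total budget, not a recommendation:

```ini
# /etc/yum.conf (fragment) - illustrative values only
# effective total cap ~= throttle x max_connections = 50 KB/s x 4 = 200 KB/s
throttle=50k
max_connections=4
```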
Comment 2 Jan Kratochvil 2013-08-19 04:28:35 EDT
It is a bug; you can WONTFIX it if you wish.

A user really does not care about the per-connection limit when the number of connections is effectively random.  The user cares about the total data rate of all the connections together.

max_connections=1 is a poor workaround, as I commonly get only, for example, 50 KB/s from a single site.
Comment 3 Zdeněk Pavlas 2013-08-19 04:56:56 EDT
I'm curious how you'd want to "fix" this. Yes, the code that estimates the total data rate is there (it does not run when the progress display is off, but that is not the biggest problem here). The interface between the downloaders and the parent process is currently mostly one-directional (progress updates flow to the parent process).  Assume we make it duplex and send a scale factor down when the total data rate exceeds the throttle option.  Assume that curl does not break when the MAX_RECV_SPEED_LARGE option changes randomly during a transfer, and that the feedback loop is stable.
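The feedback idea in this comment could be sketched as below; everything here (the variable names, the rescaling rule, the sample rates) is an assumption about a hypothetical design, not existing yum/urlgrabber code:

```shell
# Hypothetical rescaling step: if the measured aggregate exceeds the
# requested throttle, scale each connection's limit down proportionally.
throttle_total=200   # requested total, KB/s
measured_total=250   # measured aggregate, KB/s
per_conn_limit=50    # current per-connection limit, KB/s

if [ "$measured_total" -gt "$throttle_total" ]; then
    # integer arithmetic: new limit = old limit * throttle / measured
    per_conn_limit=$((per_conn_limit * throttle_total / measured_total))
fi
echo "${per_conn_limit} KB/s per connection"   # prints "40 KB/s per connection"
```

In a real implementation this step would run in the parent process and the new limit would be sent down the duplex channel to each downloader.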

THEN it will work. But there are too many assumptions, and the effort/gain ratio seems too high to me.  Using a properly tuned traffic control script is so much easier, and more appropriate.
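The "traffic control script" alternative could look like the following `tc` sketch. The interface name and rates are assumptions (200 KB/s is roughly 1600 kbit/s), it limits all inbound IP traffic on the device rather than just yum, and it must run as root:

```shell
# Assumption: eth0 is the relevant interface.  Downloads are ingress
# traffic, so attach an ingress qdisc and police it at ~1600 kbit/s.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
    match ip src 0.0.0.0/0 police rate 1600kbit burst 32k drop flowid :1

# Remove the limit again:
tc qdisc del dev eth0 ingress
```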
Comment 4 Jan Kratochvil 2013-08-19 07:30:05 EDT
It may have implementation problems, but with the current state of the code it may be best, for example, to enforce max_connections=1 when throttle is set, with a warning (if max_connections=1 was not set explicitly).
Comment 5 Zdeněk Pavlas 2013-08-19 07:56:33 EDT
IMO, explicit is better than implicit. E.g., one might want to allow 1-2 connections, each throttled at 100 kbps, and this change would render that setup impossible.
