Bug 1291867

Summary: [conn] show progress when repo has metalink
Product: Fedora
Reporter: Christian Stadelmann <fedora>
Component: dnf
Assignee: rpm-software-management
Status: CLOSED ERRATA
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Priority: unspecified
Version: 24
CC: fedora, jmracek, mluscon, packaging-team-maint, vmukhame
Keywords: Triaged
Hardware: Unspecified
OS: Unspecified
Fixed In Version: dnf-2.5.1-1.fc26
Doc Type: Bug Fix
Last Closed: 2017-06-16 13:18:42 UTC
Type: Bug

Description Christian Stadelmann 2015-12-15 18:47:59 UTC
Description of problem:
Right now the rpmfusion mirrors seem to be suffering from high load. They barely respond to HTTP requests. When using dnf, it freezes for minutes.

Version-Release number of selected component (if applicable):
dnf-1.1.4-2.fc23.noarch
hawkey-0.6.2-3.fc23.x86_64
libsolv-0.6.14-7.fc23.x86_64
rpm-4.13.0-0.rc1.7.fc23.x86_64

How reproducible:
I don't really know. Right now (with the mirror offline) I can reproduce this issue consistently.

Steps to Reproduce:
1. start dnf with any action causing mirror refresh, e.g. `dnf upgrade`
2. wait.

Actual results:
For a very long time (~10 minutes) there is no response from dnf.

Expected results:
If a mirror is offline (and skip_if_unavailable is False, which is the case here), dnf should skip the mirror after a few seconds (e.g. 5000 milliseconds). It should not block for minutes waiting to connect to the mirror.

Additional info:
This issue also makes yumex-dnf (which uses the dnf API) freeze on start in such cases.
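
For illustration, a minimal sketch of how an API consumer such as yumex-dnf could lower the per-mirror connection timeout before refreshing metadata, assuming the public dnf Python API; the 5-second value is only an example, not a recommendation:

import dnf

base = dnf.Base()
base.conf.timeout = 5    # per-mirror connection timeout in seconds (example value)
base.read_all_repos()    # load repo definitions from /etc/yum.repos.d/
base.fill_sack()         # downloads metadata; each mirror attempt is now bounded by the shorter timeout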

Comment 1 Honza Silhan 2015-12-23 14:32:58 UTC
(In reply to Christian Stadelmann from comment #0)
> If a mirror is offline and skip_if_unavailable is False
You probably meant True

Do you still have such problems? I believe these were temporary rpmfusion server issues. Can you try it again, set the `timeout` dnf config value in /etc/dnf/dnf.conf to a lower value than the default, and post the result, please?
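
For illustration, a minimal /etc/dnf/dnf.conf sketch with a lowered timeout; the 10-second value is only an example, not a recommended setting:

[main]
# number of seconds to wait for a connection before timing out (see man dnf.conf)
timeout=10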

Comment 2 Honza Silhan 2015-12-23 14:33:52 UTC
Also, please attach the data described here [1].

[1] https://github.com/rpm-software-management/dnf/wiki/Bug-Reporting#connection-issue

Comment 3 Christian Stadelmann 2016-01-19 22:34:10 UTC
> > If a mirror is offline and skip_if_unavailable is False
> You probably meant True

Yes, I meant True.

> 
> Do you still have such problems? I believe these were temporary rpmfusion
> server issues. Can you try it again and set `timeout` dnf config value
> /etc/dnf/dnf.conf to lower value than default and post result, please?
> + attach the data described here [1], please.
> 
> [1]
> https://github.com/rpm-software-management/dnf/wiki/Bug-Reporting#connection-
> issue

$ rpm -q librepo curl dnf
librepo-1.7.16-2.fc23.x86_64
curl-7.43.0-4.fc23.x86_64
dnf-1.1.5-1.fc23.noarch

I wasn't able to reproduce it, and the /var/log/dnf.librepo.log file doesn't have timestamps and is gone already.

Is it possible that the timeout (default: 30 seconds) was just applied to each repo successively, maybe even two or three times per repo when retrying?

Comment 4 Honza Silhan 2016-01-25 12:14:39 UTC
(In reply to Christian Stadelmann from comment #3)

> Is it possible that the timeout (default: 30seconds) was just applied to
> each repo successively, maybe even two or three times per repo when retrying?

The timeout applies to each repo individually.

Comment 5 Christian Stadelmann 2016-01-25 14:14:51 UTC
Ok, that probably has been the cause then. With 6 rpmfusion repos enabled and a 30-second timeout each, it waited at least 3 minutes (6 × 30 s = 180 s).

Comment 6 Fedora Admin XMLRPC Client 2016-07-08 09:35:37 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 7 Jaroslav Mracek 2017-05-23 15:04:10 UTC
Ok, I have found that the timeout in dnf is set to 120 s, which differs from yum. I created a PR that changes it to 30 s, like in yum (https://github.com/rpm-software-management/dnf/pull/824). That will probably help. Showing progress when a metalink is used would be tricky, because we run 3 downloads in parallel at the same time and each download could be handled by a different server. But we will probably improve that too in the future.
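
For reference, a quick way to check the effective timeout on an installed dnf, assuming the Python API is available (this snippet is only illustrative):

import dnf
print(dnf.Base().conf.timeout)    # prints the configured/default timeout in seconds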

Comment 8 Fedora Update System 2017-06-12 15:29:51 UTC
dnf-plugins-core-2.1.1-1.fc26, libdnf-0.9.1-1.fc26, dnf-2.5.1-1.fc26 has been submitted as an update to Fedora 26. https://bodhi.fedoraproject.org/updates/FEDORA-2017-c87c47dccb

Comment 9 Fedora Update System 2017-06-14 01:35:41 UTC
dnf-2.5.1-1.fc26, dnf-plugins-core-2.1.1-1.fc26, libdnf-0.9.1-1.fc26 has been pushed to the Fedora 26 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-c87c47dccb

Comment 10 Fedora Update System 2017-06-14 05:33:46 UTC
dnf-2.5.1-1.fc26, dnf-plugins-core-2.1.1-1.fc26, dnfdaemon-0.3.18-3.fc26, libdnf-0.9.1-1.fc26 has been submitted as an update to Fedora 26. https://bodhi.fedoraproject.org/updates/FEDORA-2017-c87c47dccb

Comment 11 Fedora Update System 2017-06-15 13:55:52 UTC
dnf-2.5.1-1.fc26, dnf-plugins-core-2.1.1-1.fc26, dnfdaemon-0.3.18-3.fc26, libdnf-0.9.1-1.fc26 has been pushed to the Fedora 26 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-c87c47dccb

Comment 12 Fedora Update System 2017-06-16 13:18:42 UTC
dnf-2.5.1-1.fc26, dnf-plugins-core-2.1.1-1.fc26, dnfdaemon-0.3.18-3.fc26, libdnf-0.9.1-1.fc26 has been pushed to the Fedora 26 stable repository. If problems still persist, please make note of it in this bug report.