Bug 853432
Summary: | Yum with parallel download cannot download packages from a repo that allows only a single connection | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 7 | Reporter: | Patrik Kis <pkis> |
Component: | python-urlgrabber | Assignee: | James Antill <james.antill> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | Patrik Kis <pkis> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 7.0 | CC: | jkejda, jzeleny, ksrot, packaging-team-maint, pkis |
Target Milestone: | rc | Keywords: | Regression |
Target Release: | --- | ||
Hardware: | All | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | python-urlgrabber-3.10-1.el7 | Doc Type: | Bug Fix |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2014-06-13 10:49:11 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 851152 | ||
Attachments: |
Description
Patrik Kis
2012-08-31 13:41:25 UTC
This bug is still present in a recent tree of RHEL7: RHEL-7.0-20130502.0 Server, yum-3.4.3-86.el7.

This repository configuration does not use a metalink.xml that specifies the per-mirror connection limit. As a result, Yum uses the urlgrabber default (that's up to 2 connections to the same mirror). Also, max_connections is not set in yum.conf, so the effective total connection limit is 5 (again, the urlgrabber default).

IMO, Yum behaves as specified and expected. Does adding max_connections=1 to yum.conf solve the problem? Is that an acceptable solution?

To handle such cases without user intervention, maybe we should handle server timeouts the same way as HTTP 503 errors, so Yum will retry the same mirror up to "retry" times (see the related BZ 765598).

(In reply to comment #5)
> This repository configuration does not use metalink.xml that specifies the
> per-mirror connection limit. As a result, Yum uses the urlgrabber default
> (that's up to 2 connections to the same mirror). Also, max_connections is
> not set in yum.conf so the effective total connection limit is 5 (again, the
> urlgrabber default).
>
> IMO, Yum behaves as specified and expected. Does adding max_connections=1
> to yum.conf solve the problem? Is that an acceptable solution?

Not really. This test is about backward compatibility and regression. Yum prior to parallel download would work with this specific repo with default settings. The new yum should work out of the box as well, without configuration changes.

> To handle such cases without user intervention, maybe we should handle
> server timeouts the same way as HTTP 503 errors, so Yum will retry the same
> mirror up to retry times. (see the related BZ 765598).

Timeout sounds good; the question is what happens when the connection times out completely, i.e. 3x timeout.
For example, if the timeout is 5 seconds and the download time for one package is 30 seconds, yum starts downloading the 1st package but the 2nd connection times out completely after 15 seconds (3x 5 seconds). After this point I can imagine two scenarios: 1/ Installation fails; IMHO we should avoid this. 2/ A kind of fallback to single-connection download; i.e. yum restarts the 2nd package download after the 1st package download has finished.

Another possibility is to change the default same-mirror connection limit to 1 in urlgrabber. Realistically, is ftpd with max_clients=1 something we must support ootb? The max_clients option is very likely a global limit. Such a server won't work with old Yum either, if there's another client active.
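For reference, the max_connections=1 workaround suggested earlier in this discussion would be a one-line yum.conf change. The placement under [main] follows yum's standard configuration layout; this is a sketch of the workaround, not a verified fix for this bug:

```ini
[main]
# Limit yum/urlgrabber to a single download connection in total,
# to cope with servers that admit only one client at a time.
max_connections=1
```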
> For example, if the timeout is 5 seconds and the download time for one package is 30 seconds, yum starts downloading the 1st package but the 2nd connection times out completely after 15 seconds (3x 5 seconds).
The default timeout is 30 seconds, and the default retry limit is 10. It should work in most cases, IMO.
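As a sanity check of those defaults, the worst case is simply timeout times retries; this is only the arithmetic implied by the comments, not yum code:

```python
# Worst-case time before yum gives up on one download, assuming each of
# the `retries` attempts runs into the full `timeout` (the values are the
# urlgrabber defaults quoted in the comment above).
timeout_s = 30
retries = 10
worst_case_s = timeout_s * retries
print(worst_case_s, "seconds, i.e.", worst_case_s / 60, "minutes")
```

This matches the "about 5 minutes" estimate given in the follow-up comment.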
(In reply to comment #7)
> Other possibility is to change the default same-mirror connection limit to 1
> in urlgrabber. Realistically, is ftpd with max_clients=1 something we must
> support ootb? The max_clients option is very likely a global limit. Such
> server won't work with old Yum either, if there's another client active.

The ftp setting max_clients=1 is used only for test purposes. In production it probably wouldn't be used this way (although it is still a valid configuration), but what could be set up is a connection limit per host, e.g. on a firewall or in the ftp/http server itself. The result would be the same: only one connection allowed per client.

> > For example, if the timeout is 5 seconds and the download time for one package is 30 seconds, yum starts downloading the 1st package but the 2nd connection times out completely after 15 seconds (3x 5 seconds).
>
> The default timeout is 30 seconds, and the default retry limit is 10. It
> should work in most cases, IMO.

Ok, that's about 5 minutes, pretty much. I think the timeout/retry approach could then work for most real-world cases.

Verification: failed

When I try to install multiple packages from a repository that allows only one ftp connection, the installation fails. "yum repolist all" succeeds, but a lot of ugly error messages are displayed for every retry. Basically all error messages say the same thing, so maybe one warning would be enough.

yum-3.4.3-91.el7.noarch
python-urlgrabber-3.9.1-28.el7.noarch

The same reproducer as from C1, i.e. a simple ftp repo, default vsftpd.conf with just three lines added:

anon_max_rate=100000
trans_chunk_size=4096
max_clients=1

# yum repolist all
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
beaker-client | 1.5 kB 00:00:00 ftp_test | 2.9 kB 00:00:00 qa-tools | 1.2 kB 00:00:00 rhel7 | 3.8 kB 00:00:00 rhel7-debug | 3.0 kB 00:00:00 rhel7-optional | 3.8 kB 00:00:00 rhel7-optional-debug | 3.0 kB 00:00:00 ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. (1/7): qa-tools/primary | 31 kB 00:00:00 (2/7): rhel7-debug/primary_db | 582 kB 00:00:00 ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. 
(3/7): rhel7/primary_db | 2.8 MB 00:00:00 ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. (4/7): beaker-client/primary | 5.3 kB 00:00:00 ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. 
(5/7): rhel7-optional/primary_db | 3.3 MB 00:00:01 (6/7): rhel7-optional-debug/primary_db | 56 kB 00:00:03 ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. 
ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/b67b877ffd88e90f7e4f72d1930cd65ecaff45c24ce4f53cde99c336ebcc1868-primary.sqlite.bz2: (28, '') Trying other mirror. 
beaker-client                                                     28/28
ftp_test/primary_db                            | 1.9 kB  00:00:00
qa-tools                                                        243/243
repo id                repo name                            status
beaker-client          Beaker Client - Fedora7.0            enabled:     28
beaker-client-testing  Beaker Client - Fedora7.0 - Testing  disabled
ftp_test               ftp_test                             enabled:      4
qa-tools               QA Tools                             enabled:    243
rhel7                  rhel7                                enabled:  4,327
rhel7-debug            rhel7-debug                          enabled:  1,894
rhel7-optional         rhel7-optional                       enabled:  7,977
rhel7-optional-debug   rhel7-optional-debug                 enabled:    226
repolist: 14,699
#
#
#
# yum install -y parayum1 parayum2 parayum3 parayum4
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package parayum1.noarch 0:1.0-1 will be installed
---> Package parayum2.noarch 0:1.0-1 will be installed
---> Package parayum3.noarch 0:1.0-1 will be installed
---> Package parayum4.noarch 0:1.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================
 Package             Arch            Version           Repository             Size
=======================================================================================================
Installing:
 parayum1            noarch          1.0-1             ftp_test              706 k
 parayum2            noarch          1.0-1             ftp_test              706 k
 parayum3            noarch          1.0-1             ftp_test              706 k
 parayum4            noarch          1.0-1             ftp_test              706 k

Transaction Summary
=======================================================================================================
Install  4 Packages

Total download size: 2.8 M
Installed size: 2.8 M
Downloading packages:
parayum2-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum3-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum4-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum2-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum3-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum4-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum2-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum3-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum4-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum2-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum3-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum4-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum2-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum3-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum4-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum2-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum3-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum4-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum2-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum3-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum4-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum2-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum3-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum4-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum2-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum3-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum3-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
parayum4-1.0-1.noarch.rpm FAILED
ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/parayum4-1.0-1.noarch.rpm: (28, '')
Trying other mirror.
(1/4): parayum1-1.0-1.noarch.rpm                               | 706 kB  00:00:07

Error downloading packages:
  parayum4-1.0-1.noarch: [Errno 256] No more mirrors to try.
  parayum3-1.0-1.noarch: [Errno 256] No more mirrors to try.
  parayum2-1.0-1.noarch: [Errno 256] No more mirrors to try.
Tcpdump showed that while downloading the 1st package, yum tries to establish a new ftp connection to port 21, but that is constantly rejected by the ftp server. After 20 retries or so, yum gives up, and once the 1st package download finishes, the installation fails. It looks like the retry/timeout approach does not work when the connection is rejected immediately, because the retry limit is used up quickly. So I think another approach is necessary. Please note that the same test works fine on RHEL-6, with yum prior to the parallel download approach. IMHO, yum should simply retry again after the download of the 1st package has finished and the connection to the ftp server is closed.

And I think I should extend the test to the http protocol too.

When the connection limitation is done by iptables and not by vsftpd, like:

# iptables -I INPUT 1 -p tcp --syn --dport 21 -m connlimit --connlimit-above 1 -j REJECT --reject-with tcp-reset

the test fails again. On RHEL-6 it works.

> the connection is rejected immediately
...
> ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: [Errno 12] Timeout on
> ftp://127.0.0.1/ftp_test/parayum2-1.0-1.noarch.rpm: (28, '')
> Trying other mirror.

Weird, curl raised error=28 (timeout), so it should be 30 seconds or so. Probably the FTP server responds with something like "too many connections, go away", but curl reports it (incorrectly) as a timeout.

> IMHO, yum should simply retry again after the download of the 1st package finished and the connection is closed with ftp server.

Yum can't know that the pending download (parayum1) IS CAUSING other downloads to fail. The downloader is aware that these are using the same server, but since the connection limit is 2, they're tried in parallel. It seems that the only possible solutions are:

a) Add a retry-after-timeout queue. Some errors (HTTP 530, FTP timeout) are not retried immediately, but after a configured delay. We could also parse the suggested delay from HTTP 530 responses, and handle it "the right way"..
b) Adjust the host connection limit on timeouts.. That way, we'd change the connection limit from 2 to 1 after the first timeout, and then block until parayum1 finishes. This is much simpler and probably more robust.

I'd probably go with implementing b).

(In reply to Zdeněk Pavlas from comment #11)
...
> a) Add a retry-after-timeout queue. Some errors (HTTP 530, FTP timeout) are
> not retried immediately, but after a configured delay. We could also parse
> the suggested delay from HTTP 530 responses, and handle it "the right way"..
>
> b) Adjust the host connection limit on timeouts.. That way, we'd change the
> connection limit from 2 to 1 after first timeout, and then block until
> parayum1 finishes. This is much simpler and probably more robust.
>
> I'd probably go with implementing b).

Yes, b) quite makes sense to me. If I understood correctly, that would also prevent the spam of error messages.

The verification failed again. I see no progress here with the new version of urlgrabber.

python-urlgrabber-3.9.1-29.el7
yum-3.4.3-101.el7

# yum repolist all
Loaded plugins: auto-update-debuginfo, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
beaker-client                                  | 1.5 kB  00:00:00
ftp_test                                       | 2.9 kB  00:00:00
qa-tools                                       | 1.2 kB  00:00:00
rhel7                                          | 3.8 kB  00:00:00
rhel7-debug                                    | 3.0 kB  00:00:00
rhel7-optional                                 | 3.8 kB  00:00:00
rhel7-optional-debug                           | 3.0 kB  00:00:00
http://127.0.0.1/testrepo/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to 127.0.0.1:80; Connection refused"
Trying other mirror.
beaker-client/primary                          | 5.1 kB  00:00:00
beaker-client                                                     28/28
ftp_test/primary_db FAILED
ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: (28, '')
Trying other mirror.
ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: (28, '') Trying other mirror. 
ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db FAILED ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: [Errno 12] Timeout on ftp://127.0.0.1/ftp_test/repodata/24a0745eebece8128116e97564c3e7efbf544f938592efb615267ad0aa54ed70-primary.sqlite.bz2: (28, '') Trying other mirror. ftp_test/primary_db | 1.9 kB 00:00:00 qa-tools/primary | 32 kB 00:00:00 qa-tools 263/263 rhel7/primary_db | 3.4 MB 00:00:00 rhel7-debug/primary_db | 565 kB 00:00:00 rhel7-optional/primary_db | 1.7 MB 00:00:00 rhel7-optional-debug/primary_db | 56 kB 00:00:00 http://127.0.0.1/testrepo/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to 127.0.0.1:80; Connection refused" Trying other mirror. repo id repo name status beaker-client Beaker Client - Fedora7.0 enabled: 28 beaker-client-testing Beaker Client - Fedora7.0 - Testing disabled ftp_test ftp_test enabled: 4 qa-tools QA Tools enabled: 263 rhel7 rhel7 enabled: 4,390 rhel7-debug rhel7-debug enabled: 1,890 rhel7-optional rhel7-optional enabled: 3,886 rhel7-optional-debug rhel7-optional-debug enabled: 230 testrepo testrepo enabled: 0 repolist: 10,691 > ftp_test/primary_db FAILED But now it's NOT failing on *.rpm download, it fails on *-primary.sqlite.bz2 download instead! I think the problem is that the FTP speed limit is too low, probably lower than the default minrate=1000. 
Please try using an FTP speed limit larger than Yum's minrate (or set a lower minrate in yum.conf). (In older urlgrabber the default minrate was 1; now it's 1000.) See BZ 964298.

Sorry, should be BZ 860181.

(In reply to Zdeněk Pavlas from comment #14)
> > ftp_test/primary_db FAILED
>
> But now it's NOT failing on *.rpm download, it fails on *-primary.sqlite.bz2
> download instead! I think the problem is that the FTP speed limit is too
> low, probably lower than the default minrate=1000. Please try using FTP
> speed limit larger than Yum's minrate (or set a lower minrate in yum.conf).
>
> (In older urlgrabber the default minrate was 1, now it's 1000), see BZ
> 964298.

A quick test shows that it didn't help. Tested with yum-3.4.3-101.el7.noarch.

> ftp_test/primary_db | 1.9 kB 00:00:00

Ok, so the metadata request for "ftp_test/primary_db" finally succeeds:

> http://127.0.0.1/testrepo/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to 127.0.0.1:80; Connection refused"

Why are you testing ftp_test and testrepo in the same run? Also, I suppose this test should primarily test installing multiple packages, not the "yum repolist" command.

> http://127.0.0.1/testrepo/parayum1-1.0-1.noarch.rpm: [Errno 14] curl#7 - "Failed connect to 127.0.0.1:80; Connection refused"
Last time, the setup used FTP.. Why have you changed testrepo.baseurl to HTTP? You should probably start httpd first. "No money, no love"!
(In reply to Zdeněk Pavlas from comment #19)
> > http://127.0.0.1/testrepo/parayum1-1.0-1.noarch.rpm: [Errno 14] curl#7 - "Failed connect to 127.0.0.1:80; Connection refused"
>
> Last time, the setup used FTP.. Why have you changed testrepo.baseurl to
> HTTP? You should probably start httpd first.

There are 3 tests in this automated test. In all of them a test repository is set up with a few packages, then "yum repolist all" is issued, and finally the packages are installed and then erased. The difference between the tests is in how the repositories are accessible.

1/ ftp repository; only one ftp connection is allowed by the server:
# echo 'anon_max_rate=100000' >> /etc/vsftpd/vsftpd.conf
# echo 'trans_chunk_size=4096' >> /etc/vsftpd/vsftpd.conf
# echo 'max_clients=1' >> /etc/vsftpd/vsftpd.conf

2/ ftp repository; uses the default vsftpd.conf, but the connection is limited by iptables:
# iptables -I INPUT 1 -p tcp --syn --dport 21 -m connlimit --connlimit-above 1 -j REJECT --reject-with tcp-reset

3/ the same as above (iptables limitation) but with httpd:
# iptables -I INPUT 1 -p tcp --syn --dport 80 -m connlimit --connlimit-above 1 -j REJECT --reject-with tcp-reset

Actually all 3 test cases are failing on RHEL-7; just for comparison, the very same test passes on RHEL-6.4. Originally there was only one test (FTP, not iptables) but then I extended it to cover more use cases.

(In reply to Zdeněk Pavlas from comment #14)
> > ftp_test/primary_db FAILED
>
> But now it's NOT failing on *.rpm download, it fails on *-primary.sqlite.bz2
> download instead! I think the problem is that the FTP speed limit is too
> low, probably lower than the default minrate=1000. Please try using FTP
> speed limit larger than Yum's minrate (or set a lower minrate in yum.conf).
>
> (In older urlgrabber the default minrate was 1, now it's 1000), see BZ
> 964298.

And I tried to play with both minrate in yum.conf and the ftp speed too, but I observed no difference.
yum repolist simply drops all those errors, and the subsequent install fails.

Thanks for explaining this. So now, instead of stalling the FTP connection (and timing the client out), you reset it with iptables. This explains the 2nd error (Connection refused). I'll have to fall back to single-connection mode on refused connections, too.

I couldn't push this fix to rhel, due to qa_ack=? .. Could you try with any of: python-urlgrabber-3.9.1-32.fc{19,20,21}? (You should be able to install it in rhel7.) These revert to single-connection mode after refused connections, too. I hope this fixes the bug for good. If not, please enable logging (export URLGRABBER_DEBUG=INFO,/tmp/urlgrabber.log) and attach the log, please.

Could you turn on urlgrabber debugging, and attach the log, please? (export URLGRABBER_DEBUG=INFO,/tmp/urlgrabber.log), re comment 23

Created attachment 807602 [details]
urlgrabber log
(In reply to Zdeněk Pavlas from comment #26)
> Could you turn on urlgrabber debugging and attach the log, please?
> (export URLGRABBER_DEBUG=INFO,/tmp/urlgrabber.log), re comment 23

Sorry, I missed that comment. Here comes the debug log: https://bugzilla.redhat.com/attachment.cgi?id=807602

Please, can you double-check that you used python-urlgrabber-3.9.1-31.el7? The per-host connection limit processing is NOT shown:

attached:
2013-10-04 08:04:53,321 max_connections: 0/5
2013-10-04 08:04:53,336 attempt 1/10: http://127.0.0.1/testrepo/repodata/07bc0819a0001189b63d19c2561e352cc57c07f84744ea7fdd7b4f8d84b216d2-primary.sqlite.bz2

expected:
2013-10-07 14:04:41,778 max_connections: 0/5
2013-10-07 14:04:41,779 max_connections(http://127.0.0.1/testrepo/): 0/1
2013-10-07 14:04:41,783 attempt 1/10: http://127.0.0.1/testrepo/...

There's no code path that could skip the per-host max_mirrors debug output.

Created attachment 808862 [details]
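The presence of the per-host limit line can be checked mechanically; a small sketch (the sample lines are copied from the "expected" excerpt above, and the regex just matches the logged format):

```python
import re

# Sample taken verbatim from the "expected" log excerpt in the comment.
expected = """\
2013-10-07 14:04:41,778 max_connections: 0/5
2013-10-07 14:04:41,779 max_connections(http://127.0.0.1/testrepo/): 0/1
2013-10-07 14:04:41,783 attempt 1/10: http://127.0.0.1/testrepo/..."""

# A healthy log should contain a per-host "max_connections(<host>): N/M"
# line in addition to the global "max_connections: N/M" one.
per_host = re.findall(r'max_connections\(([^)]+)\): (\d+)/(\d+)', expected)
print(per_host)  # → [('http://127.0.0.1/testrepo/', '0', '1')]
```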
urlgrabber log from both release 30 and 31
(In reply to Patrik Kis from comment #30)
> Created attachment 808862 [details]
> urlgrabber log from both release 30 and 31

It seems that the required log item is missing from the log of python-urlgrabber-3.9.1-31.el7; see the latest attachment.

Please try with python-urlgrabber-3.9.1-32.el7; the single-connection mode was not activated when max_connections was not set explicitly and defaulted to 2 connections. It still does not explain the missing per-host limit debug output, but let's see..

Created attachment 809251 [details]
Logs from test with python-urlgrabber-3.9.1-32.el7
It does not look much better; see the attachment.

Please try with python-urlgrabber-3.9.1-33.el7

Created attachment 809321 [details]
Logs from test with python-urlgrabber-3.9.1-33.el7
The test failed again. I'll attach the logs, but if you'd like, I can provide you with a test machine with the test so it can be re-run while fixing the component.

Created attachment 813384 [details]
Logs from test with python-urlgrabber-3.10-0.el7
Please, could you also provide logs from the same test but with the rhel6 package?

Created attachment 813657 [details]
Logs from RHEL-6
Finally found the root problem.. Yum downloads repomd.xml with the legacy single-connection downloader, and *then* tries to download metadata (or packages) with the multi-downloader. But the repomd.xml connection is kept alive, so no matter how hard the multi-downloader tries to use only one connection, it's still rejected by the server. Added a flush of the single-downloader connection.

If the directory /var/cache/yum is removed and "yum repolist all" is run prior to the tests, the 1st part of the test passes (when the connection is limited by max_clients=1 in vsftpd.conf), but the 2nd and 3rd tests still fail (ftp and http repo where the connection is limited by an iptables rule). On a second run the test fails completely (if the cached data were not cleared).

Created attachment 814883 [details]
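The keep-alive effect described above is easy to reproduce outside of yum. The following is a self-contained illustration, not urlgrabber code: it uses plain http.client against a throwaway local server, and the path name is made up for the demo. The point is only that a keep-alive connection keeps occupying one of the server's connection slots after the response has been fully read, until it is explicitly closed:

```python
import http.client
import http.server
import socketserver
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"       # HTTP/1.1 keeps connections alive by default
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):       # keep the demo quiet
        pass

srv = socketserver.ThreadingTCPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]

# "Single downloader": fetch one file and read the whole response.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/repomd.xml")
body = conn.getresponse().read()

# At this point the TCP connection is still ESTABLISHED (keep-alive), so it
# still counts against the server's connection limit -- which is what made the
# follow-up parallel downloads exceed a max_clients=1 limit even in
# single-connection mode. Closing it releases the slot:
conn.close()

srv.shutdown()
print(body.decode())  # → ok
```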
Logs from test with python-urlgrabber-3.10-1.el7
:: [ LOG ] :: Test yum with repo that supports only one connection: ftp connection limited by firewall
:: [ PASS ] :: Running 'iptables -I INPUT 1 -p tcp --syn --dport 21 -m connlimit --connlimit-above 1 -j REJECT --reject-with tcp-reset' (Expected 0, got 0)

I believe this test is incorrect: the firewall probably includes close-wait connections in the total count, too. So once Yum uses two connections, every new connection is reset for a minute or so, even if all the others were closed. The test worked on EL6 because Yum used only one connection during the whole session.

There's another problem: Yum retries the same mirror on timeouts and 503 errors, but a refused connection (with tcp reset) is considered permanent. Even when I modify Yum to retry on such errors, the test does not pass, because the firewall rejects retries EVEN AFTER the first download has already finished.

(In reply to Zdeněk Pavlas from comment #45)
> I believe this test is incorrect: the firewall probably includes close-wait
> connections in the total count, too.

True; I fixed the firewall rules.
Please re-test with the fixed firewall rules, and with yum-3.4.3-104.el7, which retries refused connections.

This request was resolved in Red Hat Enterprise Linux 7.0. Contact your manager or support representative in case you have further questions about the request.