Bug 1107008 - Need better handler for 'Max retries exceeded' pulp issues
Summary: Need better handler for 'Max retries exceeded' pulp issues
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Pulp
Version: 6.0.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: Unspecified
Assignee: satellite6-bugs
QA Contact: Katello QA List
URL:
Whiteboard:
Depends On:
Blocks: sat6-pulp-future 1175448
 
Reported: 2014-06-09 18:56 UTC by Og Maciel
Modified: 2019-09-26 13:46 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-03 20:05:34 UTC
Target Upstream Version:
Embargoed:



Description Og Maciel 2014-06-09 18:56:12 UTC
Description of problem:

While synchronizing several RHEL 5 and 6 repositories plus 5-6 custom yum and puppet repositories at the same time, I saw the following error:

==> /var/log/messages <==
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR: HTTPSConnectionPool(host='cdn.redhat.com', port=443): Max retries exceeded with url: /content/dist/rhel/server/5/5Server/i386/os/Packages/kernel-PAE-2.6.18-274.18.1.el5.i686.rpm (Caused by <class 'httplib.BadStatusLine'>: )
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR: Traceback (most recent call last):
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR:   File "/usr/lib/python2.6/site-packages/nectar/downloaders/threaded.py", line 173, in _fetch
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR:     response = session.get(request.url, headers=headers)
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR:   File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 395, in get
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR:     return self.request('GET', url, **kwargs)
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR:   File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 383, in request
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR:     resp = self.send(prep, **send_kwargs)
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR:   File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 486, in send
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR:     r = adapter.send(request, **kwargs)
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR:   File "/usr/lib/python2.6/site-packages/requests/adapters.py", line 378, in send
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR:     raise ConnectionError(e)
Jun  9 13:46:12 cloud-qe-11 pulp: nectar.downloaders.threaded:ERROR: ConnectionError: HTTPSConnectionPool(host='cdn.redhat.com', port=443): Max retries exceeded with url: /content/dist/rhel/server/5/5Server/i386/os/Packages/kernel-PAE-2.6.18-274.18.1.el5.i686.rpm (Caused by <class 'httplib.BadStatusLine'>: )
Jun  9 13:46:12 cloud-qe-11 pulp: requests.packages.urllib3.connectionpool:INFO: Starting new HTTPS connection (2): cdn.redhat.com

The problem is that, as far as Katello is concerned, this type of error is not reported back to the user, yet there is a chance that **kernel-PAE-2.6.18-274.18.1.el5.i686.rpm** was not synchronized.

I think we should warn the user about such errors and perhaps recommend that the sync job be run again to make sure all content is mirrored.
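
As an illustration of the kind of handling being asked for, here is a minimal sketch (not actual Katello/Pulp code; download_rpm, sync_packages, and the sync_summary structure are hypothetical) of collecting per-download failures so they can be surfaced in the sync result instead of only appearing in /var/log/messages:

import requests

def download_rpm(session, url, dest):
    """Download a single RPM; raise on connection-level or HTTP failures."""
    response = session.get(url, timeout=30)
    response.raise_for_status()
    with open(dest, "wb") as f:
        f.write(response.content)

def sync_packages(urls, dest_dir):
    """Attempt every download and return a summary the UI could show,
    rather than silently logging 'Max retries exceeded' errors."""
    session = requests.Session()
    sync_summary = {"downloaded": [], "failed": []}
    for url in urls:
        dest = "%s/%s" % (dest_dir, url.rsplit("/", 1)[-1])
        try:
            download_rpm(session, url, dest)
            sync_summary["downloaded"].append(url)
        except (requests.exceptions.ConnectionError,
                requests.exceptions.HTTPError) as e:
            # Record the failure so the sync result can warn the user
            # and suggest re-running the sync.
            sync_summary["failed"].append({"url": url, "error": str(e)})
    return sync_summary

A summary like this could then drive a warning such as "N packages failed to download; re-run the sync to retry them", which is essentially the behavior later described in comment 9.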

Version-Release number of selected component (if applicable):

* apr-util-ldap-1.3.9-3.el6_0.1.x86_64
* candlepin-0.9.7-1.el6_5.noarch
* candlepin-scl-1-5.el6_4.noarch
* candlepin-scl-quartz-2.1.5-5.el6_4.noarch
* candlepin-scl-rhino-1.7R3-1.el6_4.noarch
* candlepin-scl-runtime-1-5.el6_4.noarch
* candlepin-selinux-0.9.7-1.el6_5.noarch
* candlepin-tomcat6-0.9.7-1.el6_5.noarch
* elasticsearch-0.90.10-4.el6sat.noarch
* foreman-1.6.0.14-1.el6sat.noarch
* foreman-compute-1.6.0.14-1.el6sat.noarch
* foreman-gce-1.6.0.14-1.el6sat.noarch
* foreman-libvirt-1.6.0.14-1.el6sat.noarch
* foreman-ovirt-1.6.0.14-1.el6sat.noarch
* foreman-postgresql-1.6.0.14-1.el6sat.noarch
* foreman-proxy-1.6.0.6-1.el6sat.noarch
* foreman-selinux-1.6.0-4.el6sat.noarch
* foreman-vmware-1.6.0.14-1.el6sat.noarch
* katello-1.5.0-25.el6sat.noarch
* katello-ca-1.0-1.noarch
* katello-certs-tools-1.5.5-1.el6sat.noarch
* katello-installer-0.0.45-1.el6sat.noarch
* openldap-2.4.23-32.el6_4.1.x86_64
* pulp-katello-0.3-3.el6sat.noarch
* pulp-nodes-common-2.4.0-0.18.beta.el6sat.noarch
* pulp-nodes-parent-2.4.0-0.18.beta.el6sat.noarch
* pulp-puppet-plugins-2.4.0-0.18.beta.el6sat.noarch
* pulp-puppet-tools-2.4.0-0.18.beta.el6sat.noarch
* pulp-rpm-plugins-2.4.0-0.18.beta.el6sat.noarch
* pulp-selinux-2.4.0-0.18.beta.el6sat.noarch
* pulp-server-2.4.0-0.18.beta.el6sat.noarch
* python-ldap-2.3.10-1.el6.x86_64
* ruby193-rubygem-net-ldap-0.3.1-3.el6sat.noarch
* ruby193-rubygem-runcible-1.1.0-2.el6sat.noarch
* rubygem-hammer_cli-0.1.1-3.el6sat.noarch
* rubygem-hammer_cli_foreman-0.1.1-8.el6sat.noarch
* rubygem-hammer_cli_foreman_tasks-0.0.3-2.el6sat.noarch
* rubygem-hammer_cli_katello-0.0.4-5.el6sat.noarch

How reproducible:


Steps to Reproduce:
1. Not sure how best to reproduce, but one approach is to select many RHEL repositories (especially the RHEL 5 repositories, which are relatively large) and attempt to sync all of them at the same time.

Actual results:

Some packages may not be synchronized due to the 'Max retries exceeded' error.

Expected results:


Additional info:

Comment 1 Justin Sherrill 2014-06-09 19:03:57 UTC
An easy way to reproduce would be to set up a yum repo and simply delete one of the RPMs from the repo.
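
For example, a minimal sketch (the repository path is hypothetical, and this assumes the repo is already configured as a sync source): deleting one RPM from the published repository without re-running createrepo leaves the repodata referencing a file that will fail to download during the next sync.

import glob
import os
import random

# Hypothetical path to a locally hosted yum repository that a
# Katello/Pulp repository is configured to sync from.
REPO_DIR = "/var/www/html/testrepo"

rpms = glob.glob(os.path.join(REPO_DIR, "*.rpm"))
if rpms:
    victim = random.choice(rpms)
    # Deleting the file but NOT re-running createrepo leaves the
    # repodata pointing at a package that no longer exists.
    os.remove(victim)
    print("Removed %s; the next sync should report a download failure." % victim)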

Comment 3 Michael Hrivnak 2014-12-17 21:11:08 UTC
Pulp's sync progress report includes an error entry for each download that fails.
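
For example, a consumer of that report could collect the error entries along these lines (a sketch only; the 'content'/'error_details' nesting shown is illustrative, and the exact layout of Pulp's progress report may differ between versions):

def summarize_sync_errors(progress_report):
    """Walk a sync progress report and collect per-download error entries.

    The nesting used here ('content' -> 'error_details') is illustrative;
    the real report layout may differ.
    """
    errors = []
    for step_name, step in (progress_report or {}).items():
        details = step.get("error_details") if isinstance(step, dict) else None
        if details:
            for entry in details:
                errors.append((step_name, entry))
    return errors

# Example with a made-up report resembling what a failed download might produce:
report = {
    "content": {
        "error_details": [
            {"url": "https://cdn.redhat.com/.../kernel-PAE-2.6.18-274.18.1.el5.i686.rpm",
             "error": "Max retries exceeded"},
        ]
    }
}
for step, entry in summarize_sync_errors(report):
    print("%s: %s (%s)" % (step, entry["url"], entry["error"]))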

Comment 6 Michael Hrivnak 2015-10-02 13:54:18 UTC
To be clearer regarding comment #3: I do not think this is a Pulp bug. There may be better ways to represent the errors encountered during a sync, but Pulp does currently report them.

Comment 7 Bryan Kearney 2016-07-08 20:42:28 UTC
Per 6.3 planning, moving non-acked bugs out to the backlog.

Comment 9 Brad Buckingham 2016-08-03 19:45:47 UTC
There have been numerous changes to repository syncing and error handling since this bug was first filed. Based on the current behavior, I believe we can consider this issue closed.

With Satellite 6.2 GA, if I have a repository with missing packages, an error similar to the following will be shown in the UI:

   New packages: 49 (6.02 MB).
   Failed to download 27 packages.

If I want to view the list of packages that were not downloaded, I can go to 'Monitor -> Tasks', select the sync task, click on Errors and observe the list in the error 'Output' section.

Comment 10 Brad Buckingham 2016-08-03 20:05:34 UTC
Discussed briefly with Corey, and we shall move this to CLOSED:CURRENTRELEASE (Satellite 6.2).

