Bug 1031001
| Summary: | Invalid pulp yum_importer.json can be written with missing --proxy-* arguments | | |
|---|---|---|---|
| Product: | Red Hat Satellite | Reporter: | Dominic Cleal <dcleal> |
| Component: | Installation | Assignee: | Eric Helms <ehelms> |
| Status: | CLOSED ERRATA | QA Contact: | Elyézer Rezende <erezende> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.0.2 | CC: | cwelton, ehelms, mmccune |
| Target Milestone: | Unspecified | Keywords: | Triaged |
| Target Release: | Unused | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-08-12 05:07:52 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
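For context on the summary: the installer renders Pulp's yum importer plugin configuration (conventionally `/etc/pulp/server/plugins.conf.d/yum_importer.json` in Pulp 2) from its proxy arguments, and interpolating the values into a text template can leave the file with empty or dangling entries when some `--proxy-*` options are omitted. Below is a minimal sketch of the guarded write the fix implies; the `render_yum_importer_config` helper and the exact key names are illustrative assumptions, not the installer's actual code:

```python
import json

def render_yum_importer_config(proxy_url=None, proxy_port=None,
                               proxy_username=None, proxy_password=None):
    """Emit proxy settings only when they were actually supplied.

    Key names (proxy_host, proxy_port, ...) are illustrative; the point
    is that serializing a dict cannot produce syntactically invalid
    JSON, unlike splicing raw values into a text template.
    """
    config = {}
    if proxy_url:
        config["proxy_host"] = proxy_url
        if proxy_port:
            config["proxy_port"] = int(proxy_port)
        if proxy_username:
            config["proxy_username"] = proxy_username
        if proxy_password:
            config["proxy_password"] = proxy_password
    return json.dumps(config, indent=2)

# With no --proxy-* arguments the file becomes an empty object "{}"
# rather than a document with dangling keys.
print(render_yum_importer_config())
print(render_yum_importer_config(proxy_url="http://proxy.example.com",
                                 proxy_port=8888))
```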
Description
Dominic Cleal
2013-11-15 13:07:28 UTC
This issue has been addressed upstream here: https://bugzilla.redhat.com/show_bug.cgi?id=1127397

Verified on Satellite-6.1.0-RHEL-6-20150210.0

Verification steps:

```
[root@amd-dinar-01 ~]# katello-installer --katello-proxy-url="http://proxy.example.com" --katello-proxy-port=8888 --katello-proxy-password=proxypass
Installing             Done    [100%] [.................................]
  Success!
  * Katello is running at https://amd-dinar-01.example.com
      Initial credentials are admin / changeme
  * Capsule is running at https://amd-dinar-01.example.com:9090
  * To install additional capsule on separate machine continue by running:
      capsule-certs-generate --capsule-fqdn "$CAPSULE" --certs-tar "~/$CAPSULE-certs.tar"
  The full log is at /var/log/katello-installer/katello-installer.log

[root@amd-dinar-01 ~]# katello-service status
tomcat6 (pid 16004) is running...                          [  OK  ]
mongod (pid 5654) is running...
listening on 127.0.0.1:27017
connection test successful
elasticsearch (pid 4934) is running...
celery init v10.0.
Using config script: /etc/default/pulp_resource_manager
node resource_manager (pid 923) is running...
celery init v10.0.
Using config script: /etc/default/pulp_workers
node reserved_resource_worker-0 (pid 1598) is running...
node reserved_resource_worker-1 (pid 1474) is running...
node reserved_resource_worker-2 (pid 1358) is running...
node reserved_resource_worker-3 (pid 1744) is running...
node reserved_resource_worker-4 (pid 1789) is running...
node reserved_resource_worker-5 (pid 1241) is running...
node reserved_resource_worker-6 (pid 1531) is running...
node reserved_resource_worker-7 (pid 1427) is running...
node reserved_resource_worker-8 (pid 1194) is running...
node reserved_resource_worker-9 (pid 1216) is running...
node reserved_resource_worker-10 (pid 1892) is running...
node reserved_resource_worker-11 (pid 1668) is running...
node reserved_resource_worker-12 (pid 1381) is running...
node reserved_resource_worker-13 (pid 1767) is running...
node reserved_resource_worker-14 (pid 1146) is running...
node reserved_resource_worker-15 (pid 1287) is running...
node reserved_resource_worker-16 (pid 1450) is running...
node reserved_resource_worker-17 (pid 1500) is running...
node reserved_resource_worker-18 (pid 1404) is running...
node reserved_resource_worker-19 (pid 1310) is running...
node reserved_resource_worker-20 (pid 1573) is running...
node reserved_resource_worker-21 (pid 1724) is running...
node reserved_resource_worker-22 (pid 1694) is running...
node reserved_resource_worker-23 (pid 1644) is running...
node reserved_resource_worker-24 (pid 1170) is running...
node reserved_resource_worker-25 (pid 1838) is running...
node reserved_resource_worker-26 (pid 1869) is running...
node reserved_resource_worker-27 (pid 1263) is running...
node reserved_resource_worker-28 (pid 1621) is running...
node reserved_resource_worker-29 (pid 1551) is running...
node reserved_resource_worker-30 (pid 1820) is running...
node reserved_resource_worker-31 (pid 1333) is running...
celery init v10.0.
Using configuration: /etc/default/pulp_workers, /etc/default/pulp_celerybeat
pulp_celerybeat (pid 799) is running.
httpd (pid 2026) is running...
dynflow_executor is running.
dynflow_executor_monitor is running.
```
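A quick sanity check after a run like the one above is to confirm the rendered file still parses as JSON. This is a sketch, not part of the recorded verification; the path is assumed from Pulp 2's stock plugin-config layout:

```python
import json

# Pulp 2's yum importer plugin configuration (path assumed from the
# stock layout); json.load raises ValueError on the kind of malformed
# file this bug could produce.
with open("/etc/pulp/server/plugins.conf.d/yum_importer.json") as f:
    config = json.load(f)

print("valid JSON; proxy-related keys:",
      sorted(k for k in config if k.startswith("proxy")))
```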
```
[root@amd-dinar-01 ~]# cat /var/log/messages | grep pulp
# omitted some messages
Feb 13 12:00:50 amd-dinar-01 pulp: pulp.server.webservices.application:INFO: *************************************************************
Feb 13 12:00:50 amd-dinar-01 pulp: pulp.server.webservices.application:INFO: The Pulp server has been successfully initialized
Feb 13 12:00:50 amd-dinar-01 pulp: pulp.server.webservices.application:INFO: *************************************************************
Feb 13 12:00:50 amd-dinar-01 pulp: gofer.transport.qpid.broker:INFO: {amd-dinar-01.example.com:5671} connected to AMQP
```

This bug is slated to be released with Satellite 6.1.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2015:1592