Description of problem:
Capsule cannot sync content from Satellite 6.1. See the error message from /var/log/messages on the capsule below.

Version-Release number of selected component (if applicable):
Satellite 6.1 GA Snap 2 on RHEL 7.1

How reproducible:
Reproduced on both a Perf system and a QE system.

Steps to Reproduce:
1. Add life-cycle environment to capsule
2. Attempt to sync content to capsule

Actual results:
The capsule sync hammer command fails.

/var/log/messages on capsule:

May 4 11:48:39 capsule-r7-001 goferd: [INFO][worker-0] gofer.rmi.dispatcher:600 - call: Content.update() sn=d0bdd1d7-5479-4fc3-9a16-14de8aa0f09d data={'task_id': '8799ddad-51f5-407e-b891-fe8d84e79948', 'consumer_id': 'f5e7ec38-4d71-4a99-8b07-02a8abc45d46'}
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - synchronization failed
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - Traceback (most recent call last):
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/strategies.py", line 116, in synchronize
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     validator.validate(request.bindings)
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 54, in validate
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     self.report.errors.extend(self._validate_plugins(bindings))
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 70, in _validate_plugins
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     child = ChildServer()
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 88, in __init__
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     self.importers = self._importers()
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 99, in _importers
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     http = bindings.server_info.get_importers()
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server_info.py", line 44, in get_importers
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     return self.server.GET(path)
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 92, in GET
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     return self._request('GET', path, queries)
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 142, in _request
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     response_code, response_body = self.server_wrapper.request(method, url, body)
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 330, in request
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     raise exceptions.CertificateVerificationException()
May 4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - CertificateVerificationException
May 4 11:48:41 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.handler:111 - An unexpected error occurred. repo_id=None

Expected results:
Content to sync.

Additional info:
This is with the workaround from BZ 1217828.
Since this issue was entered in Red Hat Bugzilla, the release flag has been set to ? to ensure that it is properly evaluated for this release.
Through some IRC discussion, it was determined that the /etc/pki/pulp/ca.crt file was used as the CA for the httpd certificate. This CA is for Pulp's internal usage only (it's used to make client SSL certificates), and should not be used by any outside entities (including Nodes and httpd). I've been lobbying internally for Pulp to stop creating/using this file, but it'll take some time before we can change that. Think of it like a private variable in a C++ class with no getters or setters ☺
Disregard my last comment, I had misread /etc/pki/pulp/server_ca.crt as /etc/pki/pulp/ca.crt. My mistake!
jsherril, mhrivnak and I dug into the problem, and we discovered that Crane's and the node's httpd config files used the same name in their ServerName directives[0] without specifying the (optional) port. As a result, Pulp was served with Crane's SSL certificate. Oddly, Crane was also configured to use /etc/pki/tls/certs/localhost.crt as its SSL certificate, which was itself an error. [0] https://httpd.apache.org/docs/current/mod/core.html#servername
It is possible to work around this issue in one of two ways:

1) Configure Crane to use the correct SSL certificate and key by setting:

    SSLCertificateFile "/etc/pki/pulp/ssl_apache.crt"
    SSLCertificateKeyFile "/etc/pki/pulp/ssl_apache.key"

in /etc/httpd/conf.d/03-crane.conf

-or-

2) Configure Crane and Pulp to use different ServerNames by setting:

    ServerName capsule-r7-001.perf.lab.eng.rdu.redhat.com:5000

in /etc/httpd/conf.d/03-crane.conf, and

    ServerName capsule-r7-001.perf.lab.eng.rdu.redhat.com:443

in /etc/httpd/conf.d/25-pulp-node-ssl.conf

The latter is the long-term correct fix. The former will solve two issues at once. Or you could do both and have it all the way correct!
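Applying both workarounds together amounts to something like the following httpd configuration. This is only a sketch based on the paths and hostname mentioned in this bug; the surrounding VirtualHost blocks and any other directives inside them are assumed, not copied from the actual shipped config files:

```apache
# /etc/httpd/conf.d/03-crane.conf (Crane, port 5000) -- sketch, not the shipped file
<VirtualHost *:5000>
    # Port-qualified ServerName so this vhost no longer collides with Pulp's
    ServerName capsule-r7-001.perf.lab.eng.rdu.redhat.com:5000
    SSLEngine on
    # Use Pulp's Apache certificate instead of /etc/pki/tls/certs/localhost.crt
    SSLCertificateFile "/etc/pki/pulp/ssl_apache.crt"
    SSLCertificateKeyFile "/etc/pki/pulp/ssl_apache.key"
</VirtualHost>

# /etc/httpd/conf.d/25-pulp-node-ssl.conf (Pulp node, port 443) -- sketch
<VirtualHost *:443>
    # Port-qualified here as well, so each vhost is unambiguous
    ServerName capsule-r7-001.perf.lab.eng.rdu.redhat.com:443
    # ... remaining Pulp node SSL directives unchanged ...
</VirtualHost>
```

With distinct port-qualified ServerNames, httpd can pick the intended vhost (and therefore the intended certificate) for each port instead of falling back to the first matching name.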
By the way, I suspect that if you don't apply solution #2, you might be serving Crane on 443 alongside Pulp, and Pulp on 5000 alongside Crane. I recommend applying both fixes in the installer to ensure the two stay separated.
Created redmine issue http://projects.theforeman.org/issues/10387 from this bug
VERIFIED:

# rpm -qa | grep foreman
foreman-1.7.2.21-1.el7sat.noarch
ruby193-rubygem-foreman_discovery-2.0.0.13-1.el7sat.noarch
foreman-libvirt-1.7.2.21-1.el7sat.noarch
ruby193-rubygem-foreman_gutterball-0.0.1.9-1.el7sat.noarch
foreman-postgresql-1.7.2.21-1.el7sat.noarch
ruby193-rubygem-foreman_bootdisk-4.0.2.13-1.el7sat.noarch
dell-pem710-01.rhts.eng.bos.redhat.com-foreman-proxy-client-1.0-1.noarch
foreman-ovirt-1.7.2.21-1.el7sat.noarch
rubygem-hammer_cli_foreman-0.1.4.11-1.el7sat.noarch
foreman-selinux-1.7.2.13-1.el7sat.noarch
foreman-gce-1.7.2.21-1.el7sat.noarch
ruby193-rubygem-foreman-redhat_access-0.1.0-1.el7sat.noarch
ruby193-rubygem-foreman-tasks-0.6.12.5-1.el7sat.noarch
rubygem-hammer_cli_foreman_tasks-0.0.3.4-1.el7sat.noarch
rubygem-hammer_cli_foreman_docker-0.0.3.6-1.el7sat.noarch
ruby193-rubygem-foreman_docker-1.2.0.12-1.el7sat.noarch
ruby193-rubygem-foreman_hooks-0.3.7-2.el7sat.noarch
rubygem-hammer_cli_foreman_bootdisk-0.1.2.7-1.el7sat.noarch
foreman-proxy-1.7.2.4-1.el7sat.noarch
dell-pem710-01.rhts.eng.bos.redhat.com-foreman-client-1.0-1.noarch
dell-pem710-01.rhts.eng.bos.redhat.com-foreman-proxy-1.0-2.noarch
foreman-vmware-1.7.2.21-1.el7sat.noarch
rubygem-hammer_cli_foreman_discovery-0.0.1.10-1.el7sat.noarch
foreman-compute-1.7.2.21-1.el7sat.noarch
foreman-debug-1.7.2.21-1.el7sat.noarch

Steps:
1. Add life-cycle environment to capsule
2. Attempt to sync content to capsule

# hammer capsule content synchronize --id 2 --environment KT_Default_Organization_Dev_con_viewA_2 --async
[Foreman] Username: admin
[Foreman] Password for admin:
Capsule content is being synchronized in task f9ef7a50-cecb-4fc1-8bd6-e36080612b15

Screenshot attached.
Created attachment 1025334 [details] capsule sync
This bug is slated to be released with Satellite 6.1.
This bug was fixed in version 6.1.1 of Satellite which was released on 12 August, 2015.