Bug 1218390 - CertificateVerificationException on Capsule attempting to sync content from Satellite 6.1 server
Summary: CertificateVerificationException on Capsule attempting to sync content from Satellite 6.1 server
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Foreman Proxy
Version: 6.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: Unspecified
Assignee: Justin Sherrill
QA Contact: Tazim Kolhar
URL: http://projects.theforeman.org/issues/10387
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-05-04 18:48 UTC by Alex Krzos
Modified: 2017-02-23 20:05 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-08-12 13:56:57 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
capsule sync (104.27 KB, image/png)
2015-05-14 10:55 UTC, Tazim Kolhar


Links
System ID: Foreman Issue Tracker 10387 | Private: 0 | Priority: None | Status: None | Summary: None | Last Updated: 2016-04-22 16:27:07 UTC

Description Alex Krzos 2015-05-04 18:48:07 UTC
Description of problem:
The Capsule cannot sync content from the Satellite 6.1 server. The error message below appears in /var/log/messages on the Capsule.

Version-Release number of selected component (if applicable):
Satellite 6.1 GA Snap 2 on RHEL 7.1

How reproducible:
Reproduced on both a performance test system and a QE system

Steps to Reproduce:
1. Add a lifecycle environment to the Capsule
2. Attempt to sync content to the Capsule

Actual results:
The hammer command to synchronize Capsule content fails.

/var/log/messages on capsule:
May  4 11:48:39 capsule-r7-001 goferd: [INFO][worker-0] gofer.rmi.dispatcher:600 - call: Content.update() sn=d0bdd1d7-5479-4fc3-9a16-14de8aa0f09d data={'task_id': '8799ddad-51f5-407e-b891-fe8d84e79948', 'consumer_id': 'f5e7ec38-4d71-4a99-8b07-02a8abc45d46'}
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - synchronization failed
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - Traceback (most recent call last):
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/strategies.py", line 116, in synchronize
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     validator.validate(request.bindings)
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 54, in validate
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     self.report.errors.extend(self._validate_plugins(bindings))
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 70, in _validate_plugins
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     child = ChildServer()
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 88, in __init__
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     self.importers = self._importers()
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 99, in _importers
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     http = bindings.server_info.get_importers()
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server_info.py", line 44, in get_importers
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     return self.server.GET(path)
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 92, in GET
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     return self._request('GET', path, queries)
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 142, in _request
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     response_code, response_body = self.server_wrapper.request(method, url, body)
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 330, in request
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     raise exceptions.CertificateVerificationException()
May  4 11:48:40 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - CertificateVerificationException
May  4 11:48:41 capsule-r7-001 goferd: [ERROR][worker-0] pulp_node.handlers.handler:111 - An unexpected error occurred.  repo_id=None

Expected results:
Content to sync.

Additional info:

This occurs with the workaround from BZ 1217828 already applied.

Comment 2 RHEL Program Management 2015-05-04 19:03:12 UTC
Since this issue was entered in Red Hat Bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

Comment 4 Randy Barlow 2015-05-04 19:11:43 UTC
Through some IRC discussion, it was determined that the /etc/pki/pulp/ca.crt file was used as the CA for the httpd certificate. This CA is for Pulp's internal usage only (it's used to make client SSL certificates), and should not be used by any outside entities (including Nodes and httpd). I've been lobbying internally for Pulp to stop creating/using this file, but it'll take some time before we can change that. Think of it like a private variable in a C++ class with no getters or setters ☺

Comment 5 Randy Barlow 2015-05-04 20:16:50 UTC
Disregard my last comment, I had misread /etc/pki/pulp/server_ca.crt as /etc/pki/pulp/ca.crt. My mistake!

Comment 6 Randy Barlow 2015-05-04 21:18:16 UTC
jsherril, mhrivnak and I dug into the problem, and we discovered that the Crane and Pulp node httpd config files were using the same name in their ServerName directives [0] but did not specify the (optional) port setting in the ServerName. This caused Pulp requests to be served with Crane's SSL certificate.

Oddly, Crane's SSL certificate was configured to use /etc/pki/tls/certs/localhost.crt as well, which was also an error.

[0] https://httpd.apache.org/docs/current/mod/core.html#servername
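
For illustration only, the collision would look roughly like this (reconstructed, not quoted from the Capsule's actual files; only the ServerName and certificate lines are relevant):

# /etc/httpd/conf.d/03-crane.conf -- Crane vhost, broken state (reconstructed)
ServerName capsule-r7-001.perf.lab.eng.rdu.redhat.com
SSLCertificateFile "/etc/pki/tls/certs/localhost.crt"

# /etc/httpd/conf.d/25-pulp-node-ssl.conf -- Pulp node vhost, broken state (reconstructed)
ServerName capsule-r7-001.perf.lab.eng.rdu.redhat.com

Both vhosts carry the same bare hostname with no port qualifier.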

Comment 7 Randy Barlow 2015-05-04 21:25:41 UTC
It is possible to work around this issue in one of two ways:

1) Configure Crane to use the correct SSL certificate and key by setting:

SSLCertificateFile    "/etc/pki/pulp/ssl_apache.crt"
SSLCertificateKeyFile "/etc/pki/pulp/ssl_apache.key"

in /etc/httpd/conf.d/03-crane.conf

-or-

2) Configure Crane and Pulp to use different ServerNames by setting:

ServerName capsule-r7-001.perf.lab.eng.rdu.redhat.com:5000

in /etc/httpd/conf.d/03-crane.conf, and

ServerName capsule-r7-001.perf.lab.eng.rdu.redhat.com:443

in /etc/httpd/conf.d/25-pulp-node-ssl.conf

The latter is the long-term correct fix. The former will solve two issues at once. Or you could do both and have it all the way correct!
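
With both workarounds applied, the Crane vhost would end up looking roughly like the sketch below (the <VirtualHost> wrapper and the SSLEngine line are assumed boilerplate, not quoted from the shipped file); the Pulp node vhost only needs its ServerName pinned to port 443:

# /etc/httpd/conf.d/03-crane.conf -- sketch with workarounds 1 and 2 applied
<VirtualHost *:5000>
    ServerName            capsule-r7-001.perf.lab.eng.rdu.redhat.com:5000
    SSLEngine             on
    SSLCertificateFile    "/etc/pki/pulp/ssl_apache.crt"
    SSLCertificateKeyFile "/etc/pki/pulp/ssl_apache.key"
</VirtualHost>

# /etc/httpd/conf.d/25-pulp-node-ssl.conf -- only the ServerName changes
ServerName capsule-r7-001.perf.lab.eng.rdu.redhat.com:443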

Comment 8 Randy Barlow 2015-05-04 21:31:53 UTC
By the way, I suspect that if you don't apply solution #2, you might be serving Crane on 443 alongside Pulp, and Pulp on 5000 alongside Crane. I recommend applying both fixes in the installer to ensure they remain separated.
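
One way to confirm the two vhosts stay separated (a hedged check, not something captured in this report) is to inspect the certificate each port actually presents:

# Certificate presented on 443 by the Pulp node vhost
# (should no longer be the localhost.crt certificate described above)
echo | openssl s_client -connect capsule-r7-001.perf.lab.eng.rdu.redhat.com:443 2>/dev/null \
    | openssl x509 -noout -subject -issuer

# Certificate presented on 5000 by the Crane vhost
echo | openssl s_client -connect capsule-r7-001.perf.lab.eng.rdu.redhat.com:5000 2>/dev/null \
    | openssl x509 -noout -subject -issuer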

Comment 9 Justin Sherrill 2015-05-05 20:55:33 UTC
Created redmine issue http://projects.theforeman.org/issues/10387 from this bug

Comment 11 Tazim Kolhar 2015-05-14 10:54:37 UTC
VERIFIED:
# rpm -qa | grep foreman
foreman-1.7.2.21-1.el7sat.noarch
ruby193-rubygem-foreman_discovery-2.0.0.13-1.el7sat.noarch
foreman-libvirt-1.7.2.21-1.el7sat.noarch
ruby193-rubygem-foreman_gutterball-0.0.1.9-1.el7sat.noarch
foreman-postgresql-1.7.2.21-1.el7sat.noarch
ruby193-rubygem-foreman_bootdisk-4.0.2.13-1.el7sat.noarch
dell-pem710-01.rhts.eng.bos.redhat.com-foreman-proxy-client-1.0-1.noarch
foreman-ovirt-1.7.2.21-1.el7sat.noarch
rubygem-hammer_cli_foreman-0.1.4.11-1.el7sat.noarch
foreman-selinux-1.7.2.13-1.el7sat.noarch
foreman-gce-1.7.2.21-1.el7sat.noarch
ruby193-rubygem-foreman-redhat_access-0.1.0-1.el7sat.noarch
ruby193-rubygem-foreman-tasks-0.6.12.5-1.el7sat.noarch
rubygem-hammer_cli_foreman_tasks-0.0.3.4-1.el7sat.noarch
rubygem-hammer_cli_foreman_docker-0.0.3.6-1.el7sat.noarch
ruby193-rubygem-foreman_docker-1.2.0.12-1.el7sat.noarch
ruby193-rubygem-foreman_hooks-0.3.7-2.el7sat.noarch
rubygem-hammer_cli_foreman_bootdisk-0.1.2.7-1.el7sat.noarch
foreman-proxy-1.7.2.4-1.el7sat.noarch
dell-pem710-01.rhts.eng.bos.redhat.com-foreman-client-1.0-1.noarch
dell-pem710-01.rhts.eng.bos.redhat.com-foreman-proxy-1.0-2.noarch
foreman-vmware-1.7.2.21-1.el7sat.noarch
rubygem-hammer_cli_foreman_discovery-0.0.1.10-1.el7sat.noarch
foreman-compute-1.7.2.21-1.el7sat.noarch
foreman-debug-1.7.2.21-1.el7sat.noarch

steps:
1. Add life-cycle environment to capsule
2. Attempt to sync content to capsule
# hammer capsule content synchronize --id 2 --environment KT_Default_Organization_Dev_con_viewA_2 --async
[Foreman] Username: admin
[Foreman] Password for admin: 
Capsule content is being synchronized in task f9ef7a50-cecb-4fc1-8bd6-e36080612b15

screenshot attached

Comment 12 Tazim Kolhar 2015-05-14 10:55:44 UTC
Created attachment 1025334 [details]
capsule sync

Comment 13 Bryan Kearney 2015-08-11 13:20:57 UTC
This bug is slated to be released with Satellite 6.1.

Comment 14 Bryan Kearney 2015-08-12 13:56:57 UTC
This bug was fixed in version 6.1.1 of Satellite, which was released on 12 August 2015.

