Bug 1201802 - Failure to sync capsule occurs, with trace
Summary: Failure to sync capsule occurs, with trace
Keywords:
Status: CLOSED DUPLICATE of bug 1200722
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Foreman Proxy
Version: Unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: Unspecified
Assignee: Stephen Benjamin
QA Contact: Katello QA List
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-03-13 14:24 UTC by Corey Welton
Modified: 2019-04-01 20:26 UTC
CC: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-17 14:06:06 UTC
Target Upstream Version:
Embargoed:



Description Corey Welton 2015-03-13 14:24:59 UTC
Description of problem:


Version-Release number of selected component (if applicable):
Satellite-6.1.0-RHEL-7-20150311.1

How reproducible:
I am not sure!  I installed two capsules almost simultaneously, on the same base OS version; one seems to sync and the other doesn't.

Steps to Reproduce:
1. Install a capsule and ensure there are no errors (normal install).
2. Add environments to the capsule.
3. Initiate a sync, e.g.:
hammer --username admin --password changeme capsule content synchronize --async --id 2
4. View the logs on the capsule.
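
For step 4, the goferd output can be followed live on the capsule while the sync runs; a minimal way to do that (the traces below come from syslog, so adjust the log source if your setup differs):

journalctl -u goferd -f

or

tail -f /var/log/messages | grep goferd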

Actual results:


Mar 13 09:09:41 cloud-qe-3 goferd: [INFO][worker-0] gofer.rmi.dispatcher:600 - call: Content.update() sn=a051b315-7b3d-41e6-8436-a65da06aecd5 data={'task_id': 'f09e4783-24f3-4733-aecf-6a09d7568023', 'consumer_id': 'a82fa346-e83b-4670-8a77-6323d5aee784'}
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - synchronization failed
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - Traceback (most recent call last):
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/strategies.py", line 116, in synchronize
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     validator.validate(request.bindings)
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 54, in validate
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     self.report.errors.extend(self._validate_plugins(bindings))
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 70, in _validate_plugins
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     child = ChildServer()
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 88, in __init__
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     self.importers = self._importers()
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 99, in _importers
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     http = bindings.server_info.get_importers()
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server_info.py", line 44, in get_importers
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     return self.server.GET(path)
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 92, in GET
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     return self._request('GET', path, queries)
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 150, in _request
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     self._handle_exceptions(response_code, response_body)
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 186, in _handle_exceptions
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     raise exceptions.ApacheServerException(response_body)
Mar 13 09:09:42 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - ApacheServerException


Expected results:

Sync works

Additional info:

Comment 2 Stephen Benjamin 2015-03-16 20:32:03 UTC
Did you maybe use the wrong OAuth token?

I need the foreman-debug tarball from the broken capsule and the Satellite to figure out what's going on, or access to the machines.

Comment 3 Jeff Ortel 2015-03-16 20:44:02 UTC
Unfortunately, the exception raised in the pulp API bindings is not very descriptive.  The trace shows an exception raised when the binding does a GET request to the capsule's pulp server to list the installed plugins.  This suggests either an SSL problem or an OAuth problem, but without details I cannot be sure.  I would start troubleshooting there.  The REST call is between the node handler code running in the agent and the local httpd.
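
A rough way to narrow that down (a sketch only; the endpoint and log paths are assumptions based on a default Pulp 2 layout, not something confirmed in this bug) is to issue a GET against the capsule's own Pulp API and then check the Apache logs for the underlying 500 error:

curl -k https://localhost/pulp/api/v2/status/
tail -n 50 /var/log/httpd/error_log /var/log/httpd/ssl_error_log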

Comment 4 Stephen Benjamin 2015-03-16 20:52:25 UTC
Upgrade to 7.1 on the capsule that's failing

*** This bug has been marked as a duplicate of bug 1200722 ***

Comment 5 Corey Welton 2015-03-16 21:17:11 UTC
Reopening for the moment.

The capsule has been upgraded:

[root@cloud-qe-3 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.1 (Maipo)


All presumed relevant capsule-related services have been restarted:
* foreman-proxy
* httpd
* goferd
* pulp_* (all three services)

An attempt to sync produces the same sorts of errors:

Mar 16 17:12:15 cloud-qe-3 goferd: [INFO][worker-0] gofer.agent.rmi:128 - sn=c7a28e11-e776-4f15-8824-836bddc94c33 processed in: 3.000 (seconds)
Mar 16 17:14:52 cloud-qe-3 goferd: [INFO][worker-0] gofer.rmi.dispatcher:600 - call: Content.update() sn=c8032547-37bc-4a40-9ec7-6eb2d6119211 data={'task_id': 'fd276d9b-effe-49b2-8468-8ec9cb001f40', 'consumer_id': 'a82fa346-e83b-4670-8a77-6323d5aee784'}
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - synchronization failed
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - Traceback (most recent call last):
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/strategies.py", line 116, in synchronize
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     validator.validate(request.bindings)
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 54, in validate
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     self.report.errors.extend(self._validate_plugins(bindings))
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 70, in _validate_plugins
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     child = ChildServer()
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 88, in __init__
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     self.importers = self._importers()
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp_node/handlers/validation.py", line 99, in _importers
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     http = bindings.server_info.get_importers()
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server_info.py", line 44, in get_importers
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     return self.server.GET(path)
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 92, in GET
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     return self._request('GET', path, queries)
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 150, in _request
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     self._handle_exceptions(response_code, response_body)
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -   File "/usr/lib/python2.7/site-packages/pulp/bindings/server.py", line 186, in _handle_exceptions
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 -     raise exceptions.ApacheServerException(response_body)
Mar 16 17:14:53 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.strategies:129 - ApacheServerException
Mar 16 17:14:54 cloud-qe-3 goferd: [ERROR][worker-0] pulp_node.handlers.handler:175 - An unexpected error occurred.  repo_id=None
Mar 16 17:14:54 cloud-qe-3 goferd: [INFO][worker-0] gofer.agent.rmi:128 - sn=c8032547-37bc-4a40-9ec7-6eb2d6119211 processed in: 3.498 (seconds)
Mar 16 17:15:05 cloud-qe-3 systemd-logind: New session 123 of user root.

Comment 6 Corey Welton 2015-03-16 21:41:38 UTC
For what it's worth, I just rebooted said machine after upgrading to 7.1 (and thus getting the new SELinux contexts, rules, and/or any kernel stuff that is expected)... same thing.  It appears the pulp policy was not reloaded.

That said -

Subsequently reinstalling the package (yum reinstall pulp-selinux) and restarting the associated services *did* seem to work.

Could be a specfile issue and/or something to consider during upgrade testing.
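
For reference, the workaround amounted to roughly the following (the pulp_* service names are assumed to be the standard Satellite 6.1 capsule set; adjust to whatever is actually installed on the capsule):

yum reinstall -y pulp-selinux
systemctl restart httpd goferd pulp_workers pulp_celerybeat pulp_resource_manager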

Comment 7 Corey Welton 2015-03-16 21:52:58 UTC
This may really be an issue at upgrade time.

One solution might be for the installer itself to run `yum reinstall pulp-selinux` when the --upgrade flag is passed to capsule-installer (and possibly katello-installer?).  But that seems hackish.

Either way, there is a pretty good chance that this could rear its ugly head at inopportune moments.

Comment 9 Stephen Benjamin 2015-03-16 22:19:38 UTC
The upgrade instructions are to run yum update before you do anything else, and this will pull in the new pulp-selinux.  The relabel happens then as well, so it does not happen during katello-installer --upgrade.

You only had to do the reinstall because you installed the new pulp-selinux on RHEL 7.0.  So I think this should go back to the pulp team, and pulp-selinux should add a Require on the right selinux-targeted or redhat-release package?

I'm not sure what the best practice here is.
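
If the packaging route were taken, the change would presumably be a dependency bump in the pulp-selinux spec along these lines (a sketch only; which package and which version to require is exactly the open question here):

Requires: selinux-policy-targeted >= <version shipped with RHEL 7.1>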

Comment 10 Lukas Zapletal 2015-03-17 13:21:58 UTC
You can try to fix it with:

restorecon -Rvv /

But an upgrade from RHEL 7.0 to 7.1 is unsupported from an SELinux perspective.

Comment 13 Lukas Zapletal 2015-03-17 14:03:14 UTC
I've reopened the original BZ to implement the fix:

https://bugzilla.redhat.com/show_bug.cgi?id=1200722

No further action is needed here in this regard.

Comment 14 Lukas Zapletal 2015-03-17 14:03:55 UTC
Clearing the NEEDINFO flag.  Jason, see here: https://bugzilla.redhat.com/show_bug.cgi?id=1200722

Close this one when appropriate (I did not read through it).

Comment 15 Stephen Benjamin 2015-03-17 14:06:06 UTC
Thanks, it's the exact same bug, so we can handle it over in BZ 1200722.

*** This bug has been marked as a duplicate of bug 1200722 ***

