Bug 1250029
| Field | Value |
| --- | --- |
| Summary | capsule sync fails as Host did not respond within 20 seconds |
| Product | Red Hat Satellite |
| Component | Foreman Proxy |
| Version | 6.1.0 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED NOTABUG |
| Severity | high |
| Priority | unspecified |
| Type | Bug |
| Reporter | Pradeep Kumar Surisetty (psuriset) |
| Assignee | Stephen Benjamin (stbenjam) |
| QA Contact | Sachin Ghai (sghai) |
| CC | bbuckingham, omaciel, psuriset, sghai |
| Target Milestone | Unspecified |
| Target Release | Unused |
| Doc Type | Bug Fix |
| Last Closed | 2015-08-20 16:55:41 UTC |
Description (Pradeep Kumar Surisetty, 2015-08-04 11:44:14 UTC)
Created attachment 1059056 [details]: Satellite server Foreman logs

Created attachment 1059058 [details]: /var/log/messages of the Satellite server

Created attachment 1059060 [details]: capsule logs
The capsule tries to reset a dropped connection multiple times:

```
Aug 6 04:17:07 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:10 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:10 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:12 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:17 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:18 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:22 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:23 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:27 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:28 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:30 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:32 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:33 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:33 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:36 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:37 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:41 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:41 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:44 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:45 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:47 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:47 vmcapsule002 pulp: requests.packages.urllib3.connectionpool:INFO: Resetting dropped connection: perfc-380g8-01.perf.lab.eng.rdu.redhat.com
Aug 6 04:17:56 vmcapsule002 pulp: celery.worker.job:INFO: Task pulp.server.managers.repo.sync.sync[775eac06-89ce-4524-a08f-5b3d8cb93df8] succeeded in 84.852102816s: <pulp.server.async.tasks.TaskResult object at 0x290d810>
Aug 6 04:17:56 vmcapsule002 pulp: celery.worker.strategy:INFO: Received task: pulp.server.async.tasks._queue_reserved_task[528a0a80-f693-4d48-b224-6f9d417e86fb]
Aug 6 04:17:56 vmcapsule002 pulp: celery.worker.job:INFO: Task pulp.server.async.tasks._release_resource[e06ae0c2-7b6b-4063-9055-945a01900d58] succeeded in 0.012236169s: None
Aug 6 04:17:56 vmcapsule002 pulp: celery.worker.strategy:INFO: Received task: pulp.server.managers.repo.publish.publish[27786258-b1dd-4ddb-b276-353aeb2d1613]
Aug 6 04:17:56 vmcapsule002 pulp: celery.worker.strategy:INFO: Received task: pulp.server.async.tasks._release_resource[1fb72430-5cfc-49b5-bc0c-23eadd0d396c]
Aug 6 04:17:56 vmcapsule002 pulp: celery.worker.job:INFO: Task pulp.server.async.tasks._queue_reserved_task[528a0a80-f693-4d48-b224-6f9d417e86fb] succeeded in 0.033959336s: None
Aug 6 04:17:59 vmcapsule002 goferd: [INFO][worker-0] gofer.agent.rmi:128 - sn=693a4424-9185-498f-9432-cbb6c841dbbc processed in: 1.616 (minutes)
Aug 6 04:18:07 vmcapsule002 pulp: celery.worker.job:INFO: Task pulp.server.managers.repo.publish.publish[27786258-b1dd-4ddb-b276-353aeb2d1613] succeeded in 11.123642728s: {'exception': None, 'repo_id':...
Aug 6 04:18:07 vmcapsule002 pulp: celery.worker.job:INFO: Task pulp.server.async.tasks._release_resource[1fb72430-5cfc-49b5-bc0c-23eadd0d396c] succeeded in 0.013110976s: None
```

Full logs: http://pastebin.test.redhat.com/303282

I'm getting the same message after a disconnected (ISO) upgrade:

```
[root@cloud-qe-6 ~]# hammer -u admin -p changeme capsule content synchronize --id=2
[............................................................................................................................................] [100%]
Host did not respond within 20 seconds. Is katello-agent installed and goferd running on the Host?
----------
2015-08-07 05:46:23 [E] Host did not respond within 20 seconds. Is katello-agent installed and goferd running on the Host?
```
```
(RuntimeError)
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.2.0.65/app/lib/actions/pulp/consumer/sync_node.rb:35:in `process_timeout'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.7.9/lib/dynflow/action/polling.rb:23:in `run'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.7.9/lib/dynflow/action/cancellable.rb:9:in `run'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.2.0.65/app/lib/actions/pulp/abstract_async_task.rb:57:in `run'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.7.9/lib/dynflow/action.rb:487:in `block (3 levels) in execute_run'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.7.9/lib/dynflow/middleware/stack.rb:26:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.7.9/lib/dynflow/middleware/stack.rb:26:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.7.9/lib/dynflow/middleware.rb:16:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.7.9/lib/dynflow/middleware.rb:25:in `run'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.7.9/lib/dynflow/middleware/stack.rb:22:in `call'
```

Firewall rules were enabled on the Satellite server. I flushed the rules and re-ran the sync, and this time it completed successfully without any error:

```
[root@cloud-qe-6 ~]# hammer -u admin -p changeme capsule content synchronize --id=2
[............................................................................................................................................] [100%]
```

So I am removing the blocker flag. Thanks.

I have already flushed the firewall rules on the Satellite server, but I still see this issue. After installing the capsule server, if the user reboots the capsule, they will hit this issue for sure. Disabling the firewall and restarting all of the services below helps:
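Since the failures correlated with firewall rules being active on the Satellite server, a less drastic alternative to flushing everything is to open only the ports the Satellite/capsule pair needs. The port list below is an assumption (confirm it against the installation guide for your exact Satellite version), and the sketch only prints the `firewall-cmd` commands so they can be reviewed before running:

```shell
#!/bin/sh
# Hypothetical sketch: emit firewalld commands for the ports a Satellite 6
# server and capsule typically use. The port list is an assumption --
# verify it against the official installation guide before applying.
PORTS="443/tcp 5646/tcp 5647/tcp 5671/tcp 8140/tcp 9090/tcp"

for p in $PORTS; do
    echo "firewall-cmd --permanent --add-port=$p"
done
# Permanent rules only take effect after a reload.
echo "firewall-cmd --reload"
```

Once the list is verified, the printed commands can be executed on the Satellite server (and the capsule, as appropriate) instead of disabling the firewall outright.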
```
Redirecting to /bin/systemctl start foreman-proxy.service
Redirecting to /bin/systemctl start goferd.service
Redirecting to /bin/systemctl start qdrouterd.service
Redirecting to /bin/systemctl start qpidd.service
Redirecting to /bin/systemctl start pulp_workers.service
Redirecting to /bin/systemctl start pulp_resource_manager.service
Redirecting to /bin/systemctl start pulp_celerybeat.service
Redirecting to /bin/systemctl start httpd.service
```

Stephen, please help me understand: is there something I need to provide here? --Pradeep

I spoke to Pradeep, and he told me that in the end he re-enabled the firewall and restarted all services. He did not see the issue anymore.
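The restart the reporter performed can be sketched as a small script. The service names come straight from the output above, but the ordering (message broker first, then Pulp, then HTTP/proxy) is an assumption about a sensible dependency order, not a documented requirement; the sketch prints the commands rather than running them so the sequence can be reviewed first:

```shell
#!/bin/sh
# Hypothetical sketch: print a restart sequence for the capsule services
# named in the log above. The ordering is an assumption (broker first,
# then Pulp workers, then the web/proxy layer), not a documented rule.
SERVICES="qpidd qdrouterd goferd pulp_workers pulp_resource_manager pulp_celerybeat httpd foreman-proxy"

for svc in $SERVICES; do
    echo "systemctl restart ${svc}.service"
done
```

On Satellite 6 hosts the `katello-service` helper, where available, restarts the whole stack in one step and may be simpler than restarting each unit by hand.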