Bug 1102763
Summary: | capsule: synchronize command never times out / silently fails
---|---
Product: | Red Hat Satellite
Component: | Foreman Proxy
Status: | CLOSED CURRENTRELEASE
Severity: | high
Priority: | unspecified
Version: | 6.0.3
Target Milestone: | Unspecified
Target Release: | Unused
Hardware: | Unspecified
OS: | Unspecified
URL: | http://projects.theforeman.org/issues/7162
Reporter: | Corey Welton <cwelton>
Assignee: | Mike McCune <mmccune>
QA Contact: | Tazim Kolhar <tkolhar>
CC: | ahumbe, bbuckingham, bkearney, cwelton, jmontleo, jsherril, kabbott, mmccune, nshaik, shughes, tkolhar
Keywords: | Triaged
Doc Type: | Release Note
Doc Text: | In certain cases, a capsule synchronization can fail with no indication in the UI. If this occurs, run foreman-debug on the server and submit a support request with the output of that command.
Last Closed: | 2015-08-12 13:58:49 UTC
Type: | Bug
Bug Blocks: | 1115190
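The Doc Text above asks affected users to run foreman-debug and attach its output to a support case. As a minimal sketch of that step (the -d output-directory option and the directory name are assumptions, not taken from this bug; verify with foreman-debug --help on the installed version):

# foreman-debug -d /tmp/bug1102763
(-d and the target directory are illustrative; by default the tool writes its archive under /tmp)

This collects logs and configuration into a compressed archive under the chosen directory; attach that archive to the support request.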
Description
Corey Welton
2014-05-29 13:57:15 UTC
Created redmine issue http://projects.theforeman.org/issues/7162 from this bug.

Much has changed in the code since this bug was initially raised, and unfortunately I wasn't able to recreate the scenario as described. I did, however, attempt to simulate a scenario where the server could not reach the capsule, resulting in a timeout during the sync. In that case, Satellite would report 'success' to the user; however, the task (from pulp) would actually fail.

In order to simulate this scenario:

From the capsule:
# service pulp_celerybeat stop
# service pulp_resource_manager stop

From the satellite:
hammer> capsule content synchronize --id 3 --environment-id 5
[.................................................................     ] [50%]
Task 2246bfb5-131f-4171-a7c3-6e16e3276ddd: error

The following katello PR has the proposed fix for this scenario:
https://github.com/Katello/katello/pull/4595

Bouncing back to dev for 6.1.

Connecting redmine issue http://projects.theforeman.org/issues/7162 from this bug.

*** Bug 1213816 has been marked as a duplicate of this bug. ***

New fix upstream: https://github.com/Katello/katello/pull/5278

VERIFIED:

# rpm -qa | grep foreman
ruby193-rubygem-foreman_discovery-2.0.0.15-1.el7sat.noarch
foreman-vmware-1.7.2.27-1.el7sat.noarch
rubygem-hammer_cli_foreman_bootdisk-0.1.2.7-1.el7sat.noarch
foreman-debug-1.7.2.27-1.el7sat.noarch
foreman-libvirt-1.7.2.27-1.el7sat.noarch
ruby193-rubygem-foreman_gutterball-0.0.1.9-1.el7sat.noarch
foreman-compute-1.7.2.27-1.el7sat.noarch
foreman-gce-1.7.2.27-1.el7sat.noarch
ruby193-rubygem-foreman-redhat_access-0.2.0-8.el7sat.noarch
rubygem-hammer_cli_foreman_tasks-0.0.3.4-1.el7sat.noarch
rubygem-hammer_cli_foreman_docker-0.0.3.7-1.el7sat.noarch
puppet-foreman_scap_client-0.3.3-9.el7sat.noarch
foreman-1.7.2.27-1.el7sat.noarch
ruby193-rubygem-foreman_docker-1.2.0.14-1.el7sat.noarch
ruby193-rubygem-foreman_hooks-0.3.7-2.el7sat.noarch
rubygem-hammer_cli_foreman-0.1.4.14-1.el7sat.noarch
foreman-selinux-1.7.2.13-1.el7sat.noarch
foreman-proxy-1.7.2.5-1.el7sat.noarch
foreman-postgresql-1.7.2.27-1.el7sat.noarch
rhsm-qe-2.rhq.lab.eng.bos.redhat.com-foreman-client-1.0-1.noarch
rhsm-qe-2.rhq.lab.eng.bos.redhat.com-foreman-proxy-client-1.0-1.noarch
rhsm-qe-2.rhq.lab.eng.bos.redhat.com-foreman-proxy-1.0-1.noarch
foreman-ovirt-1.7.2.27-1.el7sat.noarch
rubygem-hammer_cli_foreman_discovery-0.0.1.10-1.el7sat.noarch
ruby193-rubygem-foreman-tasks-0.6.12.8-1.el7sat.noarch
ruby193-rubygem-foreman_bootdisk-4.0.2.13-1.el7sat.noarch

steps:

# hammer -u admin -p changeme capsule content synchronize --id=2
Could not synchronize capsule content:
  Couldn't find SmartProxy with id=2 [WHERE "features"."name" IN ('Pulp Node')]

Appropriate error is displayed.

I'm going to move this back to 'ON_QA' as I'm not sure it was verified properly. From the error it seems there was no capsule with id 2, or that capsule did not have the Pulp Node feature.
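For re-verification, the state of the underlying sync task can be checked directly instead of relying on the CLI progress bar. A minimal sketch, assuming the hammer_cli_foreman_tasks plugin shown in the package lists; the subcommand and option names should be confirmed against hammer task --help for this release:

# hammer task list
# hammer task progress --id 2246bfb5-131f-4171-a7c3-6e16e3276ddd
(subcommand/option names are assumptions for this plugin version; the UUID is the one printed by the synchronize command)

For the simulated outage above, the task should show an error/stopped state even when the progress bar reported success.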
FAILEDQA:

# rpm -qa | grep foreman
ruby193-rubygem-foreman-tasks-0.6.12.8-1.el7sat.noarch
rubygem-hammer_cli_foreman_docker-0.0.3.9-1.el7sat.noarch
foreman-debug-1.7.2.29-1.el7sat.noarch
foreman-postgresql-1.7.2.29-1.el7sat.noarch
foreman-vmware-1.7.2.29-1.el7sat.noarch
rubygem-hammer_cli_foreman_bootdisk-0.1.2.7-1.el7sat.noarch
foreman-selinux-1.7.2.13-1.el7sat.noarch
foreman-1.7.2.29-1.el7sat.noarch
foreman-ovirt-1.7.2.29-1.el7sat.noarch
ruby193-rubygem-foreman_hooks-0.3.7-2.el7sat.noarch
rubygem-hammer_cli_foreman_discovery-0.0.1.10-1.el7sat.noarch
foreman-proxy-1.7.2.5-1.el7sat.noarch
ibm-x3655-03.ovirt.rhts.eng.bos.redhat.com-foreman-proxy-1.0-2.noarch
foreman-compute-1.7.2.29-1.el7sat.noarch
foreman-gce-1.7.2.29-1.el7sat.noarch
ruby193-rubygem-foreman-redhat_access-0.2.0-8.el7sat.noarch
rubygem-hammer_cli_foreman-0.1.4.14-1.el7sat.noarch
foreman-libvirt-1.7.2.29-1.el7sat.noarch
ruby193-rubygem-foreman_gutterball-0.0.1.9-1.el7sat.noarch
ibm-x3655-03.ovirt.rhts.eng.bos.redhat.com-foreman-client-1.0-1.noarch
ibm-x3655-03.ovirt.rhts.eng.bos.redhat.com-foreman-proxy-client-1.0-1.noarch
ruby193-rubygem-foreman_bootdisk-4.0.2.13-1.el7sat.noarch
ruby193-rubygem-foreman_docker-1.2.0.18-1.el7sat.noarch
rubygem-hammer_cli_foreman_tasks-0.0.3.4-1.el7sat.noarch
ruby193-rubygem-foreman_discovery-2.0.0.15-1.el7sat.noarch

steps:

On the Capsule Server I executed:

# service pulp_celerybeat stop
celery init v10.0.
Using configuration: /etc/default/pulp_workers, /etc/default/pulp_celerybeat
Stopping pulp_celerybeat... OK

# service pulp_resource_manager stop
celery init v10.0.
Using config script: /etc/default/pulp_resource_manager
celery multi v3.1.11 (Cipater)
> Stopping nodes...
> resource_manager.eng.bos.redhat.com: QUIT -> 9155
> Waiting for 1 node -> 9155.....
> resource_manager.eng.bos.redhat.com: OK

# service pulp_celerybeat status
celery init v10.0.
Using configuration: /etc/default/pulp_workers, /etc/default/pulp_celerybeat
pulp_celerybeat is stopped.

# service pulp_resource_manager status
celery init v10.0.
Using config script: /etc/default/pulp_resource_manager
node resource_manager is stopped...

On the Satellite 6 Server I executed:

# hammer capsule content synchronize --id 2
[Foreman] Username: admin
[Foreman] Password for admin:
[......................................................................] [100%]

It does not show any error message, only 100% complete. Please let me know if anything else needs to be added here.
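Between the failed and the repeated verification runs, the pulp services stopped above have to be brought back on the capsule before breaking them again for the negative test. A trivial sketch, using the same service wrappers shown in the transcripts:

# service pulp_celerybeat start
# service pulp_resource_manager start
# service pulp_celerybeat status
# service pulp_resource_manager status
(start/stop/status are the standard service actions; no options beyond those in the transcripts are assumed)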
VERIFIED:

# rpm -qa | grep foreman
ruby193-rubygem-foreman-tasks-0.6.12.8-1.el7sat.noarch
rubygem-hammer_cli_foreman_docker-0.0.3.9-1.el7sat.noarch
foreman-selinux-1.7.2.13-1.el7sat.noarch
foreman-ovirt-1.7.2.30-1.el7sat.noarch
rubygem-hammer_cli_foreman_bootdisk-0.1.2.7-1.el7sat.noarch
foreman-debug-1.7.2.30-1.el7sat.noarch
ruby193-rubygem-foreman_bootdisk-4.0.2.13-1.el7sat.noarch
foreman-1.7.2.30-1.el7sat.noarch
ruby193-rubygem-foreman_docker-1.2.0.18-1.el7sat.noarch
ruby193-rubygem-foreman-redhat_access-0.2.0-8.el7sat.noarch
rubygem-hammer_cli_foreman_discovery-0.0.1.10-1.el7sat.noarch
foreman-proxy-1.7.2.5-1.el7sat.noarch
ibm-x3755-02.ovirt.rhts.eng.bos.redhat.com-foreman-client-1.0-1.noarch
ibm-x3755-02.ovirt.rhts.eng.bos.redhat.com-foreman-proxy-client-1.0-1.noarch
foreman-compute-1.7.2.30-1.el7sat.noarch
foreman-vmware-1.7.2.30-1.el7sat.noarch
ruby193-rubygem-foreman_hooks-0.3.7-2.el7sat.noarch
rubygem-hammer_cli_foreman-0.1.4.14-1.el7sat.noarch
foreman-libvirt-1.7.2.30-1.el7sat.noarch
ruby193-rubygem-foreman_gutterball-0.0.1.9-1.el7sat.noarch
ibm-x3755-02.ovirt.rhts.eng.bos.redhat.com-foreman-proxy-1.0-1.noarch
foreman-gce-1.7.2.30-1.el7sat.noarch
rubygem-hammer_cli_foreman_tasks-0.0.3.4-1.el7sat.noarch
ruby193-rubygem-foreman_discovery-2.0.0.17-1.el7sat.noarch
foreman-postgresql-1.7.2.30-1.el7sat.noarch

steps:

# service pulp_celerybeat stop
Redirecting to /bin/systemctl stop pulp_celerybeat.service
# service pulp_resource_manager stop
Redirecting to /bin/systemctl stop pulp_resource_manager.service

# hammer capsule content synchronize --id 2
[Foreman] Username: admin
[Foreman] Password for admin:
[..........................................................................  ] [95%]
Host did not respond within 20 seconds. Is katello-agent installed and goferd running on the Host?

This bug is slated to be released with Satellite 6.1.

This bug was fixed in version 6.1.1 of Satellite, which was released on 12 August, 2015.