Description of problem:
satellite-installer --scenario capsule ends up with the capsule added to Satellite, but fails to add the capsule host to "/hosts".

- This seems to be caused by the puppet agent being started before the capsule registration occurs:

[ WARN 2018-08-23T17:13:49 verbose] /Stage[main]/Puppet::Agent::Service::Daemon/Service[puppet]/ensure: ensure changed 'stopped' to 'running'
...
[ WARN 2018-08-23T17:14:02 verbose] /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[zlw1-capsule.redhat.com]/ensure: created

- The agent tries to send the host facts and receives 403 errors, which are ignored:

2018-08-23T23:13:58 [I|app|] Started POST "/api/hosts/facts" for 10.8.30.34 at 2018-08-23 23:13:58 +0200
2018-08-23T23:13:58 [I|app|e90be] Processing by Api::V2::HostsController#facts as JSON
2018-08-23T23:13:58 [I|app|e90be] Parameters: {"facts"=>"[FILTERED]", "name"=>"zlw1-capsule.redhat.com", "certname"=>"zlw1-qe-sat64-rhel7-tier3-capsule.lab.eng.rdu2.redhat.com", "apiv"=>"v2", "host"=>{"certname"=>"zlw1-capsule.redhat.com", "name"=>"zlw1-qe-sat64-rhel7-tier3-capsule.lab.eng.rdu2.redhat.com"}}
2018-08-23T23:13:58 [W|app|e90be] No smart proxy server found on ["zlw1-capsule.redhat.com"] and is not in trusted_hosts
2018-08-23T23:13:58 [I|app|e90be] Rendering api/v2/errors/access_denied.json.rabl within api/v2/layouts/error_layout
2018-08-23T23:13:58 [I|app|e90be] Rendered api/v2/errors/access_denied.json.rabl within api/v2/layouts/error_layout (1.7ms)
2018-08-23T23:13:58 [I|app|e90be] Filter chain halted as #<Proc:0x000000000bdfb438@/usr/share/foreman/app/controllers/concerns/foreman/controller/smart_proxy_auth.rb:14> rendered or redirected
2018-08-23T23:13:58 [I|app|e90be] Completed 403 Forbidden in 29ms (Views: 7.4ms | ActiveRecord: 12.5ms)

- Re-running the installer helps, since by then the capsule record already exists.

Version-Release number of selected component (if applicable):
6.4.0-18

How reproducible:
Always

Steps to Reproduce:
1. Run satellite-installer --scenario capsule

Actual results:
An error (warning) in the installer log; the capsule host is not created.

Expected results:
The orchestration is ordered correctly, the capsule host is added to Foreman properly, and there are no errors in the log.
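A quick way to confirm the mis-ordering on an affected machine is to pull both events out of the capsule installer log and compare their timestamps (a sketch, assuming the default capsule installer log location; the pattern matches the two resources from the excerpt above):

# grep -E 'Service\[puppet\]/ensure|Foreman_smartproxy' /var/log/foreman-installer/capsule.log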
Created attachment 1478459: production log
I was unable to reproduce this on SNAP 22.

1) Create a RHEL 7.5 VM
2) Register it to Satellite
3) Install the capsule packages
4) Run the installer:

# satellite-installer --scenario capsule\
> --foreman-proxy-content-parent-fqdn "****"\
> --foreman-proxy-register-in-foreman "true"\
> --foreman-proxy-foreman-base-url "****"\
> --foreman-proxy-trusted-hosts "****"\
> --foreman-proxy-trusted-hosts "****"\
> --foreman-proxy-oauth-consumer-key "****"\
> --foreman-proxy-oauth-consumer-secret "****"\
> --foreman-proxy-content-certs-tar "****"\
> --puppet-server-foreman-url "****"
Resetting puppet server version param...
Installing Done [100%] [...........................................................................]
Success!
* Capsule is running at https://cap.example.com:9090
The full log is at /var/log/foreman-installer/capsule.log
Upgrade Step: remove_legacy_mongo...
yum install -y -q rh-mongodb34-syspaths finished successfully!

5) No errors in the log:
# grep ERROR /var/log/foreman-installer/satellite.log
#

6) Check Satellite:
# hammer host list | grep cap.example.com
103 | cap.example.com | RedHat 7.5 | | 172.0.2.45 | fa:16:3e:4b:ed:70 | Default Organization View | Library

Roman, can you re-test this on the latest SNAP and ensure that your Capsule was 100% updated to the latest packages in RHEL 7.5 before attempting an install?
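For the retest, a minimal pre-flight check that the capsule is fully up to date (a sketch, assuming stock RHEL 7 yum tooling; satellite-capsule and puppet-agent are the usual package names, adjust as needed):

# yum clean all && yum -y update
# rpm -q satellite-capsule puppet-agent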
@Mike, this is most probably a race condition somewhere in the installer.
@Mike, the requests resulting in the 403 can be seen in production.log on the Satellite side. The capsule installer log lines in my description are there to show the timestamps of the two events (the puppet agent service coming up and the smart proxy being created): the agent started up sooner than the capsule record was created, tried to post the host facts, and failed quietly with a 403. The request and the 403 response quoted above are from production.log on the Satellite side.

- I can reproduce this behaviour every single time.
- The reason you can see the host there is that you registered the machine first (which creates the host record). I know one should do this, but it hides the issue: the puppet agent facts cannot be accepted before the capsule record gets created.
- If they were sent after it, the host record would be created as well (see the sketch below).
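This also suggests a manual workaround, sketched here under the assumption that the smart proxy record already exists (e.g. after a second installer run): a one-off agent run on the capsule re-sends the facts, and Foreman then creates the host record.

# puppet agent --test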
I can actually reproduce this in our pipeline, as we register the capsule to dogfood rather than to the newly installed Satellite, so no host record pre-exists there to mask the issue.
Created redmine issue https://projects.theforeman.org/issues/25036 from this bug
Satellite 6.4 is now End of Life. This bug will not be fixed on the 6.4 stream. Users of Satellite should upgrade to the latest version of Satellite to get access to the most current set of bugfixes and feature improvements.