Bug 1622064

Summary: puppet agent starts before installer registers capsule to satellite and fails with registering the host
Product: Red Hat Satellite Reporter: Roman Plevka <rplevka>
Component: Installer    Assignee: Evgeni Golov <egolov>
Status: CLOSED WONTFIX QA Contact: Roman Plevka <rplevka>
Severity: high Docs Contact:
Priority: unspecified    
Version: 6.4    CC: egolov, ehelms, ekohlvan, mmccune, rplevka
Target Milestone: Unspecified    Keywords: Triaged
Target Release: Unused   
Hardware: Unspecified   
OS: Unspecified   
Fixed In Version: foreman-installer-1.20.0 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2020-05-01 13:31:29 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Attachments: production log (details)

Description Roman Plevka 2018-08-24 10:56:23 UTC
Description of problem:
satellite-installer --scenario capsule
ends up with the capsule added to Satellite, but fails to add the capsule host record under "/hosts" -
this seems to be caused by the puppet agent service being started before the capsule (smart proxy) registration occurs:

[ WARN 2018-08-23T17:13:49 verbose]  /Stage[main]/Puppet::Agent::Service::Daemon/Service[puppet]/ensure: ensure changed 'stopped' to 'running'
[ WARN 2018-08-23T17:14:02 verbose]  /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[zlw1-capsule.redhat.com]/ensure: created

- the agent tries to send the host facts and receives 403 errors, which are ignored:

2018-08-23T23:13:58 [I|app|] Started POST "/api/hosts/facts" for at 2018-08-23 23:13:58 +0200
2018-08-23T23:13:58 [I|app|e90be] Processing by Api::V2::HostsController#facts as JSON
2018-08-23T23:13:58 [I|app|e90be]   Parameters: {"facts"=>"[FILTERED]", "name"=>"zlw1-capsule.redhat.com", "certname"=>"zlw1-qe-sat64-rhel7-tier3-capsule.lab.eng.rdu2.redhat.com", "apiv"=>"v2", "host"=>{"certname"=>"zlw1-capsule.redhat.com", "name"=>"zlw1-qe-sat64-rhel7-tier3-capsule.lab.eng.rdu2.redhat.com"}}
2018-08-23T23:13:58 [W|app|e90be] No smart proxy server found on ["zlw1-capsule.redhat.com"] and is not in trusted_hosts
2018-08-23T23:13:58 [I|app|e90be]   Rendering api/v2/errors/access_denied.json.rabl within api/v2/layouts/error_layout
2018-08-23T23:13:58 [I|app|e90be]   Rendered api/v2/errors/access_denied.json.rabl within api/v2/layouts/error_layout (1.7ms)
2018-08-23T23:13:58 [I|app|e90be] Filter chain halted as #<Proc:0x000000000bdfb438@/usr/share/foreman/app/controllers/concerns/foreman/controller/smart_proxy_auth.rb:14> rendered or redirected
2018-08-23T23:13:58 [I|app|e90be] Completed 403 Forbidden in 29ms (Views: 7.4ms | ActiveRecord: 12.5ms)

- rerunning the installer helps, since by then the capsule record already exists.
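To illustrate what a correct ordering would look like, here is a minimal Puppet sketch. The class names are taken from the resource paths in the log excerpt above; this is an assumption about the shape of the fix, not the actual patch that went into foreman-installer 1.20.0:

# Sketch only, assuming both classes are already declared in the catalog:
# register the smart proxy in Foreman before starting the puppet agent
# service, so the agent's first fact upload is not rejected with a 403.
Class['foreman_proxy::register'] -> Class['puppet::agent::service::daemon']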

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Run satellite-installer --scenario capsule

Actual results:
- a warning in the installer log; the capsule host record is not created

Expected results:
the orchestration is ordered correctly, the capsule host is added to Foreman, and there are no errors in the log

Comment 2 Roman Plevka 2018-08-24 11:04:36 UTC
Created attachment 1478459 [details]
production log

Comment 4 Mike McCune 2018-09-20 21:37:39 UTC
I was unable to reproduce this on SNAP 22.

1) Create RHEL 7.5 VM

2) register to Satellite

3) install capsule

4) run installer:

#   satellite-installer --scenario capsule\
>                       --foreman-proxy-content-parent-fqdn           "****"\
>                       --foreman-proxy-register-in-foreman           "true"\
>                       --foreman-proxy-foreman-base-url              "****"\
>                       --foreman-proxy-trusted-hosts                 "****"\
>                       --foreman-proxy-trusted-hosts                 "****"\
>                       --foreman-proxy-oauth-consumer-key            "****"\
>                       --foreman-proxy-oauth-consumer-secret         "****"\
>                       --foreman-proxy-content-certs-tar             "****"\
>                       --puppet-server-foreman-url                   "****"

Resetting puppet server version param...
Installing             Done                                               [100%] [...........................................................................]
  * Capsule is running at https://cap.example.com:9090
  The full log is at /var/log/foreman-installer/capsule.log
Upgrade Step: remove_legacy_mongo...
yum install -y -q rh-mongodb34-syspaths finished successfully!

5) No errors in log:

# grep ERROR /var/log/foreman-installer/satellite.log 

6) Check Satellite:

# hammer host list  | grep cap.example.com
103 | cap.example.com  | RedHat 7.5   |    | | fa:16:3e:4b:ed:70 | Default Organization View | Library

Roman, can you re-test this on the latest SNAP and ensure that your Capsule was 100% updated to the latest packages in RHEL 7.5 before attempting an install?

Comment 5 Evgeni Golov 2018-09-21 09:42:42 UTC
@Mike, this is most probably a race condition somewhere in the installer.

Comment 6 Roman Plevka 2018-09-21 12:29:43 UTC
The requests resulting in 403, and the 403 responses, can be seen in production.log on the Satellite side.
The capsule installer log excerpts in my description are there to demonstrate the timestamps of the puppet agent service start and the smart proxy creation: the agent started up before the capsule record was created, tried to post the host facts, and failed quietly with a 403.

- I can reproduce this behaviour every single time.
- The reason you can see the host in your test is that you registered the machine first, which already creates the host record. I know one is supposed to do that, but it hides the issue: the puppet agent facts cannot be sent successfully before the capsule record gets created.
- If the facts were sent after the registration, the host record would be created as well.
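Purely as an illustration of that last point (resource titles taken from the log excerpts above; hypothetical, not a proposed patch), the same dependency expressed at the resource level:

# Hypothetical sketch: if the agent service required the Foreman_smartproxy
# resource, the first fact upload would only happen after the capsule record
# exists, and the host record would be created on the Satellite.
service { 'puppet':
  ensure  => running,
  require => Foreman_smartproxy['zlw1-capsule.redhat.com'],
}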

Comment 7 Evgeni Golov 2018-09-24 13:35:52 UTC
I can actually reproduce this in our pipeline, as we register the capsule to dogfood, not to the newly installed satellite.

Comment 11 Evgeni Golov 2018-09-25 18:20:43 UTC
Created redmine issue https://projects.theforeman.org/issues/25036 from this bug

Comment 13 Bryan Kearney 2020-05-01 13:31:29 UTC
Satellite 6.4 is now End of Life. These bugs will not be fixed on the 6.4 stream. Users of Satellite should upgrade to the latest version of Satellite to get access to the most current set of bug fixes and feature improvements.