Bug 1486760
Summary: | [RFE] Allow bootstrap.py to let a user migrate a system from Capsule->Capsule, Satellite->Capsule, or Capsule->Satellite while preserving (most) host information. | ||
---|---|---|---|
Product: | Red Hat Satellite | Reporter: | Mihir Lele <mlele> |
Component: | Bootstrap | Assignee: | Rich Jerrido <rjerrido> |
Status: | CLOSED ERRATA | QA Contact: | Lukas Pramuk <lpramuk> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 6.2.10 | CC: | dlobatog, egolov, fgarciad, mburgerh, mschwabe, rvdwees, sghai |
Target Milestone: | Unspecified | Keywords: | FutureFeature, Triaged |
Target Release: | Unused | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | katello-client-bootstrap-1.5.0-1.el7sat | Doc Type: | If docs needed, set a value |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2018-02-21 16:54:17 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Mihir Lele
2017-08-30 13:45:19 UTC
Tested - I will file a bug in the bootstrap script issue tracker. The current design of the bootstrap script does not preserve the original host record if it is run to move a host from a Satellite to a Capsule. In the usage as described in comment #0, the user effectively recreates the host entry in Satellite, which loses any parameters that were defined. Updating the $SUBJECT of this BZ to reflect the actual request.

*** Bug 1527332 has been marked as a duplicate of this bug. ***

Moving to POST as https://github.com/Katello/katello-client-bootstrap/pull/227 has been merged.

VERIFIED.

@satellite-6.3.0-23.0.el7sat.noarch
@satellite-capsule-6.3.0-23.0.el7sat.noarch
katello-client-bootstrap-1.5.1-1.el7sat.noarch

0. Have a SAT with two Capsules; the Puppet CAs on all of them have autosign entry '*'
1. Provision a host @Satellite, with a custom hostgroup param and a host param
2. Migrate to Capsule using bootstrap

@HOST
# ./bootstrap.py -l admin -p changeme --new-capsule --server cap.example.com
[RUNNING], [2018-02-13 17:37:01], [Calling Foreman API to update Puppet master and Puppet CA for h1.example.com to cap.example.com]
[WARNING], [2018-02-13 17:37:04], NON-FATAL: [New capsule doesn't have OpenSCAP capability, not switching / configuring openscap_proxy_id] failed to execute properly.
[RUNNING], [2018-02-13 17:37:04], [Calling Foreman API to update content source for h1.example.com to cap.example.com]
[RUNNING], [2018-02-13 17:37:04], [/usr/bin/systemctl enable rhsmcertd]
[SUCCESS], [2018-02-13 17:37:05], [/usr/bin/systemctl enable rhsmcertd], completed successfully.
[RUNNING], [2018-02-13 17:37:05], [/usr/bin/systemctl restart rhsmcertd]
[SUCCESS], [2018-02-13 17:37:05], [/usr/bin/systemctl restart rhsmcertd], completed successfully.
[RUNNING], [2018-02-13 17:37:05], [Stopping the Puppet agent for configuration update]
[RUNNING], [2018-02-13 17:37:05], [/usr/bin/systemctl stop puppet]
[SUCCESS], [2018-02-13 17:37:05], [/usr/bin/systemctl stop puppet], completed successfully.
[RUNNING], [2018-02-13 17:37:05], [Updating Puppet configuration]
[RUNNING], [2018-02-13 17:37:05], [sed -i '/^[[:space:]]*server.*/ s/=.*/= cap.example.com/' /etc/puppet/puppet.conf]
[SUCCESS], [2018-02-13 17:37:05], [sed -i '/^[[:space:]]*server.*/ s/=.*/= cap.example.com/' /etc/puppet/puppet.conf], completed successfully.
[RUNNING], [2018-02-13 17:37:05], [sed -i '/^[[:space:]]*ca_server.*/ s/=.*/= cap.example.com/' /etc/puppet/puppet.conf]
[SUCCESS], [2018-02-13 17:37:05], [Removing /var/lib/puppet/ssl], completed successfully.
[SUCCESS], [2018-02-13 17:37:05], [Removing /var/lib/puppet/client_data/catalog/h1.example.com.json], completed successfully.
[NOTIFICATION], [2018-02-13 17:37:05], [Running Puppet in noop mode to generate SSL certs]
[NOTIFICATION], [2018-02-13 17:37:05], [Visit the UI and approve this certificate via Infrastructure->Capsules]
[NOTIFICATION], [2018-02-13 17:37:05], [if auto-signing is disabled]
[RUNNING], [2018-02-13 17:37:05], [/usr/bin/puppet agent --test --noop --tags no_such_tag --waitforcert 10]
Info: Creating a new SSL key for h1.example.com
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for h1.example.com
Info: Certificate Request fingerprint (SHA256): E7:B4:B9:48:B1:E4:85:E6:1F:78:C1:94:8D:E6:DA:EA:89:9D:89:5B:16:BC:14:55:E9:0B:36:1B:3E:4E:0F:16
Info: Caching certificate for ca
Info: Caching certificate for h1.example.com
Info: Caching certificate_revocation_list for ca
Info: Retrieving pluginfacts
Info: Retrieving plugin
Notice: /File[/var/lib/puppet/lib/facter/rh_certificates.rb]/ensure: removed
Info: Loading facts
Info: Caching catalog for h1.example.com
Info: Applying configuration version '1518543511'
Notice: Finished catalog run in 0.28 seconds
[SUCCESS], [2018-02-13 17:38:34], [/usr/bin/puppet agent --test --noop --tags no_such_tag --waitforcert 10], completed successfully.
[RUNNING], [2018-02-13 17:38:34], [/usr/bin/systemctl enable puppet]
[SUCCESS], [2018-02-13 17:38:34], [/usr/bin/systemctl enable puppet], completed successfully.
[RUNNING], [2018-02-13 17:38:34], [/usr/bin/systemctl restart puppet]
[SUCCESS], [2018-02-13 17:38:34], [/usr/bin/systemctl restart puppet], completed successfully.
[NOTIFICATION], [2018-02-13 17:38:34], [Puppet agent is not running; please start manually if required.]
[NOTIFICATION], [2018-02-13 17:38:34], [You also need to manually revoke the certificate on the old capsule.]

# hammer host info --name h1.example.com
...
Environment: KT_Default_Organization_Library_Puppet_View_2
Puppet CA Id: 2
Puppet Master Id: 2
Cert name: h1.example.com
...
Parameters:
    host_param_2 => two
All parameters:
    host_param_2 => two
    kt_activation_keys => rhel7
    hg_param_1 => one
...
Content Source:
    ID: 2
    Name: cap.example.com
...
>>> @UI check: all params are retained and host is migrated to capsule

@HOST
# grep -r cap.example.com /etc/rhsm/rhsm.conf /etc/yum.repos.d /etc/puppet
/etc/rhsm/rhsm.conf:hostname = cap.example.com
/etc/rhsm/rhsm.conf:baseurl= https://cap.example.com/pulp/repos
/etc/yum.repos.d/redhat.repo:baseurl = https://cap.example.com/pulp/repos/Default_Organization/Library/content/dist/rhel/server/7/$releasever/$basearch/os
/etc/yum.repos.d/redhat.repo:baseurl = https://cap.example.com/pulp/repos/Default_Organization/Library/custom/Internal_RHEL7/Tools_Puppet_4_RHEL7_x86_64
/etc/yum.repos.d/redhat.repo:baseurl = https://cap.example.com/pulp/repos/Default_Organization/Library/custom/Internal_RHEL7/Tools_6_3_RHEL7_x86_64
/etc/yum.repos.d/redhat.repo:baseurl = https://cap.example.com/pulp/repos/Default_Organization/Library/custom/Internal_RHEL7/RHEL_7_4
/etc/puppet/puppet.conf:ca_server = cap.example.com
/etc/puppet/puppet.conf:server = cap.example.com

# puppet agent -t
# yum -y install nc

>>> client-side check shows that the host is registered to, and reports to, the capsule

Iterated over various migrations (S>C, C>S, C>C2), plus upgraded gradually to Puppet 4 and also tried with a p4-upgraded host - all worked.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:0336
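The verification log above shows bootstrap.py's `--new-capsule` path "Calling Foreman API to update Puppet master and Puppet CA" and "to update content source". A minimal sketch of the request body such a call would send, assuming the Foreman/Katello v2 host-update attributes (`puppet_proxy_id`, `puppet_ca_proxy_id`, `content_facet_attributes.content_source_id`); the host ID, proxy ID, and credentials below are illustrative, not taken from the bug report:

```python
# Sketch of the host-update payload for switching a host to a new capsule
# (smart proxy). Attribute names follow the Foreman/Katello v2 API; IDs and
# credentials are hypothetical examples.
import json


def build_capsule_switch_payload(proxy_id):
    """Build a PUT /api/hosts/:id body that repoints a host's Puppet
    master, Puppet CA, and content source to the given smart proxy."""
    return {
        "host": {
            "puppet_proxy_id": proxy_id,       # new Puppet master
            "puppet_ca_proxy_id": proxy_id,    # new Puppet CA
            "content_facet_attributes": {
                "content_source_id": proxy_id  # new content source
            },
        }
    }


payload = build_capsule_switch_payload(2)  # e.g. cap.example.com has proxy ID 2
print(json.dumps(payload))
# An actual call would look roughly like:
# requests.put("https://sat.example.com/api/hosts/42",
#              json=payload, auth=("admin", "changeme"))
```

Because the update goes through the hosts API rather than re-registering, the existing host record (and with it the hostgroup and host parameters checked above with hammer) is preserved.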
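The two `sed -i` invocations in the log rewrite the `server` and `ca_server` values in /etc/puppet/puppet.conf. A Python equivalent of that edit, shown on an in-memory string rather than the real file:

```python
# Mirror of the sed edits from the log:
#   sed -i '/^[[:space:]]*server.*/ s/=.*/= cap.example.com/' puppet.conf
#   sed -i '/^[[:space:]]*ca_server.*/ s/=.*/= cap.example.com/' puppet.conf
import re


def point_puppet_at(conf_text, new_server):
    """Replace the values of 'server' and 'ca_server' in a puppet.conf
    body, matching lines the same way as the sed patterns above."""
    for key in ("server", "ca_server"):
        # capture everything up to the first '=', replace the rest of the line
        pattern = r"^([ \t]*%s[^=\n]*)=.*$" % key
        conf_text = re.sub(pattern, r"\1= " + new_server,
                           conf_text, flags=re.MULTILINE)
    return conf_text


sample = "[main]\nserver = sat.example.com\nca_server = sat.example.com\n"
print(point_puppet_at(sample, "cap.example.com"))
```

Note that the `server` pattern is anchored at the start of the line, so it does not also rewrite the `ca_server` line; the grep check later in the log confirms both keys end up pointing at cap.example.com.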
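The final client-side check greps rhsm.conf, redhat.repo, and puppet.conf for the new capsule hostname. The rhsm.conf part of that check could also be done programmatically, as in this sketch using `configparser`; the config body below is a trimmed illustrative sample, not a full rhsm.conf:

```python
# Programmatic version of the rhsm.conf part of the post-migration grep check.
import configparser


def rhsm_points_at(conf_text, capsule_fqdn):
    """Return True if both the subscription server hostname and the content
    baseurl in an rhsm.conf body reference the given capsule."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    host_ok = cp.get("server", "hostname", fallback="") == capsule_fqdn
    base_ok = capsule_fqdn in cp.get("rhsm", "baseurl", fallback="")
    return host_ok and base_ok


sample = """\
[server]
hostname = cap.example.com

[rhsm]
baseurl = https://cap.example.com/pulp/repos
"""
print(rhsm_points_at(sample, "cap.example.com"))  # True for this sample
```

The same two keys (`hostname` under `[server]`, `baseurl` under `[rhsm]`) are exactly the lines the grep output above matched in /etc/rhsm/rhsm.conf.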