Bug 1547951 - Updating puppet agent from v3 to v4 does not migrate the configuration
Summary: Updating puppet agent from v3 to v4 does not migrate the configuration
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Docs Upgrading and Updating Red Hat Satellite
Version: 6.3.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: Unspecified
Assignee: Sergei Petrosian
QA Contact: satellite-doc-list
URL:
Whiteboard:
Depends On: 1517624 1554792
Blocks: 1122832
 
Reported: 2018-02-22 11:13 UTC by Stefan Meyer
Modified: 2019-11-07 12:11 UTC
CC: 18 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1554792
Environment:
Last Closed: 2019-08-29 07:37:20 UTC
Target Upstream Version:




Links
System ID: Red Hat Bugzilla 1517624
Priority: high
Status: CLOSED
Summary: [RFE] Need to automate the manual steps required to upgrade the client.
Last Updated: 2019-11-26 09:58:32 UTC

Internal Links: 1517624

Description Stefan Meyer 2018-02-22 11:13:01 UTC
Description of problem:
After updating the puppet agent from version 3 to version 4 with yum, the client configuration is not migrated to the new filesystem structure of Puppet 4, which renders the puppet agent inoperable.

Version-Release number of selected component (if applicable):
- Satellite 6.3
- Puppet 4

How reproducible:
Every time a puppet agent is updated via yum from v3 to v4.

Steps to Reproduce:
1. Enable rhel-7-server-satellite-tools-6.3-puppet4-rpms repo
2. Run yum update
3. Try to do a puppet run (puppet agent -t)
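
For reference, the steps as shell commands (subscription-manager is assumed here as the repo enablement mechanism):

    # enable the Puppet 4 tools repo, update, and attempt a puppet run
    subscription-manager repos --enable=rhel-7-server-satellite-tools-6.3-puppet4-rpms
    yum update -y
    puppet agent -t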

Actual results:
- The puppet 4 agent is not able to connect to the Satellite server
- The configuration in /etc/puppet/puppet.conf was renamed to /etc/puppet/puppet.conf.rpmnew
- The configuration file was not migrated to /etc/puppetlabs/puppet

Expected results:
- The current working puppet configuration and SSL certs should be migrated to /etc/puppetlabs/puppet
- Either by providing a script to run via remote execution (REX) or by changing the puppet package itself to run a migration

Additional info:
This is the script to fix the configuration:

####################################################
# upgrade the puppet agent packages
yum -y upgrade

# copy the existing SSL certificates and keys to the Puppet 4 location
cp -rp /var/lib/puppet/ssl /etc/puppetlabs/puppet/

# restore the old puppet.conf saved aside by the package upgrade
cp /etc/puppet/puppet.conf.rpmsave /etc/puppetlabs/puppet/puppet.conf

# rewrite the old Puppet 3 paths to their Puppet 4 equivalents
sed -i 's|/var/lib/puppet|/opt/puppetlabs/puppet/cache|' /etc/puppetlabs/puppet/puppet.conf
sed -i 's|/var/log/puppet|/var/log/puppetlabs/puppet|' /etc/puppetlabs/puppet/puppet.conf
sed -i 's|/var/run/puppet|/var/run/puppetlabs|' /etc/puppetlabs/puppet/puppet.conf
sed -i 's|$vardir/ssl|/etc/puppetlabs/puppet/ssl|' /etc/puppetlabs/puppet/puppet.conf
####################################################
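
After the script completes, the same check as in the reproduction steps should confirm the agent can reach the Satellite again:

    # verify the migrated configuration works
    puppet agent -t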

Comment 1 Brad Buckingham 2018-02-26 21:32:12 UTC
Hi Eric,

Is this expected behavior for the puppet v3 to v4 upgrade?

Comment 2 Eric Helms 2018-02-28 13:49:09 UTC
This appears to be a gap in the docs around Puppet 4 upgrades for clients. The best-practice workflow we envision is for users to spin up a separate Capsule that runs Puppet 4 and migrate clients to it, the prescribed migration path being to leverage bootstrap.py:

    bootstrap.py's new --new-capsule option
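
For illustration, a hypothetical invocation (only --new-capsule is confirmed above; the other option names and values are assumptions to be checked against the bootstrap.py documentation):

    # migrate an existing client to a new Puppet 4 Capsule (hypothetical values)
    python bootstrap.py --login admin \
                        --server capsule-puppet4.example.com \
                        --new-capsule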


I expect that we should also add information similar to that provided by Puppet Labs on agent upgrades [1]. I am going to flip this to a docs bug so that we can get this in officially.


[1] https://puppet.com/docs/puppet/4.10/upgrade_major_agent.html

Comment 3 Stephen Wadeley 2018-03-05 09:08:18 UTC
Hello Stefan

In comment 2, Eric provides a link to a page about using a Puppet module called puppet_agent to do the puppet-agent upgrade.

Can you see any reason to prefer a manual method?

What would be a sensible Enterprise way to get that module to the hosts and make it run? Using a Content View seems unnecessary as it's just the one time you need the module.

Does this sound OK:

curl -O the module to the Satellite Server
cp it to /var/www/html/pub/
Use a Job to install the module on hosts [1]
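
Roughly (a sketch only; the Forge download URL and version are placeholders, not verified):

    # on the Satellite Server (hypothetical module version)
    curl -O https://forge.puppet.com/v3/files/puppetlabs-puppet_agent-<version>.tar.gz
    cp puppetlabs-puppet_agent-<version>.tar.gz /var/www/html/pub/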


I have never used "Config groups", could be another option.

Thank you


[1] 5.2.1. Setting up Job Templates https://access.redhat.com/documentation/en-us/red_hat_satellite/6.3/html/managing_hosts/chap-managing_hosts-running_remote_jobs_on_hosts#sect-Managing_Hosts-Setting_up_Job_Templates

Comment 4 Stephen Wadeley 2018-03-05 09:18:32 UTC
Hello

further to comment 3, please ignore the Config Groups idea.

Config Groups are a collection of Puppet classes that you create to form
building blocks for use in configuring Hosts and Host Groups.[1]

A Job template that can be applied to hosts or a Host Group seems suitable.

[1] https://access.redhat.com/documentation/en-us/red_hat_satellite/6.3/html/puppet_guide/chap-red_hat_satellite-puppet_guide-using_config_groups_to_manage_puppet_classes

Comment 5 Stephen Wadeley 2018-03-05 21:03:41 UTC
Hello

after further reading and testing, it seems you cannot use the puppet module install command to install a remote Puppet module from a directory, only from a Puppet-type repo.

So, I think the options are:

For hosts with Internet connectivity:

 Use a Job template to run the puppet module install command to install the puppet_agent module directly from the Puppet Forge, and then trigger a puppet run (see the sketch below).

For hosts without Internet connectivity:

 Add the puppet_agent module to a yum-type repo in Satellite and push it to hosts from Satellite.
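
For the Internet-connected case, roughly (a minimal sketch; whether a plain include is sufficient depends on the puppet_agent class parameters, which should be checked against the module documentation):

    # install the module from the Puppet Forge, then apply it to upgrade the agent
    puppet module install puppetlabs-puppet_agent
    puppet apply -e 'include puppet_agent'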

I was testing publishing the puppet_agent module to /var/www/html/pub/, but then you need to download it, install it, and then run it. That does not seem simple enough to justify giving up the advantages of it being in a Satellite repo.

Thank you

Comment 6 Stephen Wadeley 2018-03-06 10:24:20 UTC
Hello Ewoud

Please review the module in comment 2 and advise

Thank you

Comment 7 Ewoud Kohl van Wijngaarden 2018-03-06 11:39:33 UTC
I had a brief look into the puppet_agent module at https://forge.puppet.com/puppetlabs/puppet_agent which is currently at git commit 21f722efea997e59205619321d03934367f42b0e.

First of all, it's a complex module: it supports many operating systems, which makes it hard to follow exactly what it's doing. This is not a downside of the module, but it might mean my analysis is incomplete or incorrect in its details.

By default it manages the Puppet Labs PC1 yumrepo, but there is a manage_repo parameter which can disable this behavior.

There is no parameter to enable our Puppet repository.

It also imports the GPG keys for the old and new Puppet Labs repositories. This is not needed for Satellite-managed repositories (and is possibly even in violation of customers' policies), but there is no parameter to avoid this. See https://github.com/puppetlabs/puppetlabs-puppet_agent/blob/21f722efea997e59205619321d03934367f42b0e/manifests/osfamily/redhat.pp#L68-L106

By default it starts the puppet and mcollective services, but the service_names parameter can be overridden. AFAIK we don't support mcollective. It's unclear to me whether there are other mcollective things that would be a problem.
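
For example, disabling the upstream repo handling would look roughly like this (a sketch; parameter names as discussed above, to be verified against the module docs):

    # apply the Forge module without letting it manage the upstream PC1 repo
    puppet apply -e "class { 'puppet_agent': manage_repo => false }"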

Moving forward I think this module could be a good basis, but we need to do a few things:

* Add a parameter to make the GPG key importing optional. Should have a good chance to be accepted upstream.
* Find a way to enable our Puppet repository
* Verify mcollective is not being started and upgrades work well.

To enable our Puppet repository I can think of two ways:

* We create a wrapper module that enables the repository and includes the puppet_agent module with the correct parameters. The benefit for users is that they now only have to include one class that will work. The downside is that the user now has to install two puppet modules. They could accidentally include the wrong module.
* We push the Satellite repository management into the puppet_agent module. The upside is that there's just one module, but it will need to be called with the correct parameters, which means more complex instructions. There's also no guarantee upstream will accept our changes.

Given those pros and cons I'm leaning toward the wrapper module, but a second opinion would be highly appreciated.

Comment 8 Stephen Wadeley 2018-03-07 11:23:49 UTC
Hello Ewoud

Thank you for investigating. 

As this is not a quick fix I think I should just go ahead and document what Stefan wrote in comment 0 and get that published.

If a more elegant solution emerges we can raise a new bug to document it. I will wait a day to see if Eric has a better idea.


Thank you

Comment 9 Peter Vreman 2018-03-08 15:50:17 UTC
Is it possible to provide an rpm, e.g. satellite-client-upgrade-to-puppet4, with a %post script that does the migration?

Comment 10 Stefan Meyer 2018-03-08 15:53:39 UTC
(In reply to Stephen Wadeley from comment #3) 
> In comment 2 Eric provides link to page about using Puppet module called
> puppet_agent to do the puppet-agent upgrade.
> 
> Can you see any reason to prefer a manual method?

The manual mode executed via REX looks much simpler to implement.
Alternatively, I would prefer an rpm package with a %post script that does
the migration of the puppet.conf.

Comment 11 Marek Hulan 2018-03-08 17:27:55 UTC
Resetting the needinfo for Eric, but from my point of view, a REX template and an rpm with %post seem most straightforward and together also cover users who don't have REX available in their infrastructure.

Comment 13 Ewoud Kohl van Wijngaarden 2018-03-08 17:58:51 UTC
So far we've avoided managing customer systems (modulo openscap) and I think that's still a good goal. That's why I think we shouldn't ship puppet modules. It would be nice if we could advise users on modules that do the right thing though. That's why I'm a bit torn between the two solutions.

It is worth noting that the puppet module also removes settings that were dropped in Puppet 4, so you have no stale values in your puppet.conf. That might be a nice addition, but it is not that important.

Creating a separate RPM that does the migration in %post sounds incorrect to me. If you want to go the %post route, then the puppet-agent RPM is a better candidate but I haven't thought it fully through yet.

I think short term we should come up with a recommended manual way of updating the agent which could be easily applied via REX.

Comment 15 Stephen Wadeley 2018-03-13 11:48:17 UTC
Hello

As the docs are incomplete and this issue is urgent, I will clone this bug now and add the info from comment 0 to the guide. When more info is available in this bug I can repeat that process.

Thank you

Comment 17 Evgeni Golov 2018-03-19 10:17:01 UTC
The set of commands Stefan describes will get you a working Puppet 4 agent; however, it will re-use the old Puppet 3 config (minus the updated paths) for Puppet 4. While this works, I'd prefer that we guide users towards using the latest possible config.

That said I think the upgrade should be:
1. copy the SSL certs to the new path
2. re-gen puppet.conf from the template as it is done on a fresh install (https://github.com/theforeman/community-templates/blob/develop/provisioning_templates/snippet/puppet.conf.erb)
3. tell the user to re-apply any changes they had made to the config

- having this in the %post of any RPM seems cumbersome
- having this as a REX task sounds doable (see the sketch below)
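
A rough shell sketch of those steps (the puppet.conf content here is only a placeholder; in practice it would be rendered from the linked template, and the server value is a hypothetical example):

    # 1. copy the SSL certs to the new path
    cp -rp /var/lib/puppet/ssl /etc/puppetlabs/puppet/
    # 2. re-generate puppet.conf with the Puppet 4 paths (placeholder content)
    printf '%s\n' \
      '[main]' \
      'vardir = /opt/puppetlabs/puppet/cache' \
      'logdir = /var/log/puppetlabs/puppet' \
      'rundir = /var/run/puppetlabs' \
      'ssldir = /etc/puppetlabs/puppet/ssl' \
      '[agent]' \
      'server = satellite.example.com' \
      > /etc/puppetlabs/puppet/puppet.conf
    # 3. re-apply any local customizations, then verify
    puppet agent -t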

Comment 18 Craig Donnelly 2018-05-08 23:00:45 UTC
It seems to me that it makes more sense to offer the rpm-script route, as this is one that /anyone/ could use.

This is not the case with REx.

Comment 19 Bryan Kearney 2018-11-01 20:20:21 UTC
Clearing the needinfo

Comment 20 Andrew Dahms 2018-12-04 13:04:03 UTC
Returning to the default assignee to be re-triaged as the schedule allows.

Comment 26 Sergei Petrosian 2019-08-29 07:37:20 UTC
Stephen added the available workaround to the Upgrading guide in BZ#1554792.

Docs team cannot do more without clear direction from Engineering.

Feel free to reopen or raise a new documentation bug if required.

