Description of problem:
After updating the puppet agent from version 3 to version 4 with yum, the client configuration is not migrated to the new filesystem structure of Puppet 4, which renders the puppet agent inoperable.
Version-Release number of selected component (if applicable):
- Satellite 6.3
- Puppet 4
How reproducible:
Every time a puppet agent is updated via yum from v3 to v4
Steps to Reproduce:
1. Enable rhel-7-server-satellite-tools-6.3-puppet4-rpms repo
2. Run yum update
3. Try to do a puppet run (puppet agent -t)
Actual results:
- The Puppet 4 agent is not able to connect to the Satellite server
- The configuration in /etc/puppet/puppet.conf was renamed to /etc/puppet/puppet.conf.rpmnew
- The configuration file was not migrated to /etc/puppetlabs/puppet
Expected results:
- The current working puppet configuration and SSL certs should be migrated to /etc/puppetlabs/puppet
- Either by providing a script to run via REX or by changing the puppet package itself to run some migration
This is the script to fix the configuration:
# upgrade puppet agent
yum -y upgrade
# copy ssl stuff
cp -rp /var/lib/puppet/ssl /etc/puppetlabs/puppet/
# copy old puppet.conf
cp /etc/puppet/puppet.conf.rpmsave /etc/puppetlabs/puppet/puppet.conf
# replace old paths
sed -i 's|/var/lib/puppet|/opt/puppetlabs/puppet/cache|' /etc/puppetlabs/puppet/puppet.conf
sed -i 's|/var/log/puppet|/var/log/puppetlabs/puppet|' /etc/puppetlabs/puppet/puppet.conf
sed -i 's|/var/run/puppet|/var/run/puppetlabs|' /etc/puppetlabs/puppet/puppet.conf
sed -i 's|$vardir/ssl|/etc/puppetlabs/puppet/ssl|' /etc/puppetlabs/puppet/puppet.conf
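The path rewrites above can be exercised safely before touching a live agent. The following sketch applies the same sed substitutions to a sample puppet.conf in a temporary directory; the sample settings are illustrative only, not taken from an actual host:

```shell
#!/bin/sh
# Sketch: dry-run the path migration from the script above against a
# sample puppet.conf in a temp dir. The settings below are illustrative.
set -eu
tmp=$(mktemp -d)
conf="$tmp/puppet.conf"

cat > "$conf" <<'EOF'
[main]
vardir = /var/lib/puppet
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl
EOF

# Same substitutions as the migration script, applied to the copy
sed -i 's|/var/lib/puppet|/opt/puppetlabs/puppet/cache|' "$conf"
sed -i 's|/var/log/puppet|/var/log/puppetlabs/puppet|' "$conf"
sed -i 's|/var/run/puppet|/var/run/puppetlabs|' "$conf"
sed -i 's|$vardir/ssl|/etc/puppetlabs/puppet/ssl|' "$conf"

cat "$conf"
```

Note that `$vardir` in the last substitution is matched literally (the single quotes keep the shell from expanding it, and sed treats a `$` that is not at the end of the pattern as a literal character).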
Is this expected behavior for the puppet v3 to v4 upgrade?
This appears to be a gap in the docs around Puppet 4 upgrades for clients. The best-practice workflow we perceive is for users to spin up a separate Capsule running Puppet 4 and migrate clients to it, the prescribed migration path being to leverage bootstrap.py's new --new-capsule option.
I expect that we should also add information similar to that provided by Puppetlabs on agent upgrades. I am going to flip this to a docs bug so that we can get this in officially.
In comment 2 Eric provides a link to a page about using a Puppet module called puppet_agent to do the puppet-agent upgrade.
Can you see any reason to prefer a manual method?
What would be a sensible enterprise way to get that module to the hosts and make it run? Using a Content View seems unnecessary, as the module is only needed the one time.
Does this sound OK:
1. curl -O to the Satellite Server
2. cp to /var/www/html/pub/
3. Use a Job to install the module on hosts
I have never used "Config groups", could be another option.
5.2.1. Setting up Job Templates: https://access.redhat.com/documentation/en-us/red_hat_satellite/6.3/html/managing_hosts/chap-managing_hosts-running_remote_jobs_on_hosts#sect-Managing_Hosts-Setting_up_Job_Templates
Further to comment 3: regarding Config Groups, ignore that idea.
Config Groups are a collection of Puppet classes that you create to form
building blocks for use in configuring Hosts and Host Groups.
A Job template that can be applied to hosts or a Host Group seems suitable.
After further reading and testing, it seems you cannot use the puppet module install command to install a remote Puppet module from a directory, only from a Puppet-type repo.
So, I think the options are:
For hosts with Internet connectivity:
Use a Job template to run the puppet module install command to install the puppet_agent module directly from the Puppet Forge, and then trigger a puppet run.
For hosts without Internet connectivity:
Add puppet_agent module to a yum-type repo in Satellite and push to hosts from Satellite.
I was testing publishing the puppet_agent module to /var/www/html/pub/, but then you need to download it, install it, and then run it. That does not seem simple enough to justify giving up the advantages of it being in a Satellite repo.
Please review the module in comment 2 and advise
I had a brief look into the puppet_agent module at https://forge.puppet.com/puppetlabs/puppet_agent which is currently at git commit 21f722efea997e59205619321d03934367f42b0e.
First of all, it's a complex module since it supports many operating systems, which makes it hard to follow exactly what it is doing. This is not a downside of the module, but it might mean my analysis is incomplete or incorrect in details.
By default it manages the Puppetlabs PC1 yumrepo, but there is a manage_repo parameter which can disable this behavior.
There is no parameter to enable our Puppet repository.
It also imports the GPG keys for the old and new Puppetlabs repositories. This is not needed for Satellite-managed repositories (and possibly even in violation of customers' policies), but there is no parameter to avoid it. See https://github.com/puppetlabs/puppetlabs-puppet_agent/blob/21f722efea997e59205619321d03934367f42b0e/manifests/osfamily/redhat.pp#L68-L106
By default it starts the puppet and mcollective services, but the services_names parameter can be overridden. AFAIK we don't support mcollective. It's unclear to me if there are other mcollective things that would be a problem.
Moving forward I think this module could be a good basis, but we need to do a few things:
* Add a parameter to make the GPG key importing optional. This should have a good chance of being accepted upstream.
* Find a way to enable our Puppet repository
* Verify mcollective is not being started and upgrades work well.
To enable our Puppet repository I can think of two ways:
* We create a wrapper module that enables the repository and includes the puppet_agent module with the correct parameters. The benefit for users is that they now only have to include one class that will work. The downside is that the user now has to install two puppet modules. They could accidentally include the wrong module.
* We push the Satellite repository management into the puppet_agent module. The upside is that there's just one module. It will need to be called with the correct parameters, which means more complex instructions. There's also no guarantee they will accept our changes.
Given those pros and cons I'm leaning toward the wrapper module, but a second opinion would be highly appreciated.
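To make the wrapper-module option concrete, a sketch along these lines could work. All names here are hypothetical (the class name, repo id, and baseurl are invented for illustration), and the puppet_agent parameter names should be checked against the module's documentation:

```puppet
# Hypothetical wrapper class: enables a Satellite-provided Puppet 4 repo
# and includes puppetlabs-puppet_agent with our preferred parameters.
class satellite_puppet_agent {
  yumrepo { 'satellite-puppet4':
    descr    => 'Satellite Puppet 4 agent packages',
    baseurl  => 'https://satellite.example.com/pub/puppet4', # placeholder URL
    enabled  => 1,
    gpgcheck => 1,
  }

  class { '::puppet_agent':
    manage_repo => false,                  # the repo is handled above
    require     => Yumrepo['satellite-puppet4'],
  }
}
```

The user then only needs to include one class, which is the main benefit discussed above; the cost is shipping and installing two modules.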
Thank you for investigating.
As this is not a quick fix I think I should just go ahead and document what Stefan wrote in comment 0 and get that published.
If a more elegant solution emerges we can raise a new bug to document it. I will wait a day to see if Eric has a better idea.
Is it possible to provide an RPM, e.g. satellite-client-upgrade-to-puppet4, with a %post script that does the migration?
(In reply to Stephen Wadeley from comment #3)
> In comment 2 Eric provides link to page about using Puppet module called
> puppet_agent to do the puppet-agent upgrade.
> Can you see any reason to prefer a manual method?
The manual mode executed via REX looks much simpler to implement.
Alternatively, I would prefer an RPM package with a %post script that does
the migration of the puppet.conf.
Resetting the needinfo for Eric, but from my point of view a REX template and an RPM with %post seem most straightforward, and also cover users who don't have REX available in their infrastructure.
So far we've avoided managing customer systems (modulo openscap) and I think that's still a good goal. That's why I think we shouldn't ship puppet modules. It would be nice if we could advise users on modules that do the right thing though. That's why I'm a bit torn between the two solutions.
What is good to note is that the Puppet module also removes settings that were dropped in Puppet 4, so you have no stale values in your puppet.conf. That might be a nice addition, but it is not that important.
Creating a separate RPM that does the migration in %post sounds incorrect to me. If you want to go the %post route, then the puppet-agent RPM is a better candidate, but I haven't thought it fully through yet.
I think short term we should come up with a recommended manual way of updating the agent which could be easily applied via REX.
As the docs are incomplete and this issue is urgent, I will clone this bug now and add the info from comment 0 to the guide. When more info is available in this bug I can repeat that process.
The set of commands Stefan describes will get you a working Puppet 4 agent; however, it will re-use the old Puppet 3 config (minus the updated paths) for Puppet 4. While this works, I'd prefer if we would guide users towards using the latest possible config.
That said I think the upgrade should be:
1. copy the SSL certs to the new path
2. re-gen puppet.conf from the template as it is done on a fresh install (https://github.com/theforeman/community-templates/blob/develop/provisioning_templates/snippet/puppet.conf.erb)
3. tell the user to re-apply any changes they had made to the config
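A minimal sketch of step 2, writing a fresh Puppet 4 config instead of carrying the Puppet 3 file forward. The server name and environment are placeholders, and the real template linked above should be preferred; the config directory defaults to a temp dir here so the sketch can be run harmlessly (use /etc/puppetlabs/puppet on a real agent):

```shell
#!/bin/sh
# Sketch: regenerate a minimal Puppet 4 puppet.conf rather than migrating
# the Puppet 3 file. SERVER and ENVIRONMENT are placeholders, not real values.
set -eu
SERVER='satellite.example.com'   # hypothetical Satellite/Capsule FQDN
ENVIRONMENT='production'
# Default to a temp dir for a safe dry run; use /etc/puppetlabs/puppet for real.
CONF_DIR=${CONF_DIR:-$(mktemp -d)}

mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/puppet.conf" <<EOF
[main]
vardir = /opt/puppetlabs/puppet/cache
logdir = /var/log/puppetlabs/puppet
rundir = /var/run/puppetlabs

[agent]
server = $SERVER
ca_server = $SERVER
environment = $ENVIRONMENT
EOF

echo "wrote $CONF_DIR/puppet.conf"
```

Any customer-specific settings from the old config would then be re-applied by hand, per step 3.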
- having this in the %post of any RPM seems cumbersome
- having this as a REX task sounds doable
It seems to me to make more sense to offer the RPM-script route, as this is one that /anyone/ could use.
This is not the case with REX.
Clearing the needinfo
Returning to the default assignee to be re-triaged as the schedule allows.
Stephen added the available workaround to the Upgrading guide in BZ#1554792.
Docs team cannot do more without clear direction from Engineering.
Feel free to reopen or raise a new documentation bug if required.